repo_name (stringlengths 5-100) | path (stringlengths 4-375) | copies (stringclasses, 991 values) | size (stringlengths 4-7) | content (stringlengths 666-1M) | license (stringclasses, 15 values)
---|---|---|---|---|---
thnee/ansible | lib/ansible/modules/network/f5/bigip_firewall_log_profile.py | 23 | 28678 | #!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright: (c) 2019, F5 Networks Inc.
# GNU General Public License v3.0 (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'certified'}
DOCUMENTATION = r'''
---
module: bigip_firewall_log_profile
short_description: Manages AFM logging profiles configured in the system
description:
- Manages AFM logging profiles configured in the system along with basic information about each profile.
version_added: 2.9
options:
name:
description:
- Specifies the name of the log profile.
type: str
required: True
description:
description:
- Description of the log profile.
type: str
dos_protection:
description:
- Configures DoS related settings of the log profile.
suboptions:
dns_publisher:
description:
- Specifies the name of the log publisher used for DNS DoS events.
- To specify the log_publisher on a different partition from the AFM log profile, specify the name in fullpath
format, e.g. C(/Foobar/log-publisher); otherwise the partition for the log publisher
is inferred from the C(partition) module parameter.
type: str
sip_publisher:
description:
- Specifies the name of the log publisher used for SIP DoS events.
- To specify the log_publisher on a different partition from the AFM log profile, specify the name in fullpath
format, e.g. C(/Foobar/log-publisher); otherwise the partition for the log publisher
is inferred from the C(partition) module parameter.
type: str
network_publisher:
description:
- Specifies the name of the log publisher used for DoS Network events.
- To specify the log_publisher on a different partition from the AFM log profile, specify the name in fullpath
format, e.g. C(/Foobar/log-publisher); otherwise the partition for the log publisher
is inferred from the C(partition) module parameter.
type: str
type: dict
ip_intelligence:
description:
- Configures IP Intelligence related settings of the log profile.
suboptions:
log_publisher:
description:
- Specifies the name of the log publisher used for IP Intelligence events.
- To specify the log_publisher on a different partition from the AFM log profile, specify the name in fullpath
format, e.g. C(/Foobar/log-publisher); otherwise the partition for the log publisher
is inferred from the C(partition) module parameter.
type: str
rate_limit:
description:
- Defines a rate limit for all combined IP intelligence log messages per second. Beyond this rate limit,
log messages are not logged until the threshold drops below the specified rate.
- To specify an indefinite rate, use the value C(indefinite).
- If specifying a numeric rate, the value must be between C(1) and C(4294967295).
type: str
log_rtbh:
description:
- Specifies, when C(yes), that remotely triggered blackholing events are logged.
type: bool
log_shun:
description:
- Specifies, when C(yes), that IP Intelligence shun list events are logged.
- This option can only be set on the C(global-network) built-in profile.
type: bool
log_translation_fields:
description:
- This option is used to enable or disable the logging of translated (i.e. server side) fields in IP
Intelligence log messages.
- Translated fields include (but are not limited to) source address/port, destination address/port,
IP protocol, route domain, and VLAN.
type: bool
type: dict
port_misuse:
description:
- Port Misuse log configuration.
suboptions:
log_publisher:
description:
- Specifies the name of the log publisher used for Port Misuse events.
- To specify the log_publisher on a different partition from the AFM log profile, specify the name in fullpath
format, e.g. C(/Foobar/log-publisher); otherwise the partition for the log publisher
is inferred from the C(partition) module parameter.
type: str
rate_limit:
description:
- Defines a rate limit for all combined port misuse log messages per second. Beyond this rate limit,
log messages are not logged until the threshold drops below the specified rate.
- To specify an indefinite rate, use the value C(indefinite).
- If specifying a numeric rate, the value must be between C(1) and C(4294967295).
type: str
type: dict
partition:
description:
- Device partition to create log profile on.
- Parameter also used when specifying names for log publishers, unless log publisher names are in fullpath format.
type: str
default: Common
state:
description:
- When C(state) is C(present), ensures the resource exists.
- When C(state) is C(absent), ensures that resource is removed. Attempts to remove built-in system profiles are
ignored and no change is returned.
type: str
choices:
- present
- absent
default: present
extends_documentation_fragment: f5
author:
- Wojciech Wypior (@wojtek0806)
'''
EXAMPLES = r'''
- name: Create a basic log profile with port misuse
bigip_firewall_log_profile:
name: barbaz
port_misuse:
rate_limit: 30000
log_publisher: local-db-pub
provider:
password: secret
server: lb.mydomain.com
user: admin
delegate_to: localhost
- name: Change ip_intelligence settings, publisher on different partition, remove port misuse
bigip_firewall_log_profile:
name: barbaz
ip_intelligence:
rate_limit: 400000
log_translation_fields: yes
log_rtbh: yes
log_publisher: "/foobar/non-local-db"
port_misuse:
log_publisher: ""
provider:
password: secret
server: lb.mydomain.com
user: admin
delegate_to: localhost
- name: Create a log profile with dos protection, different partition
bigip_firewall_log_profile:
name: foobar
partition: foobar
dos_protection:
dns_publisher: "/Common/local-db-pub"
sip_publisher: "non-local-db"
network_publisher: "/Common/local-db-pub"
provider:
password: secret
server: lb.mydomain.com
user: admin
delegate_to: localhost
- name: Remove log profile
bigip_firewall_log_profile:
name: barbaz
partition: Common
state: absent
provider:
password: secret
server: lb.mydomain.com
user: admin
delegate_to: localhost
'''
RETURN = r'''
description:
description: New description of the AFM log profile.
returned: changed
type: str
sample: This is my description
dos_protection:
description: Log publishers used in DoS related settings of the log profile.
type: complex
returned: changed
contains:
dns_publisher:
description: The name of the log publisher used for DNS DoS events.
returned: changed
type: str
sample: "/Common/local-db-publisher"
sip_publisher:
description: The name of the log publisher used for SIP DoS events.
returned: changed
type: str
sample: "/Common/local-db-publisher"
network_publisher:
description: The name of the log publisher used for DoS Network events.
returned: changed
type: str
sample: "/Common/local-db-publisher"
sample: hash/dictionary of values
ip_intelligence:
description: IP Intelligence related settings of the log profile.
type: complex
returned: changed
contains:
log_publisher:
description: The name of the log publisher used for IP Intelligence events.
returned: changed
type: str
sample: "/Common/local-db-publisher"
rate_limit:
description: The rate limit for all combined IP intelligence log messages per second.
returned: changed
type: str
sample: "indefinite"
log_rtbh:
description: Logging of remotely triggered blackholing events.
returned: changed
type: bool
sample: yes
log_shun:
description: Logging of IP Intelligence shun list events.
returned: changed
type: bool
sample: no
log_translation_fields:
description: Logging of translated fields in IP Intelligence log messages.
returned: changed
type: bool
sample: no
sample: hash/dictionary of values
port_misuse:
description: Port Misuse related settings of the log profile.
type: complex
returned: changed
contains:
log_publisher:
description: The name of the log publisher used for Port Misuse events.
returned: changed
type: str
sample: "/Common/local-db-publisher"
rate_limit:
description: The rate limit for all combined Port Misuse log messages per second.
returned: changed
type: str
sample: "indefinite"
sample: hash/dictionary of values
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.basic import env_fallback
try:
from library.module_utils.network.f5.bigip import F5RestClient
from library.module_utils.network.f5.common import F5ModuleError
from library.module_utils.network.f5.common import AnsibleF5Parameters
from library.module_utils.network.f5.common import fq_name
from library.module_utils.network.f5.common import transform_name
from library.module_utils.network.f5.common import f5_argument_spec
from library.module_utils.network.f5.common import flatten_boolean
from library.module_utils.network.f5.compare import compare_dictionary
except ImportError:
from ansible.module_utils.network.f5.bigip import F5RestClient
from ansible.module_utils.network.f5.common import F5ModuleError
from ansible.module_utils.network.f5.common import AnsibleF5Parameters
from ansible.module_utils.network.f5.common import fq_name
from ansible.module_utils.network.f5.common import transform_name
from ansible.module_utils.network.f5.common import f5_argument_spec
from ansible.module_utils.network.f5.common import flatten_boolean
from ansible.module_utils.network.f5.compare import compare_dictionary
class Parameters(AnsibleF5Parameters):
api_map = {
'ipIntelligence': 'ip_intelligence',
'portMisuse': 'port_misuse',
'protocolDnsDosPublisher': 'dns_publisher',
'protocolSipDosPublisher': 'sip_publisher',
'dosNetworkPublisher': 'network_publisher',
}
api_attributes = [
'description',
'ipIntelligence',
'portMisuse',
'dosNetworkPublisher',
'protocolDnsDosPublisher',
'protocolSipDosPublisher',
]
returnables = [
'ip_intelligence',
'dns_publisher',
'sip_publisher',
'network_publisher',
'port_misuse',
'description',
'ip_log_publisher',
'ip_rate_limit',
'ip_log_rtbh',
'ip_log_shun',
'ip_log_translation_fields',
'port_rate_limit',
'port_log_publisher',
]
updatables = [
'dns_publisher',
'sip_publisher',
'network_publisher',
'description',
'ip_log_publisher',
'ip_rate_limit',
'ip_log_rtbh',
'ip_log_shun',
'ip_log_translation_fields',
'port_rate_limit',
'port_log_publisher',
]
class ApiParameters(Parameters):
@property
def ip_log_publisher(self):
result = self._values['ip_intelligence'].get('logPublisher', None)
return result
@property
def ip_rate_limit(self):
return self._values['ip_intelligence']['aggregateRate']
@property
def port_rate_limit(self):
return self._values['port_misuse']['aggregateRate']
@property
def port_log_publisher(self):
result = self._values['port_misuse'].get('logPublisher', None)
return result
@property
def ip_log_rtbh(self):
return self._values['ip_intelligence']['logRtbh']
@property
def ip_log_shun(self):
if self._values['name'] != 'global-network':
return None
return self._values['ip_intelligence']['logShun']
@property
def ip_log_translation_fields(self):
return self._values['ip_intelligence']['logTranslationFields']
class ModuleParameters(Parameters):
def _transform_log_publisher(self, log_publisher):
if log_publisher is None:
return None
if log_publisher in ['', 'none']:
return {}
return fq_name(self.partition, log_publisher)
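# Behaviour of _transform_log_publisher above, illustrated with assumed inputs
# (a sketch added for clarity, not part of the original module):
#   None          -> None  (option omitted, nothing is sent to the API)
#   '' or 'none'  -> {}    (explicitly clears the publisher on the device)
#   'local-db'    -> fq_name(partition, 'local-db'), e.g. '/Common/local-db'
#   '/Foo/pub'    -> '/Foo/pub' (a fullpath name is kept as given)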
def _validate_rate_limit(self, rate_limit):
if rate_limit is None:
return None
if rate_limit == 'indefinite':
return 4294967295
if 0 <= int(rate_limit) <= 4294967295:
return int(rate_limit)
raise F5ModuleError(
"Valid 'maximum_age' must be in range 0 - 4294967295, or 'indefinite'."
)
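# Illustration of the rate-limit validation above (assumed example values,
# added for clarity):
#   _validate_rate_limit(None)          -> None
#   _validate_rate_limit('indefinite')  -> 4294967295
#   _validate_rate_limit('30000')       -> 30000
#   _validate_rate_limit('9999999999')  -> raises F5ModuleError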
@property
def ip_log_rtbh(self):
if self._values['ip_intelligence'] is None:
return None
result = flatten_boolean(self._values['ip_intelligence']['log_rtbh'])
if result == 'yes':
return 'enabled'
if result == 'no':
return 'disabled'
return result
@property
def ip_log_shun(self):
if self._values['ip_intelligence'] is None:
return None
if 'global-network' not in self._values['name']:
return None
result = flatten_boolean(self._values['ip_intelligence']['log_shun'])
if result == 'yes':
return 'enabled'
if result == 'no':
return 'disabled'
return result
@property
def ip_log_translation_fields(self):
if self._values['ip_intelligence'] is None:
return None
result = flatten_boolean(self._values['ip_intelligence']['log_translation_fields'])
if result == 'yes':
return 'enabled'
if result == 'no':
return 'disabled'
return result
@property
def ip_log_publisher(self):
if self._values['ip_intelligence'] is None:
return None
result = self._transform_log_publisher(self._values['ip_intelligence']['log_publisher'])
return result
@property
def ip_rate_limit(self):
if self._values['ip_intelligence'] is None:
return None
return self._validate_rate_limit(self._values['ip_intelligence']['rate_limit'])
@property
def port_rate_limit(self):
if self._values['port_misuse'] is None:
return None
return self._validate_rate_limit(self._values['port_misuse']['rate_limit'])
@property
def port_log_publisher(self):
if self._values['port_misuse'] is None:
return None
result = self._transform_log_publisher(self._values['port_misuse']['log_publisher'])
return result
@property
def dns_publisher(self):
if self._values['dos_protection'] is None:
return None
result = self._transform_log_publisher(self._values['dos_protection']['dns_publisher'])
return result
@property
def sip_publisher(self):
if self._values['dos_protection'] is None:
return None
result = self._transform_log_publisher(self._values['dos_protection']['sip_publisher'])
return result
@property
def network_publisher(self):
if self._values['dos_protection'] is None:
return None
result = self._transform_log_publisher(self._values['dos_protection']['network_publisher'])
return result
class Changes(Parameters):
def to_return(self):
result = {}
try:
for returnable in self.returnables:
result[returnable] = getattr(self, returnable)
result = self._filter_params(result)
except Exception:
pass
return result
class UsableChanges(Changes):
@property
def ip_intelligence(self):
to_filter = dict(
logPublisher=self._values['ip_log_publisher'],
aggregateRate=self._values['ip_rate_limit'],
logRtbh=self._values['ip_log_rtbh'],
logShun=self._values['ip_log_shun'],
logTranslationFields=self._values['ip_log_translation_fields']
)
result = self._filter_params(to_filter)
if result:
return result
@property
def port_misuse(self):
to_filter = dict(
logPublisher=self._values['port_log_publisher'],
aggregateRate=self._values['port_rate_limit']
)
result = self._filter_params(to_filter)
if result:
return result
class ReportableChanges(Changes):
returnables = [
'ip_intelligence',
'port_misuse',
'description',
'dos_protection',
]
def _change_rate_limit_value(self, value):
if value == 4294967295:
return 'indefinite'
else:
return value
@property
def ip_log_rtbh(self):
result = flatten_boolean(self._values['ip_log_rtbh'])
return result
@property
def ip_log_shun(self):
result = flatten_boolean(self._values['ip_log_shun'])
return result
@property
def ip_log_translation_fields(self):
result = flatten_boolean(self._values['ip_log_translation_fields'])
return result
@property
def ip_intelligence(self):
if self._values['ip_intelligence'] is None:
return None
to_filter = dict(
log_publisher=self._values['ip_log_publisher'],
rate_limit=self._change_rate_limit_value(self._values['ip_rate_limit']),
log_rtbh=self.ip_log_rtbh,
log_shun=self.ip_log_shun,
log_translation_fields=self.ip_log_translation_fields
)
result = self._filter_params(to_filter)
if result:
return result
@property
def port_misuse(self):
if self._values['port_misuse'] is None:
return None
to_filter = dict(
log_publisher=self._values['port_log_publisher'],
rate_limit=self._change_rate_limit_value(self._values['port_rate_limit']),
)
result = self._filter_params(to_filter)
if result:
return result
@property
def dos_protection(self):
to_filter = dict(
dns_publisher=self._values['dns_publisher'],
sip_publisher=self._values['sip_publisher'],
network_publisher=self._values['network_publisher'],
)
result = self._filter_params(to_filter)
return result
class Difference(object):
def __init__(self, want, have=None):
self.want = want
self.have = have
def compare(self, param):
try:
result = getattr(self, param)
return result
except AttributeError:
return self.__default(param)
def __default(self, param):
attr1 = getattr(self.want, param)
try:
attr2 = getattr(self.have, param)
if attr1 != attr2:
return attr1
except AttributeError:
return attr1
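# The generic comparison above only reports a value when it differs from what
# is currently on the device (or when the device has no such attribute). The
# publisher-specific properties below use compare_dictionary() instead, since
# a cleared publisher is represented as an empty dict rather than a scalar.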
@property
def ip_log_publisher(self):
result = compare_dictionary(self.want.ip_log_publisher, self.have.ip_log_publisher)
return result
@property
def port_log_publisher(self):
result = compare_dictionary(self.want.port_log_publisher, self.have.port_log_publisher)
return result
@property
def dns_publisher(self):
result = compare_dictionary(self.want.dns_publisher, self.have.dns_publisher)
return result
@property
def sip_publisher(self):
result = compare_dictionary(self.want.sip_publisher, self.have.sip_publisher)
return result
@property
def network_publisher(self):
result = compare_dictionary(self.want.network_publisher, self.have.network_publisher)
return result
class ModuleManager(object):
def __init__(self, *args, **kwargs):
self.module = kwargs.get('module', None)
self.client = F5RestClient(**self.module.params)
self.want = ModuleParameters(params=self.module.params)
self.have = ApiParameters()
self.changes = UsableChanges()
def _set_changed_options(self):
changed = {}
for key in Parameters.returnables:
if getattr(self.want, key) is not None:
changed[key] = getattr(self.want, key)
if changed:
self.changes = UsableChanges(params=changed)
def _update_changed_options(self):
diff = Difference(self.want, self.have)
updatables = Parameters.updatables
changed = dict()
for k in updatables:
change = diff.compare(k)
if change is None:
continue
else:
if isinstance(change, dict):
changed.update(change)
else:
changed[k] = change
if changed:
self.changes = UsableChanges(params=changed)
return True
return False
def _announce_deprecations(self, result):
warnings = result.pop('__warnings', [])
for warning in warnings:
self.client.module.deprecate(
msg=warning['msg'],
version=warning['version']
)
def exec_module(self):
changed = False
result = dict()
state = self.want.state
if state == "present":
changed = self.present()
elif state == "absent":
changed = self.absent()
reportable = ReportableChanges(params=self.changes.to_return())
changes = reportable.to_return()
result.update(**changes)
result.update(dict(changed=changed))
self._announce_deprecations(result)
return result
def present(self):
if self.exists():
return self.update()
else:
return self.create()
def absent(self):
# Built-in profiles cannot be removed
built_ins = [
'Log all requests', 'Log illegal requests',
'global-network', 'local-dos'
]
if self.want.name in built_ins:
return False
if self.exists():
return self.remove()
return False
def should_update(self):
result = self._update_changed_options()
if result:
return True
return False
def update(self):
self.have = self.read_current_from_device()
if not self.should_update():
return False
if self.module.check_mode:
return True
self.update_on_device()
return True
def remove(self):
if self.module.check_mode:
return True
self.remove_from_device()
if self.exists():
raise F5ModuleError("Failed to delete the resource.")
return True
def create(self):
self._set_changed_options()
if self.module.check_mode:
return True
self.create_on_device()
return True
def exists(self):
uri = "https://{0}:{1}/mgmt/tm/security/log/profile/{2}".format(
self.client.provider['server'],
self.client.provider['server_port'],
transform_name(self.want.partition, self.want.name)
)
resp = self.client.api.get(uri)
try:
response = resp.json()
except ValueError:
return False
if resp.status == 404 or 'code' in response and response['code'] == 404:
return False
return True
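# Note on exists(): an HTTP 404 status, or a JSON body whose 'code' field is
# 404, is treated as "profile not present"; any other well-formed response
# means the log profile already exists on the device.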
def create_on_device(self):
params = self.changes.api_params()
params['name'] = self.want.name
params['partition'] = self.want.partition
uri = "https://{0}:{1}/mgmt/tm/security/log/profile/".format(
self.client.provider['server'],
self.client.provider['server_port'],
)
resp = self.client.api.post(uri, json=params)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] in [400, 404, 409]:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
return True
def update_on_device(self):
params = self.changes.api_params()
uri = "https://{0}:{1}/mgmt/tm/security/log/profile/{2}".format(
self.client.provider['server'],
self.client.provider['server_port'],
transform_name(self.want.partition, self.want.name)
)
resp = self.client.api.patch(uri, json=params)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] in [400, 404, 409]:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
def remove_from_device(self):
uri = "https://{0}:{1}/mgmt/tm/security/log/profile/{2}".format(
self.client.provider['server'],
self.client.provider['server_port'],
transform_name(self.want.partition, self.want.name)
)
response = self.client.api.delete(uri)
if response.status == 200:
return True
raise F5ModuleError(response.content)
def read_current_from_device(self):
uri = "https://{0}:{1}/mgmt/tm/security/log/profile/{2}".format(
self.client.provider['server'],
self.client.provider['server_port'],
transform_name(self.want.partition, self.want.name)
)
resp = self.client.api.get(uri)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] in [400, 404, 409]:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
return ApiParameters(params=response)
class ArgumentSpec(object):
def __init__(self):
self.supports_check_mode = True
argument_spec = dict(
name=dict(
required=True
),
description=dict(),
dos_protection=dict(
type='dict',
options=dict(
dns_publisher=dict(),
sip_publisher=dict(),
network_publisher=dict()
)
),
ip_intelligence=dict(
type='dict',
options=dict(
log_publisher=dict(),
log_translation_fields=dict(type='bool'),
rate_limit=dict(),
log_rtbh=dict(type='bool'),
log_shun=dict(type='bool')
)
),
port_misuse=dict(
type='dict',
options=dict(
log_publisher=dict(),
rate_limit=dict()
)
),
partition=dict(
default='Common',
fallback=(env_fallback, ['F5_PARTITION'])
),
state=dict(
default='present',
choices=['present', 'absent']
)
)
self.argument_spec = {}
self.argument_spec.update(f5_argument_spec)
self.argument_spec.update(argument_spec)
def main():
spec = ArgumentSpec()
module = AnsibleModule(
argument_spec=spec.argument_spec,
supports_check_mode=spec.supports_check_mode,
)
try:
mm = ModuleManager(module=module)
results = mm.exec_module()
module.exit_json(**results)
except F5ModuleError as ex:
module.fail_json(msg=str(ex))
if __name__ == '__main__':
main()
| gpl-3.0 |
luogangyi/bcec-nova | nova/compute/task_states.py | 96 | 3443 | # Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Possible task states for instances.
Compute instance task states represent what is happening to the instance at the
current moment. These tasks can be generic, such as 'spawning', or specific,
such as 'block_device_mapping'. These task states allow for a better view into
what an instance is doing and should be displayed to users/administrators as
necessary.
"""
# possible task states during create()
SCHEDULING = 'scheduling'
BLOCK_DEVICE_MAPPING = 'block_device_mapping'
NETWORKING = 'networking'
SPAWNING = 'spawning'
# possible task states during snapshot()
IMAGE_SNAPSHOT = 'image_snapshot'
IMAGE_SNAPSHOT_PENDING = 'image_snapshot_pending'
IMAGE_PENDING_UPLOAD = 'image_pending_upload'
IMAGE_UPLOADING = 'image_uploading'
# possible task states during backup()
IMAGE_BACKUP = 'image_backup'
# possible task states during set_admin_password()
UPDATING_PASSWORD = 'updating_password'
# possible task states during resize()
RESIZE_PREP = 'resize_prep'
RESIZE_MIGRATING = 'resize_migrating'
RESIZE_MIGRATED = 'resize_migrated'
RESIZE_FINISH = 'resize_finish'
# possible task states during revert_resize()
RESIZE_REVERTING = 'resize_reverting'
# possible task states during confirm_resize()
RESIZE_CONFIRMING = 'resize_confirming'
# possible task states during reboot()
REBOOTING = 'rebooting'
REBOOT_PENDING = 'reboot_pending'
REBOOT_STARTED = 'reboot_started'
REBOOTING_HARD = 'rebooting_hard'
REBOOT_PENDING_HARD = 'reboot_pending_hard'
REBOOT_STARTED_HARD = 'reboot_started_hard'
# possible task states during pause()
PAUSING = 'pausing'
# possible task states during unpause()
UNPAUSING = 'unpausing'
# possible task states during suspend()
SUSPENDING = 'suspending'
# possible task states during resume()
RESUMING = 'resuming'
# possible task states during power_off()
POWERING_OFF = 'powering-off'
# possible task states during power_on()
POWERING_ON = 'powering-on'
# possible task states during rescue()
RESCUING = 'rescuing'
# possible task states during unrescue()
UNRESCUING = 'unrescuing'
# possible task states during rebuild()
REBUILDING = 'rebuilding'
REBUILD_BLOCK_DEVICE_MAPPING = "rebuild_block_device_mapping"
REBUILD_SPAWNING = 'rebuild_spawning'
# possible task states during live_migrate()
MIGRATING = "migrating"
# possible task states during delete()
DELETING = 'deleting'
# possible task states during soft_delete()
SOFT_DELETING = 'soft-deleting'
# possible task states during restore()
RESTORING = 'restoring'
# possible task states during shelve()
SHELVING = 'shelving'
SHELVING_IMAGE_PENDING_UPLOAD = 'shelving_image_pending_upload'
SHELVING_IMAGE_UPLOADING = 'shelving_image_uploading'
# possible task states during shelve_offload()
SHELVING_OFFLOADING = 'shelving_offloading'
# possible task states during unshelve()
UNSHELVING = 'unshelving'
| apache-2.0 |
trianam/tkLayoutTests | TestRouting/test22/conf/xml/longbarrel_cmsIdealGeometryXML_cff.py | 43 | 6122 | import FWCore.ParameterSet.Config as cms
from Geometry.CMSCommonData.cmsIdealGeometryXML_cfi import *
XMLIdealGeometryESSource.geomXMLFiles = cms.vstring(
'SLHCUpgradeSimulations/Geometry/data/longbarrel/materials.xml',
'Geometry/CMSCommonData/data/rotations.xml',
'Geometry/CMSCommonData/data/normal/cmsextent.xml',
'Geometry/CMSCommonData/data/cms.xml',
'Geometry/CMSCommonData/data/cmsMother.xml',
'Geometry/CMSCommonData/data/cmsTracker.xml',
'Geometry/CMSCommonData/data/caloBase.xml',
'Geometry/CMSCommonData/data/cmsCalo.xml',
'Geometry/CMSCommonData/data/muonBase.xml',
'Geometry/CMSCommonData/data/cmsMuon.xml',
'Geometry/CMSCommonData/data/mgnt.xml',
'Geometry/CMSCommonData/data/beampipe.xml',
'Geometry/CMSCommonData/data/cmsBeam.xml',
'Geometry/CMSCommonData/data/muonMB.xml',
'Geometry/CMSCommonData/data/muonMagnet.xml',
'Geometry/TrackerCommonData/data/pixfwdMaterials.xml',
'Geometry/TrackerCommonData/data/pixfwdCommon.xml',
'Geometry/TrackerCommonData/data/pixfwdPlaq.xml',
'Geometry/TrackerCommonData/data/pixfwdPlaq1x2.xml',
'Geometry/TrackerCommonData/data/pixfwdPlaq1x5.xml',
'Geometry/TrackerCommonData/data/pixfwdPlaq2x3.xml',
'Geometry/TrackerCommonData/data/pixfwdPlaq2x4.xml',
'Geometry/TrackerCommonData/data/pixfwdPlaq2x5.xml',
'Geometry/TrackerCommonData/data/pixfwdPanelBase.xml',
'Geometry/TrackerCommonData/data/pixfwdPanel.xml',
'Geometry/TrackerCommonData/data/pixfwdBlade.xml',
'Geometry/TrackerCommonData/data/pixfwdNipple.xml',
'Geometry/TrackerCommonData/data/pixfwdDisk.xml',
'Geometry/TrackerCommonData/data/pixfwdCylinder.xml',
'SLHCUpgradeSimulations/Geometry/data/longbarrel/pixfwd.xml',
'Geometry/TrackerCommonData/data/pixbarmaterial.xml',
'Geometry/TrackerCommonData/data/pixbarladder.xml',
'Geometry/TrackerCommonData/data/pixbarladderfull.xml',
'Geometry/TrackerCommonData/data/pixbarladderhalf.xml',
'Geometry/TrackerCommonData/data/pixbarlayer.xml',
'SLHCUpgradeSimulations/Geometry/data/longbarrel/pixbarlayer0.xml',
'SLHCUpgradeSimulations/Geometry/data/longbarrel/pixbarlayer1.xml',
'SLHCUpgradeSimulations/Geometry/data/longbarrel/pixbarlayer2.xml',
'SLHCUpgradeSimulations/Geometry/data/longbarrel/pixbarlayer3.xml',
'SLHCUpgradeSimulations/Geometry/data/longbarrel/pixbar.xml',
'SLHCUpgradeSimulations/Geometry/data/longbarrel/newtracker.xml',
'Geometry/TrackerCommonData/data/trackermaterial.xml',
'Geometry/TrackerCommonData/data/tracker.xml',
'Geometry/TrackerCommonData/data/trackerpixbar.xml',
'Geometry/TrackerCommonData/data/trackerpixfwd.xml',
'Geometry/TrackerCommonData/data/trackerother.xml',
'Geometry/EcalCommonData/data/eregalgo.xml',
'Geometry/EcalCommonData/data/ebalgo.xml',
'Geometry/EcalCommonData/data/ebcon.xml',
'Geometry/EcalCommonData/data/ebrot.xml',
'Geometry/EcalCommonData/data/eecon.xml',
'Geometry/EcalCommonData/data/eefixed.xml',
'Geometry/EcalCommonData/data/eehier.xml',
'Geometry/EcalCommonData/data/eealgo.xml',
'Geometry/EcalCommonData/data/escon.xml',
'Geometry/EcalCommonData/data/esalgo.xml',
'Geometry/EcalCommonData/data/eeF.xml',
'Geometry/EcalCommonData/data/eeB.xml',
'Geometry/HcalCommonData/data/hcalrotations.xml',
'Geometry/HcalCommonData/data/hcalalgo.xml',
'Geometry/HcalCommonData/data/hcalbarrelalgo.xml',
'Geometry/HcalCommonData/data/hcalendcapalgo.xml',
'Geometry/HcalCommonData/data/hcalouteralgo.xml',
'Geometry/HcalCommonData/data/hcalforwardalgo.xml',
'Geometry/HcalCommonData/data/hcalforwardfibre.xml',
'Geometry/HcalCommonData/data/hcalforwardmaterial.xml',
'Geometry/MuonCommonData/data/mbCommon.xml',
'Geometry/MuonCommonData/data/mb1.xml',
'Geometry/MuonCommonData/data/mb2.xml',
'Geometry/MuonCommonData/data/mb3.xml',
'Geometry/MuonCommonData/data/mb4.xml',
'Geometry/MuonCommonData/data/muonYoke.xml',
'Geometry/MuonCommonData/data/mf.xml',
'Geometry/ForwardCommonData/data/forward.xml',
'Geometry/ForwardCommonData/data/forwardshield.xml',
'Geometry/ForwardCommonData/data/brmrotations.xml',
'Geometry/ForwardCommonData/data/brm.xml',
'Geometry/ForwardCommonData/data/totemMaterials.xml',
'Geometry/ForwardCommonData/data/totemRotations.xml',
'Geometry/ForwardCommonData/data/totemt1.xml',
'Geometry/ForwardCommonData/data/totemt2.xml',
'Geometry/ForwardCommonData/data/ionpump.xml',
'Geometry/MuonCommonData/data/muonNumbering.xml',
'SLHCUpgradeSimulations/Geometry/data/longbarrel/trackerStructureTopology.xml',
'SLHCUpgradeSimulations/Geometry/data/longbarrel/trackersens.xml',
'SLHCUpgradeSimulations/Geometry/data/longbarrel/trackerRecoMaterial.xml',
'Geometry/EcalSimData/data/ecalsens.xml',
'Geometry/HcalCommonData/data/hcalsens.xml',
'Geometry/HcalSimData/data/CaloUtil.xml',
'Geometry/MuonSimData/data/muonSens.xml',
'Geometry/DTGeometryBuilder/data/dtSpecsFilter.xml',
'Geometry/CSCGeometryBuilder/data/cscSpecsFilter.xml',
'Geometry/CSCGeometryBuilder/data/cscSpecs.xml',
'Geometry/RPCGeometryBuilder/data/RPCSpecs.xml',
'Geometry/ForwardCommonData/data/brmsens.xml',
'Geometry/HcalSimData/data/HcalProdCuts.xml',
'Geometry/EcalSimData/data/EcalProdCuts.xml',
'SLHCUpgradeSimulations/Geometry/data/longbarrel/trackerProdCuts.xml',
'Geometry/TrackerSimData/data/trackerProdCutsBEAM.xml',
'Geometry/MuonSimData/data/muonProdCuts.xml',
'Geometry/CMSCommonData/data/FieldParameters.xml'
)
| gpl-2.0 |
idem2lyon/persomov | couchpotato/core/media/movie/providers/trailer/youtube_dl/extractor/groupon.py | 132 | 1903 | from __future__ import unicode_literals
from .common import InfoExtractor
class GrouponIE(InfoExtractor):
_VALID_URL = r'https?://www\.groupon\.com/deals/(?P<id>[^?#]+)'
_TEST = {
'url': 'https://www.groupon.com/deals/bikram-yoga-huntington-beach-2#ooid=tubGNycTo_9Uxg82uESj4i61EYX8nyuf',
'info_dict': {
'id': 'bikram-yoga-huntington-beach-2',
'title': '$49 for 10 Yoga Classes or One Month of Unlimited Classes at Bikram Yoga Huntington Beach ($180 Value)',
'description': 'Studio kept at 105 degrees and 40% humidity with anti-microbial and anti-slip Flotex flooring; certified instructors',
},
'playlist': [{
'info_dict': {
'id': 'tubGNycTo_9Uxg82uESj4i61EYX8nyuf',
'ext': 'mp4',
'title': 'Bikram Yoga Huntington Beach | Orange County',
},
}],
'params': {
'skip_download': 'HLS',
}
}
def _real_extract(self, url):
playlist_id = self._match_id(url)
webpage = self._download_webpage(url, playlist_id)
payload = self._parse_json(self._search_regex(
r'var\s+payload\s*=\s*(.*?);\n', webpage, 'payload'), playlist_id)
videos = payload['carousel'].get('dealVideos', [])
entries = []
for v in videos:
if v.get('provider') != 'OOYALA':
self.report_warning(
'%s: Unsupported video provider %s, skipping video' %
(playlist_id, v.get('provider')))
continue
entries.append(self.url_result('ooyala:%s' % v['media']))
return {
'_type': 'playlist',
'id': playlist_id,
'entries': entries,
'title': self._og_search_title(webpage),
'description': self._og_search_description(webpage),
}
| gpl-3.0 |
xapp-le/kernel | tools/perf/python/twatch.py | 7370 | 1334 | #! /usr/bin/python
# -*- python -*-
# -*- coding: utf-8 -*-
# twatch - Experimental use of the perf python interface
# Copyright (C) 2011 Arnaldo Carvalho de Melo <[email protected]>
#
# This application is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; version 2.
#
# This application is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
import perf
def main():
cpus = perf.cpu_map()
threads = perf.thread_map()
evsel = perf.evsel(task = 1, comm = 1, mmap = 0,
wakeup_events = 1, watermark = 1,
sample_id_all = 1,
sample_type = perf.SAMPLE_PERIOD | perf.SAMPLE_TID | perf.SAMPLE_CPU | perf.SAMPLE_TID)
evsel.open(cpus = cpus, threads = threads);
evlist = perf.evlist(cpus, threads)
evlist.add(evsel)
evlist.mmap()
while True:
evlist.poll(timeout = -1)
for cpu in cpus:
event = evlist.read_on_cpu(cpu)
if not event:
continue
print "cpu: %2d, pid: %4d, tid: %4d" % (event.sample_cpu,
event.sample_pid,
event.sample_tid),
print event
if __name__ == '__main__':
main()
| gpl-2.0 |
ropik/chromium | third_party/closure_linter/closure_linter/common/erroraccumulator.py | 264 | 1306 | #!/usr/bin/env python
#
# Copyright 2008 The Closure Linter Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Linter error handler class that accumulates an array of errors."""
__author__ = ('[email protected] (Robert Walker)',
'[email protected] (Andy Perelson)')
from closure_linter.common import errorhandler
class ErrorAccumulator(errorhandler.ErrorHandler):
"""Error handler object that accumulates errors in a list."""
def __init__(self):
self._errors = []
def HandleError(self, error):
"""Append the error to the list.
Args:
error: The error object
"""
self._errors.append(error)
def GetErrors(self):
"""Returns the accumulated errors.
Returns:
A sequence of errors.
"""
return self._errors
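# A hedged usage sketch (not part of the original file): the accumulator is
# handed to a checker in place of a printing error handler, and the errors are
# read back after the run, e.g.:
#
#     accumulator = ErrorAccumulator()
#     # ... run a lint pass that calls accumulator.HandleError(error) ...
#     for error in accumulator.GetErrors():
#         print(error)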
| bsd-3-clause |
walchko/pygecko | retired/old_version/original/tests/test_zmq.py | 1 | 2249 | import numpy as np
from pygecko import ZmqClass as zmq
from pygecko.Messages import Image, Vector, dict_to_class
# from pygecko.lib.Messages import serialize, deserialize
# import simplejson as json
from pygecko import Messages as Msgs
def test_pub_sub():
tcp = ('127.0.0.1', 9000)
pub = zmq.Pub(tcp)
sub = zmq.Sub(['test'], tcp)
# tmsg = {'a': 1, 'b': 2}
tmsg = Vector()
tmsg.set(2, 3, 4)
while True:
pub.pub('test', tmsg)
topic, msg = sub.recv()
if msg:
assert msg == tmsg
assert topic == b'test'
break
def test_pub_sub_msgs():
tcp = ('127.0.0.1', 9001)
pub = zmq.Pub(tcp)
sub = zmq.Sub(['test'], tcp)
msgs = [
Msgs.Vector(),
Msgs.Quaternion(),
Msgs.Array(),
Msgs.IMU(),
Msgs.Dictionary(),
Msgs.Odom(),
Msgs.Joystick(),
Msgs.Twist(),
Msgs.Wrench()
]
for tmsg in msgs:
while True:
print(tmsg)
pub.pub('test', tmsg)
topic, msg = sub.recv()
if msg:
assert msg == tmsg
assert topic == b'test'
break
def test_pub_sub_vector():
tcp = ('127.0.0.1', 9001)
pub = zmq.Pub(tcp)
sub = zmq.Sub(['test'], tcp)
d = {'Class': 'Vector', 'x': 1.0, 'z': 2.0}
tmsg = dict_to_class(d)
for _ in range(10):
pub.pub('test', tmsg)
topic, msg = sub.recv()
if msg:
assert msg == tmsg
assert topic == b'test'
# break
def test_pub_sub_b64():
tcp = ('127.0.0.1', 9002)
pub = zmq.Pub(tcp)
sub = zmq.Sub(['test'], tcp)
im = np.random.rand(100, 100)
tmsg = Image()
tmsg.img = im
# print(tmsg['size'], tmsg['depth'])
while True:
pub.pub('test', tmsg)
topic, msg = sub.recv()
print('topic?', topic)
if msg:
if tmsg.b64:
tmsg.decodeB64()
assert msg.img.shape == tmsg.img.shape
assert msg.img.all() == tmsg.img.all()
assert topic == b'test'
break
# def test_serivce():
#
# ans = {'a': 1, 'b': 2}
#
# class tServer(mp.Process):
# def __init__(self):
# mp.Process.__init__(self)
#
# def run(self):
# tcp = ('127.0.0.1', 9000)
# serv = zmq.ServiceProvider(tcp)
# serv.listen(self.callback)
# return 0
#
# def callback(self, msg):
# return msg
#
# s = tServer()
# s.start()
#
# tcp = ('127.0.0.1', 9000)
# client = zmq.ServiceClient(tcp)
# msg = client.get(ans)
# assert msg == ans
#
# s.terminate()
# s.join()
| mit |
ppwwyyxx/tensorflow | tensorflow/python/distribute/collective_all_reduce_strategy.py | 3 | 23796 | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Class CollectiveAllReduceStrategy implementing DistributionStrategy."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
from tensorflow.core.protobuf import config_pb2
from tensorflow.core.protobuf import rewriter_config_pb2
from tensorflow.core.protobuf import tensorflow_server_pb2
from tensorflow.python.distribute import cross_device_ops as cross_device_ops_lib
from tensorflow.python.distribute import cross_device_utils
from tensorflow.python.distribute import device_util
from tensorflow.python.distribute import distribute_lib
from tensorflow.python.distribute import input_lib
from tensorflow.python.distribute import mirrored_strategy
from tensorflow.python.distribute import multi_worker_util
from tensorflow.python.distribute import numpy_dataset
from tensorflow.python.distribute import reduce_util
from tensorflow.python.distribute import values
from tensorflow.python.distribute.cluster_resolver import SimpleClusterResolver
from tensorflow.python.distribute.cluster_resolver import TFConfigClusterResolver
from tensorflow.python.eager import context
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import collective_ops
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.util.tf_export import tf_export
# TODO(yuefengz): support in-graph replication.
@tf_export("distribute.experimental.MultiWorkerMirroredStrategy", v1=[])
class CollectiveAllReduceStrategy(distribute_lib.Strategy):
"""A distribution strategy for synchronous training on multiple workers.
This strategy implements synchronous distributed training across multiple
workers, each with potentially multiple GPUs. Similar to
`tf.distribute.MirroredStrategy`, it creates copies of all variables in the
model on each device across all workers.
It uses CollectiveOps's implementation of multi-worker all-reduce to
to keep variables in sync. A collective op is a single op in the
TensorFlow graph which can automatically choose an all-reduce algorithm in
the TensorFlow runtime according to hardware, network topology and tensor
sizes.
By default it uses all local GPUs or CPU for single-worker training.
When the 'TF_CONFIG' environment variable is set, it parses cluster_spec,
task_type and task_id from 'TF_CONFIG' and turns into a multi-worker strategy
which mirrors models on GPUs of all machines in a cluster. In the current
implementation, it uses all GPUs in a cluster and it assumes all workers have
the same number of GPUs.
You can also pass a `distribute.cluster_resolver.ClusterResolver` instance
when instantiating the strategy. The task_type, task_id etc. will be parsed
from the resolver instance instead of from the `TF_CONFIG` env var.
It supports both eager mode and graph mode. However, for eager mode, it has to
set up the eager context in its constructor and therefore all ops in eager
mode have to run after the strategy object is created.
"""
# TODO(anjalisridhar): Update our guides with examples showing how we can use
# the cluster_resolver argument.
def __init__(
self,
communication=cross_device_ops_lib.CollectiveCommunication.AUTO,
cluster_resolver=None):
"""Creates the strategy.
Args:
communication: optional Enum of type
`distribute.experimental.CollectiveCommunication`. This provides a way
for the user to override the choice of collective op communication.
Possible values include `AUTO`, `RING`, and `NCCL`.
cluster_resolver: optional `distribute.cluster_resolver.ClusterResolver`
object. The default ClusterResolver that is used is the
TFConfigClusterResolver which is instantiated from the TF_CONFIG env
var.
"""
super(CollectiveAllReduceStrategy, self).__init__(
CollectiveAllReduceExtended(
self,
communication=communication,
cluster_resolver=cluster_resolver))
distribute_lib.distribution_strategy_gauge.get_cell("V2").set(
"MultiWorkerMirroredStrategy")
# pylint: disable=protected-access
distribute_lib.distribution_strategy_replica_gauge.get_cell(
"num_workers").set(self.extended._num_workers)
distribute_lib.distribution_strategy_replica_gauge.get_cell(
"num_replicas_per_worker").set(self.extended._num_gpus_per_worker)
@classmethod
def _from_local_devices(cls, devices):
"""A convenience method to create an obejct with a list of devices."""
obj = cls()
obj.extended._initialize_local(TFConfigClusterResolver(), devices=devices) # pylint: disable=protected-access
return obj
def scope(self): # pylint: disable=useless-super-delegation
"""Returns a context manager selecting this Strategy as current.
Inside a `with strategy.scope():` code block, this thread
will use a variable creator set by `strategy`, and will
enter its "cross-replica context".
In `MultiWorkerMirroredStrategy`, all variables created inside
`strategy.scope()` will be mirrored on all replicas of each worker.
Moreover, it also sets a default device scope so that ops without
specified devices will end up on the correct worker.
Returns:
A context manager to use for creating variables with this strategy.
"""
return super(CollectiveAllReduceStrategy, self).scope()
@tf_export(v1=["distribute.experimental.MultiWorkerMirroredStrategy"]) # pylint: disable=missing-docstring
class CollectiveAllReduceStrategyV1(distribute_lib.StrategyV1):
__doc__ = CollectiveAllReduceStrategy.__doc__
def __init__(
self,
communication=cross_device_ops_lib.CollectiveCommunication.AUTO,
cluster_resolver=None):
"""Initializes the object."""
super(CollectiveAllReduceStrategyV1, self).__init__(
CollectiveAllReduceExtended(
self,
communication=communication,
cluster_resolver=cluster_resolver))
distribute_lib.distribution_strategy_gauge.get_cell("V1").set(
"MultiWorkerMirroredStrategy")
# pylint: disable=protected-access
distribute_lib.distribution_strategy_replica_gauge.get_cell(
"num_workers").set(self.extended._num_workers)
distribute_lib.distribution_strategy_replica_gauge.get_cell(
"num_gpu_per_worker").set(self.extended._num_gpus_per_worker)
class CollectiveAllReduceExtended(mirrored_strategy.MirroredExtended):
"""Implementation of CollectiveAllReduceStrategy."""
def __init__(self,
container_strategy,
communication,
cluster_resolver):
cluster_resolver = cluster_resolver or TFConfigClusterResolver()
distribute_lib.StrategyExtendedV1.__init__(self, container_strategy)
assert isinstance(
communication,
cross_device_ops_lib.CollectiveCommunication)
self._communication = communication
self._initialize_strategy(cluster_resolver)
assert isinstance(self._get_cross_device_ops(),
cross_device_ops_lib.CollectiveAllReduce)
def _initialize_strategy(self, cluster_resolver):
if cluster_resolver.cluster_spec().as_dict():
self._initialize_multi_worker(cluster_resolver)
else:
self._initialize_local(cluster_resolver)
def _initialize_local(self, cluster_resolver, devices=None):
"""Initializes the object for local training."""
self._is_chief = True
self._num_workers = 1
if ops.executing_eagerly_outside_functions():
try:
context.context().configure_collective_ops(
scoped_allocator_enabled_ops=("CollectiveReduce",))
except RuntimeError:
logging.warning("Collective ops is not configured at program startup. "
"Some performance features may not be enabled.")
self._collective_ops_configured = True
# TODO(b/126786766): TFConfigClusterResolver returns wrong number of GPUs in
# some cases.
if isinstance(cluster_resolver, TFConfigClusterResolver):
num_gpus = context.num_gpus()
else:
num_gpus = cluster_resolver.num_accelerators().get("GPU", 0)
if devices:
local_devices = devices
else:
if num_gpus:
local_devices = tuple("/device:GPU:%d" % i for i in range(num_gpus))
else:
local_devices = ("/device:CPU:0",)
self._worker_device = device_util.canonicalize("/device:CPU:0")
self._host_input_device = numpy_dataset.SingleDevice(self._worker_device)
self._collective_keys = cross_device_utils.CollectiveKeys()
# TODO(yuefengz): remove num_gpus_per_worker from CollectiveAllReduce.
self._cross_device_ops = cross_device_ops_lib.CollectiveAllReduce(
num_workers=self._num_workers,
num_gpus_per_worker=num_gpus,
collective_keys=self._collective_keys,
communication=self._communication)
super(CollectiveAllReduceExtended, self)._initialize_single_worker(
local_devices)
self._cluster_spec = None
self._task_type = None
self._task_id = None
# This is a mark to tell whether we are running with standalone client or
# independent worker. Right now with standalone client, strategy object is
# created as local strategy and then turn into multi-worker strategy via
# configure call.
self._local_or_standalone_client_mode = True
# Save the num_gpus_per_worker and rpc_layer for configure method.
self._num_gpus_per_worker = num_gpus
self._rpc_layer = cluster_resolver.rpc_layer
self._warn_nccl_no_gpu()
logging.info("Single-worker MultiWorkerMirroredStrategy with local_devices "
"= %r, communication = %s", local_devices, self._communication)
def _initialize_multi_worker(self, cluster_resolver):
"""Initializes the object for multi-worker training."""
cluster_spec = multi_worker_util.normalize_cluster_spec(
cluster_resolver.cluster_spec())
task_type = cluster_resolver.task_type
task_id = cluster_resolver.task_id
if task_type is None or task_id is None:
raise ValueError("When `cluster_spec` is given, you must also specify "
"`task_type` and `task_id`.")
self._cluster_spec = cluster_spec
self._task_type = task_type
self._task_id = task_id
self._num_workers = multi_worker_util.worker_count(cluster_spec, task_type)
if not self._num_workers:
raise ValueError("No `worker`, `chief` or `evaluator` tasks can be found "
"in `cluster_spec`.")
self._is_chief = multi_worker_util.is_chief(cluster_spec, task_type,
task_id)
self._worker_device = "/job:%s/task:%d" % (task_type, task_id)
self._host_input_device = numpy_dataset.SingleDevice(self._worker_device)
if (ops.executing_eagerly_outside_functions() and
not getattr(self, "_local_or_standalone_client_mode", False)):
context.context().configure_collective_ops(
collective_leader=multi_worker_util.collective_leader(
cluster_spec, task_type, task_id),
scoped_allocator_enabled_ops=("CollectiveReduce",),
device_filters=("/job:%s/task:%d" % (task_type, task_id),))
self._collective_ops_configured = True
# Starting a std server in eager mode and in independent worker mode.
if (context.executing_eagerly() and
not getattr(self, "_std_server_started", False) and
not getattr(self, "_local_or_standalone_client_mode", False)):
# Checking _local_or_standalone_client_mode as well because we should not
# create the std server in standalone client mode.
config_proto = config_pb2.ConfigProto()
config_proto = self._update_config_proto(config_proto)
if hasattr(cluster_resolver, "port"):
port = cluster_resolver.port
else:
port = 0
server_def = tensorflow_server_pb2.ServerDef(
cluster=cluster_spec.as_cluster_def(),
default_session_config=config_proto,
job_name=task_type,
task_index=task_id,
protocol=cluster_resolver.rpc_layer or "grpc",
port=port)
context.context().enable_collective_ops(server_def)
self._std_server_started = True
# The `ensure_initialized` is needed before calling
# `context.context().devices()`.
context.context().ensure_initialized()
logging.info(
"Enabled multi-worker collective ops with available devices: %r",
context.context().devices())
# TODO(yuefengz): The `num_gpus` is only for this particular task. It
# assumes all workers have the same number of GPUs. We should remove this
# assumption by querying all tasks for their numbers of GPUs.
# TODO(b/126786766): TFConfigClusterResolver returns wrong number of GPUs in
# some cases.
if isinstance(cluster_resolver, TFConfigClusterResolver):
num_gpus = context.num_gpus()
else:
num_gpus = cluster_resolver.num_accelerators().get("GPU", 0)
if num_gpus:
local_devices = tuple("%s/device:GPU:%d" % (self._worker_device, i)
for i in range(num_gpus))
else:
local_devices = (self._worker_device,)
self._collective_keys = cross_device_utils.CollectiveKeys()
self._cross_device_ops = cross_device_ops_lib.CollectiveAllReduce(
num_workers=self._num_workers,
num_gpus_per_worker=num_gpus,
collective_keys=self._collective_keys,
communication=self._communication)
super(CollectiveAllReduceExtended, self)._initialize_single_worker(
local_devices)
self._input_workers = input_lib.InputWorkers(
self._device_map, [(self._worker_device, self.worker_devices)])
# Add a default device so that ops without specified devices will not end up
# on other workers.
self._default_device = "/job:%s/task:%d" % (task_type, task_id)
# Save the num_gpus_per_worker and rpc_layer for configure method.
self._num_gpus_per_worker = num_gpus
self._rpc_layer = cluster_resolver.rpc_layer
self._warn_nccl_no_gpu()
logging.info(
"MultiWorkerMirroredStrategy with cluster_spec = %r, task_type = %r, "
"task_id = %r, num_workers = %r, local_devices = %r, "
"communication = %s", cluster_spec.as_dict(), task_type,
task_id, self._num_workers, local_devices,
self._communication)
def _get_variable_creator_initial_value(self,
replica_id,
device,
primary_var,
**kwargs):
if replica_id == 0: # First replica on each worker.
assert device is not None
assert primary_var is None
def initial_value_fn(): # pylint: disable=g-missing-docstring
# Only the first device participates in the broadcast of initial values.
group_key = self._collective_keys.get_group_key([device])
group_size = self._num_workers
collective_instance_key = (
self._collective_keys.get_variable_instance_key())
with ops.device(device):
initial_value = kwargs["initial_value"]
if callable(initial_value):
initial_value = initial_value()
assert not callable(initial_value)
initial_value = ops.convert_to_tensor(
initial_value, dtype=kwargs.get("dtype", None))
if self._num_workers > 1:
if self._is_chief:
bcast_send = collective_ops.broadcast_send(
initial_value, initial_value.shape, initial_value.dtype,
group_size, group_key, collective_instance_key)
with ops.control_dependencies([bcast_send]):
return array_ops.identity(initial_value)
else:
return collective_ops.broadcast_recv(initial_value.shape,
initial_value.dtype,
group_size, group_key,
collective_instance_key)
return initial_value
return initial_value_fn
else:
return super(CollectiveAllReduceExtended,
self)._get_variable_creator_initial_value(
replica_id=replica_id,
device=device,
primary_var=primary_var,
**kwargs)
def _make_input_context(self):
if self._cluster_spec is None:
input_pipeline_id = 0
else:
input_pipeline_id = multi_worker_util.id_in_cluster(
self._cluster_spec, self._task_type, self._task_id)
input_context = distribute_lib.InputContext(
num_input_pipelines=self._num_workers,
input_pipeline_id=input_pipeline_id,
num_replicas_in_sync=self._num_replicas_in_sync)
return input_context
def _experimental_distribute_dataset(self, dataset):
input_context = self._make_input_context()
return input_lib.get_distributed_dataset(
dataset,
self._input_workers,
self._container_strategy(),
split_batch_by=self._num_replicas_in_sync,
input_context=input_context)
def _make_dataset_iterator(self, dataset):
"""Distributes the dataset to each local GPU."""
input_context = self._make_input_context()
return input_lib.DatasetIterator(
dataset,
self._input_workers,
self._container_strategy(),
split_batch_by=self._num_replicas_in_sync,
input_context=input_context)
def _make_input_fn_iterator(
self,
input_fn,
replication_mode=distribute_lib.InputReplicationMode.PER_WORKER):
"""Distributes the input function to each local GPU."""
input_context = self._make_input_context()
return input_lib.InputFunctionIterator(input_fn, self._input_workers,
[input_context],
self._container_strategy())
def _configure(self,
session_config=None,
cluster_spec=None,
task_type=None,
task_id=None):
"""Configures the object.
Args:
session_config: a `tf.compat.v1.ConfigProto`
cluster_spec: a dict, ClusterDef or ClusterSpec object specifying the
cluster configurations.
task_type: the current task type, such as "worker".
task_id: the current task id.
Raises:
ValueError: if `task_type` is not in the `cluster_spec`.
"""
if cluster_spec:
# Use the num_gpus_per_worker recorded in constructor since _configure
# doesn't take num_gpus.
cluster_resolver = SimpleClusterResolver(
cluster_spec=multi_worker_util.normalize_cluster_spec(cluster_spec),
task_type=task_type,
task_id=task_id,
num_accelerators={"GPU": self._num_gpus_per_worker},
rpc_layer=self._rpc_layer)
self._initialize_multi_worker(cluster_resolver)
assert isinstance(self._get_cross_device_ops(),
cross_device_ops_lib.CollectiveAllReduce)
if session_config:
session_config.CopyFrom(self._update_config_proto(session_config))
def _update_config_proto(self, config_proto):
updated_config = copy.deepcopy(config_proto)
# Enable the scoped allocator optimization for CollectiveOps. This
# optimization converts many small all-reduces into fewer larger
# all-reduces.
rewrite_options = updated_config.graph_options.rewrite_options
rewrite_options.scoped_allocator_optimization = (
rewriter_config_pb2.RewriterConfig.ON)
# We turn on ScopedAllocator only for CollectiveReduce op, i.e. enable_op =
# ["CollectiveReduce"]. Since we can't assign to a repeated proto field, we
# clear and then append.
del rewrite_options.scoped_allocator_opts.enable_op[:]
rewrite_options.scoped_allocator_opts.enable_op.append("CollectiveReduce")
if (not ops.executing_eagerly_outside_functions() and
self._communication ==
cross_device_ops_lib.CollectiveCommunication.NCCL):
updated_config.experimental.collective_nccl = True
if not self._cluster_spec:
return updated_config
assert self._task_type
assert self._task_id is not None
# Collective group leader is needed for collective ops to coordinate
# workers.
updated_config.experimental.collective_group_leader = (
multi_worker_util.collective_leader(self._cluster_spec, self._task_type,
self._task_id))
# The device filters prevent communication between workers.
del updated_config.device_filters[:]
updated_config.device_filters.append(
"/job:%s/task:%d" % (self._task_type, self._task_id))
return updated_config
def _reduce_to(self, reduce_op, value, destinations):
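# A Mirrored value is already identical on every replica, so a MEAN
# reduction can simply return it without any cross-device communication.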
if (isinstance(value, values.Mirrored) and
reduce_op == reduce_util.ReduceOp.MEAN):
return value
assert not isinstance(value, values.Mirrored)
if (isinstance(value, values.DistributedValues) and
len(self.worker_devices) == 1):
value = value.values[0]
# When there are multiple workers, we need to reduce across workers using
# collective ops.
if (not isinstance(value, values.DistributedValues) and
self._num_workers == 1):
# This function handles reducing values that are not PerReplica or
# Mirrored values. For example, the same value could be present on all
# replicas in which case `value` would be a single value or value could
# be 0.
return cross_device_ops_lib.reduce_non_distributed_value(
reduce_op, self._device_map, value, destinations)
return self._get_cross_device_ops().reduce(
reduce_op, value, destinations=destinations)
def _warn_nccl_no_gpu(self):
if ((self._communication ==
cross_device_ops_lib.CollectiveCommunication.NCCL) and
self._num_gpus_per_worker == 0):
logging.warning("Enabled NCCL communication but no GPUs detected/"
"specified.")
def _in_multi_worker_mode(self):
"""Whether this strategy indicates working in multi-worker settings."""
return self._num_workers > 1
@property
def experimental_between_graph(self):
return True
@property
def experimental_should_init(self):
return True
@property
def should_checkpoint(self):
return self._is_chief
@property
def should_save_summary(self):
return self._is_chief
@property
def _num_replicas_in_sync(self):
return len(self.worker_devices) * self._num_workers
# TODO(priyag): Delete this once all strategies use global batch size.
@property
def _global_batch_size(self):
"""`make_dataset_iterator` and `make_numpy_iterator` use global batch size.
`make_input_fn_iterator` assumes per-replica batching.
Returns:
Boolean.
"""
return True
| apache-2.0 |
resmo/ansible | lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp.py | 13 | 51281 | #!/usr/bin/python
from __future__ import (absolute_import, division, print_function)
# Copyright 2019 Fortinet, Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
__metaclass__ = type
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'metadata_version': '1.1'}
DOCUMENTATION = '''
---
module: fortios_wireless_controller_wtp
short_description: Configure Wireless Termination Points (WTPs), that is, FortiAPs or APs to be managed by FortiGate in Fortinet's FortiOS and FortiGate.
description:
- This module is able to configure a FortiGate or FortiOS (FOS) device by allowing the
user to set and modify wireless_controller feature and wtp category.
Examples include all parameters and values which need to be adjusted to datasources before usage.
Tested with FOS v6.0.5
version_added: "2.8"
author:
- Miguel Angel Munoz (@mamunozgonzalez)
- Nicolas Thomas (@thomnico)
notes:
- Requires fortiosapi library developed by Fortinet
- Run as a local_action in your playbook
requirements:
- fortiosapi>=0.9.8
options:
host:
description:
- FortiOS or FortiGate IP address.
type: str
required: false
username:
description:
- FortiOS or FortiGate username.
type: str
required: false
password:
description:
- FortiOS or FortiGate password.
type: str
default: ""
vdom:
description:
- Virtual domain, among those defined previously. A vdom is a
virtual instance of the FortiGate that can be configured and
used as a different unit.
type: str
default: root
https:
description:
- Indicates if the requests towards FortiGate must use HTTPS protocol.
type: bool
default: true
ssl_verify:
description:
- Ensures FortiGate certificate must be verified by a proper CA.
type: bool
default: true
version_added: 2.9
state:
description:
- Indicates whether to create or remove the object.
This attribute was already present in previous versions at a deeper level.
It has been moved out to this outer level.
type: str
required: false
choices:
- present
- absent
version_added: 2.9
wireless_controller_wtp:
description:
- Configure Wireless Termination Points (WTPs), that is, FortiAPs or APs to be managed by FortiGate.
default: null
type: dict
suboptions:
state:
description:
- B(Deprecated)
- Starting with Ansible 2.9 we recommend using the top-level 'state' parameter.
- HORIZONTALLINE
- Indicates whether to create or remove the object.
type: str
required: false
choices:
- present
- absent
admin:
description:
- Configure how the FortiGate operating as a wireless controller discovers and manages this WTP, AP or FortiAP.
type: str
choices:
- discovered
- disable
- enable
allowaccess:
description:
- Control management access to the managed WTP, FortiAP, or AP. Separate entries with a space.
type: str
choices:
- telnet
- http
- https
- ssh
bonjour_profile:
description:
- Bonjour profile name. Source wireless-controller.bonjour-profile.name.
type: str
coordinate_enable:
description:
- Enable/disable WTP coordinates (X,Y axis).
type: str
choices:
- enable
- disable
coordinate_latitude:
description:
- WTP latitude coordinate.
type: str
coordinate_longitude:
description:
- WTP longitude coordinate.
type: str
coordinate_x:
description:
- X axis coordinate.
type: str
coordinate_y:
description:
- Y axis coordinate.
type: str
image_download:
description:
- Enable/disable WTP image download.
type: str
choices:
- enable
- disable
index:
description:
- Index (0 - 4294967295).
type: int
ip_fragment_preventing:
description:
- Method by which IP fragmentation is prevented for CAPWAP tunneled control and data packets .
type: str
choices:
- tcp-mss-adjust
- icmp-unreachable
lan:
description:
- WTP LAN port mapping.
type: dict
suboptions:
port_mode:
description:
- LAN port mode.
type: str
choices:
- offline
- nat-to-wan
- bridge-to-wan
- bridge-to-ssid
port_ssid:
description:
- Bridge LAN port to SSID. Source wireless-controller.vap.name.
type: str
port1_mode:
description:
- LAN port 1 mode.
type: str
choices:
- offline
- nat-to-wan
- bridge-to-wan
- bridge-to-ssid
port1_ssid:
description:
- Bridge LAN port 1 to SSID. Source wireless-controller.vap.name.
type: str
port2_mode:
description:
- LAN port 2 mode.
type: str
choices:
- offline
- nat-to-wan
- bridge-to-wan
- bridge-to-ssid
port2_ssid:
description:
- Bridge LAN port 2 to SSID. Source wireless-controller.vap.name.
type: str
port3_mode:
description:
- LAN port 3 mode.
type: str
choices:
- offline
- nat-to-wan
- bridge-to-wan
- bridge-to-ssid
port3_ssid:
description:
- Bridge LAN port 3 to SSID. Source wireless-controller.vap.name.
type: str
port4_mode:
description:
- LAN port 4 mode.
type: str
choices:
- offline
- nat-to-wan
- bridge-to-wan
- bridge-to-ssid
port4_ssid:
description:
- Bridge LAN port 4 to SSID. Source wireless-controller.vap.name.
type: str
port5_mode:
description:
- LAN port 5 mode.
type: str
choices:
- offline
- nat-to-wan
- bridge-to-wan
- bridge-to-ssid
port5_ssid:
description:
- Bridge LAN port 5 to SSID. Source wireless-controller.vap.name.
type: str
port6_mode:
description:
- LAN port 6 mode.
type: str
choices:
- offline
- nat-to-wan
- bridge-to-wan
- bridge-to-ssid
port6_ssid:
description:
- Bridge LAN port 6 to SSID. Source wireless-controller.vap.name.
type: str
port7_mode:
description:
- LAN port 7 mode.
type: str
choices:
- offline
- nat-to-wan
- bridge-to-wan
- bridge-to-ssid
port7_ssid:
description:
- Bridge LAN port 7 to SSID. Source wireless-controller.vap.name.
type: str
port8_mode:
description:
- LAN port 8 mode.
type: str
choices:
- offline
- nat-to-wan
- bridge-to-wan
- bridge-to-ssid
port8_ssid:
description:
- Bridge LAN port 8 to SSID. Source wireless-controller.vap.name.
type: str
led_state:
description:
- Enable to allow the FortiAPs LEDs to light. Disable to keep the LEDs off. You may want to keep the LEDs off so they are not distracting
in low light areas etc.
type: str
choices:
- enable
- disable
location:
description:
- Field for describing the physical location of the WTP, AP or FortiAP.
type: str
login_passwd:
description:
- Set the managed WTP, FortiAP, or AP's administrator password.
type: str
login_passwd_change:
description:
- Change or reset the administrator password of a managed WTP, FortiAP or AP (yes, default, or no).
type: str
choices:
- yes
- default
- no
mesh_bridge_enable:
description:
- Enable/disable mesh Ethernet bridge when WTP is configured as a mesh branch/leaf AP.
type: str
choices:
- default
- enable
- disable
name:
description:
- WTP, AP or FortiAP configuration name.
type: str
override_allowaccess:
description:
- Enable to override the WTP profile management access configuration.
type: str
choices:
- enable
- disable
override_ip_fragment:
description:
- Enable/disable overriding the WTP profile IP fragment prevention setting.
type: str
choices:
- enable
- disable
override_lan:
description:
- Enable to override the WTP profile LAN port setting.
type: str
choices:
- enable
- disable
override_led_state:
description:
- Enable to override the profile LED state setting for this FortiAP. You must enable this option to use the led-state command to turn off
the FortiAP's LEDs.
type: str
choices:
- enable
- disable
override_login_passwd_change:
description:
- Enable to override the WTP profile login-password (administrator password) setting.
type: str
choices:
- enable
- disable
override_split_tunnel:
description:
- Enable/disable overriding the WTP profile split tunneling setting.
type: str
choices:
- enable
- disable
override_wan_port_mode:
description:
- Enable/disable overriding the wan-port-mode in the WTP profile.
type: str
choices:
- enable
- disable
radio_1:
description:
- Configuration options for radio 1.
type: dict
suboptions:
auto_power_high:
description:
- Automatic transmission power high limit in decibels (dB) of the measured power referenced to one milliwatt (mW), or dBm (10 - 17
dBm).
type: int
auto_power_level:
description:
- Enable/disable automatic power-level adjustment to prevent co-channel interference .
type: str
choices:
- enable
- disable
auto_power_low:
description:
- Automatic transmission power low limit in dBm (the actual range of transmit power depends on the AP platform type).
type: int
band:
description:
- WiFi band that Radio 1 operates on.
type: str
choices:
- 802.11a
- 802.11b
- 802.11g
- 802.11n
- 802.11n-5G
- 802.11n,g-only
- 802.11g-only
- 802.11n-only
- 802.11n-5G-only
- 802.11ac
- 802.11ac,n-only
- 802.11ac-only
channel:
description:
- Selected list of wireless radio channels.
type: list
suboptions:
chan:
description:
- Channel number.
required: true
type: str
override_analysis:
description:
- Enable to override the WTP profile spectrum analysis configuration.
type: str
choices:
- enable
- disable
override_band:
description:
- Enable to override the WTP profile band setting.
type: str
choices:
- enable
- disable
override_channel:
description:
- Enable to override WTP profile channel settings.
type: str
choices:
- enable
- disable
override_txpower:
description:
- Enable to override the WTP profile power level configuration.
type: str
choices:
- enable
- disable
override_vaps:
description:
- Enable to override WTP profile Virtual Access Point (VAP) settings.
type: str
choices:
- enable
- disable
power_level:
description:
- Radio power level as a percentage of the maximum transmit power (0 - 100).
type: int
radio_id:
description:
- radio-id
type: int
spectrum_analysis:
description:
- Enable/disable spectrum analysis to find interference that would negatively impact wireless performance.
type: str
choices:
- enable
- disable
vap_all:
description:
- Enable/disable the automatic inheritance of all Virtual Access Points (VAPs) .
type: str
choices:
- enable
- disable
vaps:
description:
- Manually selected list of Virtual Access Points (VAPs).
type: list
suboptions:
name:
description:
- Virtual Access Point (VAP) name. Source wireless-controller.vap-group.name wireless-controller.vap.name.
required: true
type: str
radio_2:
description:
- Configuration options for radio 2.
type: dict
suboptions:
auto_power_high:
description:
- Automatic transmission power high limit in decibels (dB) of the measured power referenced to one milliwatt (mW), or dBm (10 - 17
dBm).
type: int
auto_power_level:
description:
- Enable/disable automatic power-level adjustment to prevent co-channel interference .
type: str
choices:
- enable
- disable
auto_power_low:
description:
- Automatic transmission power low limit in dBm (the actual range of transmit power depends on the AP platform type).
type: int
band:
description:
- WiFi band that Radio 2 operates on.
type: str
choices:
- 802.11a
- 802.11b
- 802.11g
- 802.11n
- 802.11n-5G
- 802.11n,g-only
- 802.11g-only
- 802.11n-only
- 802.11n-5G-only
- 802.11ac
- 802.11ac,n-only
- 802.11ac-only
channel:
description:
- Selected list of wireless radio channels.
type: list
suboptions:
chan:
description:
- Channel number.
required: true
type: str
override_analysis:
description:
- Enable to override the WTP profile spectrum analysis configuration.
type: str
choices:
- enable
- disable
override_band:
description:
- Enable to override the WTP profile band setting.
type: str
choices:
- enable
- disable
override_channel:
description:
- Enable to override WTP profile channel settings.
type: str
choices:
- enable
- disable
override_txpower:
description:
- Enable to override the WTP profile power level configuration.
type: str
choices:
- enable
- disable
override_vaps:
description:
- Enable to override WTP profile Virtual Access Point (VAP) settings.
type: str
choices:
- enable
- disable
power_level:
description:
- Radio power level as a percentage of the maximum transmit power (0 - 100).
type: int
radio_id:
description:
- radio-id
type: int
spectrum_analysis:
description:
- Enable/disable spectrum analysis to find interference that would negatively impact wireless performance.
type: str
choices:
- enable
- disable
vap_all:
description:
- Enable/disable the automatic inheritance of all Virtual Access Points (VAPs) .
type: str
choices:
- enable
- disable
vaps:
description:
- Manually selected list of Virtual Access Points (VAPs).
type: list
suboptions:
name:
description:
- Virtual Access Point (VAP) name. Source wireless-controller.vap-group.name wireless-controller.vap.name.
required: true
type: str
split_tunneling_acl:
description:
- Split tunneling ACL filter list.
type: list
suboptions:
dest_ip:
description:
- Destination IP and mask for the split-tunneling subnet.
type: str
id:
description:
- ID.
required: true
type: int
split_tunneling_acl_local_ap_subnet:
description:
- Enable/disable automatically adding local subnetwork of FortiAP to split-tunneling ACL .
type: str
choices:
- enable
- disable
split_tunneling_acl_path:
description:
- Split tunneling ACL path is local/tunnel.
type: str
choices:
- tunnel
- local
tun_mtu_downlink:
description:
- Downlink tunnel MTU in octets. Set the value to either 0 (by default), 576, or 1500.
type: int
tun_mtu_uplink:
description:
- Uplink tunnel maximum transmission unit (MTU) in octets (eight-bit bytes). Set the value to either 0 (by default), 576, or 1500.
type: int
wan_port_mode:
description:
- Enable/disable using the FortiAP WAN port as a LAN port.
type: str
choices:
- wan-lan
- wan-only
wtp_id:
description:
- WTP ID.
type: str
wtp_mode:
description:
- WTP, AP, or FortiAP operating mode; normal (by default) or remote. A tunnel mode SSID can be assigned to an AP in normal mode but not
remote mode, while a local-bridge mode SSID can be assigned to an AP in either normal mode or remote mode.
type: str
choices:
- normal
- remote
wtp_profile:
description:
- WTP profile name to apply to this WTP, AP or FortiAP. Source wireless-controller.wtp-profile.name.
type: str
'''
EXAMPLES = '''
- hosts: localhost
vars:
host: "192.168.122.40"
username: "admin"
password: ""
vdom: "root"
ssl_verify: "False"
tasks:
- name: Configure Wireless Termination Points (WTPs), that is, FortiAPs or APs to be managed by FortiGate.
fortios_wireless_controller_wtp:
host: "{{ host }}"
username: "{{ username }}"
password: "{{ password }}"
vdom: "{{ vdom }}"
https: "False"
state: "present"
wireless_controller_wtp:
admin: "discovered"
allowaccess: "telnet"
bonjour_profile: "<your_own_value> (source wireless-controller.bonjour-profile.name)"
coordinate_enable: "enable"
coordinate_latitude: "<your_own_value>"
coordinate_longitude: "<your_own_value>"
coordinate_x: "<your_own_value>"
coordinate_y: "<your_own_value>"
image_download: "enable"
index: "12"
ip_fragment_preventing: "tcp-mss-adjust"
lan:
port_mode: "offline"
port_ssid: "<your_own_value> (source wireless-controller.vap.name)"
port1_mode: "offline"
port1_ssid: "<your_own_value> (source wireless-controller.vap.name)"
port2_mode: "offline"
port2_ssid: "<your_own_value> (source wireless-controller.vap.name)"
port3_mode: "offline"
port3_ssid: "<your_own_value> (source wireless-controller.vap.name)"
port4_mode: "offline"
port4_ssid: "<your_own_value> (source wireless-controller.vap.name)"
port5_mode: "offline"
port5_ssid: "<your_own_value> (source wireless-controller.vap.name)"
port6_mode: "offline"
port6_ssid: "<your_own_value> (source wireless-controller.vap.name)"
port7_mode: "offline"
port7_ssid: "<your_own_value> (source wireless-controller.vap.name)"
port8_mode: "offline"
port8_ssid: "<your_own_value> (source wireless-controller.vap.name)"
led_state: "enable"
location: "<your_own_value>"
login_passwd: "<your_own_value>"
login_passwd_change: "yes"
mesh_bridge_enable: "default"
name: "default_name_38"
override_allowaccess: "enable"
override_ip_fragment: "enable"
override_lan: "enable"
override_led_state: "enable"
override_login_passwd_change: "enable"
override_split_tunnel: "enable"
override_wan_port_mode: "enable"
radio_1:
auto_power_high: "47"
auto_power_level: "enable"
auto_power_low: "49"
band: "802.11a"
channel:
-
chan: "<your_own_value>"
override_analysis: "enable"
override_band: "enable"
override_channel: "enable"
override_txpower: "enable"
override_vaps: "enable"
power_level: "58"
radio_id: "59"
spectrum_analysis: "enable"
vap_all: "enable"
vaps:
-
name: "default_name_63 (source wireless-controller.vap-group.name wireless-controller.vap.name)"
radio_2:
auto_power_high: "65"
auto_power_level: "enable"
auto_power_low: "67"
band: "802.11a"
channel:
-
chan: "<your_own_value>"
override_analysis: "enable"
override_band: "enable"
override_channel: "enable"
override_txpower: "enable"
override_vaps: "enable"
power_level: "76"
radio_id: "77"
spectrum_analysis: "enable"
vap_all: "enable"
vaps:
-
name: "default_name_81 (source wireless-controller.vap-group.name wireless-controller.vap.name)"
split_tunneling_acl:
-
dest_ip: "<your_own_value>"
id: "84"
split_tunneling_acl_local_ap_subnet: "enable"
split_tunneling_acl_path: "tunnel"
tun_mtu_downlink: "87"
tun_mtu_uplink: "88"
wan_port_mode: "wan-lan"
wtp_id: "<your_own_value>"
wtp_mode: "normal"
wtp_profile: "<your_own_value> (source wireless-controller.wtp-profile.name)"
'''
RETURN = '''
build:
description: Build number of the fortigate image
returned: always
type: str
sample: '1547'
http_method:
description: Last method used to provision the content into FortiGate
returned: always
type: str
sample: 'PUT'
http_status:
description: Last result given by FortiGate on last operation applied
returned: always
type: str
sample: "200"
mkey:
description: Master key (id) used in the last call to FortiGate
returned: success
type: str
sample: "id"
name:
description: Name of the table used to fulfill the request
returned: always
type: str
sample: "urlfilter"
path:
description: Path of the table used to fulfill the request
returned: always
type: str
sample: "webfilter"
revision:
description: Internal revision number
returned: always
type: str
sample: "17.0.2.10658"
serial:
description: Serial number of the unit
returned: always
type: str
sample: "FGVMEVYYQT3AB5352"
status:
description: Indication of the operation's result
returned: always
type: str
sample: "success"
vdom:
description: Virtual domain used
returned: always
type: str
sample: "root"
version:
description: Version of the FortiGate
returned: always
type: str
sample: "v5.6.3"
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.connection import Connection
from ansible.module_utils.network.fortios.fortios import FortiOSHandler
from ansible.module_utils.network.fortimanager.common import FAIL_SOCKET_MSG
def login(data, fos):
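# Log in with the legacy fortiosapi client, honouring the https and
# ssl_verify module options.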
host = data['host']
username = data['username']
password = data['password']
ssl_verify = data['ssl_verify']
fos.debug('on')
if 'https' in data and not data['https']:
fos.https('off')
else:
fos.https('on')
fos.login(host, username, password, verify=ssl_verify)
def filter_wireless_controller_wtp_data(json):
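# Keep only the options this module manages and drop keys the user did not
# supply (None values), so they are not sent to the device.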
option_list = ['admin', 'allowaccess', 'bonjour_profile',
'coordinate_enable', 'coordinate_latitude', 'coordinate_longitude',
'coordinate_x', 'coordinate_y', 'image_download',
'index', 'ip_fragment_preventing', 'lan',
'led_state', 'location', 'login_passwd',
'login_passwd_change', 'mesh_bridge_enable', 'name',
'override_allowaccess', 'override_ip_fragment', 'override_lan',
'override_led_state', 'override_login_passwd_change', 'override_split_tunnel',
'override_wan_port_mode', 'radio_1', 'radio_2',
'split_tunneling_acl', 'split_tunneling_acl_local_ap_subnet', 'split_tunneling_acl_path',
'tun_mtu_downlink', 'tun_mtu_uplink', 'wan_port_mode',
'wtp_id', 'wtp_mode', 'wtp_profile']
dictionary = {}
for attribute in option_list:
if attribute in json and json[attribute] is not None:
dictionary[attribute] = json[attribute]
return dictionary
def underscore_to_hyphen(data):
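# Recursively rename dictionary keys from underscore to hyphen form, since
# the FortiOS API expects hyphenated attribute names.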
if isinstance(data, list):
for i, elem in enumerate(data):
data[i] = underscore_to_hyphen(elem)
elif isinstance(data, dict):
new_data = {}
for k, v in data.items():
new_data[k.replace('_', '-')] = underscore_to_hyphen(v)
data = new_data
return data
def wireless_controller_wtp(data, fos):
vdom = data['vdom']
if 'state' in data and data['state']:
state = data['state']
elif 'state' in data['wireless_controller_wtp'] and data['wireless_controller_wtp']:
state = data['wireless_controller_wtp']['state']
else:
state = True
wireless_controller_wtp_data = data['wireless_controller_wtp']
filtered_data = underscore_to_hyphen(filter_wireless_controller_wtp_data(wireless_controller_wtp_data))
if state == "present":
return fos.set('wireless-controller',
'wtp',
data=filtered_data,
vdom=vdom)
elif state == "absent":
return fos.delete('wireless-controller',
'wtp',
mkey=filtered_data['wtp-id'],
vdom=vdom)
def is_successful_status(status):
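# A DELETE that returns HTTP 404 is also treated as success: the object is
# already absent.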
return status['status'] == "success" or \
status['http_method'] == "DELETE" and status['http_status'] == 404
def fortios_wireless_controller(data, fos):
if data['wireless_controller_wtp']:
resp = wireless_controller_wtp(data, fos)
return not is_successful_status(resp), \
resp['status'] == "success", \
resp
def main():
fields = {
"host": {"required": False, "type": "str"},
"username": {"required": False, "type": "str"},
"password": {"required": False, "type": "str", "default": "", "no_log": True},
"vdom": {"required": False, "type": "str", "default": "root"},
"https": {"required": False, "type": "bool", "default": True},
"ssl_verify": {"required": False, "type": "bool", "default": True},
"state": {"required": False, "type": "str",
"choices": ["present", "absent"]},
"wireless_controller_wtp": {
"required": False, "type": "dict", "default": None,
"options": {
"state": {"required": False, "type": "str",
"choices": ["present", "absent"]},
"admin": {"required": False, "type": "str",
"choices": ["discovered", "disable", "enable"]},
"allowaccess": {"required": False, "type": "str",
"choices": ["telnet", "http", "https",
"ssh"]},
"bonjour_profile": {"required": False, "type": "str"},
"coordinate_enable": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"coordinate_latitude": {"required": False, "type": "str"},
"coordinate_longitude": {"required": False, "type": "str"},
"coordinate_x": {"required": False, "type": "str"},
"coordinate_y": {"required": False, "type": "str"},
"image_download": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"index": {"required": False, "type": "int"},
"ip_fragment_preventing": {"required": False, "type": "str",
"choices": ["tcp-mss-adjust", "icmp-unreachable"]},
"lan": {"required": False, "type": "dict",
"options": {
"port_mode": {"required": False, "type": "str",
"choices": ["offline", "nat-to-wan", "bridge-to-wan",
"bridge-to-ssid"]},
"port_ssid": {"required": False, "type": "str"},
"port1_mode": {"required": False, "type": "str",
"choices": ["offline", "nat-to-wan", "bridge-to-wan",
"bridge-to-ssid"]},
"port1_ssid": {"required": False, "type": "str"},
"port2_mode": {"required": False, "type": "str",
"choices": ["offline", "nat-to-wan", "bridge-to-wan",
"bridge-to-ssid"]},
"port2_ssid": {"required": False, "type": "str"},
"port3_mode": {"required": False, "type": "str",
"choices": ["offline", "nat-to-wan", "bridge-to-wan",
"bridge-to-ssid"]},
"port3_ssid": {"required": False, "type": "str"},
"port4_mode": {"required": False, "type": "str",
"choices": ["offline", "nat-to-wan", "bridge-to-wan",
"bridge-to-ssid"]},
"port4_ssid": {"required": False, "type": "str"},
"port5_mode": {"required": False, "type": "str",
"choices": ["offline", "nat-to-wan", "bridge-to-wan",
"bridge-to-ssid"]},
"port5_ssid": {"required": False, "type": "str"},
"port6_mode": {"required": False, "type": "str",
"choices": ["offline", "nat-to-wan", "bridge-to-wan",
"bridge-to-ssid"]},
"port6_ssid": {"required": False, "type": "str"},
"port7_mode": {"required": False, "type": "str",
"choices": ["offline", "nat-to-wan", "bridge-to-wan",
"bridge-to-ssid"]},
"port7_ssid": {"required": False, "type": "str"},
"port8_mode": {"required": False, "type": "str",
"choices": ["offline", "nat-to-wan", "bridge-to-wan",
"bridge-to-ssid"]},
"port8_ssid": {"required": False, "type": "str"}
}},
"led_state": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"location": {"required": False, "type": "str"},
"login_passwd": {"required": False, "type": "str"},
"login_passwd_change": {"required": False, "type": "str",
"choices": ["yes", "default", "no"]},
"mesh_bridge_enable": {"required": False, "type": "str",
"choices": ["default", "enable", "disable"]},
"name": {"required": False, "type": "str"},
"override_allowaccess": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_ip_fragment": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_lan": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_led_state": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_login_passwd_change": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_split_tunnel": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_wan_port_mode": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"radio_1": {"required": False, "type": "dict",
"options": {
"auto_power_high": {"required": False, "type": "int"},
"auto_power_level": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"auto_power_low": {"required": False, "type": "int"},
"band": {"required": False, "type": "str",
"choices": ["802.11a", "802.11b", "802.11g",
"802.11n", "802.11n-5G", "802.11n,g-only",
"802.11g-only", "802.11n-only", "802.11n-5G-only",
"802.11ac", "802.11ac,n-only", "802.11ac-only"]},
"channel": {"required": False, "type": "list",
"options": {
"chan": {"required": True, "type": "str"}
}},
"override_analysis": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_band": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_channel": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_txpower": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_vaps": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"power_level": {"required": False, "type": "int"},
"radio_id": {"required": False, "type": "int"},
"spectrum_analysis": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"vap_all": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"vaps": {"required": False, "type": "list",
"options": {
"name": {"required": True, "type": "str"}
}}
}},
"radio_2": {"required": False, "type": "dict",
"options": {
"auto_power_high": {"required": False, "type": "int"},
"auto_power_level": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"auto_power_low": {"required": False, "type": "int"},
"band": {"required": False, "type": "str",
"choices": ["802.11a", "802.11b", "802.11g",
"802.11n", "802.11n-5G", "802.11n,g-only",
"802.11g-only", "802.11n-only", "802.11n-5G-only",
"802.11ac", "802.11ac,n-only", "802.11ac-only"]},
"channel": {"required": False, "type": "list",
"options": {
"chan": {"required": True, "type": "str"}
}},
"override_analysis": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_band": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_channel": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_txpower": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"override_vaps": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"power_level": {"required": False, "type": "int"},
"radio_id": {"required": False, "type": "int"},
"spectrum_analysis": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"vap_all": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"vaps": {"required": False, "type": "list",
"options": {
"name": {"required": True, "type": "str"}
}}
}},
"split_tunneling_acl": {"required": False, "type": "list",
"options": {
"dest_ip": {"required": False, "type": "str"},
"id": {"required": True, "type": "int"}
}},
"split_tunneling_acl_local_ap_subnet": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"split_tunneling_acl_path": {"required": False, "type": "str",
"choices": ["tunnel", "local"]},
"tun_mtu_downlink": {"required": False, "type": "int"},
"tun_mtu_uplink": {"required": False, "type": "int"},
"wan_port_mode": {"required": False, "type": "str",
"choices": ["wan-lan", "wan-only"]},
"wtp_id": {"required": False, "type": "str"},
"wtp_mode": {"required": False, "type": "str",
"choices": ["normal", "remote"]},
"wtp_profile": {"required": False, "type": "str"}
}
}
}
module = AnsibleModule(argument_spec=fields,
supports_check_mode=False)
# legacy_mode refers to using fortiosapi instead of HTTPAPI
legacy_mode = 'host' in module.params and module.params['host'] is not None and \
'username' in module.params and module.params['username'] is not None and \
'password' in module.params and module.params['password'] is not None
if not legacy_mode:
if module._socket_path:
connection = Connection(module._socket_path)
fos = FortiOSHandler(connection)
is_error, has_changed, result = fortios_wireless_controller(module.params, fos)
else:
module.fail_json(**FAIL_SOCKET_MSG)
else:
try:
from fortiosapi import FortiOSAPI
except ImportError:
module.fail_json(msg="fortiosapi module is required")
fos = FortiOSAPI()
login(module.params, fos)
is_error, has_changed, result = fortios_wireless_controller(module.params, fos)
fos.logout()
if not is_error:
module.exit_json(changed=has_changed, meta=result)
else:
module.fail_json(msg="Error in repo", meta=result)
if __name__ == '__main__':
main()
| gpl-3.0 |
hyperized/ansible | lib/ansible/modules/network/ovs/openvswitch_db.py | 5 | 6771 | #!/usr/bin/python
# coding: utf-8 -*-
#
# (c) 2015, Mark Hamilton <[email protected]>
# Portions copyright @ 2015 VMware, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = """
---
module: openvswitch_db
author: "Mark Hamilton (@markleehamilton) <[email protected]>"
version_added: 2.0
short_description: Configure open vswitch database.
requirements: [ "ovs-vsctl >= 2.3.3" ]
description:
- Set column values in record in database table.
options:
state:
required: false
description:
- Configures the state of the key. When set
to I(present), the I(key) and I(value) pair will be set
on the I(record) and when set to I(absent) the I(key)
will not be set.
default: present
choices: ['present', 'absent']
version_added: "2.4"
table:
required: true
description:
- Identifies the table in the database.
record:
required: true
description:
- Identifies the record in the table.
col:
required: true
description:
- Identifies the column in the record.
key:
required: false
description:
- Identifies the key in the record column, when the column is a map
type.
value:
required: true
description:
- Expected value for the table, record, column and key.
timeout:
required: false
default: 5
description:
- How long to wait for ovs-vswitchd to respond
"""
EXAMPLES = '''
# Increase the maximum idle time to 50 seconds before pruning unused kernel
# rules.
- openvswitch_db:
table: open_vswitch
record: .
col: other_config
key: max-idle
value: 50000
# Disable in band copy
- openvswitch_db:
table: Bridge
record: br-int
col: other_config
key: disable-in-band
value: true
# Remove in band key
- openvswitch_db:
state: absent
table: Bridge
record: br-int
col: other_config
key: disable-in-band
# Mark port with tag 10
- openvswitch_db:
table: Port
record: port0
col: tag
value: 10
'''
import re
from ansible.module_utils.basic import AnsibleModule
# Regular expression for map type, must not be empty
NON_EMPTY_MAP_RE = re.compile(r'{.+}')
# Regular expression for a map column type
MAP_RE = re.compile(r'{.*}')
def map_obj_to_commands(want, have, module):
""" Define ovs-vsctl command to meet desired state """
commands = list()
if module.params['state'] == 'absent':
if 'key' in have.keys():
templatized_command = "%(ovs-vsctl)s -t %(timeout)s remove %(table)s %(record)s " \
"%(col)s %(key)s=%(value)s"
commands.append(templatized_command % module.params)
elif module.params['key'] is None:
templatized_command = "%(ovs-vsctl)s -t %(timeout)s remove %(table)s %(record)s " \
"%(col)s"
commands.append(templatized_command % module.params)
else:
if want == have:
# Nothing to commit
return commands
if module.params['key'] is None:
templatized_command = "%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s " \
"%(col)s=%(value)s"
commands.append(templatized_command % module.params)
else:
templatized_command = "%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s " \
"%(col)s:%(key)s=%(value)s"
commands.append(templatized_command % module.params)
return commands
def map_config_to_obj(module):
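# Read the current state with 'ovs-vsctl list' and parse the requested
# column; map-type columns are split into a key/value dictionary.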
templatized_command = "%(ovs-vsctl)s -t %(timeout)s list %(table)s %(record)s"
command = templatized_command % module.params
rc, out, err = module.run_command(command, check_rc=True)
if rc != 0:
module.fail_json(msg=err)
match = re.search(r'^' + module.params['col'] + r'(\s+):(\s+)(.*)$', out, re.M)
col_value = match.group(3)
# Map types require key argument
has_key = module.params['key'] is not None
is_map = MAP_RE.match(col_value)
if is_map and not has_key:
module.fail_json(
msg="missing required arguments: key for map type of column")
col_value_to_dict = {}
if NON_EMPTY_MAP_RE.match(col_value):
for kv in col_value[1:-1].split(', '):
k, v = kv.split('=')
col_value_to_dict[k.strip()] = v.strip('\"')
obj = {
'table': module.params['table'],
'record': module.params['record'],
'col': module.params['col'],
}
if has_key and is_map:
if module.params['key'] in col_value_to_dict:
obj['key'] = module.params['key']
obj['value'] = col_value_to_dict[module.params['key']]
else:
obj['value'] = str(col_value.strip())
return obj
def map_params_to_obj(module):
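# Build the desired state from the module parameters; boolean-looking
# values are lower-cased to match ovs-vsctl's true/false representation.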
if module.params['value'] in ['True', 'False']:
module.params['value'] = module.params['value'].lower()
obj = {
'table': module.params['table'],
'record': module.params['record'],
'col': module.params['col'],
'value': module.params['value']
}
key = module.params['key']
if key is not None:
obj['key'] = key
return obj
def main():
""" Entry point for ansible module. """
argument_spec = {
'state': {'default': 'present', 'choices': ['present', 'absent']},
'table': {'required': True},
'record': {'required': True},
'col': {'required': True},
'key': {'required': False},
'value': {'required': True, 'type': 'str'},
'timeout': {'default': 5, 'type': 'int'},
}
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True)
result = {'changed': False}
# We add ovs-vsctl to module_params to later build up templatized commands
module.params["ovs-vsctl"] = module.get_bin_path("ovs-vsctl", True)
want = map_params_to_obj(module)
have = map_config_to_obj(module)
commands = map_obj_to_commands(want, have, module)
result['commands'] = commands
if commands:
if not module.check_mode:
for c in commands:
module.run_command(c, check_rc=True)
result['changed'] = True
module.exit_json(**result)
if __name__ == '__main__':
main()
| gpl-3.0 |
jgurtowski/ectools | pb_correct.py | 1 | 7907 | #!/usr/bin/env python
import sys
from nucio import *
from seqio import fastaIterator
from operator import itemgetter
from itertools import groupby, repeat, izip_longest, imap, count, chain
from collections import namedtuple
from cov import getMarkedRanges
from misc import create_enum
import copy
if not len(sys.argv) == 7:
print "pb_correct.py in.fa in.snps in.showcoords clr_id(float) min_read_length out_prefix"
sys.exit(1)
CLR_ID_CUTOFF = float(sys.argv[4])
MIN_READ_LENGTH = int(sys.argv[5])
PileupEntry = namedtuple("PileupEntry", ["index","base","snps","utg","clr"])
CovStat = {"COVERED":"COVERED", "UNCOVERED":"UNCOVERED", "JOINED":"JOINED"}
class CoverageRange:
def __init__(self, b, e, pid, covstat):
self.begin = b
self.end = e
self.pctid = pid
self.covstat = covstat
def __repr__(self):
return "CoverageRange(%d,%d,%f,%s)" % (self.begin,self.end,
self.pctid, self.covstat)
def __eq__(self,other):
return (self.begin == other.begin and self.end == other.end
and self.pctid == other.pctid and self.covstat == other.covstat)
#correction logic
def correct_base(pentry):
'''Takes a pileup entry and returns corrected base(s)
With any warnings
('bases','warnings','clr_range')
'''
#filter snps
filt_snps = filter(lambda s: s.qname == pentry.utg,
[] if pentry.snps == None else pentry.snps)
#nothing
if len(filt_snps) == 0:
return (pentry.base, None, pentry.clr)
ssnp = filt_snps[0]
if len(filt_snps) > 1:
#better all be insertions
if all(map(lambda p: p.sbase == '.', filt_snps)):
#show-snps is strange, on reverse alignments
#it outputs indels in the forward direction
if ssnp.r2 == -1:
filt_snps.reverse()
return (pentry.base+"".join(map(lambda f: f.qbase,filt_snps)), None, pentry.clr)
else:
#not everything is an insertion, add the insertions and
#return warning
return (pentry.base+
"".join(map(lambda f: f.qbase if f.sbase == "." else "",filt_snps)),
"Multiple SNPs, Not all were Insertions", pentry.clr)
elif ssnp.sbase == '.': #single insertion
return (pentry.base+ssnp.qbase, None,pentry.clr)
elif ssnp.qbase == '.': #Deletion
return ("", None if ssnp.sbase == pentry.base else "Mismatched Bases", pentry.clr)
else: #Mismatch
return (ssnp.qbase, None if ssnp.sbase == pentry.base else "Mismatched Bases", pentry.clr)
def range_size(arange):
return arange.end - arange.begin
def get_contiguous_ranges(ranges):
'''Gets Contiguous Ranges from a list of CoverageRanges
Returns a new list of CoverageRanges updated with contiguous
ranges and their weighted pct_id
'''
if len(ranges) == 0:
return []
out = [copy.deepcopy(ranges[0])]
for i in range(1,len(ranges)):
if ranges[i].begin - ranges[i-1].end == 1:
sp = range_size(out[-1])
sc = range_size(ranges[i])
out[-1].pctid = ((sp * out[-1].pctid) +
(sc * ranges[i].pctid)) / (sp+sc)
out[-1].end = ranges[i].end
out[-1].covstat = CovStat["JOINED"]
else:
out.append(copy.deepcopy(ranges[i]))
return out
rfh = open(sys.argv[1])
sfh = open(sys.argv[2])
afh = open(sys.argv[3])
pout = open(sys.argv[6] +".cor.pileup", "w")
corout = open(sys.argv[6] +".cor.fa", "w")
alignment_it = lineRecordIterator(afh, NucRecord, NucRecordTypes)
snp_it = lineRecordIterator(sfh, NucSNPRecord, NucSNPRecordTypes)
reads = dict(map(lambda r : (str(r.name), str(r.seq)), fastaIterator(rfh)))
alignments = dict(map(lambda (n,a): (n,list(a)),
groupby(alignment_it, lambda x: x.sname)))
for pbname, snp_entries in groupby(snp_it, lambda x: x.sname):
warnings = []
pblen = len(reads[pbname])
##no alignments for this pb read
if pbname not in alignments:
continue
##create ranges of accepted alignments
accept_alignment_ranges = [None] * pblen
#alignments[pbname].sort(key=lambda a: (a.send-a.sstart) * pow(a.pctid/100.0,2))
alignments[pbname].sort(key=lambda a: (a.send-a.sstart))
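# Alignments are applied in order of increasing length, so for any base the
# longest alignment covering it is the one that ends up assigned.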
for alignment in alignments[pbname]:
for p in range(alignment.sstart-1,alignment.send):
accept_alignment_ranges[p] = alignment.qname
##
##find clr ranges
##
#find ranges
covered_ranges = map(lambda (s,e): CoverageRange(s,e,1.0,CovStat["COVERED"]),
getMarkedRanges(map(lambda c: 1 if not c == None else 0 , accept_alignment_ranges)))
uncovered_ranges = map(lambda (s,e): CoverageRange(s,e,0.7,CovStat["UNCOVERED"]),
getMarkedRanges(map(lambda c: 1 if c == None else 0 , accept_alignment_ranges)))
#remove uncorrected ends
uncovered_ranges = filter(lambda x: not (x.begin == 0 or x.end == pblen-1),uncovered_ranges)
joined_ranges = sorted(covered_ranges + uncovered_ranges, key=lambda x: x.begin)
#find the clr ranges
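# Merge contiguous covered/uncovered ranges; while any merged range falls
# below the identity cutoff, drop its largest uncorrected subrange and retry.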
while True:
clr_ranges = get_contiguous_ranges(joined_ranges)
if( all(map(lambda y: y.pctid > CLR_ID_CUTOFF,clr_ranges))):
break
for cr in clr_ranges:
#skip clr ranges that are ok
if cr.pctid > CLR_ID_CUTOFF:
continue
#get uncorrected subranges for larger clr range
subranges = filter(lambda x: x.covstat == CovStat["UNCOVERED"]
and x.begin >= cr.begin and x.end <= cr.end , joined_ranges)
del joined_ranges[joined_ranges.index(max(subranges, key=lambda y: y.end - y.begin))]
clr_ranges = filter(lambda c: range_size(c) > MIN_READ_LENGTH, clr_ranges)
#mark clr ranges in array
clr_range_array = [None] * pblen
for clr_range in clr_ranges:
for p in range(clr_range.begin, clr_range.end+1):
clr_range_array[p] = str("%d_%d" % (clr_range.begin,clr_range.end))
#build a list of snps
merged_snps = [None] * pblen
for pos, snps in groupby(snp_entries, lambda y: y.spos):
merged_snps[pos-1] = list(snps)
#build the pileup
pileup = map(PileupEntry._make,
izip(count(),
reads[pbname],
merged_snps,
accept_alignment_ranges,
clr_range_array))
#correct the bases
corrected_data = map(correct_base, pileup)
#how to print the snps (str format)
snp_str = lambda f : "None" if f == None else "%d,%s,%s,%s" % (f.spos,f.sbase,f.qbase,f.qname)
#build pileup string for debugging
pileup_str_list = map(lambda x: "\t".join([
str(x.index), x.base, str(x.utg),
"|".join(
map(snp_str, [None] if x.snps == None else x.snps))]),pileup)
#add warnings to pileup
pileup_str_list = map(lambda z : "\t".join(map(str,z)),
izip(pileup_str_list,
imap(itemgetter(1), corrected_data),
imap(itemgetter(0), corrected_data)
))
pbname_corrected_base = pbname + "_corrected2"
for clr_name, clr_group in groupby(corrected_data, itemgetter(2)):
#skip non clear ranges
if clr_name == None:
continue
pbname_corrected = pbname_corrected_base + "/" + clr_name
corout.write( ">%s\n%s\n" % (pbname_corrected,"".join(imap(itemgetter(0), clr_group))))
pout.write( ">%s\n%s\n" % (pbname_corrected_base,"\n".join(pileup_str_list)))
rfh.close()
sfh.close()
afh.close()
corout.close()
pout.close()
| bsd-3-clause |
40423105/2017springcd_hw | plugin/summary/summary.py | 317 | 2852 | """
Summary
-------
This plugin allows easy, variable length summaries directly embedded into the
body of your articles.
"""
from __future__ import unicode_literals
from pelican import signals
from pelican.generators import ArticlesGenerator, StaticGenerator, PagesGenerator
def initialized(pelican):
from pelican.settings import DEFAULT_CONFIG
DEFAULT_CONFIG.setdefault('SUMMARY_BEGIN_MARKER',
'<!-- PELICAN_BEGIN_SUMMARY -->')
DEFAULT_CONFIG.setdefault('SUMMARY_END_MARKER',
'<!-- PELICAN_END_SUMMARY -->')
if pelican:
pelican.settings.setdefault('SUMMARY_BEGIN_MARKER',
'<!-- PELICAN_BEGIN_SUMMARY -->')
pelican.settings.setdefault('SUMMARY_END_MARKER',
'<!-- PELICAN_END_SUMMARY -->')
def extract_summary(instance):
# if summary is already specified, use it
# if there is no content, there's nothing to do
if hasattr(instance, '_summary'):
instance.has_summary = True
return
if not instance._content:
instance.has_summary = False
return
begin_marker = instance.settings['SUMMARY_BEGIN_MARKER']
end_marker = instance.settings['SUMMARY_END_MARKER']
content = instance._content
begin_summary = -1
end_summary = -1
if begin_marker:
begin_summary = content.find(begin_marker)
if end_marker:
end_summary = content.find(end_marker)
if begin_summary == -1 and end_summary == -1:
instance.has_summary = False
return
# skip over the begin marker, if present
if begin_summary == -1:
begin_summary = 0
else:
begin_summary = begin_summary + len(begin_marker)
if end_summary == -1:
end_summary = None
summary = content[begin_summary:end_summary]
# remove the markers from the content
if begin_summary:
content = content.replace(begin_marker, '', 1)
if end_summary:
content = content.replace(end_marker, '', 1)
instance._content = content
instance._summary = summary
instance.has_summary = True
def run_plugin(generators):
for generator in generators:
if isinstance(generator, ArticlesGenerator):
for article in generator.articles:
extract_summary(article)
elif isinstance(generator, PagesGenerator):
for page in generator.pages:
extract_summary(page)
def register():
signals.initialized.connect(initialized)
try:
signals.all_generators_finalized.connect(run_plugin)
except AttributeError:
# NOTE: This results in #314 so shouldn't really be relied on
# https://github.com/getpelican/pelican-plugins/issues/314
signals.content_object_init.connect(extract_summary)
| agpl-3.0 |
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/end_to_end_ml/labs/serving/application/lib/pyasn1_modules/rfc5940.py | 13 | 1613 | #
# This file is part of pyasn1-modules software.
#
# Created by Russ Housley with assistance from asn1ate v.0.6.0.
# Modified by Russ Housley to add map for use with opentypes.
#
# Copyright (c) 2019, Vigil Security, LLC
# License: http://snmplabs.com/pyasn1/license.html
#
# Additional CMS Revocation Information Choices
#
# ASN.1 source from:
# https://www.rfc-editor.org/rfc/rfc5940.txt
#
from pyasn1.type import namedtype
from pyasn1.type import tag
from pyasn1.type import univ
from pyasn1_modules import rfc2560
from pyasn1_modules import rfc5652
# RevocationInfoChoice for OCSP response:
# The OID is included in otherRevInfoFormat, and
# signed OCSPResponse is included in otherRevInfo
id_ri_ocsp_response = univ.ObjectIdentifier('1.3.6.1.5.5.7.16.2')
OCSPResponse = rfc2560.OCSPResponse
# RevocationInfoChoice for SCVP request/response:
# The OID is included in otherRevInfoFormat, and
# SCVPReqRes is included in otherRevInfo
id_ri_scvp = univ.ObjectIdentifier('1.3.6.1.5.5.7.16.4')
ContentInfo = rfc5652.ContentInfo
class SCVPReqRes(univ.Sequence):
pass
SCVPReqRes.componentType = namedtype.NamedTypes(
namedtype.OptionalNamedType('request',
ContentInfo().subtype(explicitTag=tag.Tag(tag.tagClassContext, tag.tagFormatSimple, 0))),
namedtype.NamedType('response', ContentInfo())
)
# Map of Revocation Info Format OIDs to Revocation Info Format
# is added to the ones that are in rfc5652.py
_otherRevInfoFormatMapUpdate = {
id_ri_ocsp_response: OCSPResponse(),
id_ri_scvp: SCVPReqRes(),
}
rfc5652.otherRevInfoFormatMap.update(_otherRevInfoFormatMapUpdate)
| apache-2.0 |
MoritzS/django | tests/model_fields/test_foreignkey.py | 44 | 3486 | from decimal import Decimal
from django.apps import apps
from django.core import checks
from django.db import models
from django.test import TestCase, skipIfDBFeature
from django.test.utils import isolate_apps
from .models import Bar, FkToChar, Foo, PrimaryKeyCharModel
class ForeignKeyTests(TestCase):
def test_callable_default(self):
"""A lazy callable may be used for ForeignKey.default."""
a = Foo.objects.create(id=1, a='abc', d=Decimal('12.34'))
b = Bar.objects.create(b='bcd')
self.assertEqual(b.a, a)
@skipIfDBFeature('interprets_empty_strings_as_nulls')
def test_empty_string_fk(self):
"""
Empty strings foreign key values don't get converted to None (#19299).
"""
char_model_empty = PrimaryKeyCharModel.objects.create(string='')
fk_model_empty = FkToChar.objects.create(out=char_model_empty)
fk_model_empty = FkToChar.objects.select_related('out').get(id=fk_model_empty.pk)
self.assertEqual(fk_model_empty.out, char_model_empty)
@isolate_apps('model_fields')
def test_warning_when_unique_true_on_fk(self):
class Foo(models.Model):
pass
class FKUniqueTrue(models.Model):
fk_field = models.ForeignKey(Foo, models.CASCADE, unique=True)
model = FKUniqueTrue()
expected_warnings = [
checks.Warning(
'Setting unique=True on a ForeignKey has the same effect as using a OneToOneField.',
hint='ForeignKey(unique=True) is usually better served by a OneToOneField.',
obj=FKUniqueTrue.fk_field.field,
id='fields.W342',
)
]
warnings = model.check()
self.assertEqual(warnings, expected_warnings)
def test_related_name_converted_to_text(self):
rel_name = Bar._meta.get_field('a').remote_field.related_name
self.assertIsInstance(rel_name, str)
def test_abstract_model_pending_operations(self):
"""
Foreign key fields declared on abstract models should not add lazy
relations to resolve relationship declared as string (#24215).
"""
pending_ops_before = list(apps._pending_operations.items())
class AbstractForeignKeyModel(models.Model):
fk = models.ForeignKey('missing.FK', models.CASCADE)
class Meta:
abstract = True
self.assertIs(AbstractForeignKeyModel._meta.apps, apps)
self.assertEqual(
pending_ops_before,
list(apps._pending_operations.items()),
'Pending lookup added for a foreign key on an abstract model'
)
@isolate_apps('model_fields', 'model_fields.tests')
def test_abstract_model_app_relative_foreign_key(self):
class AbstractReferent(models.Model):
reference = models.ForeignKey('Referred', on_delete=models.CASCADE)
class Meta:
app_label = 'model_fields'
abstract = True
def assert_app_model_resolved(label):
class Referred(models.Model):
class Meta:
app_label = label
class ConcreteReferent(AbstractReferent):
class Meta:
app_label = label
self.assertEqual(ConcreteReferent._meta.get_field('reference').related_model, Referred)
assert_app_model_resolved('model_fields')
assert_app_model_resolved('tests')
| bsd-3-clause |
modsim/redicon-tools | tools/analyze-xyzp.py | 1 | 8976 | #!/usr/bin/env python2
#
# analyze-xyz.py 2013 S Kondrat aka Valiska <[email protected]>
# Pradeep Burla <[email protected]>
#
# Read the XYZ file and output geo centre position, displacement,
# and mean-square displacement, and the 'diffusion coefficient' as a function of time
#
# Usage: analyze-xyz.py -f FILE ...
# Output: <time> <geometrical center position = 3 floats> <dr> <dr2 (=msd)> <Deff>
#
import os
import sys
import fileinput
import operator
import math
import string
import time
import datetime
import numpy as np
# Store value for cmd line parser
def store_value(option, opt_str, value, parser):
setattr(parser.values, option.dest, value)
# Check if we leave the box of size H, and
# update/shift the box along the corresponding
# axis; ra and rb are the two subsequent points
# before and after a move
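# A crossing is detected with a minimum-image style criterion: a jump of
# more than half the box length along an axis means the particle wrapped.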
def check_box (H, ra, rb, box):
d = [0 for col in range(3)]
for i in range(3):
# null means infinity
if H[i] == 0.:
continue;
d[i] = 2. * (ra[i] - rb[i])
if H[i] < math.fabs(d[i]):
if debug:
print 'Switching from box %i along %i' % (box[i], i)
box[i] = box[i] + d[i] / math.fabs(d[i])
if debug:
print 'To box %i' % (box[i])
if debug:
print 'done check_box()'
return;
# Command line parser
from optparse import OptionParser
parser = OptionParser()
parser.add_option("-f", "--file", help="file name with data for eta", metavar="FILE", type="string", dest="filename")
parser.add_option("-k", "--line-from", help="track atoms from given line (0,1,...)", metavar="VAL", type="int", dest="k")
parser.add_option("-n", "--number-of-lines", help="track given number of lines", metavar="VAL", type="int", dest="n")
parser.add_option("-l", "--lines", help="track atoms from comma-separated lines", metavar="VAL", type="int", dest="l")
parser.add_option("-H","--box-size", action="callback", callback=store_value, metavar="Hx Hy Hz", type="int", nargs=3, dest="H")
parser.add_option("-a","--all",action="store_true",help="whether to make a track file for each particle",dest="do_all")
# Grab options
(options, args) = parser.parse_args()
if options.filename:
filename = options.filename
else:
print "The file name is missing (-f/--file)"
sys.exit()
if options.do_all:
do_all=True
else:
do_all=False
if options.k:
k = options.k
else:
k = 0;
if options.n:
n = options.n
else:
n = 0;
if options.l:
print "Option not impelemnted, try -k and -n."
sys.exit()
# Box size
if options.H:
H = options.H
else:
# zero means infinity
H = [0 for col in range(3)]
##########################
### Script starts here ###
##########################
debug = False
#debug = True
# Running vars
nline=iline = itime = size = natoms = 0
xo = yo = zo = 0.0;
# to calculate number of itimes for the dimension of the matrix
for line in fileinput.input(filename):
nline=nline+1
ntime=(nline)/(natoms+2)
# Time and centre of a 'molecule' (natoms beads) position
time = [];
rx = [];
ry = [];
rz = [];
# Positions of tracking atoms
# set all to zero
Rx = [[0 for col in range(ntime)] for row in range(n)]
Ry = [[0 for col in range(ntime)] for row in range(n)]
Rz = [[0 for col in range(ntime)] for row in range(n)]
# Integer denoting the box with the traced atoms
# Start from 0,0,0 and shift if leaving the box
# in one of three directions
box = [[0 for col in range(3)] for row in range(n)]
# Check if we have PBC
if H[0] != 0 or H[1] != 0 or H[2] != 0:
PBC = True
else:
PBC = False
if do_all:
arr_data=np.zeros((nline, 5)) #this is a bit larger than necessary
arrcnt=0
print >> sys.stderr, "set up numpy array w. shape: ", arr_data.shape
# Parse the input file
for line in fileinput.input(filename):
if debug:
print '***************************'
print 'line %i: %s' % (iline, line)
print 'itime = %i, size=%i,' % (itime, size)
print ' => %i, %i' % (iline/(size+2), iline%(size+2))
# if a line with the number of atoms
if iline % (size + 2) == 0:
tokens = line.split()
if iline == 0:
size = int(tokens[0])
if n == 0:
n = size;
else:
# FIXME: do same for all splits etc?
size_read = 0
try:
size_read = int(tokens[0])
except ValueError:
print '# *** Exception ***'
print '# Error reading line %i' % (iline)
print '# Cannot convert to integer, I give up reading the file'
print '# The data is analyzed only up to the above line'
print '# *** End of exception message ***'
print '#'
break
if size_read != size:
print 'Error reading line %i, wrong number of atoms, %i, expected %s.' % (iline, size, tokens[0])
sys.exit()
# if a comment line, has time in ps if produced by BD_BOX
elif iline % (size + 2) == 1 or iline == 1:
if debug:
print 'Averaging: iline =%i, itime = %i, size=%i => %i' % (iline, itime, size, iline/(size+2))
if do_all==False and iline != 1:
if natoms != n:
print 'Internal error, number of atoms read %i and expected %i are not the same :(' % (natoms, n)
sys.exit()
rx.append (xo/(natoms));
ry.append (yo/(natoms));
rz.append (zo/(natoms));
itime = itime + 1
natoms = 0;
xo = yo = zo = 0.0;
tokens = line.split();
if len(tokens) > 3:
# this is if created by BD_BOX
time.append ( (float(tokens[3])) )
else:
time.append ( float(itime) )
elif do_all:
if iline%1000000==0:
print >> sys.stderr, "parsing do_all line %d"%(iline)
tokens=line.split()
arr_data[arrcnt][3]=time[-1]
arr_data[arrcnt][4]=float((iline % (size+2))-2)
for ii in range(3):
arr_data[arrcnt][ii]=float(tokens[ii+1])
arrcnt+=1
# if atom's position == lines of interest
elif do_all==False and (iline - itime * (size + 2) - 2 >= k and iline - itime * (size + 2) - 2 < k + n):
natoms = natoms + 1
if debug:
print 'line %i, atom %i' % (iline, natoms)
tokens = line.split()
Rx[natoms-1][itime] = float(tokens[1])
Ry[natoms-1][itime] = float(tokens[2])
Rz[natoms-1][itime] = float(tokens[3])
if itime == 0:
xo += float(tokens[1])
yo += float(tokens[2])
zo += float(tokens[3])
elif itime > 0:
r1 = [Rx[natoms-1][itime-1], Ry[natoms-1][itime-1], Rz[natoms-1][itime-1]]
r2 = [Rx[natoms-1][itime], Ry[natoms-1][itime], Rz[natoms-1][itime]]
b = box[natoms-1]
if debug:
print box
check_box(H, r1, r2, b)
box[natoms-1] = b;
if debug:
print box
print
xo += b[0] * H[0] + float(tokens[1])
yo += b[1] * H[1] + float(tokens[2])
zo += b[2] * H[2] + float(tokens[3])
iline += 1
b=np.zeros(3)
if do_all==True:
print >> sys.stderr, "now doing do_all loop"
for ii in range(size):
print >> sys.stderr, "processing file %d of %d"%(ii+1,size)
fn_out=filename.replace(".xyz","")+".mol%d.dat"%(ii)
fn_out=os.path.basename(fn_out)
arr_ii=arr_data[arr_data[:,4]==float(ii)]
for row_idx,row in enumerate(arr_ii[:]):
if row_idx>0:
r1=r2[:]
r2=row[:3]
check_box(H, r1,r2,b)
r2+=b*H
else:
r2=row[:3]
ref=r2[:]
dr2=np.sum((arr_ii[:,:3]-ref)**2,axis=1)
dr=np.sqrt(dr2)
Deff=10. * dr2 / (6. * arr_ii[:,3])
		Deff[0]=0. # zero out the first point (avoids the NaN from the divide by zero at t=0)
f_out=open(fn_out,"w")
t = datetime.datetime.now()
print >> f_out, '# Created by analyze-xyz.py on %s at %s from %s' % (t.strftime("%d-%m-%Y"), t.strftime("%H:%M:%S %Z"), filename)
		print >> f_out, '# System consists of %i atoms' % (size)
if PBC:
print >> f_out, '# Periodic boundary applied with box size (%g, %g, %g)' % (H[0], H[1], H[2])
print >> f_out, '# Tracking %i bead(s) starting from %ith' % (1, ii)
print >> f_out, '# time (ps) rx (A) ry (A) rz (A) dr (A) dr2 (A^2) D (mum^2/s)'
for row,dr_i,dr2_i,Deff_i in zip(arr_ii[:],dr,dr2,Deff):
print >> f_out, ' %f %e %e %e %e %e %e' % (row[3], row[0], row[1], row[2], dr_i, dr2_i, Deff_i )
f_out.close()
sys.exit()
# Add the last step to positions
if natoms != n:
print 'Internal error, number of atoms read %i and expected %i are not the same :(' % (natoms, n)
sys.exit()
else:
rx.append (xo/(n));
ry.append (yo/(n));
rz.append (zo/(n));
# reference point (t = itime = 0)
x0 = rx[0]
y0 = ry[0]
z0 = rz[0]
# Save info lines to the file
t = datetime.datetime.now()
print '# Created by analyze-xyz.py on %s at %s from %s' % (t.strftime("%d-%m-%Y"), t.strftime("%H:%M:%S %Z"), filename)
print '# System consists of %i atoms' % (size)
if PBC:
print '# Periodic boundary applied with box size (%g, %g, %g)' % (H[0], H[1], H[2])
print '# Tracking %i bead(s) starting from %ith' % (n, k)
print '# time (ps) rx (A) ry (A) rz (A) dr (A) dr2 (A^2) D (mum^2/s)'
# run over time and calculate dr, msd, etc
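# Note added for clarity (not in the original script): Deff below follows the
# 3-D Einstein relation D = <dr^2> / (6 t); the prefactor 10 is the unit
# conversion the author uses to report the last column in mum^2/s.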
for j in range(0, len(time), 1):
dr2 = math.pow (x0 - rx[j], 2) + math.pow (y0 - ry[j], 2) + math.pow (z0 - rz[j], 2)
dr = math.sqrt (dr2)
if j != 0:
Deff = 10. * dr2 / (6. * time[j])
else:
Deff = 0.0;
print ' %f %e %e %e %e %e %e' % (time[j], rx[j], ry[j], rz[j], dr, dr2, Deff )
| gpl-3.0 |
caesar2164/edx-platform | openedx/core/djangoapps/credit/utils.py | 130 | 1202 | """
Utilities for the credit app.
"""
from xmodule.modulestore import ModuleStoreEnum
from xmodule.modulestore.django import modulestore
def get_course_blocks(course_key, category):
"""
Retrieve all XBlocks in the course for a particular category.
Returns only XBlocks that are published and haven't been deleted.
"""
# Note: we need to check if found components have been orphaned
# due to a bug in split modulestore (PLAT-799). Once that bug
# is resolved, we can skip the `_is_in_course_tree()` check entirely.
return [
block for block in modulestore().get_items(
course_key,
qualifiers={"category": category},
revision=ModuleStoreEnum.RevisionOption.published_only,
)
if _is_in_course_tree(block)
]
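# Illustrative usage sketch (not part of the original module; the course key and
# block category below are hypothetical):
#
#   from opaque_keys.edx.keys import CourseKey
#   course_key = CourseKey.from_string("course-v1:edX+DemoX+Demo_Course")
#   icrv_blocks = get_course_blocks(course_key, "edx-reverification-block")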
def _is_in_course_tree(block):
"""
Check that the XBlock is in the course tree.
It's possible that the XBlock is not in the course tree
if its parent has been deleted and is now an orphan.
"""
ancestor = block.get_parent()
while ancestor is not None and ancestor.location.category != "course":
ancestor = ancestor.get_parent()
return ancestor is not None
| agpl-3.0 |
cetic/ansible | lib/ansible/module_utils/facts/virtual/freebsd.py | 135 | 1525 | # This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible.module_utils.facts.virtual.base import Virtual, VirtualCollector
class FreeBSDVirtual(Virtual):
"""
This is a FreeBSD-specific subclass of Virtual. It defines
- virtualization_type
- virtualization_role
"""
platform = 'FreeBSD'
def get_virtual_facts(self):
virtual_facts = {}
# Set empty values as default
virtual_facts['virtualization_type'] = ''
virtual_facts['virtualization_role'] = ''
if os.path.exists('/dev/xen/xenstore'):
virtual_facts['virtualization_type'] = 'xen'
virtual_facts['virtualization_role'] = 'guest'
return virtual_facts
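# Illustrative note (not part of the original file): on a Xen guest the method
# above would return, roughly:
#
#   FreeBSDVirtual(module=None).get_virtual_facts()
#   # -> {'virtualization_type': 'xen', 'virtualization_role': 'guest'}
#
# The collector class below wires this subclass in through _fact_class.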
class FreeBSDVirtualCollector(VirtualCollector):
_fact_class = FreeBSDVirtual
_platform = 'FreeBSD'
| gpl-3.0 |
LedgerHQ/blue-loader-python | ledgerblue/commTCP.py | 1 | 2615 | """
*******************************************************************************
* Ledger Blue
* (c) 2019 Ledger
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
********************************************************************************
"""
from .commException import CommException
from binascii import hexlify
import socket
import struct
class DongleServer(object):
def __init__(self, server, port, debug=False):
self.server = server
self.port = port
self.debug = debug
self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.opened = True
try:
self.socket.connect((self.server, self.port))
except:
raise CommException("Proxy connection failed")
def exchange(self, apdu, timeout=20000):
def send_apdu(apdu):
if self.debug:
print("=> %s" % hexlify(apdu))
self.socket.send(struct.pack(">I", len(apdu)))
self.socket.send(apdu)
def get_data():
size = struct.unpack(">I", self.socket.recv(4))[0]
response = self.socket.recv(size)
sw = struct.unpack(">H", self.socket.recv(2))[0]
if self.debug:
print("<= %s%.2x" % (hexlify(response), sw))
return (sw, response)
send_apdu(apdu)
(sw, response) = get_data()
if sw == 0x9000:
return bytearray(response)
else:
# handle the get response case:
# When more data is available, the chip sends 0x61XX
# So 0x61xx as a SW must not be interpreted as an error
if (sw & 0xFF00) != 0x6100:
raise CommException("Invalid status %04x" % sw, sw)
else:
while (sw & 0xFF00) == 0x6100:
send_apdu(bytes.fromhex("00c0000000")) # GET RESPONSE
(sw, data) = get_data()
response += data
# Check that the last received SW is indeed 0x9000
if sw == 0x9000:
return bytearray(response)
# In any other case return an exception
raise CommException("Invalid status %04x" % sw, sw)
def apduMaxDataSize(self):
return 240
def close(self):
try:
self.socket.close()
self.socket = None
except:
pass
self.opened = False
def getDongle(server="127.0.0.1", port=9999, debug=False):
return DongleServer(server, port, debug)
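# Minimal usage sketch (illustrative only; assumes a TCP APDU proxy such as a
# Speculos emulator listening on the default 127.0.0.1:9999, and a hypothetical
# APDU):
#
#   dongle = getDongle(debug=True)
#   response = dongle.exchange(bytes.fromhex("e0c4000000"))
#   dongle.close()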
| apache-2.0 |
CSC301H-Fall2013/JuakStore | Storefront/juakstore/juakregister/admin.py | 1 | 1634 | from django.contrib import admin
from django.contrib.sites.models import RequestSite
from django.contrib.sites.models import Site
from django.utils.translation import ugettext_lazy as _
from juakstore.juakregister.models import RegistrationProfile
class RegistrationAdmin(admin.ModelAdmin):
actions = ['activate_users', 'resend_activation_email']
list_display = ('user', 'activation_key_expired')
raw_id_fields = ['user']
search_fields = ('user__username', 'user__first_name', 'user__last_name')
def activate_users(self, request, queryset):
"""
        Activates the selected users, if they are not already
activated.
"""
for profile in queryset:
RegistrationProfile.objects.activate_user(profile.activation_key)
activate_users.short_description = _("Activate users")
def resend_activation_email(self, request, queryset):
"""
Re-sends activation emails for the selected users.
Note that this will *only* send activation emails for users
who are eligible to activate; emails will not be sent to users
whose activation keys have expired or who have already
activated.
"""
if Site._meta.installed:
site = Site.objects.get_current()
else:
site = RequestSite(request)
for profile in queryset:
if not profile.activation_key_expired():
profile.send_activation_email(site)
resend_activation_email.short_description = _("Re-send activation emails")
admin.site.register(RegistrationProfile, RegistrationAdmin)
| mit |
carsongee/edx-platform | cms/djangoapps/contentstore/views/public.py | 41 | 2264 | """
Public views
"""
from django_future.csrf import ensure_csrf_cookie
from django.core.context_processors import csrf
from django.core.urlresolvers import reverse
from django.shortcuts import redirect
from django.conf import settings
from edxmako.shortcuts import render_to_response
from external_auth.views import (ssl_login_shortcut, ssl_get_cert_from_request,
redirect_with_get)
from microsite_configuration import microsite
__all__ = ['signup', 'login_page', 'howitworks']
@ensure_csrf_cookie
def signup(request):
"""
Display the signup form.
"""
csrf_token = csrf(request)['csrf_token']
if request.user.is_authenticated():
return redirect('/course/')
if settings.FEATURES.get('AUTH_USE_CERTIFICATES_IMMEDIATE_SIGNUP'):
# Redirect to course to login to process their certificate if SSL is enabled
# and registration is disabled.
return redirect_with_get('login', request.GET, False)
return render_to_response('register.html', {'csrf': csrf_token})
@ssl_login_shortcut
@ensure_csrf_cookie
def login_page(request):
"""
Display the login form.
"""
csrf_token = csrf(request)['csrf_token']
if (settings.FEATURES['AUTH_USE_CERTIFICATES'] and
ssl_get_cert_from_request(request)):
# SSL login doesn't require a login view, so redirect
# to course now that the user is authenticated via
# the decorator.
next_url = request.GET.get('next')
if next_url:
return redirect(next_url)
else:
return redirect('/course/')
if settings.FEATURES.get('AUTH_USE_CAS'):
# If CAS is enabled, redirect auth handling to there
return redirect(reverse('cas-login'))
return render_to_response(
'login.html',
{
'csrf': csrf_token,
'forgot_password_link': "//{base}/login#forgot-password-modal".format(base=settings.LMS_BASE),
'platform_name': microsite.get_value('platform_name', settings.PLATFORM_NAME),
}
)
def howitworks(request):
"Proxy view"
if request.user.is_authenticated():
return redirect('/course/')
else:
return render_to_response('howitworks.html', {})
| agpl-3.0 |
chiamingyen/pygroup | wsgi/static/Brython2.2.0rc0-20140913-093500/Lib/unittest/test/test_program.py | 738 | 10833 | import io
import os
import sys
import unittest
class Test_TestProgram(unittest.TestCase):
def test_discovery_from_dotted_path(self):
loader = unittest.TestLoader()
tests = [self]
expectedPath = os.path.abspath(os.path.dirname(unittest.test.__file__))
self.wasRun = False
def _find_tests(start_dir, pattern):
self.wasRun = True
self.assertEqual(start_dir, expectedPath)
return tests
loader._find_tests = _find_tests
suite = loader.discover('unittest.test')
self.assertTrue(self.wasRun)
self.assertEqual(suite._tests, tests)
# Horrible white box test
def testNoExit(self):
result = object()
test = object()
class FakeRunner(object):
def run(self, test):
self.test = test
return result
runner = FakeRunner()
oldParseArgs = unittest.TestProgram.parseArgs
def restoreParseArgs():
unittest.TestProgram.parseArgs = oldParseArgs
unittest.TestProgram.parseArgs = lambda *args: None
self.addCleanup(restoreParseArgs)
def removeTest():
del unittest.TestProgram.test
unittest.TestProgram.test = test
self.addCleanup(removeTest)
program = unittest.TestProgram(testRunner=runner, exit=False, verbosity=2)
self.assertEqual(program.result, result)
self.assertEqual(runner.test, test)
self.assertEqual(program.verbosity, 2)
class FooBar(unittest.TestCase):
def testPass(self):
assert True
def testFail(self):
assert False
class FooBarLoader(unittest.TestLoader):
"""Test loader that returns a suite containing FooBar."""
def loadTestsFromModule(self, module):
return self.suiteClass(
[self.loadTestsFromTestCase(Test_TestProgram.FooBar)])
def test_NonExit(self):
program = unittest.main(exit=False,
argv=["foobar"],
testRunner=unittest.TextTestRunner(stream=io.StringIO()),
testLoader=self.FooBarLoader())
self.assertTrue(hasattr(program, 'result'))
def test_Exit(self):
self.assertRaises(
SystemExit,
unittest.main,
argv=["foobar"],
testRunner=unittest.TextTestRunner(stream=io.StringIO()),
exit=True,
testLoader=self.FooBarLoader())
def test_ExitAsDefault(self):
self.assertRaises(
SystemExit,
unittest.main,
argv=["foobar"],
testRunner=unittest.TextTestRunner(stream=io.StringIO()),
testLoader=self.FooBarLoader())
class InitialisableProgram(unittest.TestProgram):
exit = False
result = None
verbosity = 1
defaultTest = None
testRunner = None
testLoader = unittest.defaultTestLoader
module = '__main__'
progName = 'test'
test = 'test'
def __init__(self, *args):
pass
RESULT = object()
class FakeRunner(object):
initArgs = None
test = None
raiseError = False
def __init__(self, **kwargs):
FakeRunner.initArgs = kwargs
if FakeRunner.raiseError:
FakeRunner.raiseError = False
raise TypeError
def run(self, test):
FakeRunner.test = test
return RESULT
class TestCommandLineArgs(unittest.TestCase):
def setUp(self):
self.program = InitialisableProgram()
self.program.createTests = lambda: None
FakeRunner.initArgs = None
FakeRunner.test = None
FakeRunner.raiseError = False
def testVerbosity(self):
program = self.program
for opt in '-q', '--quiet':
program.verbosity = 1
program.parseArgs([None, opt])
self.assertEqual(program.verbosity, 0)
for opt in '-v', '--verbose':
program.verbosity = 1
program.parseArgs([None, opt])
self.assertEqual(program.verbosity, 2)
def testBufferCatchFailfast(self):
program = self.program
for arg, attr in (('buffer', 'buffer'), ('failfast', 'failfast'),
('catch', 'catchbreak')):
if attr == 'catch' and not hasInstallHandler:
continue
short_opt = '-%s' % arg[0]
long_opt = '--%s' % arg
for opt in short_opt, long_opt:
setattr(program, attr, None)
program.parseArgs([None, opt])
self.assertTrue(getattr(program, attr))
for opt in short_opt, long_opt:
not_none = object()
setattr(program, attr, not_none)
program.parseArgs([None, opt])
self.assertEqual(getattr(program, attr), not_none)
def testWarning(self):
"""Test the warnings argument"""
# see #10535
class FakeTP(unittest.TestProgram):
def parseArgs(self, *args, **kw): pass
def runTests(self, *args, **kw): pass
warnoptions = sys.warnoptions[:]
try:
sys.warnoptions[:] = []
# no warn options, no arg -> default
self.assertEqual(FakeTP().warnings, 'default')
# no warn options, w/ arg -> arg value
self.assertEqual(FakeTP(warnings='ignore').warnings, 'ignore')
sys.warnoptions[:] = ['somevalue']
# warn options, no arg -> None
# warn options, w/ arg -> arg value
self.assertEqual(FakeTP().warnings, None)
self.assertEqual(FakeTP(warnings='ignore').warnings, 'ignore')
finally:
sys.warnoptions[:] = warnoptions
def testRunTestsRunnerClass(self):
program = self.program
program.testRunner = FakeRunner
program.verbosity = 'verbosity'
program.failfast = 'failfast'
program.buffer = 'buffer'
program.warnings = 'warnings'
program.runTests()
self.assertEqual(FakeRunner.initArgs, {'verbosity': 'verbosity',
'failfast': 'failfast',
'buffer': 'buffer',
'warnings': 'warnings'})
self.assertEqual(FakeRunner.test, 'test')
self.assertIs(program.result, RESULT)
def testRunTestsRunnerInstance(self):
program = self.program
program.testRunner = FakeRunner()
FakeRunner.initArgs = None
program.runTests()
# A new FakeRunner should not have been instantiated
self.assertIsNone(FakeRunner.initArgs)
self.assertEqual(FakeRunner.test, 'test')
self.assertIs(program.result, RESULT)
def testRunTestsOldRunnerClass(self):
program = self.program
FakeRunner.raiseError = True
program.testRunner = FakeRunner
program.verbosity = 'verbosity'
program.failfast = 'failfast'
program.buffer = 'buffer'
program.test = 'test'
program.runTests()
# If initialising raises a type error it should be retried
# without the new keyword arguments
self.assertEqual(FakeRunner.initArgs, {})
self.assertEqual(FakeRunner.test, 'test')
self.assertIs(program.result, RESULT)
def testCatchBreakInstallsHandler(self):
module = sys.modules['unittest.main']
original = module.installHandler
def restore():
module.installHandler = original
self.addCleanup(restore)
self.installed = False
def fakeInstallHandler():
self.installed = True
module.installHandler = fakeInstallHandler
program = self.program
program.catchbreak = True
program.testRunner = FakeRunner
program.runTests()
self.assertTrue(self.installed)
def _patch_isfile(self, names, exists=True):
def isfile(path):
return path in names
original = os.path.isfile
os.path.isfile = isfile
def restore():
os.path.isfile = original
self.addCleanup(restore)
def testParseArgsFileNames(self):
# running tests with filenames instead of module names
program = self.program
argv = ['progname', 'foo.py', 'bar.Py', 'baz.PY', 'wing.txt']
self._patch_isfile(argv)
program.createTests = lambda: None
program.parseArgs(argv)
# note that 'wing.txt' is not a Python file so the name should
# *not* be converted to a module name
expected = ['foo', 'bar', 'baz', 'wing.txt']
self.assertEqual(program.testNames, expected)
def testParseArgsFilePaths(self):
program = self.program
argv = ['progname', 'foo/bar/baz.py', 'green\\red.py']
self._patch_isfile(argv)
program.createTests = lambda: None
program.parseArgs(argv)
expected = ['foo.bar.baz', 'green.red']
self.assertEqual(program.testNames, expected)
def testParseArgsNonExistentFiles(self):
program = self.program
argv = ['progname', 'foo/bar/baz.py', 'green\\red.py']
self._patch_isfile([])
program.createTests = lambda: None
program.parseArgs(argv)
self.assertEqual(program.testNames, argv[1:])
def testParseArgsAbsolutePathsThatCanBeConverted(self):
cur_dir = os.getcwd()
program = self.program
def _join(name):
return os.path.join(cur_dir, name)
argv = ['progname', _join('foo/bar/baz.py'), _join('green\\red.py')]
self._patch_isfile(argv)
program.createTests = lambda: None
program.parseArgs(argv)
expected = ['foo.bar.baz', 'green.red']
self.assertEqual(program.testNames, expected)
def testParseArgsAbsolutePathsThatCannotBeConverted(self):
program = self.program
# even on Windows '/...' is considered absolute by os.path.abspath
argv = ['progname', '/foo/bar/baz.py', '/green/red.py']
self._patch_isfile(argv)
program.createTests = lambda: None
program.parseArgs(argv)
self.assertEqual(program.testNames, argv[1:])
# it may be better to use platform specific functions to normalise paths
# rather than accepting '.PY' and '\' as file separator on Linux / Mac
# it would also be better to check that a filename is a valid module
# identifier (we have a regex for this in loader.py)
# for invalid filenames should we raise a useful error rather than
# leaving the current error message (import of filename fails) in place?
if __name__ == '__main__':
unittest.main()
| gpl-2.0 |
antsmc2/mics | survey/views/location_weights.py | 2 | 3484 | from datetime import datetime
from django.utils.timezone import utc
from django.contrib import messages
from django.contrib.auth.decorators import permission_required, login_required
from django.http import HttpResponseRedirect
from django.shortcuts import render
from rapidsms.contrib.locations.models import LocationType, Location
from survey.forms.upload_csv_file import UploadWeightsForm
from survey.models import LocationWeight, LocationTypeDetails, UploadErrorLog, Survey
from survey.tasks import upload_task
from survey.views.location_widget import LocationWidget
from survey.utils.views_helper import contains_key
@permission_required('auth.can_view_batches')
def upload(request):
upload_form = UploadWeightsForm()
if request.method == 'POST':
upload_form = UploadWeightsForm(request.POST, request.FILES)
if upload_form.is_valid():
upload_task.delay(upload_form)
messages.warning(request, "Upload in progress. This could take a while.")
return HttpResponseRedirect('/locations/weights/upload/')
context = {'button_label': 'Upload', 'id': 'upload-location-weights-form',
'upload_form': upload_form, 'location_types': LocationType.objects.all(), 'range': range(3)}
return render(request, 'locations/weights/upload.html', context)
@login_required
@permission_required('auth.can_view_batches')
def list_weights(request):
location_weights = LocationWeight.objects.all()
surveys = Survey.objects.all()
survey = None
selected_location = None
params = request.GET
if contains_key(params, 'survey'):
survey = Survey.objects.get(id=params['survey'])
location_weights = location_weights.filter(survey=survey)
if contains_key(params, 'location'):
selected_location = Location.objects.get(id=params['location'])
location_weights = location_weights.filter(location=selected_location)
location_types = LocationTypeDetails.get_ordered_types().exclude(name__iexact="country")
context = {'location_weights': location_weights,
'location_types': location_types,
'location_data': LocationWidget(selected_location),
'surveys': surveys,
'selected_survey': survey,
'action': 'list_weights_page',
'request': request}
return render(request, 'locations/weights/index.html', context)
@permission_required('auth.can_view_batches')
def error_logs(request):
location_weights_error_logs = UploadErrorLog.objects.filter(model='WEIGHTS')
today = datetime.now().replace(tzinfo=utc).strftime('%Y-%m-%d')
selected_from_date = today
selected_to_date = today
params = request.GET
if params.get('from_date', None) and params.get('to_date', None):
selected_from_date = datetime.strptime(params['from_date']+ " 00:00", '%Y-%m-%d %H:%M').replace(tzinfo=utc)
selected_to_date = datetime.strptime(params['to_date']+ " 23:59", '%Y-%m-%d %H:%M').replace(tzinfo=utc)
location_weights_error_logs = location_weights_error_logs.filter(created__range=[selected_from_date,
selected_to_date])
context = {'error_logs': location_weights_error_logs, 'request': request,
'selected_from_date':selected_from_date, 'selected_to_date':selected_to_date}
return render(request, 'locations/weights/error_logs.html', context) | bsd-3-clause |
aarticianpc/greenpointtrees | src/oscar/apps/voucher/admin.py | 30 | 1031 | from django.contrib import admin
from oscar.core.loading import get_model
Voucher = get_model('voucher', 'Voucher')
VoucherApplication = get_model('voucher', 'VoucherApplication')
class VoucherAdmin(admin.ModelAdmin):
list_display = ('name', 'code', 'usage', 'num_basket_additions',
'num_orders', 'total_discount')
readonly_fields = ('num_basket_additions', 'num_orders', 'total_discount')
fieldsets = (
(None, {
'fields': ('name', 'code', 'usage', 'start_datetime',
'end_datetime')}),
('Benefit', {
'fields': ('offers',)}),
('Usage', {
'fields': ('num_basket_additions', 'num_orders',
'total_discount')}),
)
class VoucherApplicationAdmin(admin.ModelAdmin):
list_display = ('voucher', 'user', 'order', 'date_created')
readonly_fields = ('voucher', 'user', 'order')
admin.site.register(Voucher, VoucherAdmin)
admin.site.register(VoucherApplication, VoucherApplicationAdmin)
| mit |
Cressidai/robotframework-selenium2library | test/unit/locators/test_tableelementfinder.py | 71 | 6873 | import unittest
from Selenium2Library.locators import TableElementFinder
from mockito import *
class ElementFinderTests(unittest.TestCase):
def test_find_with_implicit_css_locator(self):
finder = TableElementFinder()
browser = mock()
when(browser).find_elements_by_css_selector("table#test1").thenReturn([])
finder.find(browser, "test1")
verify(browser).find_elements_by_css_selector("table#test1")
def test_find_with_css_selector(self):
finder = TableElementFinder()
browser = mock()
elements = self._make_mock_elements('table', 'table', 'table')
when(browser).find_elements_by_css_selector("table#test1").thenReturn(elements)
self.assertEqual(
finder.find(browser, "css=table#test1"),
elements[0])
verify(browser).find_elements_by_css_selector("table#test1")
def test_find_with_xpath_selector(self):
finder = TableElementFinder()
browser = mock()
elements = self._make_mock_elements('table', 'table', 'table')
when(browser).find_elements_by_xpath("//table[@id='test1']").thenReturn(elements)
self.assertEqual(
finder.find(browser, "xpath=//table[@id='test1']"),
elements[0])
verify(browser).find_elements_by_xpath("//table[@id='test1']")
def test_find_with_content_constraint(self):
finder = TableElementFinder()
browser = mock()
elements = self._make_mock_elements('td', 'td', 'td')
elements[1].text = 'hi'
when(browser).find_elements_by_css_selector("table#test1").thenReturn(elements)
self.assertEqual(
finder.find_by_content(browser, "test1", 'hi'),
elements[1])
verify(browser).find_elements_by_css_selector("table#test1")
def test_find_with_null_content_constraint(self):
finder = TableElementFinder()
browser = mock()
elements = self._make_mock_elements('td', 'td', 'td')
elements[1].text = 'hi'
when(browser).find_elements_by_css_selector("table#test1").thenReturn(elements)
self.assertEqual(
finder.find_by_content(browser, "test1", None),
elements[0])
verify(browser).find_elements_by_css_selector("table#test1")
def test_find_by_content_with_css_locator(self):
finder = TableElementFinder()
browser = mock()
when(browser).find_elements_by_css_selector("table#test1").thenReturn([])
finder.find_by_content(browser, "css=table#test1", 'hi')
verify(browser).find_elements_by_css_selector("table#test1")
def test_find_by_content_with_xpath_locator(self):
finder = TableElementFinder()
browser = mock()
when(browser).find_elements_by_xpath("//table[@id='test1']//*").thenReturn([])
finder.find_by_content(browser, "xpath=//table[@id='test1']", 'hi')
verify(browser).find_elements_by_xpath("//table[@id='test1']//*")
def test_find_by_header_with_css_locator(self):
finder = TableElementFinder()
browser = mock()
when(browser).find_elements_by_css_selector("table#test1 th").thenReturn([])
finder.find_by_header(browser, "css=table#test1", 'hi')
verify(browser).find_elements_by_css_selector("table#test1 th")
def test_find_by_header_with_xpath_locator(self):
finder = TableElementFinder()
browser = mock()
when(browser).find_elements_by_xpath("//table[@id='test1']//th").thenReturn([])
finder.find_by_header(browser, "xpath=//table[@id='test1']", 'hi')
verify(browser).find_elements_by_xpath("//table[@id='test1']//th")
def test_find_by_footer_with_css_locator(self):
finder = TableElementFinder()
browser = mock()
when(browser).find_elements_by_css_selector("table#test1 tfoot td").thenReturn([])
finder.find_by_footer(browser, "css=table#test1", 'hi')
verify(browser).find_elements_by_css_selector("table#test1 tfoot td")
def test_find_by_footer_with_xpath_locator(self):
finder = TableElementFinder()
browser = mock()
when(browser).find_elements_by_xpath("//table[@id='test1']//tfoot//td").thenReturn([])
finder.find_by_footer(browser, "xpath=//table[@id='test1']", 'hi')
verify(browser).find_elements_by_xpath("//table[@id='test1']//tfoot//td")
def test_find_by_row_with_css_locator(self):
finder = TableElementFinder()
browser = mock()
when(browser).find_elements_by_css_selector("table#test1 tr:nth-child(2)").thenReturn([])
finder.find_by_row(browser, "css=table#test1", 2, 'hi')
verify(browser).find_elements_by_css_selector("table#test1 tr:nth-child(2)")
def test_find_by_row_with_xpath_locator(self):
finder = TableElementFinder()
browser = mock()
when(browser).find_elements_by_xpath("//table[@id='test1']//tr[2]//*").thenReturn([])
finder.find_by_row(browser, "xpath=//table[@id='test1']", 2, 'hi')
verify(browser).find_elements_by_xpath("//table[@id='test1']//tr[2]//*")
def test_find_by_col_with_css_locator(self):
finder = TableElementFinder()
browser = mock()
when(browser).find_elements_by_css_selector("table#test1 tr td:nth-child(2)").thenReturn([])
when(browser).find_elements_by_css_selector("table#test1 tr th:nth-child(2)").thenReturn([])
finder.find_by_col(browser, "css=table#test1", 2, 'hi')
verify(browser).find_elements_by_css_selector("table#test1 tr td:nth-child(2)")
verify(browser).find_elements_by_css_selector("table#test1 tr th:nth-child(2)")
def test_find_by_col_with_xpath_locator(self):
finder = TableElementFinder()
browser = mock()
when(browser).find_elements_by_xpath("//table[@id='test1']//tr//*[self::td or self::th][2]").thenReturn([])
finder.find_by_col(browser, "xpath=//table[@id='test1']", 2, 'hi')
verify(browser).find_elements_by_xpath("//table[@id='test1']//tr//*[self::td or self::th][2]")
def _make_mock_elements(self, *tags):
elements = []
for tag in tags:
element = self._make_mock_element(tag)
elements.append(element)
return elements
def _make_mock_element(self, tag):
element = mock()
element.tag_name = tag
element.attributes = {}
element.text = None
def set_attribute(name, value):
element.attributes[name] = value
element.set_attribute = set_attribute
def get_attribute(name):
return element.attributes[name]
element.get_attribute = get_attribute
return element
| apache-2.0 |
songfj/calibre | src/calibre/gui2/store/stores/mobileread/store_dialog.py | 15 | 3670 | # -*- coding: utf-8 -*-
from __future__ import (unicode_literals, division, absolute_import, print_function)
__license__ = 'GPL 3'
__copyright__ = '2011, John Schember <[email protected]>'
__docformat__ = 'restructuredtext en'
from PyQt5.Qt import (Qt, QDialog, QIcon, QComboBox)
from calibre.gui2.store.stores.mobileread.adv_search_builder import AdvSearchBuilderDialog
from calibre.gui2.store.stores.mobileread.models import BooksModel
from calibre.gui2.store.stores.mobileread.store_dialog_ui import Ui_Dialog
class MobileReadStoreDialog(QDialog, Ui_Dialog):
def __init__(self, plugin, *args):
QDialog.__init__(self, *args)
self.setupUi(self)
self.plugin = plugin
self.search_query.initialize('store_mobileread_search')
self.search_query.setSizeAdjustPolicy(QComboBox.AdjustToMinimumContentsLengthWithIcon)
self.search_query.setMinimumContentsLength(25)
self.adv_search_button.setIcon(QIcon(I('search.png')))
self._model = BooksModel(self.plugin.get_book_list())
self.results_view.setModel(self._model)
self.total.setText('%s' % self.results_view.model().rowCount())
self.search_button.clicked.connect(self.do_search)
self.adv_search_button.clicked.connect(self.build_adv_search)
self.results_view.activated.connect(self.open_store)
self.results_view.model().total_changed.connect(self.update_book_total)
self.finished.connect(self.dialog_closed)
self.restore_state()
def do_search(self):
self.results_view.model().search(unicode(self.search_query.text()))
def open_store(self, index):
result = self.results_view.model().get_book(index)
if result:
self.plugin.open(self, result.detail_item)
def update_book_total(self, total):
self.total.setText('%s' % total)
def build_adv_search(self):
adv = AdvSearchBuilderDialog(self)
if adv.exec_() == QDialog.Accepted:
self.search_query.setText(adv.search_string())
def restore_state(self):
geometry = self.plugin.config.get('dialog_geometry', None)
if geometry:
self.restoreGeometry(geometry)
results_cwidth = self.plugin.config.get('dialog_results_view_column_width')
if results_cwidth:
for i, x in enumerate(results_cwidth):
if i >= self.results_view.model().columnCount():
break
self.results_view.setColumnWidth(i, x)
else:
for i in xrange(self.results_view.model().columnCount()):
self.results_view.resizeColumnToContents(i)
self.results_view.model().sort_col = self.plugin.config.get('dialog_sort_col', 0)
self.results_view.model().sort_order = self.plugin.config.get('dialog_sort_order', Qt.AscendingOrder)
self.results_view.model().sort(self.results_view.model().sort_col, self.results_view.model().sort_order)
self.results_view.header().setSortIndicator(self.results_view.model().sort_col, self.results_view.model().sort_order)
def save_state(self):
self.plugin.config['dialog_geometry'] = bytearray(self.saveGeometry())
self.plugin.config['dialog_results_view_column_width'] = [self.results_view.columnWidth(i) for i in range(self.results_view.model().columnCount())]
self.plugin.config['dialog_sort_col'] = self.results_view.model().sort_col
self.plugin.config['dialog_sort_order'] = self.results_view.model().sort_order
def dialog_closed(self, result):
self.save_state()
| gpl-3.0 |
andymckay/zamboni | apps/amo/decorators.py | 1 | 5996 | import datetime
import functools
import json
from django import http
from django.conf import settings
from django.core.exceptions import PermissionDenied
import commonware.log
from . import models as context
from .utils import JSONEncoder, redirect_for_login
from amo import get_user, set_user
from mkt.users.utils import get_task_user
task_log = commonware.log.getLogger('z.task')
def login_required(f=None, redirect=True):
"""
Like Django's login_required, but with to= instead of next=.
If redirect=False then we return 401 instead of redirecting to the
login page. That's nice for ajax views.
"""
def decorator(func):
@functools.wraps(func)
def wrapper(request, *args, **kw):
if request.user.is_authenticated():
return func(request, *args, **kw)
else:
if redirect:
return redirect_for_login(request)
else:
return http.HttpResponse(status=401)
return wrapper
if f:
return decorator(f)
else:
return decorator
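# Illustrative usage (hypothetical views, not part of this module):
#
#   @login_required
#   def profile(request):
#       ...
#
#   @login_required(redirect=False)   # return 401 instead of redirecting (ajax)
#   def ajax_endpoint(request):
#       ...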
def post_required(f):
@functools.wraps(f)
def wrapper(request, *args, **kw):
if request.method != 'POST':
return http.HttpResponseNotAllowed(['POST'])
else:
return f(request, *args, **kw)
return wrapper
def permission_required(app, action):
def decorator(f):
@functools.wraps(f)
@login_required
def wrapper(request, *args, **kw):
from mkt.access import acl
if acl.action_allowed(request, app, action):
return f(request, *args, **kw)
else:
raise PermissionDenied
return wrapper
return decorator
def any_permission_required(pairs):
"""
If any permission passes, call the function. Otherwise raise 403.
"""
def decorator(f):
@functools.wraps(f)
@login_required
def wrapper(request, *args, **kw):
from mkt.access import acl
for app, action in pairs:
if acl.action_allowed(request, app, action):
return f(request, *args, **kw)
raise PermissionDenied
return wrapper
return decorator
def json_response(response, has_trans=False, status_code=200):
"""
Return a response as JSON. If you are just wrapping a view,
then use the json_view decorator.
"""
if has_trans:
response = json.dumps(response, cls=JSONEncoder)
else:
response = json.dumps(response)
return http.HttpResponse(response,
content_type='application/json',
status=status_code)
def json_view(f=None, has_trans=False, status_code=200):
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kw):
response = func(*args, **kw)
if isinstance(response, http.HttpResponse):
return response
else:
return json_response(response, has_trans=has_trans,
status_code=status_code)
return wrapper
if f:
return decorator(f)
else:
return decorator
json_view.error = lambda s: http.HttpResponseBadRequest(
json.dumps(s), content_type='application/json')
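# Illustrative usage (hypothetical views, not part of this module):
#
#   @json_view
#   def stats(request):
#       return {'ok': True}              # serialized to an application/json response
#
#   @json_view(status_code=202)
#   def queued(request):
#       return {'queued': True}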
class skip_cache(object):
def __init__(self, f):
self.f = f
functools.update_wrapper(self, f)
def __call__(self, *args, **kw):
with context.skip_cache():
return self.f(*args, **kw)
def __repr__(self):
"<SkipCache %s>" % (self.f,)
def __get__(self, obj, typ=None):
return skip_cache(self.f.__get__(obj, typ))
def use_master(f):
@functools.wraps(f)
def wrapper(*args, **kw):
with context.use_master():
return f(*args, **kw)
return wrapper
def write(f):
return use_master(skip_cache(f))
def set_modified_on(f):
"""
Will update the modified timestamp on the provided objects when the wrapped
    function exits successfully (returns a truthy value). If the function
returns a dict, it will also use that dict as additional keyword arguments
to update on the provided objects.
Looks up objects defined in the set_modified_on kwarg.
"""
from amo.tasks import set_modified_on_object
@functools.wraps(f)
def wrapper(*args, **kw):
objs = kw.pop('set_modified_on', None)
result = f(*args, **kw)
if objs and result:
extra_kwargs = result if isinstance(result, dict) else {}
for obj in objs:
task_log.info('Delaying setting modified on object: %s, %s' %
(obj.__class__.__name__, obj.pk))
set_modified_on_object.apply_async(
args=[obj], kwargs=extra_kwargs,
eta=datetime.datetime.now() +
datetime.timedelta(seconds=settings.NFS_LAG_DELAY))
return result
return wrapper
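# Illustrative usage (hypothetical task and objects, not part of this module):
#
#   @set_modified_on
#   def resize_icon(src, dst, **kw):
#       ...
#       return {'icon_hash': 'abc123'}   # extra fields to update on the objects
#
#   # callers pass the objects to touch through the reserved kwarg:
#   resize_icon(src, dst, set_modified_on=[addon])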
def allow_cross_site_request(f):
"""Allow other sites to access this resource, see
https://developer.mozilla.org/en/HTTP_access_control."""
@functools.wraps(f)
def wrapper(request, *args, **kw):
response = f(request, *args, **kw)
"""If Access-Control-Allow-Credentials isn't set, the browser won't
        return data that requires cookies to see. This is a good thing, let's keep
it that way."""
response['Access-Control-Allow-Origin'] = '*'
response['Access-Control-Allow-Methods'] = 'GET, OPTIONS'
return response
return wrapper
def set_task_user(f):
"""Sets the user to be the task user, then unsets it."""
@functools.wraps(f)
def wrapper(*args, **kw):
old_user = get_user()
set_user(get_task_user())
try:
result = f(*args, **kw)
finally:
set_user(old_user)
return result
return wrapper
| bsd-3-clause |
Icenowy/MissionPlanner | Lib/code.py | 62 | 10499 | """Utilities needed to emulate Python's interactive interpreter.
"""
# Inspired by similar code by Jeff Epler and Fredrik Lundh.
import sys
import traceback
from codeop import CommandCompiler, compile_command
__all__ = ["InteractiveInterpreter", "InteractiveConsole", "interact",
"compile_command"]
def softspace(file, newvalue):
oldvalue = 0
try:
oldvalue = file.softspace
except AttributeError:
pass
try:
file.softspace = newvalue
except (AttributeError, TypeError):
# "attribute-less object" or "read-only attributes"
pass
return oldvalue
class InteractiveInterpreter:
"""Base class for InteractiveConsole.
This class deals with parsing and interpreter state (the user's
namespace); it doesn't deal with input buffering or prompting or
input file naming (the filename is always passed in explicitly).
"""
def __init__(self, locals=None):
"""Constructor.
The optional 'locals' argument specifies the dictionary in
which code will be executed; it defaults to a newly created
dictionary with key "__name__" set to "__console__" and key
"__doc__" set to None.
"""
if locals is None:
locals = {"__name__": "__console__", "__doc__": None}
self.locals = locals
self.compile = CommandCompiler()
def runsource(self, source, filename="<input>", symbol="single"):
"""Compile and run some source in the interpreter.
Arguments are as for compile_command().
        One of several things can happen:
1) The input is incorrect; compile_command() raised an
exception (SyntaxError or OverflowError). A syntax traceback
will be printed by calling the showsyntaxerror() method.
2) The input is incomplete, and more input is required;
compile_command() returned None. Nothing happens.
3) The input is complete; compile_command() returned a code
object. The code is executed by calling self.runcode() (which
also handles run-time exceptions, except for SystemExit).
The return value is True in case 2, False in the other cases (unless
an exception is raised). The return value can be used to
decide whether to use sys.ps1 or sys.ps2 to prompt the next
line.
"""
try:
code = self.compile(source, filename, symbol)
except (OverflowError, SyntaxError, ValueError):
# Case 1
self.showsyntaxerror(filename)
return False
if code is None:
# Case 2
return True
# Case 3
self.runcode(code)
return False
def runcode(self, code):
"""Execute a code object.
When an exception occurs, self.showtraceback() is called to
display a traceback. All exceptions are caught except
SystemExit, which is reraised.
A note about KeyboardInterrupt: this exception may occur
elsewhere in this code, and may not always be caught. The
caller should be prepared to deal with it.
"""
try:
exec code in self.locals
except SystemExit:
raise
except:
self.showtraceback()
else:
if softspace(sys.stdout, 0):
print
def showsyntaxerror(self, filename=None):
"""Display the syntax error that just occurred.
This doesn't display a stack trace because there isn't one.
If a filename is given, it is stuffed in the exception instead
of what was there before (because Python's parser always uses
"<string>" when reading from a string).
The output is written by self.write(), below.
"""
type, value, sys.last_traceback = sys.exc_info()
sys.last_type = type
sys.last_value = value
if filename and type is SyntaxError:
# Work hard to stuff the correct filename in the exception
try:
msg, (dummy_filename, lineno, offset, line) = value
except:
# Not the format we expect; leave it alone
pass
else:
# Stuff in the right filename
value = SyntaxError(msg, (filename, lineno, offset, line))
sys.last_value = value
list = traceback.format_exception_only(type, value)
map(self.write, list)
def showtraceback(self):
"""Display the exception that just occurred.
We remove the first stack item because it is our own code.
The output is written by self.write(), below.
"""
try:
type, value, tb = sys.exc_info()
sys.last_type = type
sys.last_value = value
sys.last_traceback = tb
tblist = traceback.extract_tb(tb)
del tblist[:1]
list = traceback.format_list(tblist)
if list:
list.insert(0, "Traceback (most recent call last):\n")
list[len(list):] = traceback.format_exception_only(type, value)
finally:
tblist = tb = None
map(self.write, list)
def write(self, data):
"""Write a string.
The base implementation writes to sys.stderr; a subclass may
replace this with a different implementation.
"""
sys.stderr.write(data)
class InteractiveConsole(InteractiveInterpreter):
"""Closely emulate the behavior of the interactive Python interpreter.
This class builds on InteractiveInterpreter and adds prompting
using the familiar sys.ps1 and sys.ps2, and input buffering.
"""
def __init__(self, locals=None, filename="<console>"):
"""Constructor.
The optional locals argument will be passed to the
InteractiveInterpreter base class.
The optional filename argument should specify the (file)name
of the input stream; it will show up in tracebacks.
"""
InteractiveInterpreter.__init__(self, locals)
self.filename = filename
self.resetbuffer()
def resetbuffer(self):
"""Reset the input buffer."""
self.buffer = []
def interact(self, banner=None):
"""Closely emulate the interactive Python console.
        The optional banner argument specifies the banner to print
before the first interaction; by default it prints a banner
similar to the one printed by the real Python interpreter,
followed by the current class name in parentheses (so as not
to confuse this with the real interpreter -- since it's so
close!).
"""
try:
sys.ps1
except AttributeError:
sys.ps1 = ">>> "
try:
sys.ps2
except AttributeError:
sys.ps2 = "... "
cprt = 'Type "help", "copyright", "credits" or "license" for more information.'
if banner is None:
self.write("Python %s on %s\n%s\n(%s)\n" %
(sys.version, sys.platform, cprt,
self.__class__.__name__))
else:
self.write("%s\n" % str(banner))
more = 0
while 1:
try:
if more:
prompt = sys.ps2
else:
prompt = sys.ps1
try:
line = self.raw_input(prompt)
# Can be None if sys.stdin was redefined
encoding = getattr(sys.stdin, "encoding", None)
if encoding and not isinstance(line, unicode):
line = line.decode(encoding)
except EOFError:
self.write("\n")
break
else:
more = self.push(line)
except KeyboardInterrupt:
self.write("\nKeyboardInterrupt\n")
self.resetbuffer()
more = 0
def push(self, line):
"""Push a line to the interpreter.
The line should not have a trailing newline; it may have
internal newlines. The line is appended to a buffer and the
interpreter's runsource() method is called with the
concatenated contents of the buffer as source. If this
indicates that the command was executed or invalid, the buffer
is reset; otherwise, the command is incomplete, and the buffer
is left as it was after the line was appended. The return
value is 1 if more input is required, 0 if the line was dealt
with in some way (this is the same as runsource()).
"""
self.buffer.append(line)
source = "\n".join(self.buffer)
more = self.runsource(source, self.filename)
if not more:
self.resetbuffer()
return more
def raw_input(self, prompt=""):
"""Write a prompt and read a line.
The returned line does not include the trailing newline.
When the user enters the EOF key sequence, EOFError is raised.
The base implementation uses the built-in function
raw_input(); a subclass may replace this with a different
implementation.
"""
return raw_input(prompt)
def interact(banner=None, readfunc=None, local=None):
"""Closely emulate the interactive Python interpreter.
This is a backwards compatible interface to the InteractiveConsole
class. When readfunc is not specified, it attempts to import the
readline module to enable GNU readline if it is available.
Arguments (all optional, all default to None):
banner -- passed to InteractiveConsole.interact()
readfunc -- if not None, replaces InteractiveConsole.raw_input()
local -- passed to InteractiveInterpreter.__init__()
"""
console = InteractiveConsole(local)
if readfunc is not None:
console.raw_input = readfunc
else:
try:
import readline
except ImportError:
pass
console.interact(banner)
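# Illustrative embedding sketch (not part of this module): drop into an
# interactive console that can see selected application state, e.g.
#
#   local_vars = {'answer': 42}
#   console = InteractiveConsole(locals=local_vars)
#   console.interact("Debug console -- try 'answer'")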
if __name__ == "__main__":
interact()
| gpl-3.0 |
remvo/zstt-ros | src/robocup/src/object_detector.py | 1 | 18205 | #!/usr/bin/env python
import math
from math import hypot
import cv2
import numpy as np
import rospy
from cv_bridge import CvBridge, CvBridgeError
from numpy.linalg import norm
from sensor_msgs.msg import Image
from std_msgs.msg import Bool, Float32MultiArray, Int32MultiArray, String
GREEN_BGR = (0, 255, 0)
YELLOW_BGR = (0, 255, 255)
RED_BGR = (0, 0, 255)
BLUE_BGR = (255, 0, 0)
BLACK_BGR = (0, 0, 0)
WHITE_BGR = (255, 255, 255)
DISABLED_BGR = (221, 221, 221)
ENABLED_BGR = (0, 255, 255)
# TODO: use dynamic parameters
# Minimum and Maximum radius of ball
BALL_RADIUS = {
'min': {
120: 5,
140: 20,
160: 30
},
'max': {
120: 30,
140: 50,
160: 70
}
}
class ObjectDetector(object):
def __init__(self):
self.view_output = rospy.get_param('/detector/view_output', True)
self.bridge = CvBridge()
self.cv_image = None
self.lab_image = None
self.view_image = None
self.field = None
self.field_mask = None
self.ball = None
# ROS Topic Subscribers
self.cv_sub = rospy.Subscriber('image_raw', Image, self.cv_callback)
self.lab_sub = rospy.Subscriber('image_lab', Image, self.lab_callback)
# ROS Topic Publishers
try:
self.field_pub = rospy.Publisher('field_pub', Bool, queue_size=5)
self.ball_pub = rospy.Publisher('ball_pub', Int32MultiArray, queue_size=5)
self.goal_pub = rospy.Publisher('goal_pub', Float32MultiArray, queue_size=5)
self.preview_pub = rospy.Publisher('preview_img', Image, queue_size=5)
except TypeError:
self.field_pub = rospy.Publisher('field_pub', Bool)
self.ball_pub = rospy.Publisher('ball_pub', Int32MultiArray)
self.goal_pub = rospy.Publisher('goal_pub', Float32MultiArray)
self.preview_pub = rospy.Publisher('preview_img', Image)
# INIT MASK COLORS
self.field_lab = {'upper': [], 'lower': []}
self.ball_white_lab = {'upper': [], 'lower': []}
self.ball_black_lab = {'upper': [], 'lower': []}
self.goal_lab = {'upper': [], 'lower': []}
# INIT HOUGH CIRCLE OPTIONS
self.hough_circle = {}
def cv_callback(self, image_message):
try:
self.cv_image = self.bridge.imgmsg_to_cv2(
image_message, desired_encoding='passthrough')
self.view_image = self.cv_image.copy()
except CvBridgeError as error:
rospy.logerr(error)
def lab_callback(self, image_message):
try:
self.lab_image = self.bridge.imgmsg_to_cv2(
image_message, desired_encoding='passthrough')
self.work()
except CvBridgeError as error:
rospy.logerr(error)
def work(self):
# STEP 1. GET MASK COLOR FROM DYNAMIC RECONFIGURE SETTING
self.dynamic_setting()
# STEP 2. FIND FIELD OBJECT
self.find_field()
# STEP 3. FIND BALL OBJECT
self.find_ball()
# STEP 4. FIND GOAL OBJECT
self.find_goal()
if self.view_output and self.view_image is not None:
# cv2.imshow('VIEW', self.view_image)
# cv2.waitKey(3)
try:
msg = self.bridge.cv2_to_imgmsg(self.view_image, 'bgr8')
self.preview_pub.publish(msg)
except CvBridgeError as error:
rospy.logerr(error)
def find_field(self):
"""Detect filed from the image."""
def return_fail():
self.field = None
self.field_mask = None
self.field_pub.publish(False)
return
if self.lab_image is None:
return return_fail()
blur = rospy.get_param('/detector/option/blur', 5)
lab_image = cv2.GaussianBlur(self.lab_image.copy(), (blur, blur), 0)
# STEP 2-1. GET MASK VALUE
lower = np.array(self.field_lab['lower'])
upper = np.array(self.field_lab['upper'])
# STEP 2-2. SET MASK TO LAB_IMAGE
# construct a mask for the color between lower and upper, then perform
# a series of dilations and erosions to remove any small blobs left in the mask
f_mask = cv2.inRange(lab_image, lower, upper)
f_mask = cv2.erode(f_mask, None, iterations=2)
f_mask = cv2.dilate(f_mask, None, iterations=2)
# TEST
# cv2.imshow('TEST', cv2.bitwise_and(self.cv_image, self.cv_image, mask=f_mask))
# cv2.waitKey(3)
# STEP 2-3. FIND FILED CONTOUR
# find contours in the mask and initialize the current center of the field
# we need to using copy, because findContours function modify the input image
contours = cv2.findContours(
f_mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
# only proceed if at least one contour was found
if len(contours) <= 0:
return return_fail()
# STEP 2-4. MERGE CONTOUR AND FIND LARGEST ONE
# merge closed contours
min_contour = rospy.get_param('/detector/option/min_contour', 100)
merge_area = rospy.get_param('/detector/option/merge_field', 250)
contours = merge_contours(contours, min_contour, merge_area)
# return the largest contour in the mask
max_contour = max(contours, key=cv2.contourArea)
if cv2.contourArea(max_contour) < rospy.get_param('/detector/option/min_field', 100):
return return_fail()
# Field!
self.field = cv2.convexHull(max_contour)
        # STEP 2-5. FILL BLACK COLOR TO NON-FIELD AREA
self.field_mask = np.zeros(self.lab_image.shape, dtype=np.uint8)
cv2.fillPoly(self.field_mask, [self.field],
(255,) * self.lab_image.shape[2])
# draw field outline
if self.view_output:
cv2.polylines(self.view_image, [self.field], True, GREEN_BGR, 4)
# TEST
# cv2.imshow('FIELD', self.field_mask)
# cv2.imshow('FIELD', cv2.bitwise_and(self.cv_image.copy(), field_mask))
self.field_pub.publish(True)
def find_ball(self, head_up_down=120):
obj_ball = Int32MultiArray()
def return_fail():
self.ball = None
self.ball_pub.publish(obj_ball)
return
# STEP 3-1. CHECK FIELD AREA
if self.cv_image is None or self.field_mask is None:
return return_fail()
# SET MASK IMAGE FOR FINDING BALL
field_image = cv2.bitwise_and(self.cv_image.copy(), self.field_mask)
# STEP 3-2. BLUR BEFORE HOUGH CIRCLE
# bilateralFilter Parameters
d = rospy.get_param('/detector/option/filter_d', 9)
color = rospy.get_param('/detector/option/filter_color', 75)
space = rospy.get_param('/detector/option/filter_space', 75)
# image, d, sigmaColor, sigmaSpace
blurred = cv2.bilateralFilter(field_image, d, color, space)
gray = cv2.bilateralFilter(blurred, d, color, space)
gray = cv2.cvtColor(gray, cv2.COLOR_BGR2GRAY)
gray = cv2.dilate(gray, None, iterations=2)
gray = cv2.erode(gray, None, iterations=2)
# HOUGH CIRCLE
# DYNAMIC RECONFIGURE PARAMETER
hc = self.hough_circle
# image, method, dp, minDist, param1, param2, minRadius, maxRadius
        # TODO: use dynamic parameters
circles = cv2.HoughCircles(image=gray, method=cv2.HOUGH_GRADIENT,
dp=hc['dp'],
minDist=hc['min_d'],
param1=hc['p1'],
param2=hc['p2'],
minRadius=BALL_RADIUS['min'][head_up_down],
maxRadius=BALL_RADIUS['max'][head_up_down])
if circles is None:
rospy.logdebug("***** NO CIRCLE *****")
return return_fail()
else:
# CHANGE CIRCLE DATA'S ORDER
circles = [((circle[0], circle[1]), circle[2])
for circle in circles[0]]
# FIND BALL
ball = get_ball_from_circles(circles, head_up_down)
if ball is None:
rospy.logdebug("***** NO BALL *****")
return return_fail()
else:
self.ball = ball_to_int(ball)
(x, y), radius = self.ball
obj_ball.data = [x, y, radius]
# draw ball outline
if self.view_output:
# draw the outer circle
cv2.circle(self.view_image,
self.ball[0], self.ball[1], YELLOW_BGR, 2)
# draw the center of the circle
cv2.circle(self.view_image, self.ball[0], 2, RED_BGR, 3)
cv2.putText(self.view_image, '{}'.format(self.ball[1]),
(self.ball[0][0] - 15, self.ball[0][1] - 10),
cv2.FONT_HERSHEY_TRIPLEX, 0.6, BLUE_BGR)
self.ball_pub.publish(obj_ball)
def find_goal(self):
obj_goal = Float32MultiArray()
def return_fail():
self.goal_pub.publish(obj_goal)
return
# STEP 4-1. CHECK FIELD AREA
if self.lab_image is None or self.field_mask is None or self.field is None:
rospy.logdebug("***** NO IMAGE or NO FIELD *****")
return return_fail()
field_image = cv2.bitwise_and(self.lab_image.copy(), self.field_mask)
# STEP 4-2. SET MASK TO LAB_IMAGE
lower = np.array(self.goal_lab['lower'])
upper = np.array(self.goal_lab['upper'])
g_mask = cv2.inRange(field_image.copy(), lower, upper)
# TEST
# cv2.imshow('GOAL', cv2.bitwise_and(field_image, field_image, mask=g_mask))
# STEP 4-3. FIND FILED CONTOUR AND REMOVE TOO SMALL OR TOO BIG
contours = cv2.findContours(g_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
min_goal = rospy.get_param('/detector/option/min_goal', 100)
max_goal = rospy.get_param('/detector/option/max_goal', 200)
contours = [contour for contour in contours
if min_goal < cv2.contourArea(contour) < max_goal]
if len(contours) <= 0:
rospy.logdebug("***** NO GOAL POST *****")
self.goal_pub.publish(obj_goal)
return return_fail()
goal_post = set()
goal_post_contours = []
thres_goal_dist = rospy.get_param('/detector/option/thres_goal_distance', 50)
thres_goal_angle = rospy.get_param('/detector/option/thres_goal_angle', 30)
image = field_image
field = self.field
for i in range(len(field)):
p1 = field[i][0]
p2 = field[(i + 1) % len(field)][0]
p3 = field[(i + 2) % len(field)][0]
angle = angle_between_three_points(p1, p2, p3)
            # if the three points are on a single line, calculate the distance between the line and goal post candidates
if abs(180 - angle) < thres_goal_angle:
for contour in contours:
M = cv2.moments(contour)
center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
if center[1] > image.shape[0] / 2:
continue
dist = int(norm(np.cross(p2 - p1, p1 - center)) / norm(p2 - p1))
if dist < thres_goal_dist:
goal_post.add((center, cv2.contourArea(contour)))
goal_post_contours.append(contour)
if len(goal_post) == 2:
break
for goal in goal_post:
(x, y), point = goal
obj_goal.data += [x, y, point]
# draw field outline
if self.view_output:
for contour in goal_post_contours:
cv2.polylines(self.view_image, [contour], True, BLUE_BGR, 4)
self.goal_pub.publish(obj_goal)
def dynamic_setting(self):
# FIELD MASK LOWER
self.field_lab['lower'] = [
rospy.get_param('/detector/field_color/lower_L', 74),
rospy.get_param('/detector/field_color/lower_A', 61),
rospy.get_param('/detector/field_color/lower_B', 88)
]
# FIELD MASK UPPER
self.field_lab['upper'] = [
rospy.get_param('/detector/field_color/upper_L', 183),
rospy.get_param('/detector/field_color/upper_A', 125),
rospy.get_param('/detector/field_color/upper_B', 215)
]
# BALL WHITE MASK LOWER
self.ball_white_lab['lower'] = [
rospy.get_param('/detector/ball_color/w_lower_L', 170),
rospy.get_param('/detector/ball_color/w_lower_A', 105),
rospy.get_param('/detector/ball_color/w_lower_B', 105)
]
# BALL WHITE MASK UPPER
self.ball_white_lab['upper'] = [
rospy.get_param('/detector/ball_color/w_upper_L', 255),
rospy.get_param('/detector/ball_color/w_upper_A', 170),
rospy.get_param('/detector/ball_color/w_upper_B', 170)
]
# BALL BLACK MASK LOWER
self.ball_black_lab['lower'] = [
rospy.get_param('/detector/ball_color/b_lower_L', 5),
rospy.get_param('/detector/ball_color/b_lower_A', 70),
rospy.get_param('/detector/ball_color/b_lower_B', 70)
]
# BALL BLACK MASK UPPER
self.ball_black_lab['upper'] = [
rospy.get_param('/detector/ball_color/b_upper_L', 125),
rospy.get_param('/detector/ball_color/b_upper_A', 150),
rospy.get_param('/detector/ball_color/b_upper_B', 140)
]
# GOAL MASK LOWER
self.goal_lab['lower'] = [
rospy.get_param('/detector/goal_color/lower_L', 170),
rospy.get_param('/detector/goal_color/lower_A', 105),
rospy.get_param('/detector/goal_color/lower_B', 105)
]
# GOAL MASK UPPER
self.goal_lab['upper'] = [
rospy.get_param('/detector/goal_color/upper_L', 255),
rospy.get_param('/detector/goal_color/upper_A', 170),
rospy.get_param('/detector/goal_color/upper_B', 170)
]
# HOUGH CIRCLE PARAMETERS
self.hough_circle = {
'dp': rospy.get_param('/detector/option/dp'),
'min_d': rospy.get_param('/detector/option/min_distance'),
'p1': rospy.get_param('/detector/option/param1'),
'p2': rospy.get_param('/detector/option/param2'),
'min_r': rospy.get_param('/detector/option/min_radius'),
'max_r': rospy.get_param('/detector/option/max_radius')
}
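# Note (assumption, not visible in this excerpt): these keys mirror the cv2.HoughCircles
# arguments, so the detector presumably calls something along the lines of
#   cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=self.hough_circle['dp'],
#                    minDist=self.hough_circle['min_d'],
#                    param1=self.hough_circle['p1'], param2=self.hough_circle['p2'],
#                    minRadius=self.hough_circle['min_r'], maxRadius=self.hough_circle['max_r'])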
def angle_between_three_points(point1, point2, point3):
"""
Calculate angle between point1 - point2 - point3
param point1 : (x1, y1)
param point2 : (x2, y2)
param point3 : (x3, y3)
return absolute angle in degree integer
"""
angle = math.atan2(point1[1] - point2[1], point1[0] - point2[0])
angle -= math.atan2(point3[1] - point2[1], point3[0] - point2[0])
angle = int(math.degrees(angle))
return angle
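# Illustrative example (not from the original source): angle_between_three_points((0, 0), (1, 0), (1, 1))
# gives 90, while three collinear points such as (0, 0), (1, 0), (2, 0) give 180, which is what
# the goal-post check abs(180 - angle) < threshold above relies on.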
def midpoint(point1, point2):
return (point1[0] + point2[0]) * 0.5, (point1[1] + point2[1]) * 0.5
def distance(point1, point2):
"""
Calculate distance between point1 and point2
param point1 : (x1, y1)
param point2 : (x2, y2)
return distance in int type
"""
return int(hypot(point1[0] - point2[0], point1[1] - point2[1]))
def merge_contours(cnts, min_contour=100, merge_area=250):
"""
Merge contours that are close to each other
param cnts : contour list
return merged list
"""
while True:
cnts, merged = merge_contours_sub(cnts, min_contour, merge_area)
if merged is False:
return cnts
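# Usage sketch (illustrative, using the defaults above): repeatedly collapses nearby blobs
# returned by cv2.findContours until no further merges happen, e.g.
#   cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
#   cnts = merge_contours(list(cnts), min_contour=100, merge_area=250)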
def merge_contours_sub(cnts, min_contour, merge_area):
"""
Merge contours that are close to each other
If two contours are merged successfully, we return immediately (merge performed only once).
param cnts : contour list
return merged list, merged or not
"""
for i in range(len(cnts) - 1):
# if the contour is not sufficiently large, ignore it
if cv2.contourArea(cnts[i]) < min_contour:
continue
center1, ret = get_center_from_contour(cnts[i])
if ret is False:
return cnts, False
for j in range(i + 1, len(cnts)):
if cv2.contourArea(cnts[j]) < min_contour:
continue
center2, ret = get_center_from_contour(cnts[j])
if ret is False:
return cnts, False
dist = hypot(center1[0] - center2[0], center1[1] - center2[1])
threshold = (math.sqrt(cv2.contourArea(cnts[i]))
+ math.sqrt(cv2.contourArea(cnts[j])))
if dist < threshold - merge_area:
cnts.append(np.concatenate((cnts[i], cnts[j])))
cnts.pop(j)
cnts.pop(i)
return cnts, True
return cnts, False
def get_center_from_contour(contour):
try:
M = cv2.moments(contour)
center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
return center, True
except ZeroDivisionError:
return None, False
def get_ball_from_circles(circles, head_up_down):
"""FIND BALL FROM CIRCLE LIST"""
if circles is None:
return None
# TODO: using dynamic parameters
# min_radius = rospy.get_param('/detector/option/min_ball_radius', 5)
# max_radius = rospy.get_param('/detector/option/max_ball_radius', 10)
for circle in circles:
if BALL_RADIUS['min'][head_up_down] < circle[1] < BALL_RADIUS['max'][head_up_down]:
return circle
return None
def ball_to_int(ball):
"""CHANGE BALL TO INT FROM FLOAT"""
if ball is None:
return None
return (int(ball[0][0]), int(ball[0][1])), int(ball[1])
def main():
rospy.init_node('object_detector', anonymous=False)
# OBJECT DETECT START
ObjectDetector()
try:
rospy.spin()
except KeyboardInterrupt:
rospy.loginfo('Shutting down')
cv2.destroyAllWindows()
if __name__ == '__main__':
main()
| apache-2.0 |
SivilTaram/edx-platform | common/lib/capa/capa/inputtypes.py | 16 | 64448 | #
# File: courseware/capa/inputtypes.py
#
"""
Module containing the problem elements which render into input objects
- textline
- textbox (aka codeinput)
- schematic
- choicegroup (aka radiogroup, checkboxgroup)
- javascriptinput
- imageinput (for clickable image)
- optioninput (for option list)
- filesubmission (upload a file)
- crystallography
- vsepr_input
- drag_and_drop
- formulaequationinput
- chemicalequationinput
These are matched by *.html files templates/*.html which are mako templates with the
actual html.
Each input type takes the xml tree as 'element', the previous answer as 'value', and the
graded status as 'status'
"""
# TODO: make hints do something
# TODO: make all inputtypes actually render msg
# TODO: remove unused fields (e.g. 'hidden' in a few places)
# TODO: add validators so that content folks get better error messages.
# Possible todo: make inline the default for textlines and other "one-line" inputs. It probably
# makes sense, but a bunch of problems have markup that assumes block. Bigger TODO: figure out a
# general css and layout strategy for capa, document it, then implement it.
import time
import json
import logging
from lxml import etree
import re
import shlex # for splitting quoted strings
import sys
import pyparsing
import html5lib
import bleach
from .util import sanitize_html
from .registry import TagRegistry
from chem import chemcalc
from calc.preview import latex_preview
import xqueue_interface
from xqueue_interface import XQUEUE_TIMEOUT
from datetime import datetime
from xmodule.stringify import stringify_children
log = logging.getLogger(__name__)
#########################################################################
registry = TagRegistry() # pylint: disable=invalid-name
class Status(object):
"""
Problem status
attributes: classname, display_name, display_tooltip
"""
css_classes = {
# status: css class
'unsubmitted': 'unanswered',
'incomplete': 'incorrect',
'queued': 'processing',
}
__slots__ = ('classname', '_status', 'display_name', 'display_tooltip')
def __init__(self, status, gettext_func=unicode):
self.classname = self.css_classes.get(status, status)
_ = gettext_func
names = {
'correct': _('correct'),
'incorrect': _('incorrect'),
'partially-correct': _('partially correct'),
'incomplete': _('incomplete'),
'unanswered': _('unanswered'),
'unsubmitted': _('unanswered'),
'queued': _('processing'),
}
tooltips = {
# Translators: these are tooltips that indicate the state of an assessment question
'correct': _('This is correct.'),
'incorrect': _('This is incorrect.'),
'partially-correct': _('This is partially correct.'),
'unanswered': _('This is unanswered.'),
'unsubmitted': _('This is unanswered.'),
'queued': _('This is being processed.'),
}
self.display_name = names.get(status, unicode(status))
self.display_tooltip = tooltips.get(status, u'')
self._status = status or ''
def __str__(self):
return self._status
def __unicode__(self):
return self._status.decode('utf8')
def __repr__(self):
return 'Status(%r)' % self._status
def __eq__(self, other):
return self._status == str(other)
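# Illustrative example (assuming the default gettext pass-through): Status('queued') maps to
# classname 'processing' and display_name 'processing', while str(Status('queued')) still
# returns the raw status string 'queued'.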
class Attribute(object):
"""
Allows specifying required and optional attributes for input types.
"""
# want to allow default to be None, but also allow required objects
_sentinel = object()
def __init__(self, name, default=_sentinel, transform=None, validate=None, render=True):
"""
Define an attribute
name (str): the name of the attribute--should be alphanumeric (valid for an XML attribute)
default (any type): If not specified, this attribute is required. If specified, use this as the default value
if the attribute is not specified. Note that this value will not be transformed or validated.
transform (function str -> any type): If not None, will be called to transform the parsed value into an internal
representation.
validate (function str-or-return-type-of-transform -> unit or exception): If not None, called to validate the
(possibly transformed) value of the attribute. Should raise ValueError with a helpful message if
the value is invalid.
render (bool): if False, don't include this attribute in the template context.
"""
self.name = name
self.default = default
self.validate = validate
self.transform = transform
self.render = render
def parse_from_xml(self, element):
"""
Given an etree xml element that should have this attribute, do the obvious thing:
- look for it. raise ValueError if not found and required.
- transform and validate. pass through any exceptions from transform or validate.
"""
val = element.get(self.name)
if self.default == self._sentinel and val is None:
raise ValueError(
'Missing required attribute {0}.'.format(self.name)
)
if val is None:
# not required, so return default
return self.default
if self.transform is not None:
val = self.transform(val)
if self.validate is not None:
self.validate(val)
return val
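# Illustrative example (element contents are hypothetical): for <textline size="20"/>,
# Attribute('size', None).parse_from_xml(element) returns "20"; an attribute that is absent
# falls back to its default, or raises ValueError if no default was given.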
class InputTypeBase(object):
"""
Abstract base class for input types.
"""
template = None
def __init__(self, system, xml, state):
"""
Instantiate an InputType class. Arguments:
- system : LoncapaModule instance which provides OS, rendering, and user context.
Specifically, must have a render_template function.
- xml : Element tree of this Input element
- state : a dictionary with optional keys:
* 'value' -- the current value of this input
(what the student entered last time)
* 'id' -- the id of this input, typically
"{problem-location}_{response-num}_{input-num}"
* 'status' (answered, unanswered, unsubmitted)
* 'input_state' -- dictionary containing any inputtype-specific state
that has been preserved
* 'feedback' (dictionary containing keys for hints, errors, or other
feedback from previous attempt. Specifically 'message', 'hint',
'hintmode'. If 'hintmode' is 'always', the hint is always displayed.)
"""
self.xml = xml
self.tag = xml.tag
self.capa_system = system
# NOTE: ID should only come from one place. If it comes from multiple,
# we use state first, XML second (in case the xml changed, but we have
# existing state with an old id). Since we don't make this guarantee,
# we can swap this around in the future if there's a more logical
# order.
self.input_id = state.get('id', xml.get('id'))
if self.input_id is None:
raise ValueError(
"input id state is None. xml is {0}".format(etree.tostring(xml))
)
self.value = state.get('value', '')
feedback = state.get('feedback', {})
self.msg = feedback.get('message', '')
self.hint = feedback.get('hint', '')
self.hintmode = feedback.get('hintmode', None)
self.input_state = state.get('input_state', {})
self.answervariable = state.get("answervariable", None)
# put hint above msg if it should be displayed
if self.hintmode == 'always':
self.msg = self.hint + ('<br/>' if self.msg else '') + self.msg
self.status = state.get('status', 'unanswered')
try:
# Pre-parse and process all the declared requirements.
self.process_requirements()
# Call subclass "constructor" -- means they don't have to worry about calling
# super().__init__, and are isolated from changes to the input
# constructor interface.
self.setup()
except Exception as err:
# Something went wrong: add xml to message, but keep the traceback
msg = u"Error in xml '{x}': {err} ".format(
x=etree.tostring(xml), err=err.message)
raise Exception, msg, sys.exc_info()[2]
@classmethod
def get_attributes(cls):
"""
Should return a list of Attribute objects (see docstring there for details). Subclasses should override. e.g.
return [Attribute('unicorn', True), Attribute('num_dragons', 12, transform=int), ...]
"""
return []
def process_requirements(self):
"""
Subclasses can declare lists of required and optional attributes. This
function parses the input xml and pulls out those attributes. This
isolates most simple input types from needing to deal with xml parsing at all.
Processes attributes, putting the results in the self.loaded_attributes dictionary. Also creates a set
self.to_render, containing the names of attributes that should be included in the context by default.
"""
# Use local dicts and sets so that if there are exceptions, we don't
# end up in a partially-initialized state.
loaded = {}
to_render = set()
for attribute in self.get_attributes():
loaded[attribute.name] = attribute.parse_from_xml(self.xml)
if attribute.render:
to_render.add(attribute.name)
self.loaded_attributes = loaded
self.to_render = to_render
def setup(self):
"""
InputTypes should override this to do any needed initialization. It is called after the
constructor, so all base attributes will be set.
If this method raises an exception, it will be wrapped with a message that includes the
problem xml.
"""
pass
def handle_ajax(self, dispatch, data):
"""
InputTypes that need to handle specialized AJAX should override this.
Input:
dispatch: a string that can be used to determine how to handle the data passed in
data: a dictionary containing the data that was sent with the ajax call
Output:
a dictionary object that can be serialized into JSON. This will be sent back to the Javascript.
"""
pass
def _get_render_context(self):
"""
Should return a dictionary of keys needed to render the template for the input type.
(Separate from get_html to facilitate testing of logic separately from the rendering)
The default implementation gets the following rendering context: basic things like value, id, status, and msg,
as well as everything in self.loaded_attributes, and everything returned by self._extra_context().
This means that input types that only parse attributes and pass them to the template get everything they need,
and don't need to override this method.
"""
context = {
'id': self.input_id,
'value': self.value,
'status': Status(self.status, self.capa_system.i18n.ugettext),
'msg': self.msg,
'STATIC_URL': self.capa_system.STATIC_URL,
}
context.update(
(a, v) for (a, v) in self.loaded_attributes.iteritems() if a in self.to_render
)
context.update(self._extra_context())
if self.answervariable:
context.update({'answervariable': self.answervariable})
return context
def _extra_context(self):
"""
Subclasses can override this to return extra context that should be passed to their templates for rendering.
This is useful when the input type requires computing new template variables from the parsed attributes.
"""
return {}
def get_html(self):
"""
Return the html for this input, as an etree element.
"""
if self.template is None:
raise NotImplementedError("no rendering template specified for class {0}"
.format(self.__class__))
context = self._get_render_context()
html = self.capa_system.render_template(self.template, context)
try:
output = etree.XML(html)
except etree.XMLSyntaxError as ex:
# If `html` contains attrs with no values, like `controls` in <audio controls src='smth'/>,
# XML parser will raise an exception, so we fall back to html5parser, which will set empty "" values for such attrs.
try:
output = html5lib.parseFragment(html, treebuilder='lxml', namespaceHTMLElements=False)[0]
except IndexError:
raise ex
return output
def get_user_visible_answer(self, internal_answer):
"""
Given the internal representation of the answer provided by the user, return the representation of the answer
as the user saw it. Subclasses should override this method if and only if the internal representation of the
answer is different from the answer that is displayed to the user.
"""
return internal_answer
#-----------------------------------------------------------------------------
@registry.register
class OptionInput(InputTypeBase):
"""
Input type for selecting an option from a drop-down list.
Example:
<optioninput options="('Up','Down')" label="Where is the sky?" correct="Up"/><text>The location of the sky</text>
# TODO: allow ordering to be randomized
"""
template = "optioninput.html"
tags = ['optioninput']
@staticmethod
def parse_options(options):
"""
Given options string, convert it into an ordered list of (option_id, option_description) tuples, where
id==description for now. TODO: make it possible to specify different id and descriptions.
"""
# convert single quotes inside option values to html encoded string
options = re.sub(r"([a-zA-Z])('|\\')([a-zA-Z])", r"\1&#39;\3", options)
options = re.sub(r"\\'", r"'", options) # replace already escaped single quotes
# parse the set of possible options
lexer = shlex.shlex(options[1:-1].encode('utf8'))
lexer.quotes = "'"
# Allow options to be separated by whitespace as well as commas
lexer.whitespace = ", "
# remove quotes
# convert escaped single quotes (html encoded string) back to single quotes
tokens = [x[1:-1].decode('utf8').replace("&#39;", "'") for x in lexer]
# make list of (option_id, option_description), with description=id
return [(t, t) for t in tokens]
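# Illustrative example (not part of the original docstring): parse_options("('Up','Down')")
# returns [(u'Up', u'Up'), (u'Down', u'Down')] -- the id and the description are identical.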
@classmethod
def get_attributes(cls):
"""
Convert options to a convenient format.
"""
return [Attribute('options', transform=cls.parse_options),
Attribute('label', ''),
Attribute('inline', False)]
#-----------------------------------------------------------------------------
# TODO: consolidate choicegroup, radiogroup, checkboxgroup after discussion of
# desired semantics.
@registry.register
class ChoiceGroup(InputTypeBase):
"""
Radio button or checkbox inputs: multiple choice or true/false
TODO: allow order of choices to be randomized, following lon-capa spec. Use
"location" attribute, ie random, top, bottom.
Example:
<choicegroup label="Which foil?">
<choice correct="false" name="foil1">
<text>This is foil One.</text>
</choice>
<choice correct="false" name="foil2">
<text>This is foil Two.</text>
</choice>
<choice correct="true" name="foil3">
<text>This is foil Three.</text>
</choice>
</choicegroup>
"""
template = "choicegroup.html"
tags = ['choicegroup', 'radiogroup', 'checkboxgroup']
def setup(self):
i18n = self.capa_system.i18n
# suffix is '' or '[]' to change the way the input is handled -- as a scalar or vector
# value. (VS: would be nice to make this less hackish).
if self.tag == 'choicegroup':
self.suffix = ''
self.html_input_type = "radio"
elif self.tag == 'radiogroup':
self.html_input_type = "radio"
self.suffix = '[]'
elif self.tag == 'checkboxgroup':
self.html_input_type = "checkbox"
self.suffix = '[]'
else:
_ = i18n.ugettext
# Translators: 'ChoiceGroup' is an input type and should not be translated.
msg = _("ChoiceGroup: unexpected tag {tag_name}").format(tag_name=self.tag)
raise Exception(msg)
self.choices = self.extract_choices(self.xml, i18n)
self._choices_map = dict(self.choices,) # pylint: disable=attribute-defined-outside-init
@classmethod
def get_attributes(cls):
_ = lambda text: text
return [Attribute("show_correctness", "always"),
Attribute('label', ''),
Attribute("submitted_message", _("Answer received."))]
def _extra_context(self):
return {'input_type': self.html_input_type,
'choices': self.choices,
'name_array_suffix': self.suffix}
@staticmethod
def extract_choices(element, i18n):
"""
Extracts choices for a few input types, such as ChoiceGroup, RadioGroup and
CheckboxGroup.
returns list of (choice_name, choice_text) tuples
TODO: allow order of choices to be randomized, following lon-capa spec. Use
"location" attribute, ie random, top, bottom.
"""
choices = []
_ = i18n.ugettext
for choice in element:
if choice.tag == 'choice':
choices.append((choice.get("name"), stringify_children(choice)))
else:
if choice.tag != 'compoundhint':
msg = u'[capa.inputtypes.extract_choices] {error_message}'.format(
# Translators: '<choice>' and '<compoundhint>' are tag names and should not be translated.
error_message=_('Expected a <choice> or <compoundhint> tag; got {given_tag} instead').format(
given_tag=choice.tag
)
)
raise Exception(msg)
return choices
def get_user_visible_answer(self, internal_answer):
if isinstance(internal_answer, basestring):
return self._choices_map[internal_answer]
return [self._choices_map[i] for i in internal_answer]
#-----------------------------------------------------------------------------
@registry.register
class JavascriptInput(InputTypeBase):
"""
Hidden field for javascript to communicate via; also loads the required
scripts for rendering the problem and passes data to the problem.
TODO (arjun?): document this in detail. Initial notes:
- display_class is a subclass of XProblemClassDisplay (see
xmodule/xmodule/js/src/capa/display.coffee),
- display_file is the js script to be in /static/js/ where display_class is defined.
"""
template = "javascriptinput.html"
tags = ['javascriptinput']
@classmethod
def get_attributes(cls):
"""
Register the attributes.
"""
return [Attribute('params', None),
Attribute('problem_state', None),
Attribute('display_class', None),
Attribute('display_file', None), ]
def setup(self):
# Need to provide a value that JSON can parse if there is no
# student-supplied value yet.
if self.value == "":
self.value = 'null'
#-----------------------------------------------------------------------------
@registry.register
class JSInput(InputTypeBase):
"""
Inputtype for general javascript inputs. Intended to be used with
customresponse.
Loads in a sandboxed iframe to help prevent css and js conflicts between
frame and top-level window.
iframe sandbox whitelist:
- allow-scripts
- allow-popups
- allow-forms
- allow-pointer-lock
This in turn means that the iframe cannot directly access the top-level
window elements.
Example:
<jsinput html_file="/static/test.html"
gradefn="grade"
height="500"
width="400"/>
See the documentation in docs/data/source/course_data_formats/jsinput.rst
for more information.
"""
template = "jsinput.html"
tags = ['jsinput']
@classmethod
def get_attributes(cls):
"""
Register the attributes.
"""
return [
Attribute('params', None), # extra iframe params
Attribute('html_file', None),
Attribute('gradefn', "gradefn"),
Attribute('get_statefn', None), # Function to call in iframe
# to get current state.
Attribute('initial_state', None), # JSON string to be used as initial state
Attribute('set_statefn', None), # Function to call iframe to
# set state
Attribute('width', "400"), # iframe width
Attribute('height', "300"), # iframe height
Attribute('sop', None) # SOP will be relaxed only if this
# attribute is set to false.
]
def _extra_context(self):
context = {
'jschannel_loader': '{static_url}js/capa/src/jschannel.js'.format(
static_url=self.capa_system.STATIC_URL),
'jsinput_loader': '{static_url}js/capa/src/jsinput.js'.format(
static_url=self.capa_system.STATIC_URL),
'saved_state': self.value
}
return context
#-----------------------------------------------------------------------------
@registry.register
class TextLine(InputTypeBase):
"""
A text line input. Can do math preview if "math"="1" is specified.
If "trailing_text" is set to a value, then the textline will be shown with
the value after the text input, and before the checkmark or any input-specific
feedback. HTML will not work, but properly escaped HTML characters will. This
feature is useful if you would like to specify a specific type of units for the
text input.
If the hidden attribute is specified, the textline is hidden and the input id
is stored in a div with name equal to the value of the hidden attribute. This
is used e.g. for embedding simulations turned into questions.
Example:
<textline math="1" trailing_text="m/s" label="How fast is a cheetah?" />
This example will render out a text line with a math preview and the text 'm/s'
after the end of the text line.
"""
template = "textline.html"
tags = ['textline']
@classmethod
def get_attributes(cls):
"""
Register the attributes.
"""
return [
Attribute('size', None),
Attribute('label', ''),
Attribute('hidden', False),
Attribute('inline', False),
# Attributes below used in setup(), not rendered directly.
Attribute('math', None, render=False),
# TODO: 'dojs' flag is temporary, for backwards compatibility with
# 8.02x
Attribute('dojs', None, render=False),
Attribute('preprocessorClassName', None, render=False),
Attribute('preprocessorSrc', None, render=False),
Attribute('trailing_text', ''),
]
def setup(self):
self.do_math = bool(self.loaded_attributes['math'] or
self.loaded_attributes['dojs'])
# TODO: do math checking using ajax instead of using js, so
# that we only have one math parser.
self.preprocessor = None
if self.do_math:
# Preprocessor to insert between raw input and Mathjax
self.preprocessor = {
'class_name': self.loaded_attributes['preprocessorClassName'],
'script_src': self.loaded_attributes['preprocessorSrc'],
}
if None in self.preprocessor.values():
self.preprocessor = None
def _extra_context(self):
return {'do_math': self.do_math,
'preprocessor': self.preprocessor, }
#-----------------------------------------------------------------------------
@registry.register
class FileSubmission(InputTypeBase):
"""
Upload some files (e.g. for programming assignments)
"""
template = "filesubmission.html"
tags = ['filesubmission']
@staticmethod
def parse_files(files):
"""
Given a string like 'a.py b.py c.out', split on whitespace and return as a json list.
"""
return json.dumps(files.split())
@classmethod
def get_attributes(cls):
"""
Convert the list of allowed files to a convenient format.
"""
return [Attribute('allowed_files', '[]', transform=cls.parse_files),
Attribute('label', ''),
Attribute('required_files', '[]', transform=cls.parse_files), ]
def setup(self):
"""
Do some magic to handle queueing status (render as "queued" instead of "incomplete"),
pull queue_len from the msg field. (TODO: get rid of the queue_len hack).
"""
_ = self.capa_system.i18n.ugettext
submitted_msg = _("Your files have been submitted. As soon as your submission is"
" graded, this message will be replaced with the grader's feedback.")
self.submitted_msg = submitted_msg
# Check if problem has been queued
self.queue_len = 0
# Flag indicating that the problem has been queued, 'msg' is length of
# queue
if self.status == 'incomplete':
self.status = 'queued'
self.queue_len = self.msg
self.msg = self.submitted_msg
def _extra_context(self):
return {'queue_len': self.queue_len, }
#-----------------------------------------------------------------------------
@registry.register
class CodeInput(InputTypeBase):
"""
A text area input for code--uses codemirror, does syntax highlighting, special tab handling,
etc.
"""
template = "codeinput.html"
tags = [
'codeinput',
'textbox',
# Another (older) name--at some point we may want to make it use a
# non-codemirror editor.
]
@classmethod
def get_attributes(cls):
"""
Convert options to a convenient format.
"""
return [
Attribute('rows', '30'),
Attribute('cols', '80'),
Attribute('hidden', ''),
# For CodeMirror
Attribute('mode', 'python'),
Attribute('linenumbers', 'true'),
# Template expects tabsize to be an int it can do math with
Attribute('tabsize', 4, transform=int),
]
def setup_code_response_rendering(self):
"""
Implement special logic: handle queueing state, and default input.
"""
# if no student input yet, then use the default input given by the
# problem
if not self.value and self.xml.text:
self.value = self.xml.text.strip()
# Check if problem has been queued
self.queue_len = 0
# Flag indicating that the problem has been queued, 'msg' is length of
# queue
if self.status == 'incomplete':
self.status = 'queued'
self.queue_len = self.msg
self.msg = bleach.clean(self.submitted_msg)
def setup(self):
""" setup this input type """
_ = self.capa_system.i18n.ugettext
submitted_msg = _("Your answer has been submitted. As soon as your submission is"
" graded, this message will be replaced with the grader's feedback.")
self.submitted_msg = submitted_msg
self.setup_code_response_rendering()
def _extra_context(self):
"""Defined queue_len, add it """
return {'queue_len': self.queue_len, }
#-----------------------------------------------------------------------------
@registry.register
class MatlabInput(CodeInput):
"""
InputType for handling Matlab code input
Example:
<matlabinput rows="10" cols="80" tabsize="4">
Initial Text
</matlabinput>
"""
template = "matlabinput.html"
tags = ['matlabinput']
def setup(self):
"""
Handle matlab-specific parsing
"""
_ = self.capa_system.i18n.ugettext
submitted_msg = _("Submitted. As soon as a response is returned, "
"this message will be replaced by that feedback.")
self.submitted_msg = submitted_msg
self.setup_code_response_rendering()
xml = self.xml
self.plot_payload = xml.findtext('./plot_payload')
# Check if problem has been queued
self.queuename = 'matlab'
self.queue_msg = ''
# this is only set if we don't have a graded response
# the graded response takes precedence
if 'queue_msg' in self.input_state and self.status in ['queued', 'incomplete', 'unsubmitted']:
self.queue_msg = sanitize_html(self.input_state['queue_msg'])
if 'queuestate' in self.input_state and self.input_state['queuestate'] == 'queued':
self.status = 'queued'
self.queue_len = 1
self.msg = self.submitted_msg
# Handle situation if no response from xqueue arrived during specified time.
if ('queuetime' not in self.input_state or
time.time() - self.input_state['queuetime'] > XQUEUE_TIMEOUT):
self.queue_len = 0
self.status = 'unsubmitted'
self.msg = _(
'No response from Xqueue within {xqueue_timeout} seconds. Aborted.'
).format(xqueue_timeout=XQUEUE_TIMEOUT)
def handle_ajax(self, dispatch, data):
"""
Handle AJAX calls directed to this input
Args:
- dispatch (str) - indicates how we want this ajax call to be handled
- data (dict) - dictionary of key-value pairs that contain useful data
Returns:
dict - 'success' - whether or not we successfully queued this submission
- 'message' - message to be rendered in case of error
"""
if dispatch == 'plot':
return self._plot_data(data)
return {}
def ungraded_response(self, queue_msg, queuekey):
"""
Handle the response from the XQueue
Stores the response in the input_state so it can be rendered later
Args:
- queue_msg (str) - message returned from the queue. The message to be rendered
- queuekey (str) - a key passed to the queue. Will be matched up to verify that this is the response we're waiting for
Returns:
nothing
"""
# check the queuekey against the saved queuekey
if('queuestate' in self.input_state and self.input_state['queuestate'] == 'queued'
and self.input_state['queuekey'] == queuekey):
msg = self._parse_data(queue_msg)
# save the queue message so that it can be rendered later
self.input_state['queue_msg'] = msg
self.input_state['queuestate'] = None
self.input_state['queuekey'] = None
def button_enabled(self):
""" Return whether or not we want the 'Test Code' button visible
Right now, we only want this button to show up when a problem has not been
checked.
"""
if self.status in ['correct', 'incorrect', 'partially-correct']:
return False
else:
return True
def _extra_context(self):
""" Set up additional context variables"""
_ = self.capa_system.i18n.ugettext
queue_msg = self.queue_msg
if len(self.queue_msg) > 0: # An empty string cannot be parsed as XML but is okay to include in the template.
try:
etree.XML(u'<div>{0}</div>'.format(self.queue_msg))
except etree.XMLSyntaxError:
try:
html5lib.parseFragment(self.queue_msg, treebuilder='lxml', namespaceHTMLElements=False)[0]
except (IndexError, ValueError):
# If neither can parse queue_msg, it contains invalid xml.
queue_msg = u"<span>{0}</span>".format(_("Error running code."))
extra_context = {
'queue_len': str(self.queue_len),
'queue_msg': queue_msg,
'button_enabled': self.button_enabled(),
'matlab_editor_js': '{static_url}js/vendor/CodeMirror/octave.js'.format(
static_url=self.capa_system.STATIC_URL),
'msg': sanitize_html(self.msg) # sanitize msg before rendering into template
}
return extra_context
def _parse_data(self, queue_msg):
"""
Parses the message out of the queue message
Args:
queue_msg (str) - a JSON encoded string
Returns:
returns the value for the key 'msg' in queue_msg
"""
try:
result = json.loads(queue_msg)
except (TypeError, ValueError):
log.error("External message should be a JSON serialized dict."
" Received queue_msg = %s", queue_msg)
raise
msg = result['msg']
return msg
def _plot_data(self, data):
"""
AJAX handler for the plot button
Args:
data (dict) - should have key 'submission' which contains the student submission
Returns:
dict - 'success' - whether or not we successfully queued this submission
- 'message' - message to be rendered in case of error
"""
_ = self.capa_system.i18n.ugettext
# only send data if xqueue exists
if self.capa_system.xqueue is None:
return {'success': False, 'message': _('Cannot connect to the queue')}
# pull relevant info out of get
response = data['submission']
# construct xqueue headers
qinterface = self.capa_system.xqueue['interface']
qtime = datetime.utcnow().strftime(xqueue_interface.dateformat)
callback_url = self.capa_system.xqueue['construct_callback']('ungraded_response')
anonymous_student_id = self.capa_system.anonymous_student_id
# TODO: Why is this using self.capa_system.seed when we have self.seed???
queuekey = xqueue_interface.make_hashkey(str(self.capa_system.seed) + qtime +
anonymous_student_id +
self.input_id)
xheader = xqueue_interface.make_xheader(
lms_callback_url=callback_url,
lms_key=queuekey,
queue_name=self.queuename)
# construct xqueue body
student_info = {
'anonymous_student_id': anonymous_student_id,
'submission_time': qtime
}
contents = {
'grader_payload': self.plot_payload,
'student_info': json.dumps(student_info),
'student_response': response,
'token': getattr(self.capa_system, 'matlab_api_key', None),
'endpoint_version': "2",
'requestor_id': anonymous_student_id,
}
(error, msg) = qinterface.send_to_queue(header=xheader,
body=json.dumps(contents))
# save the input state if successful
if error == 0:
self.input_state['queuekey'] = queuekey
self.input_state['queuestate'] = 'queued'
self.input_state['queuetime'] = time.time()
return {'success': error == 0, 'message': msg}
#-----------------------------------------------------------------------------
@registry.register
class Schematic(InputTypeBase):
"""
InputType for the schematic editor
"""
template = "schematicinput.html"
tags = ['schematic']
@classmethod
def get_attributes(cls):
"""
Convert options to a convenient format.
"""
return [
Attribute('height', None),
Attribute('width', None),
Attribute('parts', None),
Attribute('analyses', None),
Attribute('initial_value', None),
Attribute('submit_analyses', None),
Attribute('label', ''),
]
def _extra_context(self):
context = {
'setup_script': '{static_url}js/capa/schematicinput.js'.format(
static_url=self.capa_system.STATIC_URL),
}
return context
#-----------------------------------------------------------------------------
@registry.register
class ImageInput(InputTypeBase):
"""
Clickable image as an input field. Element should specify the image source, height,
and width, e.g.
<imageinput src="/static/Figures/Skier-conservation-of-energy.jpg" width="388" height="560" />
TODO: showanswer for imageinput does not work yet - need javascript to put rectangle
over acceptable area of image.
"""
template = "imageinput.html"
tags = ['imageinput']
@classmethod
def get_attributes(cls):
"""
Note: src, height, and width are all required.
"""
return [Attribute('src'),
Attribute('height'),
Attribute('label', ''),
Attribute('width'), ]
def setup(self):
"""
if value is of the form [x,y] then parse it and send along coordinates of previous answer
"""
m = re.match(r'\[([0-9]+),([0-9]+)]',
self.value.strip().replace(' ', ''))
if m:
# Note: we subtract 15 to compensate for the size of the dot on the screen.
# (is a 30x30 image--lms/static/images/green-pointer.png).
(self.gx, self.gy) = [int(x) - 15 for x in m.groups()]
else:
(self.gx, self.gy) = (0, 0)
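# Illustrative example: a stored value of "[245,310]" yields (gx, gy) == (230, 295) after the
# 15-pixel pointer-size compensation; any other value leaves the coordinates at (0, 0).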
def _extra_context(self):
return {'gx': self.gx,
'gy': self.gy}
#-----------------------------------------------------------------------------
@registry.register
class Crystallography(InputTypeBase):
"""
An input for crystallography -- user selects 3 points on the axes, and we get a plane.
TODO: what's the actual value format?
"""
template = "crystallography.html"
tags = ['crystallography']
@classmethod
def get_attributes(cls):
"""
Note: height, width are required.
"""
return [Attribute('height'),
Attribute('width'),
]
# -------------------------------------------------------------------------
@registry.register
class VseprInput(InputTypeBase):
"""
Input for molecular geometry--show possible structures, let student
pick structure and label positions with atoms or electron pairs.
"""
template = 'vsepr_input.html'
tags = ['vsepr_input']
@classmethod
def get_attributes(cls):
"""
Note: height, width, molecules and geometries are required.
"""
return [Attribute('height'),
Attribute('width'),
Attribute('molecules'),
Attribute('geometries'),
]
#-------------------------------------------------------------------------
@registry.register
class ChemicalEquationInput(InputTypeBase):
"""
An input type for entering chemical equations. Supports live preview.
Example:
<chemicalequationinput size="50"/>
options: size -- width of the textbox.
"""
template = "chemicalequationinput.html"
tags = ['chemicalequationinput']
@classmethod
def get_attributes(cls):
"""
Can set size of text field.
"""
return [Attribute('size', '20'),
Attribute('label', ''), ]
def _extra_context(self):
"""
TODO (vshnayder): Get rid of this once we have a standard way of requiring js to be loaded.
"""
return {
'previewer': '{static_url}js/capa/chemical_equation_preview.js'.format(
static_url=self.capa_system.STATIC_URL),
}
def handle_ajax(self, dispatch, data):
"""
Since we only have chemcalc preview this input, check to see if it
matches the corresponding dispatch and send it through if it does
"""
if dispatch == 'preview_chemcalc':
return self.preview_chemcalc(data)
return {}
def preview_chemcalc(self, data):
"""
Render an html preview of a chemical formula or equation. data should
contain a key 'formula' and value 'some formula string'.
Returns a json dictionary:
{
'preview' : 'the-preview-html' or ''
'error' : 'the-error' or ''
}
"""
_ = self.capa_system.i18n.ugettext
result = {'preview': '',
'error': ''}
try:
formula = data['formula']
except KeyError:
result['error'] = _("No formula specified.")
return result
try:
result['preview'] = chemcalc.render_to_html(formula)
except pyparsing.ParseException as err:
result['error'] = _("Couldn't parse formula: {error_msg}").format(error_msg=err.msg)
except Exception:
# this is unexpected, so log
log.warning(
"Error while previewing chemical formula", exc_info=True)
result['error'] = _("Error while rendering preview")
return result
#-------------------------------------------------------------------------
@registry.register
class FormulaEquationInput(InputTypeBase):
"""
An input type for entering formula equations. Supports live preview.
Example:
<formulaequationinput size="50" label="Enter the equation for motion"/>
options: size -- width of the textbox.
"""
template = "formulaequationinput.html"
tags = ['formulaequationinput']
@classmethod
def get_attributes(cls):
"""
Can set size of text field.
"""
return [
Attribute('size', '20'),
Attribute('inline', False),
Attribute('label', ''),
]
def _extra_context(self):
"""
TODO (vshnayder): Get rid of 'previewer' once we have a standard way of requiring js to be loaded.
"""
# `reported_status` is basically `status`, except we say 'unanswered'
return {
'previewer': '{static_url}js/capa/src/formula_equation_preview.js'.format(
static_url=self.capa_system.STATIC_URL),
}
def handle_ajax(self, dispatch, get):
"""
Since we only have formcalc preview this input, check to see if it
matches the corresponding dispatch and send it through if it does
"""
if dispatch == 'preview_formcalc':
return self.preview_formcalc(get)
return {}
def preview_formcalc(self, get):
"""
Render a preview of a formula or equation. `get` should
contain a key 'formula' with a math expression.
Returns a json dictionary:
{
'preview' : '<some latex>' or ''
'error' : 'the-error' or ''
'request_start' : <time sent with request>
}
"""
_ = self.capa_system.i18n.ugettext
result = {'preview': '',
'error': ''}
try:
formula = get['formula']
except KeyError:
result['error'] = _("No formula specified.")
return result
result['request_start'] = int(get.get('request_start', 0))
try:
# TODO add references to valid variables and functions
# At some point, we might want to mark invalid variables as red
# or something, and this is where we would need to pass those in.
result['preview'] = latex_preview(formula)
except pyparsing.ParseException as err:
result['error'] = _("Sorry, couldn't parse formula")
result['formula'] = formula
except Exception:
# this is unexpected, so log
log.warning(
"Error while previewing formula", exc_info=True
)
result['error'] = _("Error while rendering preview")
return result
#-----------------------------------------------------------------------------
@registry.register
class DragAndDropInput(InputTypeBase):
"""
Input for drag and drop problems. Allows student to drag and drop images and
labels to base image.
"""
template = 'drag_and_drop_input.html'
tags = ['drag_and_drop_input']
def setup(self):
def parse(tag, tag_type):
"""Parses <tag ... /> xml element to dictionary. Stores
'draggable' and 'target' tags with attributes to dictionary and
returns last.
Args:
tag: xml etree element <tag...> with attributes
tag_type: 'draggable' or 'target'.
If tag_type is 'draggable' : all attributes except id
(name or label or icon or can_reuse) are optional
If tag_type is 'target' all attributes (name, x, y, w, h)
are required. (x, y) - coordinates of center of target,
w, h - width and height of target.
Returns:
Dictionary of values of attributes:
dict{'name': smth, 'label': smth, 'icon': smth,
'can_reuse': smth}.
"""
tag_attrs = dict()
tag_attrs['draggable'] = {
'id': Attribute._sentinel,
'label': "", 'icon': "",
'can_reuse': ""
}
tag_attrs['target'] = {
'id': Attribute._sentinel,
'x': Attribute._sentinel,
'y': Attribute._sentinel,
'w': Attribute._sentinel,
'h': Attribute._sentinel
}
dic = dict()
for attr_name in tag_attrs[tag_type].keys():
dic[attr_name] = Attribute(attr_name,
default=tag_attrs[tag_type][attr_name]).parse_from_xml(tag)
if tag_type == 'draggable' and not self.no_labels:
dic['label'] = dic['label'] or dic['id']
if tag_type == 'draggable':
dic['target_fields'] = [parse(target, 'target') for target in
tag.iterchildren('target')]
return dic
# add labels to images?:
self.no_labels = Attribute('no_labels',
default="False").parse_from_xml(self.xml)
to_js = dict()
# image drag and drop onto
to_js['base_image'] = Attribute('img').parse_from_xml(self.xml)
# outline places on the image where to drag and drop
to_js['target_outline'] = Attribute('target_outline',
default="False").parse_from_xml(self.xml)
# one draggable per target?
to_js['one_per_target'] = Attribute('one_per_target',
default="True").parse_from_xml(self.xml)
# list of draggables
to_js['draggables'] = [parse(draggable, 'draggable') for draggable in
self.xml.iterchildren('draggable')]
# list of targets
to_js['targets'] = [parse(target, 'target') for target in
self.xml.iterchildren('target')]
# custom background color for labels:
label_bg_color = Attribute('label_bg_color',
default=None).parse_from_xml(self.xml)
if label_bg_color:
to_js['label_bg_color'] = label_bg_color
self.loaded_attributes['drag_and_drop_json'] = json.dumps(to_js)
self.to_render.add('drag_and_drop_json')
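# Rough illustration of the JSON handed to the template (all values hypothetical):
#   {"base_image": "/static/foo.png", "target_outline": "False", "one_per_target": "True",
#    "draggables": [{"id": "d1", "label": "d1", "icon": "", "can_reuse": "", "target_fields": []}],
#    "targets": [{"id": "t1", "x": "20", "y": "20", "w": "60", "h": "60"}]}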
#-------------------------------------------------------------------------
@registry.register
class EditAMoleculeInput(InputTypeBase):
"""
An input type for edit-a-molecule. Integrates with the molecule editor java applet.
Example:
<editamolecule size="50"/>
options: size -- width of the textbox.
"""
template = "editamolecule.html"
tags = ['editamoleculeinput']
@classmethod
def get_attributes(cls):
"""
Can set size of text field.
"""
return [Attribute('file'),
Attribute('missing', None)]
def _extra_context(self):
context = {
'applet_loader': '{static_url}js/capa/editamolecule.js'.format(
static_url=self.capa_system.STATIC_URL),
}
return context
#-----------------------------------------------------------------------------
@registry.register
class DesignProtein2dInput(InputTypeBase):
"""
An input type for design of a protein in 2D. Integrates with the Protex java applet.
Example:
<designprotein2d width="800" height="500" target_shape="E;NE;NW;W;SW;E;none" />
"""
template = "designprotein2dinput.html"
tags = ['designprotein2dinput']
@classmethod
def get_attributes(cls):
"""
Note: width, height, and target_shape are required.
"""
return [Attribute('width'),
Attribute('height'),
Attribute('target_shape')
]
def _extra_context(self):
context = {
'applet_loader': '{static_url}js/capa/design-protein-2d.js'.format(
static_url=self.capa_system.STATIC_URL),
}
return context
#-----------------------------------------------------------------------------
@registry.register
class EditAGeneInput(InputTypeBase):
"""
An input type for editing a gene.
Integrates with the genex GWT application.
Example:
<editagene genex_dna_sequence="CGAT" genex_problem_number="1"/>
"""
template = "editageneinput.html"
tags = ['editageneinput']
@classmethod
def get_attributes(cls):
"""
Note: genex_dna_sequence and genex_problem_number are required.
"""
return [Attribute('genex_dna_sequence'),
Attribute('genex_problem_number')
]
def _extra_context(self):
context = {
'applet_loader': '{static_url}js/capa/edit-a-gene.js'.format(
static_url=self.capa_system.STATIC_URL),
}
return context
#---------------------------------------------------------------------
@registry.register
class AnnotationInput(InputTypeBase):
"""
Input type for annotations: students can enter some notes or other text
(currently ungraded), and then choose from a set of tags/options, which are graded.
Example:
<annotationinput>
<title>Annotation Exercise</title>
<text>
They are the ones who, at the public assembly, had put savage derangement [ate] into my thinking
[phrenes] |89 on that day when I myself deprived Achilles of his honorific portion [geras]
</text>
<comment>Agamemnon says that ate or 'derangement' was the cause of his actions: why could Zeus say the same thing?</comment>
<comment_prompt>Type a commentary below:</comment_prompt>
<tag_prompt>Select one tag:</tag_prompt>
<options>
<option choice="correct">ate - both a cause and an effect</option>
<option choice="incorrect">ate - a cause</option>
<option choice="partially-correct">ate - an effect</option>
</options>
</annotationinput>
# TODO: allow ordering to be randomized
"""
template = "annotationinput.html"
tags = ['annotationinput']
def setup(self):
xml = self.xml
self.debug = False # set to True to display extra debug info with input
self.return_to_annotation = True # return only works in conjunction with annotatable xmodule
self.title = xml.findtext('./title', 'Annotation Exercise')
self.text = xml.findtext('./text')
self.comment = xml.findtext('./comment')
self.comment_prompt = xml.findtext(
'./comment_prompt', 'Type a commentary below:')
self.tag_prompt = xml.findtext('./tag_prompt', 'Select one tag:')
self.options = self._find_options()
# Need to provide a value that JSON can parse if there is no
# student-supplied value yet.
if self.value == '':
self.value = 'null'
self._validate_options()
def _find_options(self):
""" Returns an array of dicts where each dict represents an option. """
elements = self.xml.findall('./options/option')
return [{
'id': index,
'description': option.text,
'choice': option.get('choice')
} for (index, option) in enumerate(elements)]
def _validate_options(self):
""" Raises a ValueError if the choice attribute is missing or invalid. """
valid_choices = ('correct', 'partially-correct', 'incorrect')
for option in self.options:
choice = option['choice']
if choice is None:
raise ValueError('Missing required choice attribute.')
elif choice not in valid_choices:
raise ValueError('Invalid choice attribute: {0}. Must be one of: {1}'.format(
choice, ', '.join(valid_choices)))
def _unpack(self, json_value):
""" Unpacks the json input state into a dict. """
d = json.loads(json_value)
if not isinstance(d, dict):
d = {}
comment_value = d.get('comment', '')
if not isinstance(comment_value, basestring):
comment_value = ''
options_value = d.get('options', [])
if not isinstance(options_value, list):
options_value = []
return {
'options_value': options_value,
'has_options_value': len(options_value) > 0, # for convenience
'comment_value': comment_value,
}
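# Illustrative example (hypothetical student state): _unpack('{"options": [2], "comment": "my note"}')
# returns {'options_value': [2], 'has_options_value': True, 'comment_value': 'my note'}.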
def _extra_context(self):
extra_context = {
'title': self.title,
'text': self.text,
'comment': self.comment,
'comment_prompt': self.comment_prompt,
'tag_prompt': self.tag_prompt,
'options': self.options,
'return_to_annotation': self.return_to_annotation,
'debug': self.debug
}
extra_context.update(self._unpack(self.value))
return extra_context
@registry.register
class ChoiceTextGroup(InputTypeBase):
"""
Groups of radiobutton/checkboxes with text inputs.
Examples:
RadioButton problem
<problem>
<startouttext/>
A person rolls a standard die 100 times and records the results.
On the first roll they received a "1". Given this information
select the correct choice and fill in numbers to make it accurate.
<endouttext/>
<choicetextresponse>
<radiotextgroup label="What is the correct choice?">
<choice correct="false">The lowest number rolled was:
<decoy_input/> and the highest number rolled was:
<decoy_input/> .</choice>
<choice correct="true">The lowest number rolled was <numtolerance_input answer="1"/>
and there is not enough information to determine the highest number rolled.
</choice>
<choice correct="false">There is not enough information to determine the lowest
number rolled, and the highest number rolled was:
<decoy_input/> .
</choice>
</radiotextgroup>
</choicetextresponse>
</problem>
CheckboxProblem:
<problem>
<startouttext/>
A person randomly selects 100 times, with replacement, from the list of numbers \(\sqrt{2}\) , 2, 3, 4 ,5 ,6
and records the results. The first number they pick is \(\sqrt{2}\) Given this information
select the correct choices and fill in numbers to make them accurate.
<endouttext/>
<choicetextresponse>
<checkboxtextgroup label="What is the answer?">
<choice correct="true">
The lowest number selected was <numtolerance_input answer="1.4142" tolerance="0.01"/>
</choice>
<choice correct="false">
The highest number selected was <decoy_input/> .
</choice>
<choice correct="true">There is not enough information given to determine the highest number
which was selected.
</choice>
<choice correct="false">There is not enough information given to determine the lowest number
selected.
</choice>
</checkboxtextgroup>
</choicetextresponse>
</problem>
In the preceding examples the <decoy_input/> is used to generate a textinput html element
in the problem's display. Since it is inside of an incorrect choice, no answer given
for it will be correct, and thus specifying an answer for it is not needed.
"""
template = "choicetext.html"
tags = ['radiotextgroup', 'checkboxtextgroup']
def setup(self):
"""
Performs setup for the initial rendering of the problem.
`self.html_input_type` determines whether this problem is displayed
with radiobuttons or checkboxes
If the initial value of `self.value` is '' change it to {} so that
the template has an empty dictionary to work with.
sets the value of self.choices to be equal to the return value of
`self.extract_choices`
"""
self.text_input_values = {}
if self.tag == 'radiotextgroup':
self.html_input_type = "radio"
elif self.tag == 'checkboxtextgroup':
self.html_input_type = "checkbox"
else:
_ = self.capa_system.i18n.ugettext
msg = _("{input_type}: unexpected tag {tag_name}").format(
input_type="ChoiceTextGroup", tag_name=self.tag
)
raise Exception(msg)
if self.value == '':
# Make `value` an empty dictionary, if it currently has an empty
# value. This is necessary because the template expects a
# dictionary.
self.value = {}
self.choices = self.extract_choices(self.xml, self.capa_system.i18n)
@classmethod
def get_attributes(cls):
"""
Returns a list of `Attribute` for this problem type
"""
_ = lambda text: text
return [
Attribute("show_correctness", "always"),
Attribute("submitted_message", _("Answer received.")),
Attribute("label", ""),
]
def _extra_context(self):
"""
Returns a dictionary of extra content necessary for rendering this InputType.
`input_type` is either 'radio' or 'checkbox' indicating whether the choices for
this problem will have radiobuttons or checkboxes.
"""
return {
'input_type': self.html_input_type,
'choices': self.choices
}
@staticmethod
def extract_choices(element, i18n):
"""
Extracts choices from the xml for this problem type.
If we have xml that is as follows (choice names will have been assigned
by now)
<radiotextgroup>
<choice correct = "true" name ="1_2_1_choiceinput_0bc">
The number
<numtolerance_input name = "1_2_1_choiceinput0_numtolerance_input_0" answer="5"/>
Is the mean of the list.
</choice>
<choice correct = "false" name = "1_2_1_choiceinput_1bc>
False demonstration choice
</choice>
</radiotextgroup>
Choices are used for rendering the problem properly
The function will setup choices as follows:
choices =[
("1_2_1_choiceinput_0bc",
[{'type': 'text', 'contents': "The number", 'tail_text': '',
'value': ''
},
{'type': 'textinput',
'contents': "1_2_1_choiceinput0_numtolerance_input_0",
'tail_text': 'Is the mean of the list',
'value': ''
}
]
),
("1_2_1_choiceinput_1bc",
[{'type': 'text', 'contents': "False demonstration choice",
'tail_text': '',
'value': ''
}
]
)
]
"""
_ = i18n.ugettext
choices = []
for choice in element:
if choice.tag != 'choice':
msg = u"[capa.inputtypes.extract_choices] {0}".format(
# Translators: a "tag" is an XML element, such as "<b>" in HTML
_("Expected a {expected_tag} tag; got {given_tag} instead").format(
expected_tag=u"<choice>",
given_tag=choice.tag,
)
)
raise Exception(msg)
components = []
choice_text = ''
if choice.text is not None:
choice_text += choice.text
# Initialize our dict for the next content
adder = {
'type': 'text',
'contents': choice_text,
'tail_text': '',
'value': ''
}
components.append(adder)
for elt in choice:
# for elements in the choice e.g. <text> <numtolerance_input>
adder = {
'type': 'text',
'contents': '',
'tail_text': '',
'value': ''
}
tag_type = elt.tag
# If the current `elt` is a <numtolerance_input> set the
# `adder`type to 'numtolerance_input', and 'contents' to
# the `elt`'s name.
# Treat decoy_inputs and numtolerance_inputs the same in order
# to prevent students from reading the Html and figuring out
# which inputs are valid
if tag_type in ('numtolerance_input', 'decoy_input'):
# We set this to textinput, so that we get a textinput html
# element.
adder['type'] = 'textinput'
adder['contents'] = elt.get('name')
else:
adder['contents'] = elt.text
# Add any tail text("is the mean" in the example)
adder['tail_text'] = elt.tail if elt.tail else ''
components.append(adder)
# Add the tuple for the current choice to the list of choices
choices.append((choice.get("name"), components))
return choices
| agpl-3.0 |
tmerrick1/spack | var/spack/repos/builtin/packages/quantum-espresso/package.py | 1 | 6095 | ##############################################################################
# Copyright (c) 2013-2018, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, [email protected], All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/spack/spack
# Please also see the NOTICE and LICENSE files for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
import glob
import os.path
from spack import *
class QuantumEspresso(Package):
"""Quantum-ESPRESSO is an integrated suite of Open-Source computer codes
for electronic-structure calculations and materials modeling at the
nanoscale. It is based on density-functional theory, plane waves, and
pseudopotentials.
"""
homepage = 'http://quantum-espresso.org'
url = 'https://github.com/QEF/q-e/archive/qe-5.3.tar.gz'
version(
'6.2.0',
'972176a58d16ae8cf0c9a308479e2b97',
url='https://github.com/QEF/q-e/archive/qe-6.2.0.tar.gz'
)
version(
'6.1.0',
'3fe861dcb5f6ec3d15f802319d5d801b',
url='https://github.com/QEF/q-e/archive/qe-6.1.0.tar.gz'
)
version(
'5.4.0',
'085f7e4de0952e266957bbc79563c54e',
url='https://github.com/QEF/q-e/archive/qe-5.4.tar.gz'
)
version(
'5.3.0',
'be3f8778e302cffb89258a5f936a7592',
url='https://github.com/QEF/q-e/archive/qe-5.3.tar.gz'
)
variant('mpi', default=True, description='Builds with mpi support')
variant('openmp', default=False, description='Enables openMP support')
variant('scalapack', default=True, description='Enables scalapack support')
variant('elpa', default=True, description='Uses elpa as an eigenvalue solver')
# Support for HDF5 has been added starting in version 6.1.0 and is
# still experimental, therefore we default to False for the variant
variant('hdf5', default=False, description='Builds with HDF5 support')
depends_on('blas')
depends_on('lapack')
depends_on('mpi', when='+mpi')
depends_on('scalapack', when='+scalapack+mpi')
depends_on('fftw+mpi', when='+mpi')
depends_on('fftw~mpi', when='~mpi')
depends_on('elpa+openmp', when='+elpa+openmp')
depends_on('elpa~openmp', when='+elpa~openmp')
depends_on('hdf5', when='+hdf5')
patch('dspev_drv_elpa.patch', when='@6.1+elpa ^[email protected]')
patch('dspev_drv_elpa.patch', when='@6.1+elpa ^[email protected]')
# We can't ask for scalapack or elpa if we don't want MPI
conflicts(
'+scalapack',
when='~mpi',
msg='scalapack is a parallel library and needs MPI support'
)
conflicts(
'+elpa',
when='~mpi',
msg='elpa is a parallel library and needs MPI support'
)
# Elpa is formally supported by @:5.4.0, but QE configure searches
# for it in the wrong folders (or tries to download it within
# the build directory). Instead of patching Elpa to provide the
# folder QE expects as a link, we issue a conflict here.
conflicts('+elpa', when='@:5.4.0')
conflicts('+hdf5', when='@:5.4.0')
# Spurious problems running in parallel the Makefile
# generated by the configure
parallel = False
def install(self, spec, prefix):
prefix_path = prefix.bin if '@:5.4.0' in spec else prefix
options = ['-prefix={0}'.format(prefix_path)]
if '+mpi' in spec:
options.append('--enable-parallel=yes')
else:
options.append('--enable-parallel=no')
if '+openmp' in spec:
options.append('--enable-openmp')
if '+scalapack' in spec:
scalapack_option = 'intel' if '^intel-mkl' in spec else 'yes'
options.append('--with-scalapack={0}'.format(scalapack_option))
if '+elpa' in spec:
# Spec for elpa
elpa = spec['elpa']
# Find where the Fortran module resides
elpa_module = find(elpa.prefix, 'elpa.mod')
# Compute the include directory from there: versions
            # of espresso prior to 6.1 require -I in front of the directory
elpa_include = '' if '@6.1:' in spec else '-I'
elpa_include += os.path.dirname(elpa_module[0])
options.extend([
'--with-elpa-include={0}'.format(elpa_include),
'--with-elpa-lib={0}'.format(elpa.libs[0])
])
if '+hdf5' in spec:
options.append('--with-hdf5={0}'.format(spec['hdf5'].prefix))
# Add a list of directories to search
search_list = []
for dependency_spec in spec.dependencies():
search_list.extend([
dependency_spec.prefix.lib,
dependency_spec.prefix.lib64
])
search_list = " ".join(search_list)
options.extend([
'LIBDIRS={0}'.format(search_list),
'F90={0}'.format(env['SPACK_FC']),
'CC={0}'.format(env['SPACK_CC'])
])
configure(*options)
make('all')
if 'platform=darwin' in spec:
mkdirp(prefix.bin)
for filename in glob.glob("bin/*.x"):
install(filename, prefix.bin)
else:
make('install')
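    # Hedged usage sketch (not part of the original recipe): with this package file on
    # the Spack repo path, an install could be requested from the command line as e.g.
    #   spack install quantum-espresso@6.2.0 +mpi+scalapack+elpa
    # where the variant names correspond to the `variant` declarations above.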
| lgpl-2.1 |
yongshengwang/hue | desktop/core/ext-py/Django-1.6.10/django/contrib/contenttypes/tests.py | 113 | 11127 | from __future__ import unicode_literals
from django.db import models
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes.views import shortcut
from django.contrib.sites.models import Site, get_current_site
from django.http import HttpRequest, Http404
from django.test import TestCase
from django.test.utils import override_settings
from django.utils.http import urlquote
from django.utils import six
from django.utils.encoding import python_2_unicode_compatible
class ConcreteModel(models.Model):
name = models.CharField(max_length=10)
class ProxyModel(ConcreteModel):
class Meta:
proxy = True
@python_2_unicode_compatible
class FooWithoutUrl(models.Model):
"""
Fake model not defining ``get_absolute_url`` for
:meth:`ContentTypesTests.test_shortcut_view_without_get_absolute_url`"""
name = models.CharField(max_length=30, unique=True)
def __str__(self):
return self.name
class FooWithUrl(FooWithoutUrl):
"""
Fake model defining ``get_absolute_url`` for
:meth:`ContentTypesTests.test_shortcut_view`
"""
def get_absolute_url(self):
return "/users/%s/" % urlquote(self.name)
class FooWithBrokenAbsoluteUrl(FooWithoutUrl):
"""
Fake model defining a ``get_absolute_url`` method containing an error
"""
def get_absolute_url(self):
return "/users/%s/" % self.unknown_field
class ContentTypesTests(TestCase):
def setUp(self):
self.old_Site_meta_installed = Site._meta.installed
ContentType.objects.clear_cache()
def tearDown(self):
Site._meta.installed = self.old_Site_meta_installed
ContentType.objects.clear_cache()
def test_lookup_cache(self):
"""
Make sure that the content type cache (see ContentTypeManager)
works correctly. Lookups for a particular content type -- by model, ID
or natural key -- should hit the database only on the first lookup.
"""
# At this point, a lookup for a ContentType should hit the DB
with self.assertNumQueries(1):
ContentType.objects.get_for_model(ContentType)
# A second hit, though, won't hit the DB, nor will a lookup by ID
# or natural key
with self.assertNumQueries(0):
ct = ContentType.objects.get_for_model(ContentType)
with self.assertNumQueries(0):
ContentType.objects.get_for_id(ct.id)
with self.assertNumQueries(0):
ContentType.objects.get_by_natural_key('contenttypes',
'contenttype')
# Once we clear the cache, another lookup will again hit the DB
ContentType.objects.clear_cache()
with self.assertNumQueries(1):
ContentType.objects.get_for_model(ContentType)
# The same should happen with a lookup by natural key
ContentType.objects.clear_cache()
with self.assertNumQueries(1):
ContentType.objects.get_by_natural_key('contenttypes',
'contenttype')
# And a second hit shouldn't hit the DB
with self.assertNumQueries(0):
ContentType.objects.get_by_natural_key('contenttypes',
'contenttype')
def test_get_for_models_empty_cache(self):
# Empty cache.
with self.assertNumQueries(1):
cts = ContentType.objects.get_for_models(ContentType, FooWithUrl)
self.assertEqual(cts, {
ContentType: ContentType.objects.get_for_model(ContentType),
FooWithUrl: ContentType.objects.get_for_model(FooWithUrl),
})
def test_get_for_models_partial_cache(self):
# Partial cache
ContentType.objects.get_for_model(ContentType)
with self.assertNumQueries(1):
cts = ContentType.objects.get_for_models(ContentType, FooWithUrl)
self.assertEqual(cts, {
ContentType: ContentType.objects.get_for_model(ContentType),
FooWithUrl: ContentType.objects.get_for_model(FooWithUrl),
})
def test_get_for_models_full_cache(self):
# Full cache
ContentType.objects.get_for_model(ContentType)
ContentType.objects.get_for_model(FooWithUrl)
with self.assertNumQueries(0):
cts = ContentType.objects.get_for_models(ContentType, FooWithUrl)
self.assertEqual(cts, {
ContentType: ContentType.objects.get_for_model(ContentType),
FooWithUrl: ContentType.objects.get_for_model(FooWithUrl),
})
def test_get_for_concrete_model(self):
"""
Make sure the `for_concrete_model` kwarg correctly works
with concrete, proxy and deferred models
"""
concrete_model_ct = ContentType.objects.get_for_model(ConcreteModel)
self.assertEqual(concrete_model_ct,
ContentType.objects.get_for_model(ProxyModel))
self.assertEqual(concrete_model_ct,
ContentType.objects.get_for_model(ConcreteModel,
for_concrete_model=False))
proxy_model_ct = ContentType.objects.get_for_model(ProxyModel,
for_concrete_model=False)
self.assertNotEqual(concrete_model_ct, proxy_model_ct)
# Make sure deferred model are correctly handled
ConcreteModel.objects.create(name="Concrete")
DeferredConcreteModel = ConcreteModel.objects.only('pk').get().__class__
DeferredProxyModel = ProxyModel.objects.only('pk').get().__class__
self.assertEqual(concrete_model_ct,
ContentType.objects.get_for_model(DeferredConcreteModel))
self.assertEqual(concrete_model_ct,
ContentType.objects.get_for_model(DeferredConcreteModel,
for_concrete_model=False))
self.assertEqual(concrete_model_ct,
ContentType.objects.get_for_model(DeferredProxyModel))
self.assertEqual(proxy_model_ct,
ContentType.objects.get_for_model(DeferredProxyModel,
for_concrete_model=False))
def test_get_for_concrete_models(self):
"""
Make sure the `for_concrete_models` kwarg correctly works
with concrete, proxy and deferred models.
"""
concrete_model_ct = ContentType.objects.get_for_model(ConcreteModel)
cts = ContentType.objects.get_for_models(ConcreteModel, ProxyModel)
self.assertEqual(cts, {
ConcreteModel: concrete_model_ct,
ProxyModel: concrete_model_ct,
})
proxy_model_ct = ContentType.objects.get_for_model(ProxyModel,
for_concrete_model=False)
cts = ContentType.objects.get_for_models(ConcreteModel, ProxyModel,
for_concrete_models=False)
self.assertEqual(cts, {
ConcreteModel: concrete_model_ct,
ProxyModel: proxy_model_ct,
})
# Make sure deferred model are correctly handled
ConcreteModel.objects.create(name="Concrete")
DeferredConcreteModel = ConcreteModel.objects.only('pk').get().__class__
DeferredProxyModel = ProxyModel.objects.only('pk').get().__class__
cts = ContentType.objects.get_for_models(DeferredConcreteModel,
DeferredProxyModel)
self.assertEqual(cts, {
DeferredConcreteModel: concrete_model_ct,
DeferredProxyModel: concrete_model_ct,
})
cts = ContentType.objects.get_for_models(DeferredConcreteModel,
DeferredProxyModel,
for_concrete_models=False)
self.assertEqual(cts, {
DeferredConcreteModel: concrete_model_ct,
DeferredProxyModel: proxy_model_ct,
})
@override_settings(ALLOWED_HOSTS=['example.com'])
def test_shortcut_view(self):
"""
Check that the shortcut view (used for the admin "view on site"
functionality) returns a complete URL regardless of whether the sites
framework is installed
"""
request = HttpRequest()
request.META = {
"SERVER_NAME": "Example.com",
"SERVER_PORT": "80",
}
user_ct = ContentType.objects.get_for_model(FooWithUrl)
obj = FooWithUrl.objects.create(name="john")
if Site._meta.installed:
response = shortcut(request, user_ct.id, obj.id)
self.assertEqual("http://%s/users/john/" % get_current_site(request).domain,
response._headers.get("location")[1])
Site._meta.installed = False
response = shortcut(request, user_ct.id, obj.id)
self.assertEqual("http://Example.com/users/john/",
response._headers.get("location")[1])
def test_shortcut_view_without_get_absolute_url(self):
"""
Check that the shortcut view (used for the admin "view on site"
functionality) returns 404 when get_absolute_url is not defined.
"""
request = HttpRequest()
request.META = {
"SERVER_NAME": "Example.com",
"SERVER_PORT": "80",
}
user_ct = ContentType.objects.get_for_model(FooWithoutUrl)
obj = FooWithoutUrl.objects.create(name="john")
self.assertRaises(Http404, shortcut, request, user_ct.id, obj.id)
def test_shortcut_view_with_broken_get_absolute_url(self):
"""
Check that the shortcut view does not catch an AttributeError raised
by the model's get_absolute_url method.
Refs #8997.
"""
request = HttpRequest()
request.META = {
"SERVER_NAME": "Example.com",
"SERVER_PORT": "80",
}
user_ct = ContentType.objects.get_for_model(FooWithBrokenAbsoluteUrl)
obj = FooWithBrokenAbsoluteUrl.objects.create(name="john")
self.assertRaises(AttributeError, shortcut, request, user_ct.id, obj.id)
def test_missing_model(self):
"""
Ensures that displaying content types in admin (or anywhere) doesn't
break on leftover content type records in the DB for which no model
is defined anymore.
"""
ct = ContentType.objects.create(
name = 'Old model',
app_label = 'contenttypes',
model = 'OldModel',
)
self.assertEqual(six.text_type(ct), 'Old model')
self.assertIsNone(ct.model_class())
# Make sure stale ContentTypes can be fetched like any other object.
# Before Django 1.6 this caused a NoneType error in the caching mechanism.
# Instead, just return the ContentType object and let the app detect stale states.
ct_fetched = ContentType.objects.get_for_id(ct.pk)
self.assertIsNone(ct_fetched.model_class())
| apache-2.0 |
maohongyuan/kbengine | kbe/res/scripts/common/Lib/test/test_userlist.py | 116 | 1896 | # Check every path through every method of UserList
from collections import UserList
from test import support, list_tests
class UserListTest(list_tests.CommonTest):
type2test = UserList
def test_getslice(self):
super().test_getslice()
l = [0, 1, 2, 3, 4]
u = self.type2test(l)
for i in range(-3, 6):
self.assertEqual(u[:i], l[:i])
self.assertEqual(u[i:], l[i:])
for j in range(-3, 6):
self.assertEqual(u[i:j], l[i:j])
def test_add_specials(self):
u = UserList("spam")
u2 = u + "eggs"
self.assertEqual(u2, list("spameggs"))
def test_radd_specials(self):
u = UserList("eggs")
u2 = "spam" + u
self.assertEqual(u2, list("spameggs"))
u2 = u.__radd__(UserList("spam"))
self.assertEqual(u2, list("spameggs"))
def test_iadd(self):
super().test_iadd()
u = [0, 1]
u += UserList([0, 1])
self.assertEqual(u, [0, 1, 0, 1])
def test_mixedcmp(self):
u = self.type2test([0, 1])
self.assertEqual(u, [0, 1])
self.assertNotEqual(u, [0])
self.assertNotEqual(u, [0, 2])
def test_mixedadd(self):
u = self.type2test([0, 1])
self.assertEqual(u + [], u)
self.assertEqual(u + [2], [0, 1, 2])
def test_getitemoverwriteiter(self):
# Verify that __getitem__ overrides *are* recognized by __iter__
class T(self.type2test):
def __getitem__(self, key):
return str(key) + '!!!'
self.assertEqual(next(iter(T((1,2)))), "0!!!")
def test_userlist_copy(self):
u = self.type2test([6, 8, 1, 9, 1])
v = u.copy()
self.assertEqual(u, v)
self.assertEqual(type(u), type(v))
def test_main():
support.run_unittest(UserListTest)
if __name__ == "__main__":
test_main()
| lgpl-3.0 |
acshan/odoo | openerp/addons/base/ir/ir_fields.py | 194 | 17664 | # -*- coding: utf-8 -*-
import cStringIO
import datetime
import functools
import itertools
import time
import psycopg2
import pytz
from openerp import models, api, _
from openerp.tools import DEFAULT_SERVER_DATE_FORMAT, DEFAULT_SERVER_DATETIME_FORMAT, ustr
REFERENCING_FIELDS = set([None, 'id', '.id'])
def only_ref_fields(record):
return dict((k, v) for k, v in record.iteritems()
if k in REFERENCING_FIELDS)
def exclude_ref_fields(record):
return dict((k, v) for k, v in record.iteritems()
if k not in REFERENCING_FIELDS)
CREATE = lambda values: (0, False, values)
UPDATE = lambda id, values: (1, id, values)
DELETE = lambda id: (2, id, False)
FORGET = lambda id: (3, id, False)
LINK_TO = lambda id: (4, id, False)
DELETE_ALL = lambda: (5, False, False)
REPLACE_WITH = lambda ids: (6, False, ids)
class ImportWarning(Warning):
""" Used to send warnings upwards the stack during the import process """
pass
class ConversionNotFound(ValueError): pass
class ir_fields_converter(models.Model):
_name = 'ir.fields.converter'
@api.model
def _format_import_error(self, error_type, error_msg, error_params=(), error_args=None):
# sanitize error params for later formatting by the import system
sanitize = lambda p: p.replace('%', '%%') if isinstance(p, basestring) else p
if error_params:
if isinstance(error_params, basestring):
error_params = sanitize(error_params)
elif isinstance(error_params, dict):
error_params = dict((k, sanitize(v)) for k, v in error_params.iteritems())
elif isinstance(error_params, tuple):
error_params = tuple(map(sanitize, error_params))
return error_type(error_msg % error_params, error_args)
@api.model
def for_model(self, model, fromtype=str):
""" Returns a converter object for the model. A converter is a
callable taking a record-ish (a dictionary representing an openerp
record with values of typetag ``fromtype``) and returning a converted
        record matching what :meth:`openerp.osv.orm.Model.write` expects.
:param model: :class:`openerp.osv.orm.Model` for the conversion base
:returns: a converter callable
:rtype: (record: dict, logger: (field, error) -> None) -> dict
"""
# make sure model is new api
model = self.env[model._name]
converters = {
name: self.to_field(model, field, fromtype)
for name, field in model._fields.iteritems()
}
def fn(record, log):
converted = {}
for field, value in record.iteritems():
if field in (None, 'id', '.id'):
continue
if not value:
converted[field] = False
continue
try:
converted[field], ws = converters[field](value)
for w in ws:
if isinstance(w, basestring):
# wrap warning string in an ImportWarning for
# uniform handling
w = ImportWarning(w)
log(field, w)
except ValueError, e:
log(field, e)
return converted
return fn
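    # Hedged usage sketch (model and field names below are illustrative only):
    #   convert = self.env['ir.fields.converter'].for_model(self.env['res.partner'])
    #   logged = []
    #   values = convert({'name': 'Foo', 'customer': 'true'},
    #                    lambda field, exc: logged.append((field, exc)))
    # `values` is then a dict suitable for create()/write() on the target model.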
@api.model
def to_field(self, model, field, fromtype=str):
""" Fetches a converter for the provided field object, from the
specified type.
A converter is simply a callable taking a value of type ``fromtype``
(or a composite of ``fromtype``, e.g. list or dict) and returning a
value acceptable for a write() on the field ``field``.
By default, tries to get a method on itself with a name matching the
pattern ``_$fromtype_to_$field.type`` and returns it.
Converter callables can either return a value and a list of warnings
to their caller or raise ``ValueError``, which will be interpreted as a
validation & conversion failure.
ValueError can have either one or two parameters. The first parameter
is mandatory, **must** be a unicode string and will be used as the
user-visible message for the error (it should be translatable and
translated). It can contain a ``field`` named format placeholder so the
caller can inject the field's translated, user-facing name (@string).
The second parameter is optional and, if provided, must be a mapping.
This mapping will be merged into the error dictionary returned to the
client.
If a converter can perform its function but has to make assumptions
about the data, it can send a warning to the user through adding an
instance of :class:`~.ImportWarning` to the second value
it returns. The handling of a warning at the upper levels is the same
as ``ValueError`` above.
:param field: field object to generate a value for
:type field: :class:`openerp.fields.Field`
:param fromtype: type to convert to something fitting for ``field``
:type fromtype: type | str
:param context: openerp request context
:return: a function (fromtype -> field.write_type), if a converter is found
:rtype: Callable | None
"""
assert isinstance(fromtype, (type, str))
# FIXME: return None
typename = fromtype.__name__ if isinstance(fromtype, type) else fromtype
converter = getattr(self, '_%s_to_%s' % (typename, field.type), None)
if not converter:
return None
return functools.partial(converter, model, field)
@api.model
def _str_to_boolean(self, model, field, value):
# all translatables used for booleans
true, yes, false, no = _(u"true"), _(u"yes"), _(u"false"), _(u"no")
# potentially broken casefolding? What about locales?
trues = set(word.lower() for word in itertools.chain(
[u'1', u"true", u"yes"], # don't use potentially translated values
self._get_translations(['code'], u"true"),
self._get_translations(['code'], u"yes"),
))
if value.lower() in trues:
return True, []
# potentially broken casefolding? What about locales?
falses = set(word.lower() for word in itertools.chain(
[u'', u"0", u"false", u"no"],
self._get_translations(['code'], u"false"),
self._get_translations(['code'], u"no"),
))
if value.lower() in falses:
return False, []
return True, [self._format_import_error(
ImportWarning,
_(u"Unknown value '%s' for boolean field '%%(field)s', assuming '%s'"),
(value, yes),
{'moreinfo': _(u"Use '1' for yes and '0' for no")}
)]
@api.model
def _str_to_integer(self, model, field, value):
try:
return int(value), []
except ValueError:
raise self._format_import_error(
ValueError,
_(u"'%s' does not seem to be an integer for field '%%(field)s'"),
value
)
@api.model
def _str_to_float(self, model, field, value):
try:
return float(value), []
except ValueError:
raise self._format_import_error(
ValueError,
_(u"'%s' does not seem to be a number for field '%%(field)s'"),
value
)
@api.model
def _str_id(self, model, field, value):
return value, []
_str_to_reference = _str_to_char = _str_to_text = _str_to_binary = _str_to_html = _str_id
@api.model
def _str_to_date(self, model, field, value):
try:
time.strptime(value, DEFAULT_SERVER_DATE_FORMAT)
return value, []
except ValueError:
raise self._format_import_error(
ValueError,
_(u"'%s' does not seem to be a valid date for field '%%(field)s'"),
value,
{'moreinfo': _(u"Use the format '%s'") % u"2012-12-31"}
)
@api.model
def _input_tz(self):
# if there's a tz in context, try to use that
if self._context.get('tz'):
try:
return pytz.timezone(self._context['tz'])
except pytz.UnknownTimeZoneError:
pass
# if the current user has a tz set, try to use that
user = self.env.user
if user.tz:
try:
return pytz.timezone(user.tz)
except pytz.UnknownTimeZoneError:
pass
# fallback if no tz in context or on user: UTC
return pytz.UTC
@api.model
def _str_to_datetime(self, model, field, value):
try:
parsed_value = datetime.datetime.strptime(
value, DEFAULT_SERVER_DATETIME_FORMAT)
except ValueError:
raise self._format_import_error(
ValueError,
_(u"'%s' does not seem to be a valid datetime for field '%%(field)s'"),
value,
{'moreinfo': _(u"Use the format '%s'") % u"2012-12-31 23:59:59"}
)
        input_tz = self._input_tz()  # Apply input tz to the parsed naive datetime
dt = input_tz.localize(parsed_value, is_dst=False)
# And convert to UTC before reformatting for writing
return dt.astimezone(pytz.UTC).strftime(DEFAULT_SERVER_DATETIME_FORMAT), []
@api.model
def _get_translations(self, types, src):
types = tuple(types)
# Cache translations so they don't have to be reloaded from scratch on
# every row of the file
tnx_cache = self._cr.cache.setdefault(self._name, {})
if tnx_cache.setdefault(types, {}) and src in tnx_cache[types]:
return tnx_cache[types][src]
Translations = self.env['ir.translation']
tnx = Translations.search([('type', 'in', types), ('src', '=', src)])
result = tnx_cache[types][src] = [t.value for t in tnx if t.value is not False]
return result
@api.model
def _str_to_selection(self, model, field, value):
# get untranslated values
env = self.with_context(lang=None).env
selection = field.get_description(env)['selection']
for item, label in selection:
label = ustr(label)
labels = [label] + self._get_translations(('selection', 'model', 'code'), label)
if value == unicode(item) or value in labels:
return item, []
raise self._format_import_error(
ValueError,
_(u"Value '%s' not found in selection field '%%(field)s'"),
value,
{'moreinfo': [_label or unicode(item) for item, _label in selection if _label or item]}
)
@api.model
def db_id_for(self, model, field, subfield, value):
""" Finds a database id for the reference ``value`` in the referencing
subfield ``subfield`` of the provided field of the provided model.
:param model: model to which the field belongs
:param field: relational field for which references are provided
:param subfield: a relational subfield allowing building of refs to
existing records: ``None`` for a name_get/name_search,
``id`` for an external id and ``.id`` for a database
id
:param value: value of the reference to match to an actual record
:param context: OpenERP request context
:return: a pair of the matched database identifier (if any), the
translated user-readable name for the field and the list of
warnings
:rtype: (ID|None, unicode, list)
"""
id = None
warnings = []
action = {'type': 'ir.actions.act_window', 'target': 'new',
'view_mode': 'tree,form', 'view_type': 'form',
'views': [(False, 'tree'), (False, 'form')],
'help': _(u"See all possible values")}
if subfield is None:
action['res_model'] = field.comodel_name
elif subfield in ('id', '.id'):
action['res_model'] = 'ir.model.data'
action['domain'] = [('model', '=', field.comodel_name)]
RelatedModel = self.env[field.comodel_name]
if subfield == '.id':
field_type = _(u"database id")
try: tentative_id = int(value)
except ValueError: tentative_id = value
try:
if RelatedModel.search([('id', '=', tentative_id)]):
id = tentative_id
except psycopg2.DataError:
# type error
raise self._format_import_error(
ValueError,
_(u"Invalid database id '%s' for the field '%%(field)s'"),
value,
{'moreinfo': action})
elif subfield == 'id':
field_type = _(u"external id")
if '.' in value:
xmlid = value
else:
xmlid = "%s.%s" % (self._context.get('_import_current_module', ''), value)
try:
id = self.env.ref(xmlid).id
except ValueError:
pass # leave id is None
elif subfield is None:
field_type = _(u"name")
ids = RelatedModel.name_search(name=value, operator='=')
if ids:
if len(ids) > 1:
warnings.append(ImportWarning(
_(u"Found multiple matches for field '%%(field)s' (%d matches)")
% (len(ids))))
id, _name = ids[0]
else:
raise self._format_import_error(
Exception,
_(u"Unknown sub-field '%s'"),
subfield
)
if id is None:
raise self._format_import_error(
ValueError,
_(u"No matching record found for %(field_type)s '%(value)s' in field '%%(field)s'"),
{'field_type': field_type, 'value': value},
{'moreinfo': action})
return id, field_type, warnings
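    # Hedged illustration of the three reference styles handled above:
    #   db_id_for(model, field, '.id', '42')                 # database id
    #   db_id_for(model, field, 'id', 'base.main_company')   # external (xml) id
    #   db_id_for(model, field, None, 'Some record name')    # name_get/name_search
    # each returning (matched id or None, translated field type label, warnings).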
def _referencing_subfield(self, record):
""" Checks the record for the subfields allowing referencing (an
existing record in an other table), errors out if it finds potential
conflicts (multiple referencing subfields) or non-referencing subfields
returns the name of the correct subfield.
:param record:
:return: the record subfield to use for referencing and a list of warnings
:rtype: str, list
"""
# Can import by name_get, external id or database id
fieldset = set(record.iterkeys())
if fieldset - REFERENCING_FIELDS:
raise ValueError(
_(u"Can not create Many-To-One records indirectly, import the field separately"))
if len(fieldset) > 1:
raise ValueError(
_(u"Ambiguous specification for field '%(field)s', only provide one of name, external id or database id"))
# only one field left possible, unpack
[subfield] = fieldset
return subfield, []
@api.model
def _str_to_many2one(self, model, field, values):
# Should only be one record, unpack
[record] = values
subfield, w1 = self._referencing_subfield(record)
reference = record[subfield]
id, _, w2 = self.db_id_for(model, field, subfield, reference)
return id, w1 + w2
@api.model
def _str_to_many2many(self, model, field, value):
[record] = value
subfield, warnings = self._referencing_subfield(record)
ids = []
for reference in record[subfield].split(','):
id, _, ws = self.db_id_for(model, field, subfield, reference)
ids.append(id)
warnings.extend(ws)
return [REPLACE_WITH(ids)], warnings
@api.model
def _str_to_one2many(self, model, field, records):
commands = []
warnings = []
if len(records) == 1 and exclude_ref_fields(records[0]) == {}:
# only one row with only ref field, field=ref1,ref2,ref3 as in
# m2o/m2m
record = records[0]
subfield, ws = self._referencing_subfield(record)
warnings.extend(ws)
# transform [{subfield:ref1,ref2,ref3}] into
# [{subfield:ref1},{subfield:ref2},{subfield:ref3}]
records = ({subfield:item} for item in record[subfield].split(','))
def log(_, e):
if not isinstance(e, Warning):
raise e
warnings.append(e)
convert = self.for_model(self.env[field.comodel_name])
for record in records:
id = None
refs = only_ref_fields(record)
# there are ref fields in the record
if refs:
subfield, w1 = self._referencing_subfield(refs)
warnings.extend(w1)
reference = record[subfield]
id, _, w2 = self.db_id_for(model, field, subfield, reference)
warnings.extend(w2)
writable = convert(exclude_ref_fields(record), log)
if id:
commands.append(LINK_TO(id))
commands.append(UPDATE(id, writable))
else:
commands.append(CREATE(writable))
return commands, warnings
| agpl-3.0 |
will-Do/avocado-vt | virttest/libvirt_xml/vol_xml.py | 12 | 7155 | """
Module simplifying manipulation of XML described at
http://libvirt.org/formatstorage.html#StorageVol
"""
from virttest.libvirt_xml import base, accessors
from virttest.libvirt_xml.xcepts import LibvirtXMLNotFoundError
class VolXMLBase(base.LibvirtXMLBase):
"""
Accessor methods for VolXML class.
Properties:
name: string, operates on XML name tag
key: string, operates on key tag
capacity: integer, operates on capacity attribute of capacity tag
        allocation: integer, operates on allocation attribute of allocation tag
format: string, operates on type attribute of format tag
path: string, operates on path attribute of path tag
owner, integer, operates on owner attribute of owner tag
group, integer, operates on group attribute of group tag
mode: string, operates on mode attribute of mode tag
label: string, operates on label attribute of label tag
compat: string, operates on compat attribute of label tag
lazy_refcounts: bool, True/False
encryption: VolXMLBase.Encryption instance.
capacity_unit: string, operates on unit attribute of capacity tag
"""
__slots__ = ('name', 'key', 'capacity', 'allocation', 'format', 'path',
'owner', 'group', 'mode', 'label', 'compat', 'lazy_refcounts',
'encryption', "capacity_unit")
__uncompareable__ = base.LibvirtXMLBase.__uncompareable__
__schema_name__ = "storagevol"
def __init__(self, virsh_instance=base.virsh):
accessors.XMLElementText('name', self, parent_xpath='/',
tag_name='name')
accessors.XMLElementText('key', self, parent_xpath='/',
tag_name='key')
accessors.XMLElementInt('capacity', self, parent_xpath='/',
tag_name='capacity')
accessors.XMLElementInt('allocation', self, parent_xpath='/',
tag_name='allocation')
accessors.XMLAttribute('format', self, parent_xpath='/target',
tag_name='format', attribute='type')
accessors.XMLAttribute('capacity_unit', self, parent_xpath='/',
tag_name='capacity', attribute='unit')
accessors.XMLElementNest('encryption', self, parent_xpath='/target',
tag_name='encryption', subclass=self.Encryption,
subclass_dargs={
'virsh_instance': virsh_instance})
accessors.XMLElementText('path', self, parent_xpath='/target',
tag_name='path')
accessors.XMLElementInt('owner', self,
parent_xpath='/target/permissions',
tag_name='owner')
accessors.XMLElementInt('group', self,
parent_xpath='/target/permissions',
tag_name='group')
accessors.XMLElementText('mode', self,
parent_xpath='/target/permissions',
tag_name='mode')
accessors.XMLElementText('label', self,
parent_xpath='/target/permissions',
tag_name='label')
accessors.XMLElementText('compat', self, parent_xpath='/target',
tag_name='compat')
accessors.XMLElementBool('lazy_refcounts', self,
parent_xpath='/target/features',
tag_name='lazy_refcounts')
super(VolXMLBase, self).__init__(virsh_instance=virsh_instance)
class VolXML(VolXMLBase):
"""
Manipulators of a Virtual Vol through it's XML definition.
"""
__slots__ = []
def __init__(self, vol_name='default', virsh_instance=base.virsh):
"""
Initialize new instance with empty XML
"""
super(VolXML, self).__init__(virsh_instance=virsh_instance)
self.xml = u"<volume><name>%s</name></volume>" % vol_name
def new_encryption(self, **dargs):
"""
Return a new volume encryption instance and set properties from dargs
"""
new_one = self.Encryption(virsh_instance=self.virsh)
for key, value in dargs.items():
setattr(new_one, key, value)
return new_one
def create(self, pool_name, virsh_instance=base.virsh):
"""
Create volume with virsh from this instance
"""
result = virsh_instance.vol_create(pool_name, self.xml)
if result.exit_status:
return False
return True
@staticmethod
def new_from_vol_dumpxml(vol_name, pool_name, virsh_instance=base.virsh):
"""
Return new VolXML instance from virsh vol-dumpxml command
:param vol_name: Name of vol to vol-dumpxml
:param virsh_instance: virsh module or instance to use
:return: New initialized VolXML instance
"""
volxml = VolXML(virsh_instance=virsh_instance)
volxml['xml'] = virsh_instance.vol_dumpxml(vol_name, pool_name)\
.stdout.strip()
return volxml
@staticmethod
def get_vol_details_by_name(vol_name, pool_name, virsh_instance=base.virsh):
"""
Return volume xml dictionary by Vol's uuid or name.
:param vol_name: Vol's name
:return: volume xml dictionary
"""
volume_xml = {}
vol_xml = VolXML.new_from_vol_dumpxml(vol_name, pool_name,
virsh_instance)
volume_xml['key'] = vol_xml.key
volume_xml['path'] = vol_xml.path
volume_xml['capacity'] = vol_xml.capacity
volume_xml['allocation'] = vol_xml.allocation
try:
volume_xml['format'] = vol_xml.format
except LibvirtXMLNotFoundError:
volume_xml['format'] = None
return volume_xml
@staticmethod
def new_vol(**dargs):
"""
Return a new VolXML instance and set properties from dargs
:param dargs: param dictionary
:return: new VolXML instance
"""
new_one = VolXML(virsh_instance=base.virsh)
for key, value in dargs.items():
setattr(new_one, key, value)
return new_one
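    # Hedged usage sketch (volume and pool names are illustrative):
    #   vol = VolXML.new_vol(name='test.qcow2', capacity=1073741824, format='qcow2')
    #   vol.create('default')   # define the volume in the 'default' storage pool
    # Property names follow the __slots__ declared on VolXMLBase above.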
class Encryption(base.LibvirtXMLBase):
"""
Encryption volume XML class
Properties:
format:
string.
secret:
dict, keys: type, uuid
"""
__slots__ = ('format', 'secret')
def __init__(self, virsh_instance=base.virsh):
accessors.XMLAttribute('format', self, parent_xpath='/',
tag_name='encryption', attribute='format')
accessors.XMLElementDict('secret', self, parent_xpath='/',
tag_name='secret')
super(VolXML.Encryption, self).__init__(
virsh_instance=virsh_instance)
self.xml = '<encryption/>'
| gpl-2.0 |
MadsJensen/agency_connectivity | correlation_analysis.py | 1 | 4393 | # -*- coding: utf-8 -*-
"""
@author: mje
@emai: [email protected]
"""
import numpy as np
# import mne
import matplotlib.pyplot as plt
import pandas as pd
from scipy.stats import spearmanr
from my_settings import *
plt.style.use("ggplot")
b_df = pd.read_csv(
"/Users/au194693/projects/agency_connectivity/data/" +
"behavioural_results.csv")
def calc_ISPC_time_between(data, chan_1=52, chan_2=1):
result = np.empty([data.shape[0]])
for i in range(data.shape[0]):
result[i] = np.abs(
np.mean(
np.exp(1j * (np.angle(data[i, chan_1, window_start:window_end])
- np.angle(data[i, chan_2, window_start:
window_end])))))
return result
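# Note: each per-trial value above is the inter-site phase clustering (ISPC)
# between the two channels over the analysis window,
#   ISPC = | mean_t exp(1j * (phi_1(t) - phi_2(t))) |
# which is 1 for a perfectly constant phase difference and near 0 for random phases.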
def make_correlation(data, chan_1=52, chan_2=1):
result = np.empty([data.shape[0]])
for i in range(len(data)):
result[i] = spearmanr(data[i, chan_1, window_start:window_end],
data[i, chan_2, window_start:window_end])[0]
return result
label_dict = {"ba_1_4_r": [1, 52],
"ba_1_4_l": [0, 51],
"ba_4_4": [51, 52],
"ba_1_1": [0, 1]}
# "ba_4_39_l": [49, 51],
# "ba_4_39_r": [50, 52],
# "ba_39_39": [49, 50]}
# bands = ["delta", "theta", "alpha", "beta", "gamma1", "gamma2"]
bands = ["beta"]
# subjects = ["p9"]
labels = list(np.load(data_path + "label_names.npy"))
times = np.arange(-2000, 2001, 1.95325)
times = times / 1000.
window_length = 153
step_length = 15
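# With ~1.95 ms between samples, window_length = 153 corresponds to a sliding window
# of roughly 300 ms advanced in steps of about 29 ms (step_length = 15 samples).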
results_all = pd.DataFrame()
for subject in subjects:
print("Working on: " + subject)
# ht_vol = np.load(tf_folder + "/%s_vol_HT-comp.npy" %
# subject)
ht_invol = np.load(tf_folder + "%s_inv_HT-pow_zscore.npy" % subject)
b_tmp = b_df[(b_df.subject == subject) & (b_df.condition == "invol"
)].reset_index()
for k, band in enumerate(bands):
        k = 3  # hard-coded band index; 3 corresponds to "beta" in the full band list above
# results_invol = {}
ht_invol_band = ht_invol[-89:, :, :, k]
for lbl in label_dict.keys():
step = 1
j = 768 # times index to start
while times[window_length + j] < times[1040]:
window_start = j
window_end = j + window_length
res = pd.DataFrame(
make_correlation(
ht_invol_band,
chan_1=label_dict[lbl][0], chan_2=label_dict[lbl][1]),
columns=["corr"])
res["step"] = step
res["subject"] = subject
res["label"] = lbl
res["binding"] = b_tmp.binding
res["trial_status"] = b_tmp.trial_status
res["condition"] = "invol"
res["band"] = band
res["trial_nr"] = np.arange(2, 91, 1)
results_all = results_all.append(res)
j += step_length
step += 1
print("Working on: " + subject)
# ht_vol = np.load(tf_folder + "/%s_vol_HT-comp.npy" %
# subject)
ht_vol = np.load(tf_folder + "%s_vol_HT-pow_zscore.npy" % subject)
b_tmp = b_df[(b_df.subject == subject) & (b_df.condition == "vol"
)].reset_index()
for k, band in enumerate(bands):
        k = 3  # hard-coded band index; 3 corresponds to "beta" in the full band list above
        # results_vol = {}
ht_vol_band = ht_vol[-89:, :, :, k]
for lbl in label_dict.keys():
step = 1
j = 768 # times index to start
while times[window_length + j] < times[1040]:
window_start = j
window_end = j + window_length
res = pd.DataFrame(
make_correlation(
ht_vol_band,
chan_1=label_dict[lbl][0], chan_2=label_dict[lbl][1]),
columns=["corr"])
res["step"] = step
res["subject"] = subject
res["label"] = lbl
res["binding"] = b_tmp.binding
res["trial_status"] = b_tmp.trial_status
res["condition"] = "vol"
res["band"] = band
res["trial_nr"] = np.arange(2, 91, 1)
results_all = results_all.append(res)
j += step_length
step += 1
| bsd-3-clause |
jdcc2/campussearch | venv/lib/python3.4/site-packages/requests/packages/urllib3/util/retry.py | 699 | 9924 | import time
import logging
from ..exceptions import (
ConnectTimeoutError,
MaxRetryError,
ProtocolError,
ReadTimeoutError,
ResponseError,
)
from ..packages import six
log = logging.getLogger(__name__)
class Retry(object):
""" Retry configuration.
Each retry attempt will create a new Retry object with updated values, so
they can be safely reused.
Retries can be defined as a default for a pool::
retries = Retry(connect=5, read=2, redirect=5)
http = PoolManager(retries=retries)
response = http.request('GET', 'http://example.com/')
Or per-request (which overrides the default for the pool)::
response = http.request('GET', 'http://example.com/', retries=Retry(10))
Retries can be disabled by passing ``False``::
response = http.request('GET', 'http://example.com/', retries=False)
Errors will be wrapped in :class:`~urllib3.exceptions.MaxRetryError` unless
retries are disabled, in which case the causing exception will be raised.
:param int total:
Total number of retries to allow. Takes precedence over other counts.
Set to ``None`` to remove this constraint and fall back on other
counts. It's a good idea to set this to some sensibly-high value to
account for unexpected edge cases and avoid infinite retry loops.
Set to ``0`` to fail on the first retry.
Set to ``False`` to disable and imply ``raise_on_redirect=False``.
:param int connect:
How many connection-related errors to retry on.
These are errors raised before the request is sent to the remote server,
which we assume has not triggered the server to process the request.
Set to ``0`` to fail on the first retry of this type.
:param int read:
How many times to retry on read errors.
These errors are raised after the request was sent to the server, so the
request may have side-effects.
Set to ``0`` to fail on the first retry of this type.
:param int redirect:
How many redirects to perform. Limit this to avoid infinite redirect
loops.
        A redirect is an HTTP response with a status code 301, 302, 303, 307 or
308.
Set to ``0`` to fail on the first retry of this type.
Set to ``False`` to disable and imply ``raise_on_redirect=False``.
:param iterable method_whitelist:
Set of uppercased HTTP method verbs that we should retry on.
By default, we only retry on methods which are considered to be
        idempotent (multiple requests with the same parameters end with the
same state). See :attr:`Retry.DEFAULT_METHOD_WHITELIST`.
:param iterable status_forcelist:
A set of HTTP status codes that we should force a retry on.
By default, this is disabled with ``None``.
:param float backoff_factor:
A backoff factor to apply between attempts. urllib3 will sleep for::
{backoff factor} * (2 ^ ({number of total retries} - 1))
seconds. If the backoff_factor is 0.1, then :func:`.sleep` will sleep
for [0.1s, 0.2s, 0.4s, ...] between retries. It will never be longer
than :attr:`Retry.MAX_BACKOFF`.
By default, backoff is disabled (set to 0).
:param bool raise_on_redirect: Whether, if the number of redirects is
exhausted, to raise a MaxRetryError, or to return a response with a
response code in the 3xx range.
"""
DEFAULT_METHOD_WHITELIST = frozenset([
'HEAD', 'GET', 'PUT', 'DELETE', 'OPTIONS', 'TRACE'])
#: Maximum backoff time.
BACKOFF_MAX = 120
def __init__(self, total=10, connect=None, read=None, redirect=None,
method_whitelist=DEFAULT_METHOD_WHITELIST, status_forcelist=None,
backoff_factor=0, raise_on_redirect=True, _observed_errors=0):
self.total = total
self.connect = connect
self.read = read
if redirect is False or total is False:
redirect = 0
raise_on_redirect = False
self.redirect = redirect
self.status_forcelist = status_forcelist or set()
self.method_whitelist = method_whitelist
self.backoff_factor = backoff_factor
self.raise_on_redirect = raise_on_redirect
self._observed_errors = _observed_errors # TODO: use .history instead?
def new(self, **kw):
params = dict(
total=self.total,
connect=self.connect, read=self.read, redirect=self.redirect,
method_whitelist=self.method_whitelist,
status_forcelist=self.status_forcelist,
backoff_factor=self.backoff_factor,
raise_on_redirect=self.raise_on_redirect,
_observed_errors=self._observed_errors,
)
params.update(kw)
return type(self)(**params)
@classmethod
def from_int(cls, retries, redirect=True, default=None):
""" Backwards-compatibility for the old retries format."""
if retries is None:
retries = default if default is not None else cls.DEFAULT
if isinstance(retries, Retry):
return retries
redirect = bool(redirect) and None
new_retries = cls(retries, redirect=redirect)
log.debug("Converted retries value: %r -> %r" % (retries, new_retries))
return new_retries
def get_backoff_time(self):
""" Formula for computing the current backoff
:rtype: float
"""
if self._observed_errors <= 1:
return 0
backoff_value = self.backoff_factor * (2 ** (self._observed_errors - 1))
return min(self.BACKOFF_MAX, backoff_value)
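    # Worked example (illustrative): with backoff_factor=0.5 and three observed
    # errors, get_backoff_time() returns 0.5 * 2 ** (3 - 1) = 2.0 seconds,
    # capped at BACKOFF_MAX (120 seconds).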
def sleep(self):
""" Sleep between retry attempts using an exponential backoff.
By default, the backoff factor is 0 and this method will return
immediately.
"""
backoff = self.get_backoff_time()
if backoff <= 0:
return
time.sleep(backoff)
def _is_connection_error(self, err):
""" Errors when we're fairly sure that the server did not receive the
request, so it should be safe to retry.
"""
return isinstance(err, ConnectTimeoutError)
def _is_read_error(self, err):
""" Errors that occur after the request has been started, so we should
assume that the server began processing it.
"""
return isinstance(err, (ReadTimeoutError, ProtocolError))
def is_forced_retry(self, method, status_code):
""" Is this method/status code retryable? (Based on method/codes whitelists)
"""
if self.method_whitelist and method.upper() not in self.method_whitelist:
return False
return self.status_forcelist and status_code in self.status_forcelist
def is_exhausted(self):
""" Are we out of retries? """
retry_counts = (self.total, self.connect, self.read, self.redirect)
retry_counts = list(filter(None, retry_counts))
if not retry_counts:
return False
return min(retry_counts) < 0
def increment(self, method=None, url=None, response=None, error=None, _pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
_observed_errors = self._observed_errors
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
_observed_errors += 1
elif error and self._is_read_error(error):
# Read retry?
if read is False:
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
_observed_errors += 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
else:
# Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the whitelist
_observed_errors += 1
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
_observed_errors=_observed_errors)
if new_retry.is_exhausted():
raise MaxRetryError(_pool, url, error or ResponseError(cause))
log.debug("Incremented Retry for (url='%s'): %r" % (url, new_retry))
return new_retry
def __repr__(self):
return ('{cls.__name__}(total={self.total}, connect={self.connect}, '
'read={self.read}, redirect={self.redirect})').format(
cls=type(self), self=self)
# For backwards compatibility (equivalent to pre-v1.9):
Retry.DEFAULT = Retry(3)
| gpl-2.0 |
AdaptiveApplications/carnegie | tarc_bus_locator_client/numpy-1.8.1/doc/source/conf.py | 33 | 9781 | # -*- coding: utf-8 -*-
from __future__ import division, absolute_import, print_function
import sys, os, re
# Check Sphinx version
import sphinx
if sphinx.__version__ < "1.0.1":
raise RuntimeError("Sphinx 1.0.1 or newer required")
needs_sphinx = '1.0'
# -----------------------------------------------------------------------------
# General configuration
# -----------------------------------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
sys.path.insert(0, os.path.abspath('../sphinxext'))
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.pngmath', 'numpydoc',
'sphinx.ext.intersphinx', 'sphinx.ext.coverage',
'sphinx.ext.doctest', 'sphinx.ext.autosummary',
'matplotlib.sphinxext.plot_directive']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# General substitutions.
project = 'NumPy'
copyright = '2008-2009, The Scipy community'
# The default replacements for |version| and |release|, also used in various
# other places throughout the built documents.
#
import numpy
# The short X.Y version (including .devXXXX, rcX, b1 suffixes if present)
version = re.sub(r'(\d+\.\d+)\.\d+(.*)', r'\1\2', numpy.__version__)
version = re.sub(r'(\.dev\d+).*?$', r'\1', version)
# The full version, including alpha/beta/rc tags.
release = numpy.__version__
print("%s %s" % (version, release))
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# The reST default role (used for this markup: `text`) to use for all documents.
default_role = "autolink"
# List of directories, relative to source directories, that shouldn't be searched
# for source files.
exclude_dirs = []
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = False
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -----------------------------------------------------------------------------
# HTML output
# -----------------------------------------------------------------------------
themedir = os.path.join(os.pardir, 'scipy-sphinx-theme', '_theme')
if not os.path.isdir(themedir):
raise RuntimeError("Get the scipy-sphinx-theme first, "
"via git submodule init && git submodule update")
html_theme = 'scipy'
html_theme_path = [themedir]
if 'scipyorg' in tags:
# Build for the scipy.org website
html_theme_options = {
"edit_link": True,
"sidebar": "right",
"scipy_org_logo": True,
"rootlinks": [("http://scipy.org/", "Scipy.org"),
("http://docs.scipy.org/", "Docs")]
}
else:
# Default build
html_theme_options = {
"edit_link": False,
"sidebar": "left",
"scipy_org_logo": False,
"rootlinks": []
}
html_sidebars = {'index': 'indexsidebar.html'}
html_additional_pages = {
'index': 'indexcontent.html',
}
html_title = "%s v%s Manual" % (project, version)
html_static_path = ['_static']
html_last_updated_fmt = '%b %d, %Y'
html_use_modindex = True
html_copy_source = False
html_domain_indices = False
html_file_suffix = '.html'
htmlhelp_basename = 'numpy'
pngmath_use_preview = True
pngmath_dvipng_args = ['-gamma', '1.5', '-D', '96', '-bg', 'Transparent']
# -----------------------------------------------------------------------------
# LaTeX output
# -----------------------------------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class [howto/manual]).
_stdauthor = 'Written by the NumPy community'
latex_documents = [
('reference/index', 'numpy-ref.tex', 'NumPy Reference',
_stdauthor, 'manual'),
('user/index', 'numpy-user.tex', 'NumPy User Guide',
_stdauthor, 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# Additional stuff for the LaTeX preamble.
latex_preamble = r'''
\usepackage{amsmath}
\DeclareUnicodeCharacter{00A0}{\nobreakspace}
% In the parameters section, place a newline after the Parameters
% header
\usepackage{expdlist}
\let\latexdescription=\description
\def\description{\latexdescription{}{} \breaklabel}
% Make Examples/etc section headers smaller and more compact
\makeatletter
\titleformat{\paragraph}{\normalsize\py@HeaderFamily}%
{\py@TitleColor}{0em}{\py@TitleColor}{\py@NormalColor}
\titlespacing*{\paragraph}{0pt}{1ex}{0pt}
\makeatother
% Fix footer/header
\renewcommand{\chaptermark}[1]{\markboth{\MakeUppercase{\thechapter.\ #1}}{}}
\renewcommand{\sectionmark}[1]{\markright{\MakeUppercase{\thesection.\ #1}}}
'''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
latex_use_modindex = False
# -----------------------------------------------------------------------------
# Texinfo output
# -----------------------------------------------------------------------------
texinfo_documents = [
("contents", 'numpy', 'Numpy Documentation', _stdauthor, 'Numpy',
"NumPy: array processing for numbers, strings, records, and objects.",
'Programming',
1),
]
# -----------------------------------------------------------------------------
# Intersphinx configuration
# -----------------------------------------------------------------------------
intersphinx_mapping = {'http://docs.python.org/dev': None}
# -----------------------------------------------------------------------------
# Numpy extensions
# -----------------------------------------------------------------------------
# If we want to do a phantom import from an XML file for all autodocs
phantom_import_file = 'dump.xml'
# Make numpydoc to generate plots for example sections
numpydoc_use_plots = True
# -----------------------------------------------------------------------------
# Autosummary
# -----------------------------------------------------------------------------
import glob
autosummary_generate = glob.glob("reference/*.rst")
# -----------------------------------------------------------------------------
# Coverage checker
# -----------------------------------------------------------------------------
coverage_ignore_modules = r"""
""".split()
coverage_ignore_functions = r"""
test($|_) (some|all)true bitwise_not cumproduct pkgload
generic\.
""".split()
coverage_ignore_classes = r"""
""".split()
coverage_c_path = []
coverage_c_regexes = {}
coverage_ignore_c_items = {}
# -----------------------------------------------------------------------------
# Plots
# -----------------------------------------------------------------------------
plot_pre_code = """
import numpy as np
np.random.seed(0)
"""
plot_include_source = True
plot_formats = [('png', 100), 'pdf']
import math
phi = (math.sqrt(5) + 1)/2
plot_rcparams = {
'font.size': 8,
'axes.titlesize': 8,
'axes.labelsize': 8,
'xtick.labelsize': 8,
'ytick.labelsize': 8,
'legend.fontsize': 8,
'figure.figsize': (3*phi, 3),
'figure.subplot.bottom': 0.2,
'figure.subplot.left': 0.2,
'figure.subplot.right': 0.9,
'figure.subplot.top': 0.85,
'figure.subplot.wspace': 0.4,
'text.usetex': False,
}
# -----------------------------------------------------------------------------
# Source code links
# -----------------------------------------------------------------------------
import inspect
from os.path import relpath, dirname
for name in ['sphinx.ext.linkcode', 'numpydoc.linkcode']:
try:
__import__(name)
extensions.append(name)
break
except ImportError:
pass
else:
print("NOTE: linkcode extension not found -- no links to source generated")
def linkcode_resolve(domain, info):
"""
Determine the URL corresponding to Python object
"""
if domain != 'py':
return None
modname = info['module']
fullname = info['fullname']
submod = sys.modules.get(modname)
if submod is None:
return None
obj = submod
for part in fullname.split('.'):
try:
obj = getattr(obj, part)
except:
return None
try:
fn = inspect.getsourcefile(obj)
except:
fn = None
if not fn:
return None
try:
source, lineno = inspect.findsource(obj)
except:
lineno = None
if lineno:
linespec = "#L%d" % (lineno + 1)
else:
linespec = ""
fn = relpath(fn, start=dirname(numpy.__file__))
if 'dev' in numpy.__version__:
return "http://github.com/numpy/numpy/blob/master/numpy/%s%s" % (
fn, linespec)
else:
return "http://github.com/numpy/numpy/blob/v%s/numpy/%s%s" % (
numpy.__version__, fn, linespec)
| mit |
SimeonFritz/aima-python | submissions/aardvark/vacuum2Runner.py | 15 | 2089 | import agents as ag
import envgui as gui
# change this line ONLY to refer to your project
import submissions.miles.vacuum2 as v2
# ______________________________________________________________________________
# Vacuum environment
class Dirt(ag.Thing):
pass
class VacuumEnvironment(ag.XYEnvironment):
"""The environment of [Ex. 2.12]. Agent perceives dirty or clean,
and bump (into obstacle) or not; 2D discrete world of unknown size;
performance measure is 100 for each dirt cleaned, and -1 for
each turn taken."""
def __init__(self, width=4, height=3):
super(VacuumEnvironment, self).__init__(width, height)
self.add_walls()
def thing_classes(self):
return [ag.Wall, Dirt,
# ReflexVacuumAgent, RandomVacuumAgent,
# TableDrivenVacuumAgent, ModelBasedVacuumAgent
]
def percept(self, agent):
"""The percept is a tuple of ('Dirty' or 'Clean', 'Bump' or 'None').
Unlike the TrivialVacuumEnvironment, location is NOT perceived."""
status = ('Dirty' if self.some_things_at(
agent.location, Dirt) else 'Clean')
        bump = ('Bump' if agent.bump else 'None')
return (bump, status)
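    # Example percept (illustrative): an agent standing on dirt that has not just
    # bumped into a wall perceives ('None', 'Dirty').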
def execute_action(self, agent, action):
if action == 'Suck':
dirt_list = self.list_things_at(agent.location, Dirt)
if dirt_list != []:
dirt = dirt_list[0]
agent.performance += 100
self.delete_thing(dirt)
else:
super(VacuumEnvironment, self).execute_action(agent, action)
if action != 'NoOp':
agent.performance -= 1
# Launch GUI of more complex environment
v = VacuumEnvironment(5, 3)
a = v2.HW2Agent()
a = ag.TraceAgent(a)
loc = v.random_location_inbounds()
v.add_thing(a, location=loc)
v.scatter_things(Dirt)
g = gui.EnvGUI(v, 'Vaccuum')
c = g.getCanvas()
c.mapImageNames({
ag.Wall: 'images/wall.jpg',
# Floor: 'images/floor.png',
Dirt: 'images/dirt.png',
ag.Agent: 'images/vacuum.png',
})
c.update()
g.mainloop() | mit |
Jaesin/OctoPrint | src/octoprint/server/api/languages.py | 1 | 5684 | # coding=utf-8
from __future__ import absolute_import, division, print_function
__author__ = "Gina Häußge <[email protected]>"
__license__ = 'GNU Affero General Public License http://www.gnu.org/licenses/agpl.html'
__copyright__ = "Copyright (C) 2015 The OctoPrint Project - Released under terms of the AGPLv3 License"
import os
import tarfile
import zipfile
try:
from os import scandir
except ImportError:
from scandir import scandir
from collections import defaultdict
from flask import request, jsonify, make_response
import logging
from octoprint.settings import settings
from octoprint.server import admin_permission
from octoprint.server.api import api
from octoprint.server.util.flask import restricted_access
from octoprint.plugin import plugin_manager
from flask_babel import Locale
@api.route("/languages", methods=["GET"])
@restricted_access
@admin_permission.require(403)
def getInstalledLanguagePacks():
translation_folder = settings().getBaseFolder("translations", check_writable=False)
if not os.path.exists(translation_folder):
return jsonify(language_packs=dict(_core=[]))
core_packs = []
plugin_packs = defaultdict(lambda: dict(identifier=None, display=None, languages=[]))
for entry in scandir(translation_folder):
if not entry.is_dir():
continue
def load_meta(path, locale):
meta = dict()
meta_path = os.path.join(path, "meta.yaml")
if os.path.isfile(meta_path):
import yaml
try:
with open(meta_path) as f:
meta = yaml.safe_load(f)
except:
pass
else:
import datetime
if "last_update" in meta and isinstance(meta["last_update"], datetime.datetime):
meta["last_update"] = (meta["last_update"] - datetime.datetime(1970,1,1)).total_seconds()
l = Locale.parse(locale)
meta["locale"] = locale
meta["locale_display"] = l.display_name
meta["locale_english"] = l.english_name
return meta
if entry.name == "_plugins":
for plugin_entry in scandir(entry.path):
if not plugin_entry.is_dir():
continue
if not plugin_entry.name in plugin_manager().plugins:
continue
plugin_info = plugin_manager().plugins[plugin_entry.name]
plugin_packs[plugin_entry.name]["identifier"] = plugin_entry.name
plugin_packs[plugin_entry.name]["display"] = plugin_info.name
for language_entry in scandir(plugin_entry.path):
try:
plugin_packs[plugin_entry.name]["languages"].append(load_meta(language_entry.path, language_entry.name))
except Exception:
logging.getLogger(__name__).exception("Error while parsing metadata for language pack {} from {} for plugin {}".format(language_entry.name,
language_entry.path,
plugin_entry.name))
continue
else:
try:
core_packs.append(load_meta(entry.path, entry.name))
except Exception:
logging.getLogger(__name__).exception("Error while parsing metadata for core language pack {} from {}".format(entry.name,
entry.path))
result = dict(_core=dict(identifier="_core", display="Core", languages=core_packs))
result.update(plugin_packs)
return jsonify(language_packs=result)
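# Illustrative only: the JSON built above has roughly this shape, with one
# entry per plugin alongside the "_core" entry; the language lists come from
# load_meta() and depend on what is installed.
#
#   {"language_packs": {
#       "_core":       {"identifier": "_core", "display": "Core", "languages": [...]},
#       "some_plugin": {"identifier": "some_plugin", "display": "...", "languages": [...]}}}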
@api.route("/languages", methods=["POST"])
@restricted_access
@admin_permission.require(403)
def uploadLanguagePack():
input_name = "file"
input_upload_path = input_name + "." + settings().get(["server", "uploads", "pathSuffix"])
input_upload_name = input_name + "." + settings().get(["server", "uploads", "nameSuffix"])
if not input_upload_path in request.values or not input_upload_name in request.values:
return make_response("No file included", 400)
upload_name = request.values[input_upload_name]
upload_path = request.values[input_upload_path]
exts = filter(lambda x: upload_name.lower().endswith(x), (".zip", ".tar.gz", ".tgz", ".tar"))
if not len(exts):
return make_response("File doesn't have a valid extension for a language pack archive", 400)
target_path = settings().getBaseFolder("translations")
if tarfile.is_tarfile(upload_path):
_unpack_uploaded_tarball(upload_path, target_path)
elif zipfile.is_zipfile(upload_path):
_unpack_uploaded_zipfile(upload_path, target_path)
else:
return make_response("Neither zip file nor tarball included", 400)
return getInstalledLanguagePacks()
@api.route("/languages/<string:locale>/<string:pack>", methods=["DELETE"])
@restricted_access
@admin_permission.require(403)
def deleteInstalledLanguagePack(locale, pack):
if pack == "_core":
target_path = os.path.join(settings().getBaseFolder("translations"), locale)
else:
target_path = os.path.join(settings().getBaseFolder("translations"), "_plugins", pack, locale)
if os.path.isdir(target_path):
import shutil
shutil.rmtree(target_path)
return getInstalledLanguagePacks()
def _unpack_uploaded_zipfile(path, target):
with zipfile.ZipFile(path, "r") as zip:
# sanity check
map(_validate_archive_name, zip.namelist())
# unpack everything
zip.extractall(target)
def _unpack_uploaded_tarball(path, target):
with tarfile.open(path, "r") as tar:
# sanity check
		map(_validate_archive_name, tar.getnames())  # getnames() yields member names as strings, which _validate_archive_name expects
# unpack everything
tar.extractall(target)
def _validate_archive_name(name):
if name.startswith("/") or ".." in name:
raise InvalidLanguagePack("Provided language pack contains invalid name {name}".format(**locals()))
class InvalidLanguagePack(Exception):
pass
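# Illustrative only: _validate_archive_name() rejects absolute paths and path
# traversal so that extractall() cannot write outside the translations folder.
# With hypothetical archive member names:
#
#   _validate_archive_name("de/LC_MESSAGES/messages.po")   # accepted
#   _validate_archive_name("../../etc/passwd")             # raises InvalidLanguagePack
#   _validate_archive_name("/etc/passwd")                  # raises InvalidLanguagePack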
| agpl-3.0 |
andyfaff/scipy | scipy/__init__.py | 5 | 5685 | """
SciPy: A scientific computing package for Python
================================================
Documentation is available in the docstrings and
online at https://docs.scipy.org.
Contents
--------
SciPy imports all the functions from the NumPy namespace, and in
addition provides:
Subpackages
-----------
Using any of these subpackages requires an explicit import. For example,
``import scipy.cluster``.
::
cluster --- Vector Quantization / Kmeans
fft --- Discrete Fourier transforms
fftpack --- Legacy discrete Fourier transforms
integrate --- Integration routines
interpolate --- Interpolation Tools
io --- Data input and output
linalg --- Linear algebra routines
linalg.blas --- Wrappers to BLAS library
linalg.lapack --- Wrappers to LAPACK library
misc --- Various utilities that don't have
another home.
ndimage --- N-D image package
odr --- Orthogonal Distance Regression
optimize --- Optimization Tools
signal --- Signal Processing Tools
signal.windows --- Window functions
sparse --- Sparse Matrices
sparse.linalg --- Sparse Linear Algebra
sparse.linalg.dsolve --- Linear Solvers
 sparse.linalg.dsolve.umfpack --- :Interface to the UMFPACK library:
sparse.linalg.eigen --- Sparse Eigenvalue Solvers
sparse.linalg.eigen.lobpcg --- Locally Optimal Block Preconditioned
Conjugate Gradient Method (LOBPCG)
spatial --- Spatial data structures and algorithms
special --- Special functions
stats --- Statistical Functions
Utility tools
-------------
::
test --- Run scipy unittests
show_config --- Show scipy build configuration
show_numpy_config --- Show numpy build configuration
__version__ --- SciPy version string
__numpy_version__ --- Numpy version string
"""
__all__ = ['test']
from numpy import show_config as show_numpy_config
if show_numpy_config is None:
raise ImportError(
"Cannot import SciPy when running from NumPy source directory.")
from numpy import __version__ as __numpy_version__
# Import numpy symbols to scipy name space (DEPRECATED)
from ._lib.deprecation import _deprecated
import numpy as _num
linalg = None
_msg = ('scipy.{0} is deprecated and will be removed in SciPy 2.0.0, '
'use numpy.{0} instead')
# deprecate callable objects, skipping classes
for _key in _num.__all__:
_fun = getattr(_num, _key)
if callable(_fun) and not isinstance(_fun, type):
_fun = _deprecated(_msg.format(_key))(_fun)
globals()[_key] = _fun
from numpy.random import rand, randn
_msg = ('scipy.{0} is deprecated and will be removed in SciPy 2.0.0, '
'use numpy.random.{0} instead')
rand = _deprecated(_msg.format('rand'))(rand)
randn = _deprecated(_msg.format('randn'))(randn)
# fft is especially problematic, so was removed in SciPy 1.6.0
from numpy.fft import ifft
ifft = _deprecated('scipy.ifft is deprecated and will be removed in SciPy '
'2.0.0, use scipy.fft.ifft instead')(ifft)
import numpy.lib.scimath as _sci
_msg = ('scipy.{0} is deprecated and will be removed in SciPy 2.0.0, '
'use numpy.lib.scimath.{0} instead')
for _key in _sci.__all__:
_fun = getattr(_sci, _key)
if callable(_fun):
_fun = _deprecated(_msg.format(_key))(_fun)
globals()[_key] = _fun
__all__ += _num.__all__
__all__ += ['randn', 'rand', 'ifft']
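# Illustrative only: the shims above mean that, for example,
#
#   import scipy
#   scipy.sqrt(4.0)
#
# still works during the deprecation period but emits a DeprecationWarning
# along the lines of "scipy.sqrt is deprecated and will be removed in SciPy
# 2.0.0, use numpy.sqrt instead".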
del _num
# Remove the linalg imported from NumPy so that the scipy.linalg package can be
# imported.
del linalg
__all__.remove('linalg')
# We first need to detect if we're being called as part of the SciPy
# setup procedure itself in a reliable manner.
try:
__SCIPY_SETUP__
except NameError:
__SCIPY_SETUP__ = False
if __SCIPY_SETUP__:
import sys as _sys
_sys.stderr.write('Running from SciPy source directory.\n')
del _sys
else:
try:
from scipy.__config__ import show as show_config
except ImportError as e:
msg = """Error importing SciPy: you cannot import SciPy while
being in scipy source directory; please exit the SciPy source
tree first and relaunch your Python interpreter."""
raise ImportError(msg) from e
from scipy.version import version as __version__
# Allow distributors to run custom init code
from . import _distributor_init
from scipy._lib import _pep440
# In maintenance branch, change to np_maxversion N+3 if numpy is at N
# See setup.py for more details
np_minversion = '1.16.5'
np_maxversion = '9.9.99'
if (_pep440.parse(__numpy_version__) < _pep440.Version(np_minversion) or
_pep440.parse(__numpy_version__) >= _pep440.Version(np_maxversion)):
import warnings
        warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
                      f" is required for this version of SciPy (detected "
                      f"version {__numpy_version__})",
UserWarning)
del _pep440
from scipy._lib._ccallback import LowLevelCallable
from scipy._lib._testutils import PytestTester
test = PytestTester(__name__)
del PytestTester
# This makes "from scipy import fft" return scipy.fft, not np.fft
del fft
| bsd-3-clause |
edlabh/SickRage | sickbeard/notifiers/pushbullet.py | 3 | 4976 | #!/usr/bin/env python2
# -*- coding: utf-8 -*-
# Author: echel0n <[email protected]>
# URL: http://www.github.com/sickragetv/sickrage/
#
# This file is part of SickRage.
#
# SickRage is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# SickRage is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with SickRage. If not, see <http://www.gnu.org/licenses/>.
from __future__ import unicode_literals
import json
import requests
import traceback
import sickbeard
import logging
from sickbeard.common import notifyStrings
from sickbeard.common import NOTIFY_SNATCH
from sickbeard.common import NOTIFY_DOWNLOAD
from sickbeard.common import NOTIFY_GIT_UPDATE
from sickbeard.common import NOTIFY_GIT_UPDATE_TEXT
from sickbeard.common import NOTIFY_SUBTITLE_DOWNLOAD
class PushbulletNotifier(object):
session = requests.Session()
TEST_EVENT = 'Test'
def __init__(self):
pass
def test_notify(self, pushbullet_api):
logging.debug("Sending a test Pushbullet notification.")
return self._sendPushbullet(pushbullet_api, event=self.TEST_EVENT,
message="Testing Pushbullet settings from SiCKRAGE")
def get_devices(self, pushbullet_api):
logging.debug("Testing Pushbullet authentication and retrieving the device list.")
return self._sendPushbullet(pushbullet_api)
def notify_snatch(self, ep_name):
if sickbeard.PUSHBULLET_NOTIFY_ONSNATCH:
self._sendPushbullet(pushbullet_api=None, event=notifyStrings[NOTIFY_SNATCH] + " : " + ep_name,
message=ep_name)
def notify_download(self, ep_name):
if sickbeard.PUSHBULLET_NOTIFY_ONDOWNLOAD:
self._sendPushbullet(pushbullet_api=None, event=notifyStrings[NOTIFY_DOWNLOAD] + " : " + ep_name,
message=ep_name)
def notify_subtitle_download(self, ep_name, lang):
if sickbeard.PUSHBULLET_NOTIFY_ONSUBTITLEDOWNLOAD:
self._sendPushbullet(pushbullet_api=None,
event=notifyStrings[NOTIFY_SUBTITLE_DOWNLOAD] + " : " + ep_name + " : " + lang,
message=ep_name + ": " + lang)
def notify_git_update(self, new_version="??"):
if sickbeard.USE_PUSHBULLET:
self._sendPushbullet(pushbullet_api=None, event=notifyStrings[NOTIFY_GIT_UPDATE],
message=notifyStrings[NOTIFY_GIT_UPDATE_TEXT] + new_version)
def _sendPushbullet(self, pushbullet_api=None, pushbullet_device=None, event=None, message=None):
        if not (sickbeard.USE_PUSHBULLET or event == self.TEST_EVENT or event is None):
return False
pushbullet_api = pushbullet_api or sickbeard.PUSHBULLET_API
pushbullet_device = pushbullet_device or sickbeard.PUSHBULLET_DEVICE
logging.debug("Pushbullet event: %r" % event)
logging.debug("Pushbullet message: %r" % message)
logging.debug("Pushbullet api: %r" % pushbullet_api)
logging.debug("Pushbullet devices: %r" % pushbullet_device)
        logging.debug("Pushbullet notification type: %r" % ('note' if event else 'None'))
url = 'https://api.pushbullet.com/v2/%s' % ('devices', 'pushes')[event is not None]
data = json.dumps({
'title': event.encode('utf-8'),
'body': message.encode('utf-8'),
'device_iden': pushbullet_device.encode('utf-8'),
'type': 'note'
}) if event else None
method = 'GET' if data is None else 'POST'
headers = {'Content-Type': 'application/json', 'Authorization': 'Bearer %s' % pushbullet_api}
try:
response = self.session.request(method, url, data=data, headers=headers)
except Exception:
logging.debug('Pushbullet authorization failed with exception: %r' % traceback.format_exc())
return False
if response.status_code == 410:
logging.debug('Pushbullet authorization failed')
return False
if response.status_code != 200:
logging.debug('Pushbullet call failed with error code %r' % response.status_code)
return False
logging.debug("Pushbullet response: %r" % response.text)
if not response.text:
logging.error("Pushbullet notification failed.")
return False
logging.debug("Pushbullet notifications sent.")
        return response.text if (event == self.TEST_EVENT or event is None) else True
notifier = PushbulletNotifier
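# Illustrative only: a minimal sketch of how calling code would use this
# notifier; the access token below is a placeholder, not a real Pushbullet key.
#
#   from sickbeard.notifiers import pushbullet
#   ok = pushbullet.notifier().test_notify("o.PLACEHOLDER_TOKEN")
#   devices = pushbullet.notifier().get_devices("o.PLACEHOLDER_TOKEN")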
| gpl-3.0 |
Mte90/remo | remo/featuredrep/models.py | 5 | 1272 | from django.contrib.auth.models import User
from django.db import models
from django.utils.timezone import now
class FeaturedRep(models.Model):
"""Featured Rep model.
Featured Rep -or Rep of the Month- relates existing users with
some text explaining why they are so cool.
"""
created_on = models.DateTimeField(null=True, blank=True)
updated_on = models.DateTimeField(null=True, blank=True)
created_by = models.ForeignKey(User, related_name='reps_featured')
text = models.TextField(blank=False, null=False)
users = models.ManyToManyField(User, related_name='featuredrep_users')
class Meta:
ordering = ['-created_on']
get_latest_by = 'updated_on'
permissions = (('can_edit_featured', 'Can edit featured reps'),
('can_delete_featured', 'Can delete featured reps'))
def save(self, *args, **kwargs):
"""Override save method for custom functionality"""
# This allows to override the updated_on through the admin interface
self.updated_on = kwargs.pop('updated_on', None)
if not self.updated_on:
self.updated_on = now()
if not self.pk:
self.created_on = now()
super(FeaturedRep, self).save(*args, **kwargs)
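# Illustrative only: a minimal sketch of creating a featured rep entry.
# "admin_user" and the username are hypothetical; created_on/updated_on are
# filled in by save() above, and the m2m "users" must be added after saving.
#
#   nominee = User.objects.get(username='some_rep')
#   entry = FeaturedRep(created_by=admin_user, text='Great community work!')
#   entry.save()
#   entry.users.add(nominee)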
| bsd-3-clause |
cjcjameson/gpdb | gpMgmt/bin/gppylib/operations/test/unit/test_unit_dump.py | 10 | 70055 | #
# Copyright (c) Greenplum Inc 2012. All Rights Reserved.
#
import unittest
from datetime import datetime
from gppylib.commands.base import Command, CommandResult
from gppylib.gparray import GpArray, GpDB
from gppylib.operations.backup_utils import *
from gppylib.operations.dump import *
from mock import patch, MagicMock, Mock, mock_open, call, ANY
from . import setup_fake_gparray
class DumpTestCase(unittest.TestCase):
@patch('gppylib.operations.backup_utils.Context.get_master_port', return_value = 5432)
def setUp(self, mock1):
with patch('gppylib.gparray.GpArray.initFromCatalog', return_value=setup_fake_gparray()):
context = Context()
context.target_db ='testdb'
context.dump_schema='testschema'
context.include_dump_tables_file='/tmp/table_list.txt'
context.master_datadir=context.backup_dir='/data/master'
context.batch_default=None
context.timestamp_key = '20160101010101'
context.generate_dump_timestamp()
context.schema_file = None
self.context = context
self.dumper = DumpDatabase(self.context)
self.dump_globals = DumpGlobal(self.context)
self.mailEvent = MailEvent(subject="test", message="Hello", to_addrs="[email protected]")
@patch('gppylib.operations.dump.get_heap_partition_list', return_value=[['123', 'public', 't4'], ['123', 'public', 't5'], ['123', 'testschema', 't6']])
def test_get_dirty_heap_tables_default(self, mock1):
expected_output = set(['public.t4', 'public.t5', 'testschema.t6'])
dirty_table_list = get_dirty_heap_tables(self.context)
self.assertEqual(dirty_table_list, expected_output)
@patch('gppylib.operations.dump.get_heap_partition_list', return_value=[[], ['123', 'public', 't5'], ['123', 'public', 't6']])
def test_get_dirty_heap_tables_empty_arg(self, mock1):
with self.assertRaisesRegexp(Exception, 'Heap tables query returned rows with unexpected number of columns 0'):
dirty_table_list = get_dirty_heap_tables(self.context)
def test_write_dirty_file_default(self):
dirty_tables = ['t1', 't2', 't3']
m = mock_open()
with patch('__builtin__.open', m, create=True):
tmpfilename = write_dirty_file(self.context, dirty_tables)
result = m()
self.assertEqual(len(dirty_tables), len(result.write.call_args_list))
for i in range(len(dirty_tables)):
self.assertEqual(call(dirty_tables[i]+'\n'), result.write.call_args_list[i])
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='test_dirty_filename')
def test_write_dirty_file_timestamp(self, mock1):
dirty_tables = ['t1', 't2', 't3']
timestamp = '20160101010101'
m = mock_open()
with patch('__builtin__.open', m, create=True):
tmpfilename = write_dirty_file(self.context, dirty_tables, timestamp)
mock1.assert_called_with("dirty_table", timestamp=timestamp)
result = m()
self.assertEqual(len(dirty_tables), len(result.write.call_args_list))
for i in range(len(dirty_tables)):
self.assertEqual(call(dirty_tables[i]+'\n'), result.write.call_args_list[i])
result = m()
self.assertEqual(len(dirty_tables), len(result.write.call_args_list))
for i in range(len(dirty_tables)):
self.assertEqual(call(dirty_tables[i]+'\n'), result.write.call_args_list[i])
def test_write_dirty_file_no_list(self):
dirty_tables = None
tmpfilename = write_dirty_file(self.context, dirty_tables)
self.assertEqual(tmpfilename, None)
def test_write_dirty_file_empty_list(self):
dirty_tables = []
m = mock_open()
with patch('__builtin__.open', m, create=True):
tmpfilename = write_dirty_file(self.context, dirty_tables)
result = m()
self.assertEqual(len(result.write.call_args_list), 0)
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['20120330120102', '20120330120103'])
@patch('gppylib.operations.dump.get_incremental_ts_from_report_file', return_value='20120330120102')
def test_validate_increments_file_default(self, mock1, mock2):
# expect no exception to die out of this
CreateIncrementsFile.validate_increments_file(self.context, '/tmp/fn')
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['20120330120102', '20120330120103'])
@patch('gppylib.operations.dump.get_incremental_ts_from_report_file', side_effect=Exception('invalid timestamp'))
def test_validate_increments_file_bad_increment(self, mock1, mock2):
with self.assertRaisesRegexp(Exception, "Timestamp '20120330120102' from increments file '/tmp/fn' is not a valid increment"):
CreateIncrementsFile.validate_increments_file(self.context, '/tmp/fn')
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['20120330120102', '20120330120103'])
@patch('gppylib.operations.dump.get_incremental_ts_from_report_file', return_value=None)
def test_validate_increments_file_empty_file(self, mock1, mock2):
with self.assertRaisesRegexp(Exception, "Timestamp '20120330120102' from increments file '/tmp/fn' is not a valid increment"):
CreateIncrementsFile.validate_increments_file(self.context, '/tmp/fn')
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
@patch('gppylib.operations.dump.CreateIncrementsFile.validate_increments_file')
def test_CreateIncrementsFile_init(self, mock1, mock2, mock3):
obj = CreateIncrementsFile(self.context)
self.assertEquals(obj.increments_filename, '/data/master/db_dumps/20160101/gp_dump_20160101000000_increments')
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
@patch('gppylib.operations.dump.CreateIncrementsFile.validate_increments_file')
@patch('gppylib.operations.dump.get_lines_from_file', side_effect=[ [], ['20160101010101'] ])
def test_CreateIncrementsFile_execute_no_file(self, mock1, mock2, mock3, mock4):
obj = CreateIncrementsFile(self.context)
with patch('__builtin__.open', mock_open(), create=True):
result = obj.execute()
self.assertEquals(1, result)
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
@patch('gppylib.operations.dump.get_incremental_ts_from_report_file', return_value='')
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['20160101010101'])
def test_CreateIncrementsFile_execute_invalid_timestamp(self, mock1, mock2, mock3, mock4):
obj = CreateIncrementsFile(self.context)
with self.assertRaisesRegexp(Exception, ".* is not a valid increment"):
obj.execute()
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
@patch('gppylib.operations.dump.get_lines_from_file', side_effect=[ ['20160101010000'], ['20160101010000', '20160101010101'] ])
@patch('gppylib.operations.dump.CreateIncrementsFile.validate_increments_file')
def test_CreateIncrementsFile_execute_append(self, mock1, mock2, mock3, mock4):
obj = CreateIncrementsFile(self.context)
with patch('__builtin__.open', mock_open(), create=True):
result = obj.execute()
self.assertEquals(2, result)
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
@patch('gppylib.operations.dump.get_lines_from_file', return_value=[])
@patch('gppylib.operations.dump.CreateIncrementsFile.validate_increments_file')
def test_CreateIncrementsFile_execute_no_output(self, mock1, mock2, mock3, mock4):
obj = CreateIncrementsFile(self.context)
with patch('__builtin__.open', mock_open(), create=True):
with self.assertRaisesRegexp(Exception, 'File not written to'):
result = obj.execute()
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['20160101000000'])
@patch('gppylib.operations.dump.CreateIncrementsFile.validate_increments_file')
def test_CreateIncrementsFile_execute_wrong_timestamp(self, mock1, mock2, mock3, mock4):
obj = CreateIncrementsFile(self.context)
with patch('__builtin__.open', mock_open(), create=True):
with self.assertRaisesRegexp(Exception, 'Timestamp .* not written to'):
result = obj.execute()
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
@patch('gppylib.operations.dump.get_lines_from_file', side_effect=[ ['20160101010000'], ['20160101010001', '20160101010101'] ])
@patch('gppylib.operations.dump.CreateIncrementsFile.validate_increments_file')
def test_CreateIncrementsFile_execute_modified_timestamp(self, mock1, mock2, mock3, mock4):
obj = CreateIncrementsFile(self.context)
with patch('__builtin__.open', mock_open(), create=True):
with self.assertRaisesRegexp(Exception, 'trouble adding timestamp'):
result = obj.execute()
@patch('gppylib.operations.dump.get_filter_file', return_value=None)
def test_write_partition_list_file_no_filter_file(self, mock1):
with patch('gppylib.operations.dump.get_partition_list') as p:
part_list = [[123, 'myschema', 't1'], [4444, 'otherschema', 't2'], [992313, 'public', 't3']]
p.return_value = part_list
m = mock_open()
with patch('__builtin__.open', m, create=True):
write_partition_list_file(self.context)
result = m()
self.assertEqual(len(part_list), len(result.write.call_args_list))
for i in range(len(part_list)):
expected = "%s.%s\n" % (part_list[i][1], part_list[i][2])
self.assertEqual(call(expected), result.write.call_args_list[i])
@patch('gppylib.operations.dump.get_partition_list', return_value=[['t1', 'foo', 'koo'], ['public', 't2'], ['public', 't3']])
@patch('gppylib.operations.dump.get_filter_file', return_value=None)
def test_write_partition_list_file_bad_query_return(self, mock1, mock2):
with self.assertRaisesRegexp(Exception, 'Invalid results from query to get all tables'):
write_partition_list_file(self.context)
def test_create_dump_outcome_default(self):
start = datetime(2012, 7, 31, 9, 30, 00)
end = datetime(2012, 8, 1, 12, 21, 11)
rc = 5
expected_outcome = {'timestamp_start': '20120731093000',
'time_start': '09:30:00',
'time_end': '12:21:11',
'exit_status': 5}
outcome = self.dumper.create_dump_outcome(start, end, rc)
self.assertTrue(expected_outcome == outcome)
@patch('gppylib.operations.dump.ValidateDumpDatabase.run')
@patch('gppylib.operations.dump.Command.run')
@patch('gppylib.operations.dump.Command.get_results', return_value=CommandResult(0, "", "", True, False))
@patch('gppylib.operations.dump.DumpDatabase.create_filter_file')
def test_execute_default(self, mock1, mock2, mock3, mock4):
self.context.include_dump_tables_file = ''
self.dumper.execute()
# should not raise any exceptions
@patch('gppylib.operations.dump.dbconn.DbURL')
@patch('gppylib.operations.dump.dbconn.connect')
@patch('gppylib.operations.dump.execSQLForSingleton', return_value='100')
def test_get_partition_state_default(self, mock1, mock2, mock3):
partition_info = [(123, 'testschema', 't1', 4444), (234, 'testschema', 't2', 5555)]
expected_output = ['testschema, t1, 100', 'testschema, t2, 100']
result = get_partition_state(self.context, 'pg_aoseg', partition_info)
self.assertEqual(result, expected_output)
@patch('gppylib.operations.dump.dbconn.DbURL')
@patch('gppylib.operations.dump.dbconn.connect')
def test_get_partition_state_empty(self, mock1, mock2):
partition_info = []
expected_output = []
result = get_partition_state(self.context, 'pg_aoseg', partition_info)
self.assertEqual(result, expected_output)
@patch('gppylib.operations.dump.dbconn.DbURL')
@patch('gppylib.operations.dump.dbconn.connect')
@patch('gppylib.operations.dump.execSQLForSingleton', return_value='10000000000000000')
def test_get_partition_state_exceeded_count(self, mock1, mock2, mock3):
partition_info = [(123, 'testschema', 't1', 4444), (234, 'testschema', 't2', 5555)]
expected_output = ['testschema, t1, 10000000000000000', 'testschema, t2, 10000000000000000']
with self.assertRaisesRegexp(Exception, 'Exceeded backup max tuple count of 1 quadrillion rows per table for:'):
get_partition_state(self.context, 'pg_aoseg', partition_info)
@patch('gppylib.operations.dump.dbconn.DbURL')
@patch('gppylib.operations.dump.dbconn.connect')
@patch('gppylib.operations.dump.execSQLForSingleton', return_value='100')
def test_get_partition_state_many_partition(self, mock1, mock2, mock3):
master_port=5432
dbname='testdb'
partition_info = [(123, 'testschema', 't1', 4444), (234, 'testschema', 't2', 5555)] * 1
expected_output = ['testschema, t1, 100', 'testschema, t2, 100'] * 1
result = get_partition_state(self.context, 'pg_aoseg', partition_info)
self.assertEqual(result, expected_output)
def test_get_filename_from_filetype_ao(self):
expected_output = '/data/master/db_dumps/20160101/gp_dump_20160101010101_ao_state_file'
result = get_filename_from_filetype(self.context, "ao", self.context.timestamp)
self.assertEqual(result, expected_output)
def test_get_filename_from_filetype_co(self):
expected_output = '/data/master/db_dumps/20160101/gp_dump_20160101010101_co_state_file'
result = get_filename_from_filetype(self.context, "co", self.context.timestamp)
self.assertEqual(result, expected_output)
def test_get_filename_from_filetype_bad_type(self):
with self.assertRaisesRegexp(Exception, 'Invalid table type *'):
result = get_filename_from_filetype(self.context, "schema", self.context.timestamp)
def test_write_state_file_bad_type(self):
table_type = 'foo'
partition_list = ['testschema, t1, 100', 'testschema, t2, 100']
with self.assertRaisesRegexp(Exception, 'Invalid table type *'):
write_state_file(self.context, table_type, partition_list)
@patch('gppylib.operations.dump.get_filename_from_filetype', return_value='/tmp/db_dumps/20160101/gp_dump_20160101010101')
def test_write_state_file_default(self, mock1):
table_type = 'ao'
part_list = ['testschema, t1, 100', 'testschema, t2, 100']
m = mock_open()
with patch('__builtin__.open', m, create=True):
write_state_file(self.context, table_type, part_list)
result = m()
self.assertEqual(len(part_list), len(result.write.call_args_list))
for i in range(len(part_list)):
self.assertEqual(call(part_list[i]+'\n'), result.write.call_args_list[i])
@patch('gppylib.operations.dump.get_filename_from_filetype', return_value='/tmp/db_dumps/20170413/gp_dump_20170413224743_ao_state_file')
def test_write_state_file_empty(self, mock1):
table_type = 'ao'
part_list = ['']
m = mock_open()
with patch('__builtin__.open', m, create=True):
write_state_file(self.context, table_type, part_list)
result = m()
self.assertEqual(1, len(result.write.call_args_list))
for i in range(len(part_list)):
self.assertEqual(call('\n'), result.write.call_args_list[i])
@patch('gppylib.operations.dump.execute_sql', return_value=[['public', 'ao_table', 123, 'CREATE', 'table', '2012: 1'], ['testschema', 'co_table', 333, 'TRUNCATE', '', '2033 :1 - 111']])
def test_get_last_operation_data_default(self, mock):
output = get_last_operation_data(self.context)
expected = ['public,ao_table,123,CREATE,table,2012: 1', 'testschema,co_table,333,TRUNCATE,,2033 :1 - 111']
self.assertEquals(output, expected)
@patch('gppylib.operations.dump.execute_sql', return_value=[])
def test_get_last_operation_data_empty(self, mock):
output = get_last_operation_data(self.context)
expected = []
self.assertEquals(output, expected)
@patch('gppylib.operations.dump.execute_sql', return_value=[[123, 'table', '2012: 1'], [333, 'TRUNCATE', '', '2033 :1 - 111']])
def test_get_last_operation_data_invalid(self, mock):
with self.assertRaisesRegexp(Exception, 'Invalid return from query'):
get_last_operation_data(self.context)
@patch('gppylib.operations.dump.get_last_dump_timestamp', return_value='20160101121212')
@patch('gppylib.operations.dump.os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['testschema, t1, 100', 'testschema, t2, 100'])
def test_get_last_state_default(self, mock1, mock2, mock3):
table_type = 'ao'
expected_output = ['testschema, t1, 100', 'testschema, t2, 100']
output = get_last_state(self.context, table_type)
self.assertEqual(output, expected_output)
@patch('gppylib.operations.dump.get_last_dump_timestamp', return_value='20160101121212')
@patch('gppylib.operations.dump.os.path.isfile', return_value=False)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='foo')
def test_get_last_state_no_file(self, mock1, mock2, mock3):
table_type = 'ao'
with self.assertRaisesRegexp(Exception, 'ao state file does not exist: foo'):
get_last_state(self.context, table_type)
@patch('gppylib.operations.dump.get_last_dump_timestamp', return_value='20160101121212')
@patch('gppylib.operations.dump.os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_lines_from_file', return_value=[])
def test_get_last_state_empty_file(self, mock1, mock2, mock3):
table_type = 'ao'
output = get_last_state(self.context, table_type)
self.assertEqual(output, [])
@patch('gppylib.operations.dump.get_last_dump_timestamp', return_value='20160101121212')
@patch('gppylib.operations.dump.os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_lines_from_file', return_value=[])
@patch('gppylib.operations.dump.check_file_dumped_with_nbu', return_value=True)
@patch('gppylib.operations.dump.restore_file_with_nbu')
def test_get_last_state_nbu(self, mock1, mock2, mock3, mock4, mock5):
table_type = 'ao'
self.context.netbackup_service_host = "mdw"
self.context.netbackup_block_size = "1024"
output = get_last_state(self.context, table_type)
self.assertEqual(output, [])
def test_compare_dict_different(self):
last_dict = {'testschema.t1':'100', 'testschema.t2':'200'}
curr_dict = {'testschema.t1':'200', 'testschema.t2':'200'}
expected_output = set(['testschema.t1'])
result = compare_dict(last_dict, curr_dict)
self.assertEqual(result, expected_output)
def test_compare_dict_extra(self):
last_dict = {'testschema.t1':'100', 'testschema.t2':'200', 'testschema.t3':'300'}
curr_dict = {'testschema.t1':'100', 'testschema.t2':'100'}
expected_output = set(['testschema.t2'])
result = compare_dict(last_dict, curr_dict)
self.assertEqual(result, expected_output)
def test_compare_dict_missing(self):
last_dict = {'testschema.t1':'100', 'testschema.t2':'200'}
curr_dict = {'testschema.t1':'100', 'testschema.t2':'200', 'testschema.t3':'300'}
expected_output = set(['testschema.t3'])
result = compare_dict(last_dict, curr_dict)
self.assertEqual(result, expected_output)
def test_compare_dict_identical(self):
last_dict = {'testschema.t1':'100', 'testschema.t2':'200'}
curr_dict = {'testschema.t1':'100', 'testschema.t2':'200'}
expected_output = set([])
result = compare_dict(last_dict, curr_dict)
self.assertEqual(result, expected_output)
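    # Illustrative only: the behaviour exercised by the four tests above amounts
    # to "report every table in the current state whose modcount is new or
    # changed". A minimal sketch (not the gppylib implementation) would be:
    #
    #   def compare_dict(last_dict, curr_dict):
    #       return set(k for k, v in curr_dict.items()
    #                  if k not in last_dict or last_dict[k] != v)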
def test_create_partition_dict_default(self):
partition_list = ['testschema, t1, 100', 'testschema, t2, 200']
expected_output = {'testschema.t1':'100', 'testschema.t2':'200'}
result = create_partition_dict(partition_list)
self.assertEqual(result, expected_output)
def test_create_partition_dict_empty(self):
partition_list = ['']
expected_output = {}
result = create_partition_dict(partition_list)
self.assertEqual(result, expected_output)
def test_create_partition_dict_invalid_format(self):
partition_list = ['testschema t1 100']
with self.assertRaisesRegexp(Exception, 'Invalid state file format *'):
create_partition_dict(partition_list)
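    # Illustrative only: as exercised above, create_partition_dict parses
    # "schema, table, modcount" lines into {"schema.table": "modcount"}. A rough
    # sketch (not the gppylib implementation) would be:
    #
    #   def create_partition_dict(partition_list):
    #       result = {}
    #       for line in partition_list:
    #           if not line.strip():
    #               continue
    #           fields = [f.strip() for f in line.split(',')]
    #           if len(fields) != 3:
    #               raise Exception('Invalid state file format %s' % line)
    #           result['%s.%s' % (fields[0], fields[1])] = fields[2]
    #       return result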
@patch('gppylib.operations.backup_utils.Context.generate_filename')
@patch('gppylib.operations.dump.os.path.isdir', return_value=False)
@patch('gppylib.operations.dump.os.path.isfile', return_value=False)
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
def test_get_last_dump_timestamp_default(self, mock1, mock2, mock3, mock4):
full_timestamp = '20160101000000'
result = get_last_dump_timestamp(self.context)
self.assertEqual(result, full_timestamp)
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['20160101010000', '20160101010001'])
@patch('gppylib.operations.dump.os.path.isdir', return_value=True)
@patch('gppylib.operations.dump.os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
def test_get_last_dump_timestamp_one_previous(self, mock1, mock2, mock3, mock4):
master_datadir = 'foo'
backup_dir = None
full_timestamp = '20160101000000'
expected_output = '20160101010001'
result = get_last_dump_timestamp(self.context)
self.assertEqual(result, expected_output)
@patch('gppylib.operations.dump.os.path.isdir', return_value=True)
@patch('gppylib.operations.dump.os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['2012093009300q'])
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
def test_get_last_dump_timestamp_invalid_timestamp(self, mock1, mock2, mock3, mock4):
with self.assertRaisesRegexp(Exception, 'get_last_dump_timestamp found invalid ts in file'):
get_last_dump_timestamp(self.context)
@patch('gppylib.operations.dump.os.path.isdir', return_value=True)
@patch('gppylib.operations.dump.os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_lines_from_file', return_value=[' 20160101010101 \n \n '])
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
def test_get_last_dump_timestamp_extra_whitespace(self, mock1, mock2, mock3, mock4):
expected = '20160101010101'
result = get_last_dump_timestamp(self.context)
self.assertEqual(result, expected)
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
@patch('gppylib.operations.dump.check_file_dumped_with_nbu', return_value=False)
def test_get_last_dump_timestamp_nbu(self, mock1, mock2):
netbackup_service_host = "mdw"
netbackup_block_size = "1024"
expected = '20160101000000'
result = get_last_dump_timestamp(self.context)
self.assertEqual(result, expected)
def test_get_pgstatlastoperations_dict_single_input(self):
last_operations = ['public,t1,1234,ALTER,,201601011212:101010']
last_operations_dict = get_pgstatlastoperations_dict(last_operations)
expected_output = {('1234', 'ALTER'): 'public,t1,1234,ALTER,,201601011212:101010'}
self.assertEqual(last_operations_dict, expected_output)
def test_get_pgstatlastoperations_dict_multiple_input(self):
last_operations = ['public,t1,1234,ALTER,,201601011212:101010', 'public,t2,1234,VACCUM,TRUNCATE,201601011212:101015']
last_operations_dict = get_pgstatlastoperations_dict(last_operations)
expected_output = {('1234', 'ALTER'): 'public,t1,1234,ALTER,,201601011212:101010',
('1234', 'VACCUM'): 'public,t2,1234,VACCUM,TRUNCATE,201601011212:101015'}
self.assertEqual(last_operations_dict, expected_output)
def test_get_pgstatlastoperations_dict_empty(self):
last_operations = ['']
last_operations_dict = get_pgstatlastoperations_dict(last_operations)
expected_output = {}
self.assertEqual(last_operations_dict, expected_output)
def test_get_pgstatlastoperations_dict_invalid_input(self):
last_operations = ['public,t1,1234,ALTER,,201601011212:101010', '2345,VACCUM,TRUNCATE,201601011212:101015']
with self.assertRaisesRegexp(Exception, 'Wrong number of tokens in last_operation data for last backup'):
get_pgstatlastoperations_dict(last_operations)
def test_compare_metadata_(self):
old_metadata = {('1234', 'ALTER'): 'public,t1,1234,ALTER,,201601011212:101010'}
cur_metadata = ['public,t1,1234,ALTER,,201601011212:101010']
dirty_tables = compare_metadata(old_metadata, cur_metadata)
self.assertEquals(dirty_tables, set())
def test_compare_metadata_different_keyword(self):
old_metadata = {('1234', 'ALTER'): 'public,t1,1234,ALTER,,201601011212:101010'}
cur_metadata = ['public,t1,1234,TRUNCATE,,201601011212:101010']
dirty_tables = compare_metadata(old_metadata, cur_metadata)
self.assertEquals(dirty_tables, set(['public.t1']))
def test_compare_metadata_different_timestamp(self):
old_metadata = {('1234', 'ALTER'): 'public,t1,1234,ALTER,,201601011212:101010'}
cur_metadata = ['public,t1,1234,ALTER,,201601011212:102510']
dirty_tables = compare_metadata(old_metadata, cur_metadata)
self.assertEquals(dirty_tables, set(['public.t1']))
def test_compare_metadata_duplicate_input(self):
old_metadata = {('1234', 'ALTER'): 'public,t1,1234,ALTER,,201601011212:101010'}
cur_metadata = ['public,t1,1234,ALTER,,201601011212:101010','public,t1,1234,TRUNCATE,,201601011212:101010']
dirty_tables = compare_metadata(old_metadata, cur_metadata)
self.assertEquals(dirty_tables, set(['public.t1']))
def test_compare_metadata_invalid_input(self):
old_metadata = {('1234', 'ALTER'): 'public,t1,1234,ALTER,,201601011212:101010'}
cur_metadata = ['public,t1,1234,ALTER,,201601011212:101010,']
with self.assertRaisesRegexp(Exception, 'Wrong number of tokens in last_operation data for current backup'):
compare_metadata(old_metadata, cur_metadata)
@patch('gppylib.operations.dump.get_last_dump_timestamp', return_value='20160101010100')
@patch('gppylib.operations.dump.get_lines_from_file', return_value=[])
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
def test_get_tables_with_dirty_metadata_empty(self, mock1, mock2, mock3):
expected_output = set()
full_timestamp = '20160101010101'
cur_pgstatoperations = []
dirty_tables = get_tables_with_dirty_metadata(self.context, cur_pgstatoperations)
self.assertEqual(dirty_tables, expected_output)
@patch('gppylib.operations.dump.get_last_dump_timestamp', return_value='20160101010100')
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['public,t1,1234,ALTER,CHANGE COLUMN,201601011212:102510', 'testschema,t2,2234,TRUNCATE,,201601011213:102510'])
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
def test_get_tables_with_dirty_metadata_default(self, mock1, mock2, mock3):
expected_output = set()
cur_pgstatoperations = ['public,t1,1234,ALTER,CHANGE COLUMN,201601011212:102510', 'testschema,t2,2234,TRUNCATE,,201601011213:102510']
dirty_tables = get_tables_with_dirty_metadata(self.context, cur_pgstatoperations)
self.assertEqual(dirty_tables, expected_output)
@patch('gppylib.operations.dump.get_last_dump_timestamp', return_value='20160101010100')
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['public,t1,1234,ALTER,CHANGE COLUMN,201601011212:102510', 'testschema,t2,2234,TRUNCATE,,201601011213:102511'])
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
def test_get_tables_with_dirty_metadata_changed_table(self, mock1, mock2, mock3):
expected_output = set(['testschema.t2'])
cur_pgstatoperations = ['public,t1,1234,ALTER,CHANGE COLUMN,201601011212:102510', 'testschema,t2,2234,TRUNCATE,,201601011213:102510']
dirty_tables = get_tables_with_dirty_metadata(self.context, cur_pgstatoperations)
self.assertEqual(dirty_tables, expected_output)
@patch('gppylib.operations.dump.get_last_dump_timestamp', return_value='20160101010100')
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['testschema,t1,2234,TRUNCATE,,201601011213:102510', 'testschema,t2,2234,TRUNCATE,,201601011213:102510'])
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
def test_get_tables_with_dirty_metadata_extras(self, mock1, mock2, mock3):
expected_output = set(['testschema.t2', 'public.t3'])
full_timestamp = '20160101010101'
cur_pgstatoperations = ['testschema,t2,1234,ALTER,CHANGE COLUMN,201601011212:102510',
'testschema,t2,2234,TRUNCATE,,201601011213:102510',
'public,t3,2234,TRUNCATE,,201601011213:102510']
dirty_tables = get_tables_with_dirty_metadata(self.context, cur_pgstatoperations)
self.assertEqual(dirty_tables, expected_output)
@patch('gppylib.operations.dump.get_last_dump_timestamp', return_value='20160101010100')
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['testschema,t1,1234,ALTER,CHANGE COLUMN,201601011212:102510', 'testschema,t2,2234,TRUNCATE,,201601011213:102510'])
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101000000')
def test_get_tables_with_dirty_metadata_different_schema(self, mock1, mock2, mock3):
expected_output = set(['public.t1'])
cur_pgstatoperations = ['public,t1,1234,ALTER,CHANGE COLUMN,201601011212:102510', 'testschema,t2,2234,TRUNCATE,,201601011213:102510']
dirty_tables = get_tables_with_dirty_metadata(self.context, cur_pgstatoperations)
self.assertEqual(dirty_tables, expected_output)
@patch('gppylib.operations.dump.get_last_dump_timestamp', return_value='20160101010100')
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['testschema,t1,1234,ALTER,CHANGE COLUMN,201601011212:102510', 'testschema,t2,2234,TRUNCATE,,201601011213:102510'])
@patch('gppylib.operations.dump.restore_file_with_nbu')
def test_get_tables_with_dirty_metadata_nbu(self, mock1, mock2, mock3):
expected_output = set(['public.t1'])
cur_pgstatoperations = ['public,t1,1234,ALTER,CHANGE COLUMN,201601011212:102510', 'testschema,t2,2234,TRUNCATE,,201601011213:102510']
self.context.netbackup_service_host = "mdw"
self.context.netbackup_block_size = "1024"
dirty_tables = get_tables_with_dirty_metadata(self.context, cur_pgstatoperations)
self.assertEqual(dirty_tables, expected_output)
@patch('gppylib.operations.dump.get_last_state', return_value=['testschema, t1, 100', 'testschema, t2, 200'])
def test_get_dirty_partition_tables_default(self, mock1):
table_type = 'ao'
curr_state_partition_list = ['testschema, t3, 300', 'testschema, t1, 200']
expected_output = set(['testschema.t3', 'testschema.t1'])
result = get_dirty_partition_tables(self.context, table_type, curr_state_partition_list)
self.assertEqual(result, expected_output)
@patch('gppylib.operations.dump.get_last_state', return_value=['testschema, t1, 100', 'testschema, t2, 200'])
def test_get_dirty_partition_tables_nbu(self, mock1):
table_type = 'ao'
curr_state_partition_list = ['testschema, t3, 300', 'testschema, t1, 200']
self.context.netbackup_service_host = "mdw"
self.context.netbackup_block_size = "1024"
expected_output = set(['testschema.t3', 'testschema.t1'])
result = get_dirty_partition_tables(self.context, table_type, curr_state_partition_list)
self.assertEqual(result, expected_output)
@patch('gppylib.operations.dump.get_dirty_heap_tables', return_value=set(['public.heap_table1']))
@patch('gppylib.operations.dump.get_dirty_partition_tables', side_effect=[set(['public,ao_t1,100', 'public,ao_t2,100']), set(['public,co_t1,100', 'public,co_t2,100'])])
@patch('gppylib.operations.dump.get_tables_with_dirty_metadata', return_value=set(['public,ao_t3,1234,CREATE,,20160101101010', 'public,co_t3,2345,VACCUM,,20160101101010', 'public,ao_t1,1234,CREATE,,20160101101010']))
def test_get_dirty_tables(self, mock1, mock2, mock3):
ao_partition_list = []
co_partition_list = []
last_operation_data = []
dirty_tables = get_dirty_tables(self.context, ao_partition_list, co_partition_list, last_operation_data)
expected_output = ['public.heap_table1', 'public.ao_t1', 'public.ao_t2', 'public.co_t1', 'public.co_t2', 'public.ao_t3', 'public.co_t3']
self.assertEqual(dirty_tables.sort(), expected_output.sort())
@patch('gppylib.operations.dump.get_latest_report_timestamp', return_value = '20160101010100')
def test_validate_current_timestamp_default(self, mock):
directory = '/foo'
#no exception
validate_current_timestamp(self.context, current='20160101010101')
@patch('gppylib.operations.dump.get_latest_report_timestamp', return_value = '20160101010101')
def test_validate_current_timestamp_same_timestamp(self, mock):
directory = '/foo'
with self.assertRaisesRegexp(Exception, 'There is a future dated backup on the system preventing new backups'):
validate_current_timestamp(self.context, current='20160101010101')
@patch('gppylib.operations.dump.get_latest_report_timestamp', return_value = '20170101010101')
def test_validate_current_timestamp_future_timestamp(self, mock):
directory = '/foo'
with self.assertRaisesRegexp(Exception, 'There is a future dated backup on the system preventing new backups'):
validate_current_timestamp(self.context, current='20160101010101')
def test_validate_modcount_default(self):
schemaname = 'public'
partitionname = 't1'
tuple_count = '999999999999999'
validate_modcount(schemaname, partitionname, tuple_count)
def test_validate_modcount_non_int(self):
schemaname = 'public'
partitionname = 't1'
tuple_count = '#########'
with self.assertRaisesRegexp(Exception, 'Can not convert modification count for table.'):
validate_modcount(schemaname, partitionname, tuple_count)
def test_validate_modcount_scientific_notation(self):
schemaname = 'public'
partitionname = 't1'
tuple_count = '1+e15'
with self.assertRaisesRegexp(Exception, 'Can not convert modification count for table.'):
validate_modcount(schemaname, partitionname, tuple_count)
def test_validate_modcount_exceeded_count(self):
schemaname = 'public'
partitionname = 't1'
tuple_count = '1000000000000000'
with self.assertRaisesRegexp(Exception, 'Exceeded backup max tuple count of 1 quadrillion rows per table for:'):
validate_modcount(schemaname, partitionname, tuple_count)
def test_generate_dump_timestamp_default(self):
ts_key = datetime(2013, 02, 04, 10, 10, 10, 10000).strftime("%Y%m%d%H%M%S")
self.context.timestamp_key = ts_key
self.context.generate_dump_timestamp()
self.assertEqual(ts_key, self.context.timestamp)
self.assertEqual(ts_key[0:8], self.context.db_date_dir)
def test_generate_dump_timestamp_no_timestamp(self):
self.context.timestamp_key = None
self.context.generate_dump_timestamp()
self.assertNotEqual(None, self.context.timestamp)
self.assertNotEqual(None, self.context.db_date_dir)
def test_generate_dump_timestamp_replace_timestamp(self):
ts1 = datetime(2013, 02, 04, 10, 10, 10, 10000)
ts2 = datetime(2013, 03, 04, 10, 10, 10, 10000)
self.context.timestamp_key = ts1.strftime("%Y%m%d%H%M%S")
self.context.generate_dump_timestamp()
self.context.timestamp_key = ts2.strftime("%Y%m%d%H%M%S")
self.context.generate_dump_timestamp()
ts_key = ts2.strftime("%Y%m%d%H%M%S")
self.assertEqual(ts_key, self.context.timestamp)
self.assertEqual(ts_key[0:8], self.context.db_date_dir)
def test_create_dump_string_with_prefix_schema_level_dump(self):
self.context.dump_prefix = 'foo_'
self.context.schema_file = '/tmp/schema_file '
with patch.dict(os.environ, {'LOGNAME':'gpadmin'}):
output = self.dumper.create_dump_string()
expected_output = """gp_dump -p 5432 -U gpadmin --gp-d=/data/master/db_dumps/20160101 --gp-r=/data/master/db_dumps/20160101 --gp-s=p --gp-k=20160101010101 --no-lock --gp-c --prefix=foo_ --no-expand-children -n "\\"testschema\\"" "testdb" --schema-file=/tmp/schema_file """
self.assertEquals(output, expected_output)
def test_create_dump_string_default(self):
self.context.schema_file = '/tmp/schema_file'
with patch.dict(os.environ, {'LOGNAME':'gpadmin'}):
output = self.dumper.create_dump_string()
expected_output = """gp_dump -p 5432 -U gpadmin --gp-d=/data/master/db_dumps/20160101 --gp-r=/data/master/db_dumps/20160101 --gp-s=p --gp-k=20160101010101 --no-lock --gp-c --no-expand-children -n "\\"testschema\\"" "testdb" --schema-file=/tmp/schema_file"""
self.assertEquals(output, expected_output)
def test_create_dump_string_without_incremental(self):
with patch.dict(os.environ, {'LOGNAME':'gpadmin'}):
output = self.dumper.create_dump_string()
expected_output = """gp_dump -p 5432 -U gpadmin --gp-d=/data/master/db_dumps/20160101 --gp-r=/data/master/db_dumps/20160101 --gp-s=p --gp-k=20160101010101 --no-lock --gp-c --no-expand-children -n "\\"testschema\\"" "testdb" --table-file=/tmp/table_list.txt"""
self.assertEquals(output, expected_output)
def test_create_dump_string_with_prefix(self):
self.context.dump_prefix = 'foo_'
with patch.dict(os.environ, {'LOGNAME':'gpadmin'}):
output = self.dumper.create_dump_string()
expected_output = """gp_dump -p 5432 -U gpadmin --gp-d=/data/master/db_dumps/20160101 --gp-r=/data/master/db_dumps/20160101 --gp-s=p --gp-k=20160101010101 --no-lock --gp-c --prefix=foo_ --no-expand-children -n "\\"testschema\\"" "testdb" --table-file=/tmp/table_list.txt"""
self.assertEquals(output, expected_output)
def test_create_dump_string_with_include_file(self):
self.context.dump_prefix = 'metro_'
self.context.include_dump_tables_file = 'bar'
with patch.dict(os.environ, {'LOGNAME':'gpadmin'}):
output = self.dumper.create_dump_string()
expected_output = """gp_dump -p 5432 -U gpadmin --gp-d=/data/master/db_dumps/20160101 --gp-r=/data/master/db_dumps/20160101 --gp-s=p --gp-k=20160101010101 --no-lock --gp-c --prefix=metro_ --no-expand-children -n "\\"testschema\\"" "testdb" --table-file=%s""" % self.context.include_dump_tables_file
self.assertEquals(output, expected_output)
def test_create_dump_string_with_no_file_args(self):
self.context.dump_prefix = 'metro_'
self.context.include_dump_tables_file = None
with patch.dict(os.environ, {'LOGNAME':'gpadmin'}):
output = self.dumper.create_dump_string()
expected_output = """gp_dump -p 5432 -U gpadmin --gp-d=/data/master/db_dumps/20160101 --gp-r=/data/master/db_dumps/20160101 --gp-s=p --gp-k=20160101010101 --no-lock --gp-c --prefix=metro_ --no-expand-children -n "\\"testschema\\"" "testdb\""""
self.assertEquals(output, expected_output)
def test_create_dump_string_with_netbackup_params(self):
self.context.include_dump_tables_file = None
self.context.netbackup_service_host = "mdw"
self.context.netbackup_policy = "test_policy"
self.context.netbackup_schedule = "test_schedule"
with patch.dict(os.environ, {'LOGNAME':'gpadmin'}):
output = self.dumper.create_dump_string()
expected_output = """gp_dump -p 5432 -U gpadmin --gp-d=/data/master/db_dumps/20160101 --gp-r=/data/master/db_dumps/20160101 --gp-s=p --gp-k=20160101010101 --no-lock --gp-c --no-expand-children -n "\\"testschema\\"" "testdb" --netbackup-service-host=mdw --netbackup-policy=test_policy --netbackup-schedule=test_schedule"""
self.assertEquals(output, expected_output)
def test_get_backup_dir_with_master_data_dir(self):
self.assertEquals('/data/master/db_dumps/20160101', self.context.get_backup_dir())
def test_get_backup_dir_with_backup_dir(self):
self.context.backup_dir = '/tmp'
self.assertEquals('/tmp/db_dumps/20160101', self.context.get_backup_dir())
@patch('gppylib.operations.backup_utils.Context.is_timestamp_in_old_format', return_value=False)
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101010101')
@patch('os.path.isfile', return_value=True)
def test_get_filter_file_file_exists(self, mock1, mock2, mock3):
self.context.dump_prefix = 'foo_'
expected_output = '/data/master/db_dumps/20160101/foo_gp_dump_20160101010101_filter'
self.assertEquals(expected_output, get_filter_file(self.context))
@patch('os.path.isfile', return_value=False)
@patch('gppylib.operations.backup_utils.Context.is_timestamp_in_old_format', return_value=False)
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101010101')
@patch('gppylib.operations.dump.get_latest_full_ts_with_nbu', return_value='20160101010101')
@patch('gppylib.operations.dump.check_file_dumped_with_nbu', return_value=True)
@patch('gppylib.operations.dump.restore_file_with_nbu')
def test_get_filter_file_file_exists_on_nbu(self, mock1, mock2, mock3, mock4, mock5, mock6):
self.context.dump_prefix = 'foo_'
self.context.netbackup_block_size = "1024"
self.context.netbackup_service_host = "mdw"
expected_output = '/data/master/db_dumps/20160101/foo_gp_dump_20160101010101_filter'
self.assertEquals(expected_output, get_filter_file(self.context))
@patch('gppylib.operations.backup_utils.Context.is_timestamp_in_old_format', return_value=False)
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20160101010101')
@patch('os.path.isfile', return_value=False)
def test_get_filter_file_file_does_not_exist(self, mock1, mock2, mock3):
self.assertEquals(None, get_filter_file(self.context))
def test_update_filter_file_with_dirty_list_default(self):
filter_file = '/tmp/foo'
dirty_tables = ['public.t1', 'public.t2']
expected_output = ['public.t1', 'public.t2']
m = mock_open()
with patch('__builtin__.open', m, create=True):
update_filter_file_with_dirty_list(filter_file, dirty_tables)
result = m()
self.assertEqual(len(dirty_tables), len(result.write.call_args_list))
for i in range(len(dirty_tables)):
self.assertEqual(call(dirty_tables[i]+'\n'), result.write.call_args_list[i])
@patch('gppylib.operations.backup_utils.get_lines_from_file', return_value=['public.t1', 'public.t2'])
def test_update_filter_file_with_dirty_list_duplicates(self, mock1):
filter_file = '/tmp/foo'
dirty_tables = ['public.t2']
expected_output = ['public.t1', 'public.t2']
m = mock_open()
with patch('__builtin__.open', m, create=True):
update_filter_file_with_dirty_list(filter_file, dirty_tables)
result = m()
self.assertEqual(len(dirty_tables), len(result.write.call_args_list))
for i in range(len(dirty_tables)):
self.assertEqual(call(dirty_tables[i]+'\n'), result.write.call_args_list[i])
def test_update_filter_file_with_dirty_list_empty_file(self):
filter_file = '/tmp/foo'
dirty_tables = ['public.t1', 'public.t2']
expected_output = ['public.t1', 'public.t2']
m = mock_open()
with patch('__builtin__.open', m, create=True):
update_filter_file_with_dirty_list(filter_file, dirty_tables)
result = m()
self.assertEqual(len(dirty_tables), len(result.write.call_args_list))
for i in range(len(dirty_tables)):
self.assertEqual(call(dirty_tables[i]+'\n'), result.write.call_args_list[i])
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['public.t1', 'testschema.t2'])
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20130101010101')
@patch('gppylib.operations.dump.get_filter_file', return_value='/foo/metro_gp_dump_20130101010101_filter')
@patch('gppylib.operations.dump.get_latest_full_ts_with_nbu', return_value='20130101010101')
def test_filter_dirty_tables_with_filter(self, mock1, mock2, mock3, mock4):
dirty_tables = ['public.t1', 'public.t2', 'testschema.t1', 'testschema.t2']
expected_output = ['public.t1', 'testschema.t2']
self.context.netbackup_service_host = 'mdw'
self.assertEquals(sorted(expected_output), sorted(filter_dirty_tables(self.context, dirty_tables)))
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['public.t1', 'testschema.t2'])
@patch('gppylib.operations.dump.get_filter_file', return_value='/foo/metro_gp_dump_20130101010101_filter')
@patch('gppylib.operations.dump.get_latest_full_ts_with_nbu', return_value='20130101010101')
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20130101010101')
def test_filter_dirty_tables_with_filter_with_nbu(self, mock1, mock2, mock3, mock4):
self.context.netbackup_service_host = "mdw"
self.context.netbackup_block_size = "1024"
dirty_tables = ['public.t1', 'public.t2', 'testschema.t1', 'testschema.t2']
expected_output = ['public.t1', 'testschema.t2']
self.assertEquals(sorted(expected_output), sorted(filter_dirty_tables(self.context, dirty_tables)))
@patch('gppylib.operations.dump.get_lines_from_file', return_value=['public.t1', 'testschema.t2'])
@patch('gppylib.operations.dump.get_latest_full_dump_timestamp', return_value='20130101010101')
@patch('gppylib.operations.dump.get_filter_file', return_value=None)
def test_filter_dirty_tables_without_filter(self, mock1, mock2, mock3):
dirty_tables = ['public.t1', 'public.t2', 'testschema.t1', 'testschema.t2']
self.assertEquals(sorted(dirty_tables), sorted(filter_dirty_tables(self.context, dirty_tables)))
@patch('gppylib.operations.dump.get_filter_file', return_value='/tmp/db_dumps/20160101/foo_gp_dump_01234567891234_filter')
def test_create_filtered_dump_string(self, mock1):
self.context.dump_prefix = 'foo_'
with patch.dict(os.environ, {'LOGNAME':'gpadmin'}):
output = self.dumper.create_filtered_dump_string()
expected_output = """gp_dump -p 5432 -U gpadmin --gp-d=/data/master/db_dumps/20160101 --gp-r=/data/master/db_dumps/20160101 --gp-s=p --gp-k=20160101010101 --no-lock --gp-c --prefix=foo_ --no-expand-children -n "\\"testschema\\"" "testdb" --table-file=/tmp/table_list.txt --incremental-filter=/tmp/db_dumps/20160101/foo_gp_dump_01234567891234_filter"""
self.assertEquals(output, expected_output)
@patch('gppylib.operations.dump.Command.get_results', return_value=CommandResult(0, "", "", True, False))
@patch('gppylib.operations.dump.Command.run')
def test_perform_dump_normal(self, mock1, mock2):
self.context.dump_prefix = 'foo_'
title = 'Dump process'
dump_line = """gp_dump -p 5432 -U gpadmin --gp-d=/data/master/db_dumps/20160101 --gp-r=/data/master/db_dumps/20160101 --gp-s=p --gp-k=01234567891234 --no-lock --gp-c --prefix=foo_ --no-expand-children -n "\\"testschema\\"" "testdb" --table-file=/tmp/table_list.txt"""
(start, end, rc) = self.dumper.perform_dump(title, dump_line)
self.assertNotEqual(start, None)
self.assertNotEqual(end, None)
self.assertEquals(rc, 0)
def test_create_pgdump_command_line(self):
global_file_name = '/data/master/db_dumps/20160101/gp_global_-1_1_20160101010101'
expected_output = "pg_dumpall -p 5432 -g --gp-syntax > %s" % global_file_name
output = self.dump_globals.create_pgdump_command_line()
self.assertEquals(output, expected_output)
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_filter_file', return_value = '/tmp/update_test')
@patch('gppylib.operations.dump.get_lines_from_file', return_value = ['public.heap_table1','public.ao_part_table','public.ao_part_table_1_prt_p1'])
@patch('gppylib.operations.dump.execute_sql', side_effect = [ [['public.ao_part_table']], [['public.ao_part_table_1_prt_p1'], ['public.ao_part_table_1_prt_p2']] ])
def test_update_filter_file_default(self, mock1, mock2, mock3, mock4):
filter_filename = '/tmp/update_test'
contents = ['public.heap_table1','public.ao_part_table','public.ao_part_table_1_prt_p1']
expected_result = ['public.heap_table1','public.ao_part_table','public.ao_part_table_1_prt_p1', 'public.ao_part_table_1_prt_p2']
m = mock_open()
with patch('__builtin__.open', m, create=True):
update_filter_file(self.context)
result = m()
self.assertEqual(len(expected_result), len(result.write.call_args_list))
expected = sorted(expected_result)
output = sorted(result.write.call_args_list)
for i in range(len(expected)):
self.assertEqual(call(expected[i]+'\n'), output[i])
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.dump.get_filter_file', return_value = '/tmp/update_test')
@patch('gppylib.operations.dump.get_lines_from_file', return_value = ['public.heap_table1','public.ao_part_table','public.ao_part_table_1_prt_p1'])
@patch('gppylib.operations.dump.execute_sql', side_effect = [ [['public.ao_part_table']], [['public.ao_part_table_1_prt_p1'], ['public.ao_part_table_1_prt_p2']] ])
@patch('gppylib.operations.dump.restore_file_with_nbu')
@patch('gppylib.operations.dump.backup_file_with_nbu')
def test_update_filter_file_default_with_nbu(self, mock1, mock2, mock3, mock4, mock5, mock6):
filter_filename = '/tmp/update_test'
self.context.netbackup_service_host = "mdw"
self.context.netbackup_policy = "nbu_policy"
self.context.netbackup_schedule = "nbu_schedule"
self.context.netbackup_block_size = "1024"
contents = ['public.heap_table1','public.ao_part_table','public.ao_part_table_1_prt_p1']
expected_result = ['public.heap_table1','public.ao_part_table','public.ao_part_table_1_prt_p1', 'public.ao_part_table_1_prt_p2']
m = mock_open()
with patch('__builtin__.open', m, create=True):
update_filter_file(self.context)
result = m()
self.assertEqual(len(expected_result), len(result.write.call_args_list))
expected = sorted(expected_result)
output = sorted(result.write.call_args_list)
for i in range(len(expected)):
self.assertEqual(call(expected[i]+'\n'), output[i])
@patch('gppylib.operations.dump.backup_file_with_nbu')
def test_backup_state_files_with_nbu_default(self, mock):
self.context.netbackup_service_host = "mdw"
self.context.netbackup_policy = "test_policy"
self.context.netbackup_schedule = "test_schedule"
backup_state_files_with_nbu(self.context)
self.assertEqual(mock.call_count, 3)
class MyMock(MagicMock):
def __init__(self, num_segs):
            super(MyMock, self).__init__()
self.mock_segs = []
for i in range(num_segs):
self.mock_segs.append(Mock())
def getSegmentList(self):
for id, seg in enumerate(self.mock_segs):
seg.get_active_primary.getSegmentHostName.return_value = Mock()
seg.get_primary_dbid.return_value = id + 2
return self.mock_segs
@patch('gppylib.gparray.GpDB.getSegmentHostName', return_value='sdw')
def test_backup_config_files_with_nbu_default(self, mock1):
with patch('gppylib.operations.dump.backup_file_with_nbu', side_effect=my_counter) as nbu_mock:
global i
i = 0
self.context.netbackup_service_host = "mdw"
self.context.netbackup_policy = "test_policy"
self.context.netbackup_schedule = "test_schedule"
backup_config_files_with_nbu(self.context)
args, _ = nbu_mock.call_args_list[0]
self.assertEqual(args[1], "master_config")
for id, seg in enumerate(mock1.mock_segs):
self.assertEqual(seg.get_active_primary.call_count, 1)
self.assertEqual(seg.get_primary_dbid.call_count, 1)
args, _ = nbu_mock.call_args_list[id]
self.assertEqual(args, ("segment_config", id+2, "sdw"))
self.assertEqual(i, 3)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_backup_file_with_ddboost_default(self, mock1, mock2):
self.context.backup_dir = None
self.context.dump_dir = 'backup/DCA-35'
with patch.object(Command, '__init__', return_value=None) as cmd:
backup_file_with_ddboost(self.context, "schema")
cmd.assert_called_with("copy file foo_schema to DD machine", "gpddboost --copyToDDBoost --from-file=foo_schema --to-file=backup/DCA-35/20160101/foo_schema")
self.assertEqual(mock2.call_count, 1)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_backup_file_with_ddboost_no_filetype(self, mock1, mock2):
self.context.backup_dir = None
self.context.dump_dir = 'backup/DCA-35'
with self.assertRaisesRegexp(Exception, 'Cannot call backup_file_with_ddboost without a filetype argument'):
backup_file_with_ddboost(self.context)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='/tmp/foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_backup_file_with_nbu_default(self, mock1, mock2):
self.context.netbackup_service_host = "mdw"
self.context.netbackup_policy = "test_policy"
self.context.netbackup_schedule = "test_schedule"
self.context.netbackup_block_size = 100
cmdStr = "cat /tmp/foo_schema | gp_bsa_dump_agent --netbackup-service-host mdw --netbackup-policy test_policy --netbackup-schedule test_schedule --netbackup-filename /tmp/foo_schema --netbackup-block-size 100"
with patch.object(Command, '__init__', return_value=None) as cmd:
backup_file_with_nbu(self.context, "schema")
cmd.assert_called_with("dumping metadata files from master", cmdStr)
self.assertEqual(mock2.call_count, 1)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='')
@patch('gppylib.commands.base.Command.run')
def test_backup_file_with_nbu_no_filetype(self, mock1, mock2):
self.context.netbackup_service_host = "mdw"
self.context.netbackup_policy = "test_policy"
self.context.netbackup_schedule = "test_schedule"
self.context.netbackup_block_size = 100
cmdStr = "cat /tmp/foo_schema | gp_bsa_dump_agent --netbackup-service-host mdw --netbackup-policy test_policy --netbackup-schedule test_schedule --netbackup-filename /tmp/foo_schema --netbackup-block-size 100"
with patch.object(Command, '__init__', return_value=None) as cmd:
backup_file_with_nbu(self.context, path="/tmp/foo_schema")
cmd.assert_called_with("dumping metadata files from master", cmdStr)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='/tmp/foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_backup_file_with_nbu_no_path(self, mock1, mock2):
self.context.netbackup_service_host = "mdw"
self.context.netbackup_policy = "test_policy"
self.context.netbackup_schedule = "test_schedule"
self.context.netbackup_block_size = 100
cmdStr = "cat /tmp/foo_schema | gp_bsa_dump_agent --netbackup-service-host mdw --netbackup-policy test_policy --netbackup-schedule test_schedule --netbackup-filename /tmp/foo_schema --netbackup-block-size 100"
with patch.object(Command, '__init__', return_value=None) as cmd:
backup_file_with_nbu(self.context, "schema")
cmd.assert_called_with("dumping metadata files from master", cmdStr)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_backup_file_with_nbu_both_args(self, mock1, mock2):
with self.assertRaisesRegexp(Exception, 'Cannot supply both a file type and a file path to backup_file_with_nbu'):
backup_file_with_nbu(self.context, "schema", "/tmp/foo_schema")
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_backup_file_with_nbu_neither_arg(self, mock1, mock2):
with self.assertRaisesRegexp(Exception, 'Cannot call backup_file_with_nbu with no type or path argument'):
backup_file_with_nbu(self.context)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='/tmp/foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_backup_file_with_nbu_block_size(self, mock1, mock2):
self.context.netbackup_service_host = "mdw"
self.context.netbackup_policy = "test_policy"
self.context.netbackup_schedule = "test_schedule"
self.context.netbackup_block_size = 1024
cmdStr = "cat /tmp/foo_schema | gp_bsa_dump_agent --netbackup-service-host mdw --netbackup-policy test_policy --netbackup-schedule test_schedule --netbackup-filename /tmp/foo_schema --netbackup-block-size 1024"
with patch.object(Command, '__init__', return_value=None) as cmd:
backup_file_with_nbu(self.context, "schema")
cmd.assert_called_with("dumping metadata files from master", cmdStr)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='/tmp/foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_backup_file_with_nbu_keyword(self, mock1, mock2):
self.context.netbackup_service_host = "mdw"
self.context.netbackup_policy = "test_policy"
self.context.netbackup_schedule = "test_schedule"
self.context.netbackup_block_size = 100
self.context.netbackup_keyword = "foo"
cmdStr = "cat /tmp/foo_schema | gp_bsa_dump_agent --netbackup-service-host mdw --netbackup-policy test_policy --netbackup-schedule test_schedule --netbackup-filename /tmp/foo_schema --netbackup-block-size 100 --netbackup-keyword foo"
with patch.object(Command, '__init__', return_value=None) as cmd:
backup_file_with_nbu(self.context, "schema")
cmd.assert_called_with("dumping metadata files from master", cmdStr)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='/tmp/foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_backup_file_with_nbu_segment(self, mock1, mock2):
self.context.netbackup_service_host = "mdw"
self.context.netbackup_policy = "test_policy"
self.context.netbackup_schedule = "test_schedule"
self.context.netbackup_block_size = 100
cmdStr = "cat /tmp/foo_schema | gp_bsa_dump_agent --netbackup-service-host mdw --netbackup-policy test_policy --netbackup-schedule test_schedule --netbackup-filename /tmp/foo_schema --netbackup-block-size 100"
with patch.object(Command, '__init__', return_value=None) as cmd:
backup_file_with_nbu(self.context, "schema", hostname="sdw")
from gppylib.commands.base import REMOTE
cmd.assert_called_with("dumping metadata files from segment", cmdStr, ctxt=REMOTE, remoteHost="sdw")
@patch('gppylib.operations.dump.execute_sql', return_value = [['gp_toolkit'], ['pg_aoseg'], ['pg_toast'], ['pg_bitmapindex'], ['bar'], ['foo'], ['pg_catalog'], ['public'], ['information_schema']])
def test_get_include_schema_list_from_exclude_schema_default(self, mock1):
exclude_schema_list = ['public', 'foo']
expected_result = ['bar']
output = get_include_schema_list_from_exclude_schema(self.context, exclude_schema_list)
        self.assertEqual(sorted(expected_result), sorted(output))
@patch('gppylib.operations.dump.execute_sql', return_value = [['gp_toolkit'], ['pg_aoseg'], ['pg_toast'], ['pg_bitmapindex'], ['bar'], ['foo'], ['pg_catalog'], ['public'], ['information_schema']])
def test_get_include_schema_list_from_exclude_schema_empty_list(self, mock1):
exclude_schema_list = []
expected_result = ['public', 'foo', 'bar']
output = get_include_schema_list_from_exclude_schema(self.context, exclude_schema_list)
        self.assertEqual(sorted(expected_result), sorted(output))
@patch('gppylib.operations.dump.Command.run')
@patch('gppylib.operations.dump.findCmdInPath', return_value='/bin/mail')
def test_mail_execute_default(self, mock1, mock2):
m = MailEvent(subject="test", message="Hello", to_addrs="[email protected]")
m.execute()
@patch('gppylib.operations.dump.execute_sql', side_effect=[[['public', 'test'], ['public', 'foo']], [['public', 'foo']]])
def test_check_table_exists_table_list_changes(self, mock):
self.context.target_db = "gptest"
exists = CheckTableExists(self.context, "public", "test").run()
self.assertTrue(exists)
exists = CheckTableExists(self.context, "public", "test").run()
self.assertFalse(exists)
@patch('gppylib.operations.dump.dbconn.DbURL')
@patch('gppylib.operations.dump.dbconn.connect')
@patch('gppylib.operations.dump.CheckTableExists.run', return_value=True)
@patch('gppylib.operations.dump.execSQL', return_value='10000000000000000')
def test_update_history_table_with_existing_history_table(self, execSQL_mock, mock2, mock3, mock4):
self.context.history = True
time_start = datetime(2015, 7, 31, 9, 30, 00)
time_end = datetime(2015, 8, 1, 12, 21, 11)
timestamp = '121601010101'
options_list = '-x 1337 -a'
dump_exit_status = 0
pseudo_exit_status = 0
UpdateHistoryTable(self.context, time_start, time_end,
options_list, timestamp,
dump_exit_status,
pseudo_exit_status).execute()
expected_queries = " insert into public.gpcrondump_history values (now(), '2015-07-31 09:30:00', '2015-08-01 12:21:11', '-x 1337 -a', '121601010101', 0, 0, 'COMPLETED'); "
for exec_sql in execSQL_mock.call_args_list:
# [0] index removes the call object,
# [1] grabs the sql command from execSQL
self.assertEquals(exec_sql[0][1], expected_queries)
@patch('gppylib.operations.dump.dbconn.DbURL')
@patch('gppylib.operations.dump.dbconn.connect')
@patch('gppylib.operations.dump.CheckTableExists.run', return_value=False)
@patch('gppylib.operations.dump.execSQL', return_value='10000000000000000')
def test_update_history_table_with_new_update_table(self, execSQL_mock, mock2, mock3, mock4):
self.context.history = True
time_start = datetime(2015, 7, 31, 9, 30, 00)
time_end = datetime(2015, 8, 1, 12, 21, 11)
timestamp = '121601010101'
options_list = '-x bkdb -a'
dump_exit_status = 0
pseudo_exit_status = 0
UpdateHistoryTable(self.context, time_start, time_end,
options_list, timestamp,
dump_exit_status,
pseudo_exit_status).execute()
expected_queries = []
expected_queries.append(' create table public.gpcrondump_history (rec_date timestamp, start_time char(8), end_time char(8), options text, dump_key varchar(20), dump_exit_status smallint, script_exit_status smallint, exit_text varchar(10)) distributed by (rec_date); ')
expected_queries.append(" insert into public.gpcrondump_history values (now(), '2015-07-31 09:30:00', '2015-08-01 12:21:11', '-x bkdb -a', '121601010101', 0, 0, 'COMPLETED'); ")
for i, exec_sql in enumerate(execSQL_mock.call_args_list):
# [0] index removes the call object,
# [1] grabs the sql command from execSQL
self.assertEquals(exec_sql[0][1] , expected_queries[i])
@patch('gppylib.operations.dump.DumpStats.print_tuples')
@patch('gppylib.operations.dump.execute_sql_with_connection', return_value=[[1]*4, [2]*4, [3]*4])
def test_dump_stats_writes_tuples_to_file_when_dumping_tuples(self, execute_sql_with_connection, print_tuples):
dump_stats = DumpStats(Mock())
db_connection = Mock()
dump_stats.dump_tuples('select * from foo', db_connection)
execute_sql_with_connection.assert_called_with('select * from foo', db_connection)
print_tuples.assert_any_call([1,1,1,1])
print_tuples.assert_any_call([2,2,2,2])
print_tuples.assert_any_call([3,3,3,3])
@patch('gppylib.operations.dump.DumpStats.print_stats')
@patch('gppylib.operations.dump.execute_sql_with_connection', return_value=[[1]*25, [2]*25, [3]*25])
def test_dump_stats_writes_stats_to_file_when_dumping_stats(self, execute_sql_with_connection, print_stats):
dump_stats = DumpStats(Mock())
db_connection = Mock()
dump_stats.dump_stats('select * from foo', db_connection)
execute_sql_with_connection.assert_called_with('select * from foo', db_connection)
print_stats.assert_any_call([1]*25)
print_stats.assert_any_call([2]*25)
print_stats.assert_any_call([3]*25)
@patch('gppylib.operations.dump.DumpStats.dump_tuples')
@patch('gppylib.operations.dump.DumpStats.dump_stats')
def test_dump_stats_uses_db_connection_to_dump_tables(self, dump_stats, dump_tuples):
db_connection = Mock()
subject = DumpStats(Mock())
subject.dump_table('someSchema.someTable', db_connection)
dump_stats.assert_called_with(ANY, db_connection)
dump_tuples.assert_called_with(ANY, db_connection)
@patch('gppylib.operations.dump.dbconn.DbURL')
@patch('gppylib.operations.dump.dbconn.connect')
    def test_execute_uses_the_same_connection_for_all_queries(self, connect, DbURL):
DbURL.return_value = 'dburl'
db_connection = Mock()
connect.return_value = db_connection
fakeContext = Mock()
fakeContext.ddboost = False
fakeContext.master_port = 9999
        fakeContext.target_db = 'db_name'
dump_stats = DumpStats(fakeContext)
dump_stats.get_include_tables_from_context = Mock(return_value=['schema1.table1', 'schema2.table2'])
dump_stats.write_stats_file_header = Mock()
dump_stats.dump_table = Mock()
dump_stats.execute()
dump_stats.dump_table.assert_any_call('schema1.table1', db_connection)
dump_stats.dump_table.assert_any_call('schema2.table2', db_connection)
connect.assert_called_with('dburl')
DbURL.assert_called_with(port=9999, dbname='db_name')
db_connection.close.assert_any_call()
if __name__ == '__main__':
unittest.main()
i=0
def my_counter(*args, **kwargs):
global i
i += 1
return Mock()
| apache-2.0 |
quarckster/cfme_tests | cfme/tests/cloud_infra_common/test_discovery.py | 2 | 2568 | # -*- coding: utf-8 -*-
import pytest
import time
from cfme.common.provider import BaseProvider
from cfme.common.vm import VM
from cfme.exceptions import CFMEException
from cfme.infrastructure.provider.scvmm import SCVMMProvider
from cfme.utils.generators import random_vm_name
from cfme.utils.log import logger
from cfme.utils.wait import TimedOutError
from cfme import test_requirements
pytestmark = [
pytest.mark.tier(2),
test_requirements.discovery,
pytest.mark.provider([BaseProvider], scope='module')
]
@pytest.fixture(scope="module")
def vm_name():
return random_vm_name("dscvry")
@pytest.fixture(scope="module")
def vm_crud(vm_name, provider):
return VM.factory(vm_name, provider)
def if_scvmm_refresh_provider(provider):
# No eventing from SCVMM so force a relationship refresh
if isinstance(provider, SCVMMProvider):
provider.refresh_provider_relationships()
def wait_for_vm_state_changes(vm, timeout=600):
count = 0
while count < timeout:
try:
vm_state = vm.find_quadicon(from_any_provider=True).data['state'].lower()
logger.info("Quadicon state for %s is %s", vm.name, repr(vm_state))
if "archived" in vm_state:
return True
elif "orphaned" in vm_state:
raise CFMEException("VM should be Archived but it is Orphaned now.")
except Exception as e:
logger.exception(e)
pass
time.sleep(15)
count += 15
    if count >= timeout:
        raise CFMEException("VM was not Archived within the timeout period.")
def test_vm_discovery(request, setup_provider, provider, vm_crud):
""" Tests whether cfme will discover a vm change (add/delete) without being manually refreshed.
Prerequisities:
* Desired provider set up
Steps:
* Create a virtual machine on the provider.
* Wait for the VM to appear
* Delete the VM from the provider (not using CFME)
* Wait for the VM to become Archived.
Metadata:
test_flag: discovery
"""
@request.addfinalizer
def _cleanup():
vm_crud.delete_from_provider()
if_scvmm_refresh_provider(provider)
vm_crud.create_on_provider(allow_skip="default")
if_scvmm_refresh_provider(provider)
try:
vm_crud.wait_to_appear(timeout=600, load_details=False)
except TimedOutError:
pytest.fail("VM was not found in CFME")
vm_crud.delete_from_provider()
if_scvmm_refresh_provider(provider)
wait_for_vm_state_changes(vm_crud)
| gpl-2.0 |
tlakshman26/cinder-bug-fix-volume-conversion-full | cinder/volume/drivers/san/san.py | 15 | 6918 | # Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Default Driver for san-stored volumes.
The unique thing about a SAN is that we don't expect that we can run the volume
controller on the SAN hardware. We expect to access it over SSH or some API.
"""
import random
from eventlet import greenthread
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import excutils
from cinder import exception
from cinder.i18n import _, _LE
from cinder import ssh_utils
from cinder import utils
from cinder.volume import driver
LOG = logging.getLogger(__name__)
san_opts = [
cfg.BoolOpt('san_thin_provision',
default=True,
help='Use thin provisioning for SAN volumes?'),
cfg.StrOpt('san_ip',
default='',
help='IP address of SAN controller'),
cfg.StrOpt('san_login',
default='admin',
help='Username for SAN controller'),
cfg.StrOpt('san_password',
default='',
help='Password for SAN controller',
secret=True),
cfg.StrOpt('san_private_key',
default='',
help='Filename of private key to use for SSH authentication'),
cfg.StrOpt('san_clustername',
default='',
help='Cluster name to use for creating volumes'),
cfg.IntOpt('san_ssh_port',
default=22,
min=1, max=65535,
help='SSH port to use with SAN'),
cfg.BoolOpt('san_is_local',
default=False,
help='Execute commands locally instead of over SSH; '
'use if the volume service is running on the SAN device'),
cfg.IntOpt('ssh_conn_timeout',
default=30,
help="SSH connection timeout in seconds"),
cfg.IntOpt('ssh_min_pool_conn',
default=1,
help='Minimum ssh connections in the pool'),
cfg.IntOpt('ssh_max_pool_conn',
default=5,
help='Maximum ssh connections in the pool'),
]
CONF = cfg.CONF
CONF.register_opts(san_opts)
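# Illustrative sketch (not part of the original file): how the options registered
# above are typically consumed from a per-backend section of cinder.conf. The
# section name, driver path and values below are placeholders; volume_driver
# would normally point at a concrete vendor subclass of SanISCSIDriver rather
# than the base classes defined in this module.
#
#   [san_backend_1]
#   volume_driver = cinder.volume.drivers.some_vendor.VendorSanISCSIDriver
#   san_ip = 192.0.2.10
#   san_login = admin
#   san_password = secret
#   san_ssh_port = 22
#   san_thin_provision = True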
class SanDriver(driver.BaseVD):
"""Base class for SAN-style storage volumes
    A SAN-style storage volume is 'different' because the volume controller
    probably won't run on it, so we need to access it over SSH or another
    remote protocol.
"""
def __init__(self, *args, **kwargs):
execute = kwargs.pop('execute', self.san_execute)
super(SanDriver, self).__init__(execute=execute,
*args, **kwargs)
self.configuration.append_config_values(san_opts)
self.run_local = self.configuration.san_is_local
self.sshpool = None
def san_execute(self, *cmd, **kwargs):
if self.run_local:
return utils.execute(*cmd, **kwargs)
else:
check_exit_code = kwargs.pop('check_exit_code', None)
return self._run_ssh(cmd, check_exit_code)
def _run_ssh(self, cmd_list, check_exit_code=True, attempts=1):
utils.check_ssh_injection(cmd_list)
        command = ' '.join(cmd_list)
if not self.sshpool:
password = self.configuration.san_password
privatekey = self.configuration.san_private_key
min_size = self.configuration.ssh_min_pool_conn
max_size = self.configuration.ssh_max_pool_conn
self.sshpool = ssh_utils.SSHPool(
self.configuration.san_ip,
self.configuration.san_ssh_port,
self.configuration.ssh_conn_timeout,
self.configuration.san_login,
password=password,
privatekey=privatekey,
min_size=min_size,
max_size=max_size)
last_exception = None
try:
with self.sshpool.item() as ssh:
while attempts > 0:
attempts -= 1
try:
return processutils.ssh_execute(
ssh,
command,
check_exit_code=check_exit_code)
except Exception as e:
LOG.error(e)
last_exception = e
greenthread.sleep(random.randint(20, 500) / 100.0)
try:
raise processutils.ProcessExecutionError(
exit_code=last_exception.exit_code,
stdout=last_exception.stdout,
stderr=last_exception.stderr,
cmd=last_exception.cmd)
except AttributeError:
raise processutils.ProcessExecutionError(
exit_code=-1,
stdout="",
stderr="Error running SSH command",
cmd=command)
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Error running SSH command: %s"), command)
def ensure_export(self, context, volume):
"""Synchronously recreates an export for a logical volume."""
pass
def create_export(self, context, volume, connector):
"""Exports the volume."""
pass
def remove_export(self, context, volume):
"""Removes an export for a logical volume."""
pass
def check_for_setup_error(self):
"""Returns an error if prerequisites aren't met."""
if not self.run_local:
if not (self.configuration.san_password or
self.configuration.san_private_key):
raise exception.InvalidInput(
reason=_('Specify san_password or san_private_key'))
# The san_ip must always be set, because we use it for the target
if not self.configuration.san_ip:
raise exception.InvalidInput(reason=_("san_ip must be set"))
class SanISCSIDriver(SanDriver, driver.ISCSIDriver):
def __init__(self, *args, **kwargs):
super(SanISCSIDriver, self).__init__(*args, **kwargs)
def _build_iscsi_target_name(self, volume):
return "%s%s" % (self.configuration.iscsi_target_prefix,
volume['name'])
| apache-2.0 |
2013Commons/hue | desktop/core/ext-py/Django-1.4.5/django/contrib/localflavor/uy/forms.py | 87 | 2143 | # -*- coding: utf-8 -*-
"""
UY-specific form helpers.
"""
from __future__ import absolute_import
from django.core.validators import EMPTY_VALUES
from django.forms.fields import Select, RegexField
from django.forms import ValidationError
from django.utils.translation import ugettext_lazy as _
from django.contrib.localflavor.uy.util import get_validation_digit
class UYDepartamentSelect(Select):
"""
A Select widget that uses a list of Uruguayan departaments as its choices.
"""
def __init__(self, attrs=None):
from django.contrib.localflavor.uy.uy_departaments import DEPARTAMENT_CHOICES
super(UYDepartamentSelect, self).__init__(attrs, choices=DEPARTAMENT_CHOICES)
class UYCIField(RegexField):
"""
A field that validates Uruguayan 'Cedula de identidad' (CI) numbers.
"""
default_error_messages = {
'invalid': _("Enter a valid CI number in X.XXX.XXX-X,"
"XXXXXXX-X or XXXXXXXX format."),
'invalid_validation_digit': _("Enter a valid CI number."),
}
def __init__(self, *args, **kwargs):
super(UYCIField, self).__init__(r'(?P<num>(\d{6,7}|(\d\.)?\d{3}\.\d{3}))-?(?P<val>\d)',
*args, **kwargs)
def clean(self, value):
"""
Validates format and validation digit.
        The official format is [X.]XXX.XXX-X, but the dots and/or the dash are
        often omitted, so those characters are ignored during validation when they
        appear in the correct place. The three typically used formats are supported:
[X]XXXXXXX, [X]XXXXXX-X and [X.]XXX.XXX-X.
"""
value = super(UYCIField, self).clean(value)
if value in EMPTY_VALUES:
return u''
match = self.regex.match(value)
if not match:
raise ValidationError(self.error_messages['invalid'])
number = int(match.group('num').replace('.', ''))
validation_digit = int(match.group('val'))
if not validation_digit == get_validation_digit(number):
raise ValidationError(self.error_messages['invalid_validation_digit'])
return value
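# Illustrative usage (not part of the original module). The CI value below is a
# placeholder; whether it passes depends on get_validation_digit() for that number.
#
#   field = UYCIField(required=False)
#   field.clean(u'')              # u'' -- empty values short-circuit validation
#   field.clean(u'1.234.567-8')   # returns the cleaned value when the check digit
#                                 # matches, otherwise raises ValidationError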
| apache-2.0 |
luceatnobis/youtube-dl | youtube_dl/extractor/eighttracks.py | 91 | 5868 | # coding: utf-8
from __future__ import unicode_literals
import json
import random
from .common import InfoExtractor
from ..compat import (
compat_str,
)
from ..utils import (
ExtractorError,
)
class EightTracksIE(InfoExtractor):
IE_NAME = '8tracks'
_VALID_URL = r'https?://8tracks\.com/(?P<user>[^/]+)/(?P<id>[^/#]+)(?:#.*)?$'
_TEST = {
'name': 'EightTracks',
'url': 'http://8tracks.com/ytdl/youtube-dl-test-tracks-a',
'info_dict': {
'id': '1336550',
'display_id': 'youtube-dl-test-tracks-a',
'description': "test chars: \"'/\\ä↭",
'title': "youtube-dl test tracks \"'/\\ä↭<>",
},
'playlist': [
{
'md5': '96ce57f24389fc8734ce47f4c1abcc55',
'info_dict': {
'id': '11885610',
'ext': 'm4a',
'title': "youtue-dl project<>\"' - youtube-dl test track 1 \"'/\\\u00e4\u21ad",
'uploader_id': 'ytdl'
}
},
{
'md5': '4ab26f05c1f7291ea460a3920be8021f',
'info_dict': {
'id': '11885608',
'ext': 'm4a',
'title': "youtube-dl project - youtube-dl test track 2 \"'/\\\u00e4\u21ad",
'uploader_id': 'ytdl'
}
},
{
'md5': 'd30b5b5f74217410f4689605c35d1fd7',
'info_dict': {
'id': '11885679',
'ext': 'm4a',
'title': "youtube-dl project as well - youtube-dl test track 3 \"'/\\\u00e4\u21ad",
'uploader_id': 'ytdl'
}
},
{
'md5': '4eb0a669317cd725f6bbd336a29f923a',
'info_dict': {
'id': '11885680',
'ext': 'm4a',
'title': "youtube-dl project as well - youtube-dl test track 4 \"'/\\\u00e4\u21ad",
'uploader_id': 'ytdl'
}
},
{
'md5': '1893e872e263a2705558d1d319ad19e8',
'info_dict': {
'id': '11885682',
'ext': 'm4a',
'title': "PH - youtube-dl test track 5 \"'/\\\u00e4\u21ad",
'uploader_id': 'ytdl'
}
},
{
'md5': 'b673c46f47a216ab1741ae8836af5899',
'info_dict': {
'id': '11885683',
'ext': 'm4a',
'title': "PH - youtube-dl test track 6 \"'/\\\u00e4\u21ad",
'uploader_id': 'ytdl'
}
},
{
'md5': '1d74534e95df54986da7f5abf7d842b7',
'info_dict': {
'id': '11885684',
'ext': 'm4a',
'title': "phihag - youtube-dl test track 7 \"'/\\\u00e4\u21ad",
'uploader_id': 'ytdl'
}
},
{
'md5': 'f081f47af8f6ae782ed131d38b9cd1c0',
'info_dict': {
'id': '11885685',
'ext': 'm4a',
'title': "phihag - youtube-dl test track 8 \"'/\\\u00e4\u21ad",
'uploader_id': 'ytdl'
}
}
]
}
def _real_extract(self, url):
playlist_id = self._match_id(url)
webpage = self._download_webpage(url, playlist_id)
data = self._parse_json(
self._search_regex(
r"(?s)PAGE\.mix\s*=\s*({.+?});\n", webpage, 'trax information'),
playlist_id)
session = str(random.randint(0, 1000000000))
mix_id = data['id']
track_count = data['tracks_count']
duration = data['duration']
avg_song_duration = float(duration) / track_count
# duration is sometimes negative, use predefined avg duration
if avg_song_duration <= 0:
avg_song_duration = 300
first_url = 'http://8tracks.com/sets/%s/play?player=sm&mix_id=%s&format=jsonh' % (session, mix_id)
next_url = first_url
entries = []
for i in range(track_count):
api_json = None
download_tries = 0
while api_json is None:
try:
api_json = self._download_webpage(
next_url, playlist_id,
note='Downloading song information %d/%d' % (i + 1, track_count),
errnote='Failed to download song information')
except ExtractorError:
if download_tries > 3:
raise
else:
download_tries += 1
self._sleep(avg_song_duration, playlist_id)
api_data = json.loads(api_json)
track_data = api_data['set']['track']
info = {
'id': compat_str(track_data['id']),
'url': track_data['track_file_stream_url'],
'title': track_data['performer'] + ' - ' + track_data['name'],
'raw_title': track_data['name'],
'uploader_id': data['user']['login'],
'ext': 'm4a',
}
entries.append(info)
next_url = 'http://8tracks.com/sets/%s/next?player=sm&mix_id=%s&format=jsonh&track_id=%s' % (
session, mix_id, track_data['id'])
return {
'_type': 'playlist',
'entries': entries,
'id': compat_str(mix_id),
'display_id': playlist_id,
'title': data.get('name'),
'description': data.get('description'),
}
| unlicense |
skirsdeda/django | django/contrib/sitemaps/tests/test_https.py | 21 | 3805 | from __future__ import unicode_literals
from datetime import date
import warnings
from django.test import override_settings
from django.utils.deprecation import RemovedInDjango20Warning
from .base import SitemapTestsBase
@override_settings(ROOT_URLCONF='django.contrib.sitemaps.tests.urls.https')
class HTTPSSitemapTests(SitemapTestsBase):
protocol = 'https'
def test_secure_sitemap_index(self):
"A secure sitemap index can be rendered"
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=RemovedInDjango20Warning)
# The URL for views.sitemap in tests/urls/https.py has been updated
# with a name but since reversing by Python path is tried first
# before reversing by name and works since we're giving
# name='django.contrib.sitemaps.views.sitemap', we need to silence
# the erroneous warning until reversing by dotted path is removed.
# The test will work without modification when it's removed.
response = self.client.get('/secure/index.xml')
expected_content = """<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<sitemap><loc>%s/secure/sitemap-simple.xml</loc></sitemap>
</sitemapindex>
""" % self.base_url
self.assertXMLEqual(response.content.decode('utf-8'), expected_content)
def test_secure_sitemap_section(self):
"A secure sitemap section can be rendered"
response = self.client.get('/secure/sitemap-simple.xml')
expected_content = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url><loc>%s/location/</loc><lastmod>%s</lastmod><changefreq>never</changefreq><priority>0.5</priority></url>
</urlset>
""" % (self.base_url, date.today())
self.assertXMLEqual(response.content.decode('utf-8'), expected_content)
@override_settings(SECURE_PROXY_SSL_HEADER=False)
class HTTPSDetectionSitemapTests(SitemapTestsBase):
extra = {'wsgi.url_scheme': 'https'}
def test_sitemap_index_with_https_request(self):
"A sitemap index requested in HTTPS is rendered with HTTPS links"
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=RemovedInDjango20Warning)
# The URL for views.sitemap in tests/urls/https.py has been updated
# with a name but since reversing by Python path is tried first
# before reversing by name and works since we're giving
# name='django.contrib.sitemaps.views.sitemap', we need to silence
# the erroneous warning until reversing by dotted path is removed.
# The test will work without modification when it's removed.
response = self.client.get('/simple/index.xml', **self.extra)
expected_content = """<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<sitemap><loc>%s/simple/sitemap-simple.xml</loc></sitemap>
</sitemapindex>
""" % self.base_url.replace('http://', 'https://')
self.assertXMLEqual(response.content.decode('utf-8'), expected_content)
def test_sitemap_section_with_https_request(self):
"A sitemap section requested in HTTPS is rendered with HTTPS links"
response = self.client.get('/simple/sitemap-simple.xml', **self.extra)
expected_content = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url><loc>%s/location/</loc><lastmod>%s</lastmod><changefreq>never</changefreq><priority>0.5</priority></url>
</urlset>
""" % (self.base_url.replace('http://', 'https://'), date.today())
self.assertXMLEqual(response.content.decode('utf-8'), expected_content)
| bsd-3-clause |
technologiescollege/Blockly-rduino-communication | scripts_XP/Lib/idlelib/CallTips.py | 97 | 5932 | """CallTips.py - An IDLE Extension to Jog Your Memory
Call Tips are floating windows which display function, class, and method
parameter and docstring information when you type an opening parenthesis, and
which disappear when you type a closing parenthesis.
"""
import __main__
import inspect
import re
import sys
import textwrap
import types
from idlelib import CallTipWindow
from idlelib.HyperParser import HyperParser
class CallTips:
menudefs = [
('edit', [
("Show call tip", "<<force-open-calltip>>"),
])
]
def __init__(self, editwin=None):
if editwin is None: # subprocess and test
self.editwin = None
else:
self.editwin = editwin
self.text = editwin.text
self.active_calltip = None
self._calltip_window = self._make_tk_calltip_window
def close(self):
self._calltip_window = None
def _make_tk_calltip_window(self):
# See __init__ for usage
return CallTipWindow.CallTip(self.text)
def _remove_calltip_window(self, event=None):
if self.active_calltip:
self.active_calltip.hidetip()
self.active_calltip = None
def force_open_calltip_event(self, event):
"The user selected the menu entry or hotkey, open the tip."
self.open_calltip(True)
def try_open_calltip_event(self, event):
"""Happens when it would be nice to open a CallTip, but not really
necessary, for example after an opening bracket, so function calls
won't be made.
"""
self.open_calltip(False)
def refresh_calltip_event(self, event):
if self.active_calltip and self.active_calltip.is_active():
self.open_calltip(False)
def open_calltip(self, evalfuncs):
self._remove_calltip_window()
hp = HyperParser(self.editwin, "insert")
sur_paren = hp.get_surrounding_brackets('(')
if not sur_paren:
return
hp.set_index(sur_paren[0])
expression = hp.get_expression()
if not expression:
return
if not evalfuncs and (expression.find('(') != -1):
return
argspec = self.fetch_tip(expression)
if not argspec:
return
self.active_calltip = self._calltip_window()
self.active_calltip.showtip(argspec, sur_paren[0], sur_paren[1])
def fetch_tip(self, expression):
"""Return the argument list and docstring of a function or class.
If there is a Python subprocess, get the calltip there. Otherwise,
either this fetch_tip() is running in the subprocess or it was
called in an IDLE running without the subprocess.
The subprocess environment is that of the most recently run script. If
two unrelated modules are being edited some calltips in the current
module may be inoperative if the module was not the last to run.
To find methods, fetch_tip must be fed a fully qualified name.
"""
try:
rpcclt = self.editwin.flist.pyshell.interp.rpcclt
except AttributeError:
rpcclt = None
if rpcclt:
return rpcclt.remotecall("exec", "get_the_calltip",
(expression,), {})
else:
return get_argspec(get_entity(expression))
def get_entity(expression):
"""Return the object corresponding to expression evaluated
in a namespace spanning sys.modules and __main.dict__.
"""
if expression:
namespace = sys.modules.copy()
namespace.update(__main__.__dict__)
try:
return eval(expression, namespace)
except BaseException:
# An uncaught exception closes idle, and eval can raise any
# exception, especially if user classes are involved.
return None
# The following are used in get_argspec and some in tests
_MAX_COLS = 85
_MAX_LINES = 5 # enough for bytes
_INDENT = ' '*4 # for wrapped signatures
_first_param = re.compile('(?<=\()\w*\,?\s*')
_default_callable_argspec = "See source or doc"
def get_argspec(ob):
'''Return a string describing the signature of a callable object, or ''.
For Python-coded functions and methods, the first line is introspected.
Delete 'self' parameter for classes (.__init__) and bound methods.
The next lines are the first lines of the doc string up to the first
empty line or _MAX_LINES. For builtins, this typically includes
the arguments in addition to the return value.
'''
argspec = ""
try:
ob_call = ob.__call__
except BaseException:
return argspec
if isinstance(ob, type):
fob = ob.__init__
elif isinstance(ob_call, types.MethodType):
fob = ob_call
else:
fob = ob
if isinstance(fob, (types.FunctionType, types.MethodType)):
argspec = inspect.formatargspec(*inspect.getfullargspec(fob))
if (isinstance(ob, (type, types.MethodType)) or
isinstance(ob_call, types.MethodType)):
argspec = _first_param.sub("", argspec)
lines = (textwrap.wrap(argspec, _MAX_COLS, subsequent_indent=_INDENT)
if len(argspec) > _MAX_COLS else [argspec] if argspec else [])
if isinstance(ob_call, types.MethodType):
doc = ob_call.__doc__
else:
doc = getattr(ob, "__doc__", "")
if doc:
for line in doc.split('\n', _MAX_LINES)[:_MAX_LINES]:
line = line.strip()
if not line:
break
if len(line) > _MAX_COLS:
line = line[: _MAX_COLS - 3] + '...'
lines.append(line)
argspec = '\n'.join(lines)
if not argspec:
argspec = _default_callable_argspec
return argspec
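# Rough sketch of the expected result (illustrative, not from the original file):
# for a plain function
#
#     def add(a, b=1):
#         "Return a plus b."
#
# get_argspec(add) would produce "(a, b=1)" followed on the next line by the first
# docstring line, truncated according to _MAX_COLS and _MAX_LINES.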
if __name__ == '__main__':
from unittest import main
main('idlelib.idle_test.test_calltips', verbosity=2)
| gpl-3.0 |
wallrazer/graphite-web | webapp/graphite/account/views.py | 31 | 2176 | """Copyright 2008 Orbitz WorldWide
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License."""
from django.shortcuts import render_to_response
from django.http import HttpResponseRedirect
from django.contrib.auth import authenticate, login, logout
from graphite.util import getProfile
from graphite.logger import log
from graphite.account.models import Profile
def loginView(request):
username = request.POST.get('username')
password = request.POST.get('password')
if request.method == 'GET':
nextPage = request.GET.get('nextPage','/')
else:
nextPage = request.POST.get('nextPage','/')
if username and password:
user = authenticate(username=username,password=password)
if user is None:
return render_to_response("login.html",{'authenticationFailed' : True, 'nextPage' : nextPage})
elif not user.is_active:
return render_to_response("login.html",{'accountDisabled' : True, 'nextPage' : nextPage})
else:
login(request,user)
return HttpResponseRedirect(nextPage)
else:
return render_to_response("login.html",{'nextPage' : nextPage})
def logoutView(request):
nextPage = request.GET.get('nextPage','/')
logout(request)
return HttpResponseRedirect(nextPage)
def editProfile(request):
if not request.user.is_authenticated():
return HttpResponseRedirect('../..')
context = { 'profile' : getProfile(request) }
return render_to_response("editProfile.html",context)
def updateProfile(request):
profile = getProfile(request,allowDefault=False)
if profile:
profile.advancedUI = request.POST.get('advancedUI','off') == 'on'
profile.save()
nextPage = request.POST.get('nextPage','/')
return HttpResponseRedirect(nextPage)
| apache-2.0 |
mano3m/CouchPotatoServer | libs/sqlalchemy/dialects/mysql/oursql.py | 18 | 9764 | # mysql/oursql.py
# Copyright (C) 2005-2013 the SQLAlchemy authors and contributors <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""Support for the MySQL database via the oursql adapter.
OurSQL is available at:
http://packages.python.org/oursql/
Connecting
-----------
Connect string format::
mysql+oursql://<user>:<password>@<host>[:<port>]/<dbname>
Unicode
-------
oursql defaults to using ``utf8`` as the connection charset, but other
encodings may be used instead. Like the MySQL-Python driver, unicode support
can be completely disabled::
# oursql sets the connection charset to utf8 automatically; all strings come
# back as utf8 str
create_engine('mysql+oursql:///mydb?use_unicode=0')
To not automatically use ``utf8`` and instead use whatever the connection
defaults to, there is a separate parameter::
# use the default connection charset; all strings come back as unicode
create_engine('mysql+oursql:///mydb?default_charset=1')
# use latin1 as the connection charset; all strings come back as unicode
create_engine('mysql+oursql:///mydb?charset=latin1')
"""
import re
from sqlalchemy.dialects.mysql.base import (BIT, MySQLDialect, MySQLExecutionContext,
MySQLCompiler, MySQLIdentifierPreparer)
from sqlalchemy.engine import base as engine_base, default
from sqlalchemy.sql import operators as sql_operators
from sqlalchemy import exc, log, schema, sql, types as sqltypes, util
from sqlalchemy import processors
class _oursqlBIT(BIT):
def result_processor(self, dialect, coltype):
"""oursql already converts mysql bits, so."""
return None
class MySQLExecutionContext_oursql(MySQLExecutionContext):
@property
def plain_query(self):
return self.execution_options.get('_oursql_plain_query', False)
class MySQLDialect_oursql(MySQLDialect):
driver = 'oursql'
# Py2K
supports_unicode_binds = True
supports_unicode_statements = True
# end Py2K
supports_native_decimal = True
supports_sane_rowcount = True
supports_sane_multi_rowcount = True
execution_ctx_cls = MySQLExecutionContext_oursql
colspecs = util.update_copy(
MySQLDialect.colspecs,
{
sqltypes.Time: sqltypes.Time,
BIT: _oursqlBIT,
}
)
@classmethod
def dbapi(cls):
return __import__('oursql')
def do_execute(self, cursor, statement, parameters, context=None):
"""Provide an implementation of *cursor.execute(statement, parameters)*."""
if context and context.plain_query:
cursor.execute(statement, plain_query=True)
else:
cursor.execute(statement, parameters)
def do_begin(self, connection):
connection.cursor().execute('BEGIN', plain_query=True)
def _xa_query(self, connection, query, xid):
# Py2K
arg = connection.connection._escape_string(xid)
# end Py2K
# Py3K
# charset = self._connection_charset
# arg = connection.connection._escape_string(xid.encode(charset)).decode(charset)
arg = "'%s'" % arg
connection.execution_options(_oursql_plain_query=True).execute(query % arg)
# Because mysql is bad, these methods have to be
# reimplemented to use _PlainQuery. Basically, some queries
# refuse to return any data if they're run through
# the parameterized query API, or refuse to be parameterized
# in the first place.
def do_begin_twophase(self, connection, xid):
self._xa_query(connection, 'XA BEGIN %s', xid)
def do_prepare_twophase(self, connection, xid):
self._xa_query(connection, 'XA END %s', xid)
self._xa_query(connection, 'XA PREPARE %s', xid)
def do_rollback_twophase(self, connection, xid, is_prepared=True,
recover=False):
if not is_prepared:
self._xa_query(connection, 'XA END %s', xid)
self._xa_query(connection, 'XA ROLLBACK %s', xid)
def do_commit_twophase(self, connection, xid, is_prepared=True,
recover=False):
if not is_prepared:
self.do_prepare_twophase(connection, xid)
self._xa_query(connection, 'XA COMMIT %s', xid)
# Q: why didn't we need all these "plain_query" overrides earlier ?
# am i on a newer/older version of OurSQL ?
def has_table(self, connection, table_name, schema=None):
return MySQLDialect.has_table(self,
connection.connect().\
execution_options(_oursql_plain_query=True),
table_name, schema)
def get_table_options(self, connection, table_name, schema=None, **kw):
return MySQLDialect.get_table_options(self,
connection.connect().\
execution_options(_oursql_plain_query=True),
table_name,
schema = schema,
**kw
)
def get_columns(self, connection, table_name, schema=None, **kw):
return MySQLDialect.get_columns(self,
connection.connect().\
execution_options(_oursql_plain_query=True),
table_name,
schema=schema,
**kw
)
def get_view_names(self, connection, schema=None, **kw):
return MySQLDialect.get_view_names(self,
connection.connect().\
execution_options(_oursql_plain_query=True),
schema=schema,
**kw
)
def get_table_names(self, connection, schema=None, **kw):
return MySQLDialect.get_table_names(self,
connection.connect().\
execution_options(_oursql_plain_query=True),
schema
)
def get_schema_names(self, connection, **kw):
return MySQLDialect.get_schema_names(self,
connection.connect().\
execution_options(_oursql_plain_query=True),
**kw
)
def initialize(self, connection):
return MySQLDialect.initialize(
self,
connection.execution_options(_oursql_plain_query=True)
)
def _show_create_table(self, connection, table, charset=None,
full_name=None):
return MySQLDialect._show_create_table(self,
connection.contextual_connect(close_with_result=True).
execution_options(_oursql_plain_query=True),
table, charset, full_name)
def is_disconnect(self, e, connection, cursor):
if isinstance(e, self.dbapi.ProgrammingError):
return e.errno is None and 'cursor' not in e.args[1] and e.args[1].endswith('closed')
else:
return e.errno in (2006, 2013, 2014, 2045, 2055)
def create_connect_args(self, url):
opts = url.translate_connect_args(database='db', username='user',
password='passwd')
opts.update(url.query)
util.coerce_kw_type(opts, 'port', int)
util.coerce_kw_type(opts, 'compress', bool)
util.coerce_kw_type(opts, 'autoping', bool)
util.coerce_kw_type(opts, 'raise_on_warnings', bool)
util.coerce_kw_type(opts, 'default_charset', bool)
if opts.pop('default_charset', False):
opts['charset'] = None
else:
util.coerce_kw_type(opts, 'charset', str)
opts['use_unicode'] = opts.get('use_unicode', True)
util.coerce_kw_type(opts, 'use_unicode', bool)
# FOUND_ROWS must be set in CLIENT_FLAGS to enable
# supports_sane_rowcount.
opts.setdefault('found_rows', True)
ssl = {}
for key in ['ssl_ca', 'ssl_key', 'ssl_cert',
'ssl_capath', 'ssl_cipher']:
if key in opts:
ssl[key[4:]] = opts[key]
util.coerce_kw_type(ssl, key[4:], str)
del opts[key]
if ssl:
opts['ssl'] = ssl
return [[], opts]
def _get_server_version_info(self, connection):
dbapi_con = connection.connection
version = []
r = re.compile('[.\-]')
for n in r.split(dbapi_con.server_info):
try:
version.append(int(n))
except ValueError:
version.append(n)
return tuple(version)
def _extract_error_code(self, exception):
return exception.errno
def _detect_charset(self, connection):
"""Sniff out the character set in use for connection results."""
return connection.connection.charset
def _compat_fetchall(self, rp, charset=None):
"""oursql isn't super-broken like MySQLdb, yaaay."""
return rp.fetchall()
def _compat_fetchone(self, rp, charset=None):
"""oursql isn't super-broken like MySQLdb, yaaay."""
return rp.fetchone()
def _compat_first(self, rp, charset=None):
return rp.first()
dialect = MySQLDialect_oursql
| gpl-3.0 |
solidgoldbomb/letsencrypt | acme/acme/jose/util.py | 13 | 7092 | """JOSE utilities."""
import collections
from cryptography.hazmat.primitives.asymmetric import rsa
import OpenSSL
import six
class abstractclassmethod(classmethod):
# pylint: disable=invalid-name,too-few-public-methods
"""Descriptor for an abstract classmethod.
It augments the :mod:`abc` framework with an abstract
classmethod. This is implemented as :class:`abc.abstractclassmethod`
in the standard Python library starting with version 3.2.
This particular implementation, allegedly based on Python 3.3 source
code, is stolen from
http://stackoverflow.com/questions/11217878/python-2-7-combine-abc-abstractmethod-and-classmethod.
"""
__isabstractmethod__ = True
def __init__(self, target):
target.__isabstractmethod__ = True
super(abstractclassmethod, self).__init__(target)
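# Illustrative usage sketch (not part of the original module); the class and
# method names below are hypothetical:
#
#   import abc
#
#   class Serializer(six.with_metaclass(abc.ABCMeta, object)):
#       @abstractclassmethod
#       def from_json(cls, data):
#           """Subclasses must provide this classmethod."""
#
# Instantiating a subclass that does not override from_json raises TypeError,
# just as with abc.abstractmethod.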
class ComparableX509(object): # pylint: disable=too-few-public-methods
"""Wrapper for OpenSSL.crypto.X509** objects that supports __eq__.
Wraps around:
- :class:`OpenSSL.crypto.X509`
- :class:`OpenSSL.crypto.X509Req`
"""
def __init__(self, wrapped):
assert isinstance(wrapped, OpenSSL.crypto.X509) or isinstance(
wrapped, OpenSSL.crypto.X509Req)
self._wrapped = wrapped
def __getattr__(self, name):
return getattr(self._wrapped, name)
def _dump(self, filetype=OpenSSL.crypto.FILETYPE_ASN1):
# pylint: disable=missing-docstring,protected-access
if isinstance(self._wrapped, OpenSSL.crypto.X509):
func = OpenSSL.crypto.dump_certificate
else: # assert in __init__ makes sure this is X509Req
func = OpenSSL.crypto.dump_certificate_request
return func(filetype, self._wrapped)
def __eq__(self, other):
if not isinstance(other, self.__class__):
return NotImplemented
return self._dump() == other._dump() # pylint: disable=protected-access
def __hash__(self):
return hash((self.__class__, self._dump()))
def __ne__(self, other):
return not self == other
def __repr__(self):
return '<{0}({1!r})>'.format(self.__class__.__name__, self._wrapped)
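# Illustrative sketch (not part of the original module); pem_data is a placeholder:
#
#   cert = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM, pem_data)
#   ComparableX509(cert) == ComparableX509(cert)   # True: equality compares DER dumps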
class ComparableKey(object): # pylint: disable=too-few-public-methods
"""Comparable wrapper for `cryptography` keys.
See https://github.com/pyca/cryptography/issues/2122.
"""
__hash__ = NotImplemented
def __init__(self, wrapped):
self._wrapped = wrapped
def __getattr__(self, name):
return getattr(self._wrapped, name)
def __eq__(self, other):
# pylint: disable=protected-access
if (not isinstance(other, self.__class__) or
self._wrapped.__class__ is not other._wrapped.__class__):
return NotImplemented
elif hasattr(self._wrapped, 'private_numbers'):
return self.private_numbers() == other.private_numbers()
elif hasattr(self._wrapped, 'public_numbers'):
return self.public_numbers() == other.public_numbers()
else:
return NotImplemented
def __ne__(self, other):
return not self == other
def __repr__(self):
return '<{0}({1!r})>'.format(self.__class__.__name__, self._wrapped)
def public_key(self):
"""Get wrapped public key."""
return self.__class__(self._wrapped.public_key())
class ComparableRSAKey(ComparableKey): # pylint: disable=too-few-public-methods
"""Wrapper for `cryptography` RSA keys.
Wraps around:
    - `cryptography.hazmat.primitives.asymmetric.rsa.RSAPrivateKey`
    - `cryptography.hazmat.primitives.asymmetric.rsa.RSAPublicKey`
"""
def __hash__(self):
# public_numbers() hasn't got stable hash!
# https://github.com/pyca/cryptography/issues/2143
if isinstance(self._wrapped, rsa.RSAPrivateKeyWithSerialization):
priv = self.private_numbers()
pub = priv.public_numbers
return hash((self.__class__, priv.p, priv.q, priv.dmp1,
priv.dmq1, priv.iqmp, pub.n, pub.e))
elif isinstance(self._wrapped, rsa.RSAPublicKeyWithSerialization):
pub = self.public_numbers()
return hash((self.__class__, pub.n, pub.e))
class ImmutableMap(collections.Mapping, collections.Hashable):
# pylint: disable=too-few-public-methods
"""Immutable key to value mapping with attribute access."""
__slots__ = ()
"""Must be overriden in subclasses."""
def __init__(self, **kwargs):
if set(kwargs) != set(self.__slots__):
raise TypeError(
'__init__() takes exactly the following arguments: {0} '
'({1} given)'.format(', '.join(self.__slots__),
', '.join(kwargs) if kwargs else 'none'))
for slot in self.__slots__:
object.__setattr__(self, slot, kwargs.pop(slot))
def update(self, **kwargs):
"""Return updated map."""
items = dict(self)
items.update(kwargs)
return type(self)(**items) # pylint: disable=star-args
def __getitem__(self, key):
try:
return getattr(self, key)
except AttributeError:
raise KeyError(key)
def __iter__(self):
return iter(self.__slots__)
def __len__(self):
return len(self.__slots__)
def __hash__(self):
return hash(tuple(getattr(self, slot) for slot in self.__slots__))
def __setattr__(self, name, value):
raise AttributeError("can't set attribute")
def __repr__(self):
return '{0}({1})'.format(self.__class__.__name__, ', '.join(
'{0}={1!r}'.format(key, value)
for key, value in six.iteritems(self)))
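# Usage sketch (illustrative only; Point is a hypothetical subclass):
#
#   class Point(ImmutableMap):
#       __slots__ = ('x', 'y')
#
#   p = Point(x=1, y=2)
#   p.x             # 1, attribute access
#   p['y']          # 2, mapping access
#   p.update(x=5)   # returns a new Point(x=5, y=2); p itself is unchanged
#   p.x = 3         # raises AttributeError("can't set attribute")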
class frozendict(collections.Mapping, collections.Hashable):
# pylint: disable=invalid-name,too-few-public-methods
"""Frozen dictionary."""
__slots__ = ('_items', '_keys')
def __init__(self, *args, **kwargs):
if kwargs and not args:
items = dict(kwargs)
elif len(args) == 1 and isinstance(args[0], collections.Mapping):
items = args[0]
else:
raise TypeError()
# TODO: support generators/iterators
object.__setattr__(self, '_items', items)
object.__setattr__(self, '_keys', tuple(sorted(six.iterkeys(items))))
def __getitem__(self, key):
return self._items[key]
def __iter__(self):
return iter(self._keys)
def __len__(self):
return len(self._items)
def _sorted_items(self):
return tuple((key, self[key]) for key in self._keys)
def __hash__(self):
return hash(self._sorted_items())
def __getattr__(self, name):
try:
return self._items[name]
except KeyError:
raise AttributeError(name)
def __setattr__(self, name, value):
raise AttributeError("can't set attribute")
def __repr__(self):
return 'frozendict({0})'.format(', '.join('{0}={1!r}'.format(
key, value) for key, value in self._sorted_items()))
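# Usage sketch (illustrative only):
#
#   d = frozendict(a=1, b=2)
#   d['a']     # 1, mapping access
#   d.b        # 2, attribute access falls back to the mapping
#   hash(d)    # stable hash, so frozendict values can be used as dict keys
#   d['a'] = 3 # raises TypeError: item assignment is not supported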
| apache-2.0 |
pgmillon/ansible | test/units/modules/network/f5/test_bigip_monitor_external.py | 16 | 3552 | # -*- coding: utf-8 -*-
#
# Copyright: (c) 2017, F5 Networks Inc.
# GNU General Public License v3.0 (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import json
import pytest
import sys
if sys.version_info < (2, 7):
pytestmark = pytest.mark.skip("F5 Ansible modules require Python >= 2.7")
from ansible.module_utils.basic import AnsibleModule
try:
from library.modules.bigip_monitor_external import ApiParameters
from library.modules.bigip_monitor_external import ModuleParameters
from library.modules.bigip_monitor_external import ModuleManager
from library.modules.bigip_monitor_external import ArgumentSpec
# In Ansible 2.8, Ansible changed import paths.
from test.units.compat import unittest
from test.units.compat.mock import Mock
from test.units.compat.mock import patch
from test.units.modules.utils import set_module_args
except ImportError:
from ansible.modules.network.f5.bigip_monitor_external import ApiParameters
from ansible.modules.network.f5.bigip_monitor_external import ModuleParameters
from ansible.modules.network.f5.bigip_monitor_external import ModuleManager
from ansible.modules.network.f5.bigip_monitor_external import ArgumentSpec
# Ansible 2.8 imports
from units.compat import unittest
from units.compat.mock import Mock
from units.compat.mock import patch
from units.modules.utils import set_module_args
fixture_path = os.path.join(os.path.dirname(__file__), 'fixtures')
fixture_data = {}
def load_fixture(name):
path = os.path.join(fixture_path, name)
if path in fixture_data:
return fixture_data[path]
with open(path) as f:
data = f.read()
try:
data = json.loads(data)
except Exception:
pass
fixture_data[path] = data
return data
class TestParameters(unittest.TestCase):
def test_module_parameters(self):
args = dict(
name='foo',
parent='parent',
ip='10.10.10.10',
port=80,
interval=20,
timeout=30,
partition='Common'
)
p = ModuleParameters(params=args)
assert p.name == 'foo'
assert p.parent == '/Common/parent'
assert p.ip == '10.10.10.10'
assert p.type == 'external'
assert p.port == 80
assert p.destination == '10.10.10.10:80'
assert p.interval == 20
assert p.timeout == 30
class TestManager(unittest.TestCase):
def setUp(self):
self.spec = ArgumentSpec()
def test_create_monitor(self, *args):
set_module_args(dict(
name='foo',
parent='parent',
ip='10.10.10.10',
port=80,
interval=20,
timeout=30,
partition='Common',
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode
)
# Override methods in the specific type of manager
mm = ModuleManager(module=module)
mm.exists = Mock(side_effect=[False, True])
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
assert results['parent'] == '/Common/parent'
| gpl-3.0 |
mvaled/OpenUpgrade | addons/account_followup/account_followup.py | 93 | 28777 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp import api
from openerp.osv import fields, osv
from lxml import etree
from openerp.tools.translate import _
class followup(osv.osv):
_name = 'account_followup.followup'
_description = 'Account Follow-up'
_rec_name = 'name'
_columns = {
'followup_line': fields.one2many('account_followup.followup.line', 'followup_id', 'Follow-up', copy=True),
'company_id': fields.many2one('res.company', 'Company', required=True),
'name': fields.related('company_id', 'name', string = "Name", readonly=True, type="char"),
}
_defaults = {
'company_id': lambda s, cr, uid, c: s.pool.get('res.company')._company_default_get(cr, uid, 'account_followup.followup', context=c),
}
_sql_constraints = [('company_uniq', 'unique(company_id)', 'Only one follow-up per company is allowed')]
class followup_line(osv.osv):
def _get_default_template(self, cr, uid, ids, context=None):
try:
return self.pool.get('ir.model.data').get_object_reference(cr, uid, 'account_followup', 'email_template_account_followup_default')[1]
except ValueError:
return False
_name = 'account_followup.followup.line'
_description = 'Follow-up Criteria'
_columns = {
'name': fields.char('Follow-Up Action', required=True),
'sequence': fields.integer('Sequence', help="Gives the sequence order when displaying a list of follow-up lines."),
'delay': fields.integer('Due Days', help="The number of days after the due date of the invoice to wait before sending the reminder. Could be negative if you want to send a polite alert beforehand.", required=True),
'followup_id': fields.many2one('account_followup.followup', 'Follow Ups', required=True, ondelete="cascade"),
'description': fields.text('Printed Message', translate=True),
'send_email':fields.boolean('Send an Email', help="When processing, it will send an email"),
'send_letter':fields.boolean('Send a Letter', help="When processing, it will print a letter"),
'manual_action':fields.boolean('Manual Action', help="When processing, it will set the manual action to be taken for that customer. "),
'manual_action_note':fields.text('Action To Do', placeholder="e.g. Give a phone call, check with others , ..."),
'manual_action_responsible_id':fields.many2one('res.users', 'Assign a Responsible', ondelete='set null'),
'email_template_id':fields.many2one('email.template', 'Email Template', ondelete='set null'),
}
_order = 'delay'
_sql_constraints = [('days_uniq', 'unique(followup_id, delay)', 'Days of the follow-up levels must be different')]
_defaults = {
'send_email': True,
'send_letter': True,
'manual_action':False,
'description': """
Dear %(partner_name)s,
Exception made if there was a mistake of ours, it seems that the following amount stays unpaid. Please, take appropriate measures in order to carry out this payment in the next 8 days.
Would your payment have been carried out after this mail was sent, please ignore this message. Do not hesitate to contact our accounting department.
Best Regards,
""",
'email_template_id': _get_default_template,
}
def _check_description(self, cr, uid, ids, context=None):
for line in self.browse(cr, uid, ids, context=context):
if line.description:
try:
line.description % {'partner_name': '', 'date':'', 'user_signature': '', 'company_name': ''}
except:
return False
return True
_constraints = [
(_check_description, 'Your description is invalid, use the right legend or %% if you want to use the percent character.', ['description']),
]
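    # Illustrative note (assumed example, not from the original source): the
    # legend checked above means a valid printed message may interpolate
    # %(partner_name)s, %(date)s, %(user_signature)s and %(company_name)s,
    # and a literal percent sign has to be written as %%.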
class account_move_line(osv.osv):
def _get_result(self, cr, uid, ids, name, arg, context=None):
res = {}
for aml in self.browse(cr, uid, ids, context=context):
res[aml.id] = aml.debit - aml.credit
return res
_inherit = 'account.move.line'
_columns = {
'followup_line_id': fields.many2one('account_followup.followup.line', 'Follow-up Level',
ondelete='restrict'), #restrict deletion of the followup line
'followup_date': fields.date('Latest Follow-up', select=True),
'result':fields.function(_get_result, type='float', method=True,
string="Balance") #'balance' field is not the same
}
class res_partner(osv.osv):
def fields_view_get(self, cr, uid, view_id=None, view_type=None, context=None, toolbar=False, submenu=False):
res = super(res_partner, self).fields_view_get(cr, uid, view_id=view_id, view_type=view_type, context=context,
toolbar=toolbar, submenu=submenu)
context = context or {}
if view_type == 'form' and context.get('Followupfirst'):
doc = etree.XML(res['arch'], parser=None, base_url=None)
first_node = doc.xpath("//page[@name='followup_tab']")
root = first_node[0].getparent()
root.insert(0, first_node[0])
res['arch'] = etree.tostring(doc, encoding="utf-8")
return res
def _get_latest(self, cr, uid, ids, names, arg, context=None, company_id=None):
res={}
if company_id == None:
company = self.pool.get('res.users').browse(cr, uid, uid, context=context).company_id
else:
company = self.pool.get('res.company').browse(cr, uid, company_id, context=context)
for partner in self.browse(cr, uid, ids, context=context):
amls = partner.unreconciled_aml_ids
latest_date = False
latest_level = False
latest_days = False
latest_level_without_lit = False
latest_days_without_lit = False
for aml in amls:
if (aml.company_id == company) and (aml.followup_line_id != False) and (not latest_days or latest_days < aml.followup_line_id.delay):
latest_days = aml.followup_line_id.delay
latest_level = aml.followup_line_id.id
if (aml.company_id == company) and (not latest_date or latest_date < aml.followup_date):
latest_date = aml.followup_date
if (aml.company_id == company) and (aml.blocked == False) and (aml.followup_line_id != False and
(not latest_days_without_lit or latest_days_without_lit < aml.followup_line_id.delay)):
latest_days_without_lit = aml.followup_line_id.delay
latest_level_without_lit = aml.followup_line_id.id
res[partner.id] = {'latest_followup_date': latest_date,
'latest_followup_level_id': latest_level,
'latest_followup_level_id_without_lit': latest_level_without_lit}
return res
@api.cr_uid_ids_context
def do_partner_manual_action(self, cr, uid, partner_ids, context=None):
#partner_ids -> res.partner
for partner in self.browse(cr, uid, partner_ids, context=context):
#Check action: check if the action was not empty, if not add
action_text= ""
if partner.payment_next_action:
action_text = (partner.payment_next_action or '') + "\n" + (partner.latest_followup_level_id_without_lit.manual_action_note or '')
else:
action_text = partner.latest_followup_level_id_without_lit.manual_action_note or ''
#Check date: only change when it did not exist already
action_date = partner.payment_next_action_date or fields.date.context_today(self, cr, uid, context=context)
            # Check responsible: if the partner does not already have a responsible, take it from the follow-up level
responsible_id = False
if partner.payment_responsible_id:
responsible_id = partner.payment_responsible_id.id
else:
p = partner.latest_followup_level_id_without_lit.manual_action_responsible_id
responsible_id = p and p.id or False
self.write(cr, uid, [partner.id], {'payment_next_action_date': action_date,
'payment_next_action': action_text,
'payment_responsible_id': responsible_id})
def do_partner_print(self, cr, uid, wizard_partner_ids, data, context=None):
#wizard_partner_ids are ids from special view, not from res.partner
if not wizard_partner_ids:
return {}
data['partner_ids'] = wizard_partner_ids
datas = {
'ids': wizard_partner_ids,
'model': 'account_followup.followup',
'form': data
}
return self.pool['report'].get_action(cr, uid, [], 'account_followup.report_followup', data=datas, context=context)
@api.cr_uid_ids_context
def do_partner_mail(self, cr, uid, partner_ids, context=None):
if context is None:
context = {}
ctx = context.copy()
ctx['followup'] = True
#partner_ids are res.partner ids
# If not defined by latest follow-up level, it will be the default template if it can find it
mtp = self.pool.get('email.template')
unknown_mails = 0
for partner in self.browse(cr, uid, partner_ids, context=ctx):
if partner.email and partner.email.strip():
level = partner.latest_followup_level_id_without_lit
if level and level.send_email and level.email_template_id and level.email_template_id.id:
mtp.send_mail(cr, uid, level.email_template_id.id, partner.id, context=ctx)
else:
mail_template_id = self.pool.get('ir.model.data').get_object_reference(cr, uid,
'account_followup', 'email_template_account_followup_default')
mtp.send_mail(cr, uid, mail_template_id[1], partner.id, context=ctx)
else:
unknown_mails = unknown_mails + 1
action_text = _("Email not sent because of email address of partner not filled in")
if partner.payment_next_action_date:
payment_action_date = min(fields.date.context_today(self, cr, uid, context=ctx), partner.payment_next_action_date)
else:
payment_action_date = fields.date.context_today(self, cr, uid, context=ctx)
if partner.payment_next_action:
payment_next_action = partner.payment_next_action + " \n " + action_text
else:
payment_next_action = action_text
self.write(cr, uid, [partner.id], {'payment_next_action_date': payment_action_date,
'payment_next_action': payment_next_action}, context=ctx)
return unknown_mails
def get_followup_table_html(self, cr, uid, ids, context=None):
""" Build the html tables to be included in emails send to partners,
when reminding them their overdue invoices.
:param ids: [id] of the partner for whom we are building the tables
:rtype: string
"""
from report import account_followup_print
assert len(ids) == 1
if context is None:
context = {}
partner = self.browse(cr, uid, ids[0], context=context)
#copy the context to not change global context. Overwrite it because _() looks for the lang in local variable 'context'.
#Set the language to use = the partner language
context = dict(context, lang=partner.lang)
followup_table = ''
if partner.unreconciled_aml_ids:
company = self.pool.get('res.users').browse(cr, uid, uid, context=context).company_id
current_date = fields.date.context_today(self, cr, uid, context=context)
rml_parse = account_followup_print.report_rappel(cr, uid, "followup_rml_parser")
final_res = rml_parse._lines_get_with_partner(partner, company.id)
for currency_dict in final_res:
currency = currency_dict.get('line', [{'currency_id': company.currency_id}])[0]['currency_id']
followup_table += '''
<table border="2" width=100%%>
<tr>
<td>''' + _("Invoice Date") + '''</td>
<td>''' + _("Description") + '''</td>
<td>''' + _("Reference") + '''</td>
<td>''' + _("Due Date") + '''</td>
<td>''' + _("Amount") + " (%s)" % (currency.symbol) + '''</td>
<td>''' + _("Lit.") + '''</td>
</tr>
'''
total = 0
for aml in currency_dict['line']:
block = aml['blocked'] and 'X' or ' '
total += aml['balance']
strbegin = "<TD>"
strend = "</TD>"
date = aml['date_maturity'] or aml['date']
if date <= current_date and aml['balance'] > 0:
strbegin = "<TD><B>"
strend = "</B></TD>"
followup_table +="<TR>" + strbegin + str(aml['date']) + strend + strbegin + aml['name'] + strend + strbegin + (aml['ref'] or '') + strend + strbegin + str(date) + strend + strbegin + str(aml['balance']) + strend + strbegin + block + strend + "</TR>"
total = reduce(lambda x, y: x+y['balance'], currency_dict['line'], 0.00)
total = rml_parse.formatLang(total, dp='Account', currency_obj=currency)
followup_table += '''<tr> </tr>
</table>
<center>''' + _("Amount due") + ''' : %s </center>''' % (total)
return followup_table
def write(self, cr, uid, ids, vals, context=None):
if vals.get("payment_responsible_id", False):
for part in self.browse(cr, uid, ids, context=context):
if part.payment_responsible_id <> vals["payment_responsible_id"]:
#Find partner_id of user put as responsible
responsible_partner_id = self.pool.get("res.users").browse(cr, uid, vals['payment_responsible_id'], context=context).partner_id.id
self.pool.get("mail.thread").message_post(cr, uid, 0,
body = _("You became responsible to do the next action for the payment follow-up of") + " <b><a href='#id=" + str(part.id) + "&view_type=form&model=res.partner'> " + part.name + " </a></b>",
type = 'comment',
subtype = "mail.mt_comment", context = context,
model = 'res.partner', res_id = part.id,
partner_ids = [responsible_partner_id])
return super(res_partner, self).write(cr, uid, ids, vals, context=context)
def action_done(self, cr, uid, ids, context=None):
return self.write(cr, uid, ids, {'payment_next_action_date': False, 'payment_next_action':'', 'payment_responsible_id': False}, context=context)
def do_button_print(self, cr, uid, ids, context=None):
assert(len(ids) == 1)
company_id = self.pool.get('res.users').browse(cr, uid, uid, context=context).company_id.id
#search if the partner has accounting entries to print. If not, it may not be present in the
#psql view the report is based on, so we need to stop the user here.
if not self.pool.get('account.move.line').search(cr, uid, [
('partner_id', '=', ids[0]),
('account_id.type', '=', 'receivable'),
('reconcile_id', '=', False),
('state', '!=', 'draft'),
('company_id', '=', company_id),
], context=context):
raise osv.except_osv(_('Error!'),_("The partner does not have any accounting entries to print in the overdue report for the current company."))
self.message_post(cr, uid, [ids[0]], body=_('Printed overdue payments report'), context=context)
#build the id of this partner in the psql view. Could be replaced by a search with [('company_id', '=', company_id),('partner_id', '=', ids[0])]
wizard_partner_ids = [ids[0] * 10000 + company_id]
followup_ids = self.pool.get('account_followup.followup').search(cr, uid, [('company_id', '=', company_id)], context=context)
if not followup_ids:
raise osv.except_osv(_('Error!'),_("There is no followup plan defined for the current company."))
data = {
'date': fields.date.today(),
'followup_id': followup_ids[0],
}
#call the print overdue report on this partner
return self.do_partner_print(cr, uid, wizard_partner_ids, data, context=context)
def _get_amounts_and_date(self, cr, uid, ids, name, arg, context=None):
'''
Function that computes values for the followup functional fields. Note that 'payment_amount_due'
is similar to 'credit' field on res.partner except it filters on user's company.
'''
res = {}
company = self.pool.get('res.users').browse(cr, uid, uid, context=context).company_id
current_date = fields.date.context_today(self, cr, uid, context=context)
for partner in self.browse(cr, uid, ids, context=context):
worst_due_date = False
amount_due = amount_overdue = 0.0
for aml in partner.unreconciled_aml_ids:
if (aml.company_id == company):
date_maturity = aml.date_maturity or aml.date
if not worst_due_date or date_maturity < worst_due_date:
worst_due_date = date_maturity
amount_due += aml.result
if (date_maturity <= current_date):
amount_overdue += aml.result
res[partner.id] = {'payment_amount_due': amount_due,
'payment_amount_overdue': amount_overdue,
'payment_earliest_due_date': worst_due_date}
return res
def _get_followup_overdue_query(self, cr, uid, args, overdue_only=False, context=None):
'''
This function is used to build the query and arguments to use when making a search on functional fields
* payment_amount_due
* payment_amount_overdue
Basically, the query is exactly the same except that for overdue there is an extra clause in the WHERE.
:param args: arguments given to the search in the usual domain notation (list of tuples)
:param overdue_only: option to add the extra argument to filter on overdue accounting entries or not
:returns: a tuple with
* the query to execute as first element
* the arguments for the execution of this query
:rtype: (string, [])
'''
company_id = self.pool.get('res.users').browse(cr, uid, uid, context=context).company_id.id
having_where_clause = ' AND '.join(map(lambda x: '(SUM(bal2) %s %%s)' % (x[1]), args))
having_values = [x[2] for x in args]
query = self.pool.get('account.move.line')._query_get(cr, uid, context=context)
overdue_only_str = overdue_only and 'AND date_maturity <= NOW()' or ''
return ('''SELECT pid AS partner_id, SUM(bal2) FROM
(SELECT CASE WHEN bal IS NOT NULL THEN bal
ELSE 0.0 END AS bal2, p.id as pid FROM
(SELECT (debit-credit) AS bal, partner_id
FROM account_move_line l
WHERE account_id IN
(SELECT id FROM account_account
WHERE type=\'receivable\' AND active)
''' + overdue_only_str + '''
AND reconcile_id IS NULL
AND company_id = %s
AND ''' + query + ''') AS l
RIGHT JOIN res_partner p
ON p.id = partner_id ) AS pl
GROUP BY pid HAVING ''' + having_where_clause, [company_id] + having_values)
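    # Illustrative note (assumed example, not from the original source): with a
    # domain such as args = [('payment_amount_overdue', '>', 0)] the helper
    # above produces "HAVING (SUM(bal2) > %s)" and the query is executed with
    # parameters [company_id, 0]; overdue_only simply adds the
    # "AND date_maturity <= NOW()" clause to the inner SELECT.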
def _payment_overdue_search(self, cr, uid, obj, name, args, context=None):
if not args:
return []
query, query_args = self._get_followup_overdue_query(cr, uid, args, overdue_only=True, context=context)
cr.execute(query, query_args)
res = cr.fetchall()
if not res:
return [('id','=','0')]
return [('id','in', [x[0] for x in res])]
def _payment_earliest_date_search(self, cr, uid, obj, name, args, context=None):
if not args:
return []
company_id = self.pool.get('res.users').browse(cr, uid, uid, context=context).company_id.id
having_where_clause = ' AND '.join(map(lambda x: '(MIN(l.date_maturity) %s %%s)' % (x[1]), args))
having_values = [x[2] for x in args]
query = self.pool.get('account.move.line')._query_get(cr, uid, context=context)
cr.execute('SELECT partner_id FROM account_move_line l '\
'WHERE account_id IN '\
'(SELECT id FROM account_account '\
'WHERE type=\'receivable\' AND active) '\
'AND l.company_id = %s '
'AND reconcile_id IS NULL '\
'AND '+query+' '\
'AND partner_id IS NOT NULL '\
'GROUP BY partner_id HAVING '+ having_where_clause,
[company_id] + having_values)
res = cr.fetchall()
if not res:
return [('id','=','0')]
return [('id','in', [x[0] for x in res])]
def _payment_due_search(self, cr, uid, obj, name, args, context=None):
if not args:
return []
query, query_args = self._get_followup_overdue_query(cr, uid, args, overdue_only=False, context=context)
cr.execute(query, query_args)
res = cr.fetchall()
if not res:
return [('id','=','0')]
return [('id','in', [x[0] for x in res])]
def _get_partners(self, cr, uid, ids, context=None):
        #this function searches for the partners linked to all account.move.line 'ids' that have been changed
partners = set()
for aml in self.browse(cr, uid, ids, context=context):
if aml.partner_id:
partners.add(aml.partner_id.id)
return list(partners)
_inherit = "res.partner"
_columns = {
'payment_responsible_id':fields.many2one('res.users', ondelete='set null', string='Follow-up Responsible',
help="Optionally you can assign a user to this field, which will make him responsible for the action.",
track_visibility="onchange", copy=False),
'payment_note':fields.text('Customer Payment Promise', help="Payment Note", track_visibility="onchange", copy=False),
'payment_next_action':fields.text('Next Action', copy=False,
help="This is the next action to be taken. It will automatically be set when the partner gets a follow-up level that requires a manual action. ",
track_visibility="onchange"),
'payment_next_action_date': fields.date('Next Action Date', copy=False,
help="This is when the manual follow-up is needed. "
"The date will be set to the current date when the partner "
"gets a follow-up level that requires a manual action. "
"Can be practical to set manually e.g. to see if he keeps "
"his promises."),
'unreconciled_aml_ids':fields.one2many('account.move.line', 'partner_id', domain=['&', ('reconcile_id', '=', False), '&',
('account_id.active','=', True), '&', ('account_id.type', '=', 'receivable'), ('state', '!=', 'draft')]),
'latest_followup_date':fields.function(_get_latest, method=True, type='date', string="Latest Follow-up Date",
help="Latest date that the follow-up level of the partner was changed",
store=False, multi="latest"),
'latest_followup_level_id':fields.function(_get_latest, method=True,
type='many2one', relation='account_followup.followup.line', string="Latest Follow-up Level",
help="The maximum follow-up level",
store={
'res.partner': (lambda self, cr, uid, ids, c: ids,[],10),
'account.move.line': (_get_partners, ['followup_line_id'], 10),
},
multi="latest"),
'latest_followup_level_id_without_lit':fields.function(_get_latest, method=True,
type='many2one', relation='account_followup.followup.line', string="Latest Follow-up Level without litigation",
help="The maximum follow-up level without taking into account the account move lines with litigation",
store={
'res.partner': (lambda self, cr, uid, ids, c: ids,[],10),
'account.move.line': (_get_partners, ['followup_line_id'], 10),
},
multi="latest"),
'payment_amount_due':fields.function(_get_amounts_and_date,
type='float', string="Amount Due",
store = False, multi="followup",
fnct_search=_payment_due_search),
'payment_amount_overdue':fields.function(_get_amounts_and_date,
type='float', string="Amount Overdue",
store = False, multi="followup",
fnct_search = _payment_overdue_search),
'payment_earliest_due_date':fields.function(_get_amounts_and_date,
type='date',
string = "Worst Due Date",
multi="followup",
fnct_search=_payment_earliest_date_search),
}
class account_config_settings(osv.TransientModel):
_name = 'account.config.settings'
_inherit = 'account.config.settings'
def open_followup_level_form(self, cr, uid, ids, context=None):
res_ids = self.pool.get('account_followup.followup').search(cr, uid, [], context=context)
return {
'type': 'ir.actions.act_window',
'name': 'Payment Follow-ups',
'res_model': 'account_followup.followup',
'res_id': res_ids and res_ids[0] or False,
'view_mode': 'form,tree',
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
pong3489/TEST_Mission | Lib/site-packages/scipy/stats/distributions.py | 53 | 207806 | # Functions to implement several important functions for
# various Continuous and Discrete Probability Distributions
#
# Author: Travis Oliphant 2002-2011 with contributions from
# SciPy Developers 2004-2011
#
import math
import warnings
from copy import copy
from scipy.misc import comb, derivative
from scipy import special
from scipy import optimize
from scipy import integrate
from scipy.special import gammaln as gamln
import inspect
from numpy import alltrue, where, arange, putmask, \
ravel, take, ones, sum, shape, product, repeat, reshape, \
zeros, floor, logical_and, log, sqrt, exp, arctanh, tan, sin, arcsin, \
arctan, tanh, ndarray, cos, cosh, sinh, newaxis, array, log1p, expm1
from numpy import atleast_1d, polyval, ceil, place, extract, \
any, argsort, argmax, vectorize, r_, asarray, nan, inf, pi, isinf, \
power, NINF, empty
import numpy
import numpy as np
import numpy.random as mtrand
from numpy import flatnonzero as nonzero
import vonmises_cython
def _moment(data, n, mu=None):
if mu is None:
mu = data.mean()
return ((data - mu)**n).mean()
def _moment_from_stats(n, mu, mu2, g1, g2, moment_func, args):
if (n==0):
return 1.0
elif (n==1):
if mu is None:
val = moment_func(1,*args)
else:
val = mu
elif (n==2):
if mu2 is None or mu is None:
val = moment_func(2,*args)
else:
val = mu2 + mu*mu
elif (n==3):
if g1 is None or mu2 is None or mu is None:
val = moment_func(3,*args)
else:
mu3 = g1*(mu2**1.5) # 3rd central moment
val = mu3+3*mu*mu2+mu**3 # 3rd non-central moment
elif (n==4):
if g1 is None or g2 is None or mu2 is None or mu is None:
val = moment_func(4,*args)
else:
mu4 = (g2+3.0)*(mu2**2.0) # 4th central moment
mu3 = g1*(mu2**1.5) # 3rd central moment
val = mu4+4*mu*mu3+6*mu*mu*mu2+mu**4
else:
val = moment_func(n, *args)
return val
def _skew(data):
data = np.ravel(data)
mu = data.mean()
m2 = ((data - mu)**2).mean()
m3 = ((data - mu)**3).mean()
return m3 / m2**1.5
def _kurtosis(data):
data = np.ravel(data)
mu = data.mean()
m2 = ((data - mu)**2).mean()
m4 = ((data - mu)**4).mean()
return m4 / m2**2 - 3
__all__ = [
'rv_continuous',
'ksone', 'kstwobign', 'norm', 'alpha', 'anglit', 'arcsine',
'beta', 'betaprime', 'bradford', 'burr', 'fisk', 'cauchy',
'chi', 'chi2', 'cosine', 'dgamma', 'dweibull', 'erlang',
'expon', 'exponweib', 'exponpow', 'fatiguelife', 'foldcauchy',
'f', 'foldnorm', 'frechet_r', 'weibull_min', 'frechet_l',
'weibull_max', 'genlogistic', 'genpareto', 'genexpon', 'genextreme',
'gamma', 'gengamma', 'genhalflogistic', 'gompertz', 'gumbel_r',
'gumbel_l', 'halfcauchy', 'halflogistic', 'halfnorm', 'hypsecant',
'gausshyper', 'invgamma', 'invnorm', 'invgauss', 'invweibull',
'johnsonsb', 'johnsonsu', 'laplace', 'levy', 'levy_l',
'levy_stable', 'logistic', 'loggamma', 'loglaplace', 'lognorm',
'gilbrat', 'maxwell', 'mielke', 'nakagami', 'ncx2', 'ncf', 't',
'nct', 'pareto', 'lomax', 'powerlaw', 'powerlognorm', 'powernorm',
'rdist', 'rayleigh', 'reciprocal', 'rice', 'recipinvgauss',
'semicircular', 'triang', 'truncexpon', 'truncnorm',
'tukeylambda', 'uniform', 'vonmises', 'wald', 'wrapcauchy',
'entropy', 'rv_discrete',
'binom', 'bernoulli', 'nbinom', 'geom', 'hypergeom', 'logser',
'poisson', 'planck', 'boltzmann', 'randint', 'zipf', 'dlaplace',
'skellam'
]
floatinfo = numpy.finfo(float)
errp = special.errprint
arr = asarray
gam = special.gamma
import types
from scipy.misc import doccer
all = alltrue
sgf = vectorize
try:
from new import instancemethod
except ImportError:
# Python 3
def instancemethod(func, obj, cls):
return types.MethodType(func, obj)
# These are the docstring parts used for substitution in specific
# distribution docstrings.
docheaders = {'methods':"""\nMethods\n-------\n""",
              'parameters':"""\nParameters\n----------\n""",
'notes':"""\nNotes\n-----\n""",
'examples':"""\nExamples\n--------\n"""}
_doc_rvs = \
"""rvs(%(shapes)s, loc=0, scale=1, size=1)
Random variates.
"""
_doc_pdf = \
"""pdf(x, %(shapes)s, loc=0, scale=1)
Probability density function.
"""
_doc_logpdf = \
"""logpdf(x, %(shapes)s, loc=0, scale=1)
Log of the probability density function.
"""
_doc_pmf = \
"""pmf(x, %(shapes)s, loc=0, scale=1)
Probability mass function.
"""
_doc_logpmf = \
"""logpmf(x, %(shapes)s, loc=0, scale=1)
Log of the probability mass function.
"""
_doc_cdf = \
"""cdf(x, %(shapes)s, loc=0, scale=1)
    Cumulative distribution function.
"""
_doc_logcdf = \
"""logcdf(x, %(shapes)s, loc=0, scale=1)
    Log of the cumulative distribution function.
"""
_doc_sf = \
"""sf(x, %(shapes)s, loc=0, scale=1)
Survival function (1-cdf --- sometimes more accurate).
"""
_doc_logsf = \
"""logsf(x, %(shapes)s, loc=0, scale=1)
Log of the survival function.
"""
_doc_ppf = \
"""ppf(q, %(shapes)s, loc=0, scale=1)
Percent point function (inverse of cdf --- percentiles).
"""
_doc_isf = \
"""isf(q, %(shapes)s, loc=0, scale=1)
Inverse survival function (inverse of sf).
"""
_doc_moment = \
"""moment(n, %(shapes)s, loc=0, scale=1)
Non-central moment of order n
"""
_doc_stats = \
"""stats(%(shapes)s, loc=0, scale=1, moments='mv')
Mean('m'), variance('v'), skew('s'), and/or kurtosis('k').
"""
_doc_entropy = \
"""entropy(%(shapes)s, loc=0, scale=1)
(Differential) entropy of the RV.
"""
_doc_fit = \
"""fit(data, %(shapes)s, loc=0, scale=1)
Parameter estimates for generic data.
"""
_doc_expect = \
"""expect(func, %(shapes)s, loc=0, scale=1, lb=None, ub=None, conditional=False, **kwds)
Expected value of a function (of one argument) with respect to the distribution.
"""
_doc_expect_discrete = \
"""expect(func, %(shapes)s, loc=0, lb=None, ub=None, conditional=False)
Expected value of a function (of one argument) with respect to the distribution.
"""
_doc_median = \
"""median(%(shapes)s, loc=0, scale=1)
Median of the distribution.
"""
_doc_mean = \
"""mean(%(shapes)s, loc=0, scale=1)
Mean of the distribution.
"""
_doc_var = \
"""var(%(shapes)s, loc=0, scale=1)
Variance of the distribution.
"""
_doc_std = \
"""std(%(shapes)s, loc=0, scale=1)
Standard deviation of the distribution.
"""
_doc_interval = \
"""interval(alpha, %(shapes)s, loc=0, scale=1)
Endpoints of the range that contains alpha percent of the distribution
"""
_doc_allmethods = ''.join([docheaders['methods'], _doc_rvs, _doc_pdf,
_doc_logpdf, _doc_cdf, _doc_logcdf, _doc_sf,
_doc_logsf, _doc_ppf, _doc_isf, _doc_moment,
_doc_stats, _doc_entropy, _doc_fit,
_doc_expect, _doc_median,
_doc_mean, _doc_var, _doc_std, _doc_interval])
# Note that the two lines for %(shapes) are searched for and replaced in
# rv_continuous and rv_discrete - update there if the exact string changes
_doc_default_callparams = \
"""
Parameters
----------
x : array-like
quantiles
q : array-like
lower or upper tail probability
%(shapes)s : array-like
shape parameters
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
size : int or tuple of ints, optional
shape of random variates (default computed from input arguments )
moments : str, optional
composed of letters ['mvsk'] specifying which moments to compute where
'm' = mean, 'v' = variance, 's' = (Fisher's) skew and
'k' = (Fisher's) kurtosis. (default='mv')
"""
_doc_default_longsummary = \
"""Continuous random variables are defined from a standard form and may
require some shape parameters to complete their specification. Any
optional keyword parameters can be passed to the methods of the RV
object as given below:
"""
_doc_default_frozen_note = \
"""
Alternatively, the object may be called (as a function) to fix the shape,
location, and scale parameters returning a "frozen" continuous RV object:
rv = %(name)s(%(shapes)s, loc=0, scale=1)
- Frozen RV object with the same methods but holding the given shape,
location, and scale fixed.
"""
_doc_default_example = \
"""Examples
--------
>>> import matplotlib.pyplot as plt
>>> numargs = %(name)s.numargs
>>> [ %(shapes)s ] = [0.9,] * numargs
>>> rv = %(name)s(%(shapes)s)
Display frozen pdf
>>> x = np.linspace(0, np.minimum(rv.dist.b, 3))
>>> h = plt.plot(x, rv.pdf(x))
Check accuracy of cdf and ppf
>>> prb = %(name)s.cdf(x, %(shapes)s)
>>> h = plt.semilogy(np.abs(x - %(name)s.ppf(prb, %(shapes)s)) + 1e-20)
Random number generation
>>> R = %(name)s.rvs(%(shapes)s, size=100)
"""
_doc_default = ''.join([_doc_default_longsummary,
_doc_allmethods,
_doc_default_callparams,
_doc_default_frozen_note,
_doc_default_example])
_doc_default_before_notes = ''.join([_doc_default_longsummary,
_doc_allmethods,
_doc_default_callparams,
_doc_default_frozen_note])
docdict = {'rvs':_doc_rvs,
'pdf':_doc_pdf,
'logpdf':_doc_logpdf,
'cdf':_doc_cdf,
'logcdf':_doc_logcdf,
'sf':_doc_sf,
'logsf':_doc_logsf,
'ppf':_doc_ppf,
'isf':_doc_isf,
'stats':_doc_stats,
'entropy':_doc_entropy,
'fit':_doc_fit,
'moment':_doc_moment,
'expect':_doc_expect,
'interval':_doc_interval,
'mean':_doc_mean,
'std':_doc_std,
'var':_doc_var,
'median':_doc_median,
'allmethods':_doc_allmethods,
'callparams':_doc_default_callparams,
'longsummary':_doc_default_longsummary,
'frozennote':_doc_default_frozen_note,
'example':_doc_default_example,
'default':_doc_default,
'before_notes':_doc_default_before_notes}
# Reuse common content between continous and discrete docs, change some
# minor bits.
docdict_discrete = docdict.copy()
docdict_discrete['pmf'] = _doc_pmf
docdict_discrete['logpmf'] = _doc_logpmf
docdict_discrete['expect'] = _doc_expect_discrete
_doc_disc_methods = ['rvs', 'pmf', 'logpmf', 'cdf', 'logcdf', 'sf', 'logsf',
'ppf', 'isf', 'stats', 'entropy', 'fit', 'expect', 'median',
'mean', 'var', 'std', 'interval']
for obj in _doc_disc_methods:
docdict_discrete[obj] = docdict_discrete[obj].replace(', scale=1', '')
docdict_discrete.pop('pdf')
docdict_discrete.pop('logpdf')
_doc_allmethods = ''.join([docdict_discrete[obj] for obj in
_doc_disc_methods])
docdict_discrete['allmethods'] = docheaders['methods'] + _doc_allmethods
docdict_discrete['longsummary'] = _doc_default_longsummary.replace(\
'Continuous', 'Discrete')
_doc_default_frozen_note = \
"""
Alternatively, the object may be called (as a function) to fix the shape and
location parameters returning a "frozen" continuous RV object:
rv = %(name)s(%(shapes)s, loc=0)
- Frozen RV object with the same methods but holding the given shape and
location fixed.
"""
docdict_discrete['frozennote'] = _doc_default_frozen_note
docdict_discrete['example'] = _doc_default_example.replace('[0.9,]',
'Replace with reasonable value')
_doc_default_disc = ''.join([docdict_discrete['longsummary'],
docdict_discrete['allmethods'],
docdict_discrete['frozennote'],
docdict_discrete['example']])
docdict_discrete['default'] = _doc_default_disc
# clean up all the separate docstring elements, we do not need them anymore
for obj in [s for s in dir() if s.startswith('_doc_')]:
exec('del ' + obj)
del obj
try:
del s
except NameError:
# in Python 3, loop variables are not visible after the loop
pass
def _build_random_array(fun, args, size=None):
# Build an array by applying function fun to
# the arguments in args, creating an array with
# the specified shape.
# Allows an integer shape n as a shorthand for (n,).
if isinstance(size, types.IntType):
size = [size]
if size is not None and len(size) != 0:
n = numpy.multiply.reduce(size)
s = apply(fun, args + (n,))
s.shape = size
return s
else:
n = 1
s = apply(fun, args + (n,))
return s[0]
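# Illustrative note (assumed example, not from the original source): for a
# numpy random function taking (shape_params..., n), a call like
# _build_random_array(mtrand.poisson, (3.0,), size=(2, 4)) draws 8 Poisson(3)
# variates in one call and reshapes the result to (2, 4).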
random = mtrand.random_sample
rand = mtrand.rand
random_integers = mtrand.random_integers
permutation = mtrand.permutation
## Internal class to compute a ppf given a distribution.
## (needs cdf function) and uses brentq from scipy.optimize
## to compute ppf from cdf.
class general_cont_ppf(object):
def __init__(self, dist, xa=-10.0, xb=10.0, xtol=1e-14):
self.dist = dist
self.cdf = eval('%scdf'%dist)
self.xa = xa
self.xb = xb
self.xtol = xtol
self.vecfunc = sgf(self._single_call,otypes='d')
def _tosolve(self, x, q, *args):
return apply(self.cdf, (x, )+args) - q
def _single_call(self, q, *args):
return optimize.brentq(self._tosolve, self.xa, self.xb, args=(q,)+args, xtol=self.xtol)
def __call__(self, q, *args):
return self.vecfunc(q, *args)
# Frozen RV class
class rv_frozen(object):
def __init__(self, dist, *args, **kwds):
self.args = args
self.kwds = kwds
self.dist = dist
def pdf(self, x): #raises AttributeError in frozen discrete distribution
return self.dist.pdf(x, *self.args, **self.kwds)
def logpdf(self, x):
return self.dist.logpdf(x, *self.args, **self.kwds)
def cdf(self, x):
return self.dist.cdf(x, *self.args, **self.kwds)
def logcdf(self, x):
return self.dist.logcdf(x, *self.args, **self.kwds)
def ppf(self, q):
return self.dist.ppf(q, *self.args, **self.kwds)
def isf(self, q):
return self.dist.isf(q, *self.args, **self.kwds)
def rvs(self, size=None):
kwds = self.kwds.copy()
kwds.update({'size':size})
return self.dist.rvs(*self.args, **kwds)
def sf(self, x):
return self.dist.sf(x, *self.args, **self.kwds)
def logsf(self, x):
return self.dist.logsf(x, *self.args, **self.kwds)
def stats(self, moments='mv'):
kwds = self.kwds.copy()
kwds.update({'moments':moments})
return self.dist.stats(*self.args, **kwds)
def median(self):
return self.dist.median(*self.args, **self.kwds)
def mean(self):
return self.dist.mean(*self.args, **self.kwds)
def var(self):
return self.dist.var(*self.args, **self.kwds)
def std(self):
return self.dist.std(*self.args, **self.kwds)
def moment(self, n):
return self.dist.moment(n, *self.args, **self.kwds)
def entropy(self):
return self.dist.entropy(*self.args, **self.kwds)
def pmf(self,k):
return self.dist.pmf(k, *self.args, **self.kwds)
def logpmf(self,k):
return self.dist.logpmf(k, *self.args, **self.kwds)
def interval(self, alpha):
return self.dist.interval(alpha, *self.args, **self.kwds)
## NANs are returned for unsupported parameters.
## location and scale parameters are optional for each distribution.
## The shape parameters are generally required
##
## The loc and scale parameters must be given as keyword parameters.
## These are related to the common symbols in the .lyx file
## skew is third central moment / variance**(1.5)
## kurtosis is fourth central moment / variance**2 - 3
## References::
## Documentation for ranlib, rv2, cdflib and
##
##  Eric Weisstein's World of Mathematics http://mathworld.wolfram.com/
## http://mathworld.wolfram.com/topics/StatisticalDistributions.html
##
## Documentation to Regress+ by Michael McLaughlin
##
## Engineering and Statistics Handbook (NIST)
## http://www.itl.nist.gov/div898/handbook/index.htm
##
## Documentation for DATAPLOT from NIST
## http://www.itl.nist.gov/div898/software/dataplot/distribu.htm
##
## Norman Johnson, Samuel Kotz, and N. Balakrishnan "Continuous
## Univariate Distributions", second edition,
## Volumes I and II, Wiley & Sons, 1994.
## Each continuous random variable has the following methods
##
## rvs -- Random Variates (alternatively calling the class could produce these)
## pdf -- PDF
## logpdf -- log PDF (more numerically accurate if possible)
## cdf -- CDF
## logcdf -- log of CDF
## sf -- Survival Function (1-CDF)
## logsf --- log of SF
## ppf -- Percent Point Function (Inverse of CDF)
## isf -- Inverse Survival Function (Inverse of SF)
## stats -- Return mean, variance, (Fisher's) skew, or (Fisher's) kurtosis
## nnlf -- negative log likelihood function (to minimize)
## fit -- Model-fitting
##
## Maybe Later
##
## hf --- Hazard Function (PDF / SF)
## chf --- Cumulative hazard function (-log(SF))
## psf --- Probability sparsity function (reciprocal of the pdf) in
## units of percent-point-function (as a function of q).
## Also, the derivative of the percent-point function.
## To define a new random variable you subclass the rv_continuous class
## and re-define the
##
## _pdf method which will be given clean arguments (in between a and b)
## and passing the argument check method
##
## If positive argument checking is not correct for your RV
## then you will also need to re-define
## _argcheck
## Correct, but potentially slow defaults exist for the remaining
## methods but for speed and/or accuracy you can over-ride
##
## _cdf, _ppf, _rvs, _isf, _sf
##
## Rarely would you override _isf and _sf but you could for numerical precision.
##
## Statistics are computed using numerical integration by default.
## For speed you can redefine this using
##
## _stats --- take shape parameters and return mu, mu2, g1, g2
## --- If you can't compute one of these return it as None
##
## --- Can also be defined with a keyword argument moments=<str>
## where <str> is a string composed of 'm', 'v', 's',
## and/or 'k'. Only the components appearing in string
## should be computed and returned in the order 'm', 'v',
## 's', or 'k' with missing values returned as None
##
## OR
##
## You can override
##
## _munp -- takes n and shape parameters and returns
##          --  the nth non-central moment of the distribution.
##
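## For illustration only (a sketch, not part of the original module), a minimal
## subclass overriding just _pdf could look like:
##
##     class _expon_example_gen(rv_continuous):
##         "Hypothetical standard exponential distribution"
##         def _pdf(self, x):
##             return exp(-x)
##     _expon_example = _expon_example_gen(a=0.0, name='expon_example')
##
## after which _expon_example.cdf(1.0), .ppf(0.5), .rvs(size=10), etc. work
## through the generic machinery defined below.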
def valarray(shape,value=nan,typecode=None):
"""Return an array of all value.
"""
out = reshape(repeat([value],product(shape,axis=0),axis=0),shape)
if typecode is not None:
out = out.astype(typecode)
if not isinstance(out, ndarray):
out = arr(out)
return out
# This should be rewritten
def argsreduce(cond, *args):
"""Return the sequence of ravel(args[i]) where ravel(condition) is
True in 1D.
Examples
--------
>>> import numpy as np
>>> rand = np.random.random_sample
>>> A = rand((4,5))
>>> B = 2
>>> C = rand((1,5))
>>> cond = np.ones(A.shape)
>>> [A1,B1,C1] = argsreduce(cond,A,B,C)
>>> B1.shape
(20,)
>>> cond[2,:] = 0
>>> [A2,B2,C2] = argsreduce(cond,A,B,C)
>>> B2.shape
(15,)
"""
newargs = atleast_1d(*args)
if not isinstance(newargs, list):
newargs = [newargs,]
expand_arr = (cond==cond)
return [extract(cond, arr1 * expand_arr) for arr1 in newargs]
class rv_generic(object):
"""Class which encapsulates common functionality between rv_discrete
and rv_continuous.
"""
def _fix_loc_scale(self, args, loc, scale=1):
N = len(args)
if N > self.numargs:
if N == self.numargs + 1 and loc is None:
# loc is given without keyword
loc = args[-1]
if N == self.numargs + 2 and scale is None:
# loc and scale given without keyword
loc, scale = args[-2:]
args = args[:self.numargs]
if scale is None:
scale = 1.0
if loc is None:
loc = 0.0
return args, loc, scale
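    # Assumed illustration (not in the original source): for a one-shape
    # distribution, dist.pdf(x, 2.5, 10, 3) is parsed by the helper above as
    # shape=2.5, loc=10, scale=3, i.e. the same as
    # dist.pdf(x, 2.5, loc=10, scale=3).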
def _fix_loc(self, args, loc):
args, loc, scale = self._fix_loc_scale(args, loc)
return args, loc
# These are actually called, and should not be overwritten if you
# want to keep error checking.
def rvs(self,*args,**kwds):
"""
Random variates of given type.
Parameters
----------
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
size : int or tuple of ints, optional
defining number of random variates (default=1)
Returns
-------
rvs : array-like
random variates of given `size`
"""
kwd_names = ['loc', 'scale', 'size', 'discrete']
loc, scale, size, discrete = map(kwds.get, kwd_names,
[None]*len(kwd_names))
args, loc, scale = self._fix_loc_scale(args, loc, scale)
cond = logical_and(self._argcheck(*args),(scale >= 0))
if not all(cond):
raise ValueError("Domain error in arguments.")
# self._size is total size of all output values
self._size = product(size, axis=0)
if self._size is not None and self._size > 1:
size = numpy.array(size, ndmin=1)
if np.all(scale == 0):
return loc*ones(size, 'd')
vals = self._rvs(*args)
if self._size is not None:
vals = reshape(vals, size)
vals = vals * scale + loc
# Cast to int if discrete
if discrete:
if numpy.isscalar(vals):
vals = int(vals)
else:
vals = vals.astype(int)
return vals
def median(self, *args, **kwds):
"""
Median of the distribution.
Parameters
----------
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
Returns
-------
median : float
the median of the distribution.
See Also
--------
self.ppf --- inverse of the CDF
"""
return self.ppf(0.5, *args, **kwds)
def mean(self, *args, **kwds):
"""
Mean of the distribution
Parameters
----------
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
Returns
-------
mean : float
the mean of the distribution
"""
kwds['moments'] = 'm'
res = self.stats(*args, **kwds)
if isinstance(res, ndarray) and res.ndim == 0:
return res[()]
return res
def var(self, *args, **kwds):
"""
Variance of the distribution
Parameters
----------
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
Returns
-------
var : float
the variance of the distribution
"""
kwds['moments'] = 'v'
res = self.stats(*args, **kwds)
if isinstance(res, ndarray) and res.ndim == 0:
return res[()]
return res
def std(self, *args, **kwds):
"""
Standard deviation of the distribution.
Parameters
----------
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
Returns
-------
std : float
standard deviation of the distribution
"""
kwds['moments'] = 'v'
res = sqrt(self.stats(*args, **kwds))
return res
def interval(self, alpha, *args, **kwds):
"""Confidence interval with equal areas around the median
Parameters
----------
alpha : array-like float in [0,1]
Probability that an rv will be drawn from the returned range
arg1, arg2, ... : array-like
The shape parameter(s) for the distribution (see docstring of the instance
object for more information)
        loc : array-like, optional
            location parameter (default = 0)
        scale : array-like, optional
            scale parameter (default = 1)
Returns
-------
a, b: array-like (float)
end-points of range that contain alpha % of the rvs
"""
alpha = arr(alpha)
if any((alpha > 1) | (alpha < 0)):
raise ValueError("alpha must be between 0 and 1 inclusive")
q1 = (1.0-alpha)/2
q2 = (1.0+alpha)/2
a = self.ppf(q1, *args, **kwds)
b = self.ppf(q2, *args, **kwds)
return a, b
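        # Assumed illustration (not in the original source): interval(0.95, ...)
        # therefore returns (ppf(0.025, ...), ppf(0.975, ...)), the equal-tailed
        # 95% range around the median.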
class rv_continuous(rv_generic):
"""
A generic continuous random variable class meant for subclassing.
`rv_continuous` is a base class to construct specific distribution classes
and instances from for continuous random variables. It cannot be used
directly as a distribution.
Parameters
----------
momtype : int, optional
The type of generic moment calculation to use: 0 for pdf, 1 (default) for ppf.
a : float, optional
Lower bound of the support of the distribution, default is minus
infinity.
b : float, optional
Upper bound of the support of the distribution, default is plus
infinity.
xa : float, optional
Lower bound for fixed point calculation for generic ppf.
xb : float, optional
Upper bound for fixed point calculation for generic ppf.
xtol : float, optional
The tolerance for fixed point calculation for generic ppf.
badvalue : object, optional
        The value in result arrays that indicates a value for which
some argument restriction is violated, default is np.nan.
name : str, optional
The name of the instance. This string is used to construct the default
example for distributions.
longname : str, optional
This string is used as part of the first line of the docstring returned
when a subclass has no docstring of its own. Note: `longname` exists
for backwards compatibility, do not use for new subclasses.
shapes : str, optional
The shape of the distribution. For example ``"m, n"`` for a
distribution that takes two integers as the two shape arguments for all
its methods.
extradoc : str, optional, deprecated
This string is used as the last part of the docstring returned when a
subclass has no docstring of its own. Note: `extradoc` exists for
backwards compatibility, do not use for new subclasses.
Methods
-------
rvs(<shape(s)>, loc=0, scale=1, size=1)
random variates
pdf(x, <shape(s)>, loc=0, scale=1)
probability density function
logpdf(x, <shape(s)>, loc=0, scale=1)
log of the probability density function
cdf(x, <shape(s)>, loc=0, scale=1)
        cumulative distribution function
logcdf(x, <shape(s)>, loc=0, scale=1)
        log of the cumulative distribution function
sf(x, <shape(s)>, loc=0, scale=1)
survival function (1-cdf --- sometimes more accurate)
logsf(x, <shape(s)>, loc=0, scale=1)
log of the survival function
ppf(q, <shape(s)>, loc=0, scale=1)
percent point function (inverse of cdf --- quantiles)
isf(q, <shape(s)>, loc=0, scale=1)
inverse survival function (inverse of sf)
moment(n, <shape(s)>, loc=0, scale=1)
non-central n-th moment of the distribution. May not work for array arguments.
stats(<shape(s)>, loc=0, scale=1, moments='mv')
mean('m'), variance('v'), skew('s'), and/or kurtosis('k')
entropy(<shape(s)>, loc=0, scale=1)
(differential) entropy of the RV.
fit(data, <shape(s)>, loc=0, scale=1)
Parameter estimates for generic data
expect(func=None, args=(), loc=0, scale=1, lb=None, ub=None,
conditional=False, **kwds)
Expected value of a function with respect to the distribution.
Additional kwd arguments passed to integrate.quad
median(<shape(s)>, loc=0, scale=1)
Median of the distribution.
mean(<shape(s)>, loc=0, scale=1)
Mean of the distribution.
std(<shape(s)>, loc=0, scale=1)
Standard deviation of the distribution.
var(<shape(s)>, loc=0, scale=1)
Variance of the distribution.
interval(alpha, <shape(s)>, loc=0, scale=1)
Interval that with `alpha` percent probability contains a random
realization of this distribution.
__call__(<shape(s)>, loc=0, scale=1)
Calling a distribution instance creates a frozen RV object with the
same methods but holding the given shape, location, and scale fixed.
See Notes section.
**Parameters for Methods**
x : array-like
quantiles
q : array-like
lower or upper tail probability
<shape(s)> : array-like
shape parameters
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
size : int or tuple of ints, optional
shape of random variates (default computed from input arguments )
moments : string, optional
composed of letters ['mvsk'] specifying which moments to compute where
'm' = mean, 'v' = variance, 's' = (Fisher's) skew and
'k' = (Fisher's) kurtosis. (default='mv')
n : int
order of moment to calculate in method moments
**Methods that can be overwritten by subclasses**
::
_rvs
_pdf
_cdf
_sf
_ppf
_isf
_stats
_munp
_entropy
_argcheck
There are additional (internal and private) generic methods that can
    be useful for cross-checking and for debugging, but might not work in all
    cases when directly called.
Notes
-----
**Frozen Distribution**
Alternatively, the object may be called (as a function) to fix the shape,
location, and scale parameters returning a "frozen" continuous RV object:
rv = generic(<shape(s)>, loc=0, scale=1)
frozen RV object with the same methods but holding the given shape,
location, and scale fixed
**Subclassing**
New random variables can be defined by subclassing rv_continuous class
and re-defining at least the
_pdf or the _cdf method (normalized to location 0 and scale 1)
which will be given clean arguments (in between a and b) and
passing the argument check method
    If positive argument checking is not correct for your RV
then you will also need to re-define ::
_argcheck
Correct, but potentially slow defaults exist for the remaining
methods but for speed and/or accuracy you can over-ride ::
_logpdf, _cdf, _logcdf, _ppf, _rvs, _isf, _sf, _logsf
Rarely would you override _isf, _sf, and _logsf but you could.
Statistics are computed using numerical integration by default.
For speed you can redefine this using
_stats
- take shape parameters and return mu, mu2, g1, g2
- If you can't compute one of these, return it as None
- Can also be defined with a keyword argument moments=<str>
where <str> is a string composed of 'm', 'v', 's',
and/or 'k'. Only the components appearing in string
should be computed and returned in the order 'm', 'v',
's', or 'k' with missing values returned as None
OR
You can override
_munp
takes n and shape parameters and returns
the nth non-central moment of the distribution.
Examples
--------
To create a new Gaussian distribution, we would do the following::
class gaussian_gen(rv_continuous):
"Gaussian distribution"
            def _pdf(self, x):
...
...
"""
def __init__(self, momtype=1, a=None, b=None, xa=-10.0, xb=10.0,
xtol=1e-14, badvalue=None, name=None, longname=None,
shapes=None, extradoc=None):
rv_generic.__init__(self)
if badvalue is None:
badvalue = nan
self.badvalue = badvalue
self.name = name
self.a = a
self.b = b
if a is None:
self.a = -inf
if b is None:
self.b = inf
self.xa = xa
self.xb = xb
self.xtol = xtol
self._size = 1
self.m = 0.0
self.moment_type = momtype
self.expandarr = 1
if not hasattr(self,'numargs'):
#allows more general subclassing with *args
cdf_signature = inspect.getargspec(self._cdf.im_func)
numargs1 = len(cdf_signature[0]) - 2
pdf_signature = inspect.getargspec(self._pdf.im_func)
numargs2 = len(pdf_signature[0]) - 2
self.numargs = max(numargs1, numargs2)
#nin correction
self.vecfunc = sgf(self._ppf_single_call,otypes='d')
self.vecfunc.nin = self.numargs + 1
self.vecentropy = sgf(self._entropy,otypes='d')
self.vecentropy.nin = self.numargs + 1
self.veccdf = sgf(self._cdf_single_call,otypes='d')
self.veccdf.nin = self.numargs + 1
self.shapes = shapes
self.extradoc = extradoc
if momtype == 0:
self.generic_moment = sgf(self._mom0_sc,otypes='d')
else:
self.generic_moment = sgf(self._mom1_sc,otypes='d')
self.generic_moment.nin = self.numargs+1 # Because of the *args argument
# of _mom0_sc, vectorize cannot count the number of arguments correctly.
if longname is None:
            if name[0] in 'aeiouAEIOU':
hstr = "An "
else:
hstr = "A "
longname = hstr + name
# generate docstring for subclass instances
if self.__doc__ is None:
self._construct_default_doc(longname=longname, extradoc=extradoc)
else:
self._construct_doc()
## This only works for old-style classes...
# self.__class__.__doc__ = self.__doc__
def _construct_default_doc(self, longname=None, extradoc=None):
"""Construct instance docstring from the default template."""
if longname is None:
longname = 'A'
if extradoc is None:
extradoc = ''
if extradoc.startswith('\n\n'):
extradoc = extradoc[2:]
self.__doc__ = ''.join(['%s continuous random variable.'%longname,
'\n\n%(before_notes)s\n', docheaders['notes'],
extradoc, '\n%(example)s'])
self._construct_doc()
def _construct_doc(self):
"""Construct the instance docstring with string substitutions."""
tempdict = docdict.copy()
tempdict['name'] = self.name or 'distname'
tempdict['shapes'] = self.shapes or ''
if self.shapes is None:
# remove shapes from call parameters if there are none
for item in ['callparams', 'default', 'before_notes']:
tempdict[item] = tempdict[item].replace(\
"\n%(shapes)s : array-like\n shape parameters", "")
for i in range(2):
if self.shapes is None:
# necessary because we use %(shapes)s in two forms (w w/o ", ")
self.__doc__ = self.__doc__.replace("%(shapes)s, ", "")
self.__doc__ = doccer.docformat(self.__doc__, tempdict)
def _ppf_to_solve(self, x, q,*args):
return apply(self.cdf, (x, )+args)-q
def _ppf_single_call(self, q, *args):
return optimize.brentq(self._ppf_to_solve, self.xa, self.xb, args=(q,)+args, xtol=self.xtol)
# moment from definition
def _mom_integ0(self, x,m,*args):
return x**m * self.pdf(x,*args)
def _mom0_sc(self, m,*args):
return integrate.quad(self._mom_integ0, self.a,
self.b, args=(m,)+args)[0]
# moment calculated using ppf
def _mom_integ1(self, q,m,*args):
return (self.ppf(q,*args))**m
def _mom1_sc(self, m,*args):
return integrate.quad(self._mom_integ1, 0, 1,args=(m,)+args)[0]
## These are the methods you must define (standard form functions)
def _argcheck(self, *args):
# Default check for correct values on args and keywords.
# Returns condition array of 1's where arguments are correct and
# 0's where they are not.
cond = 1
for arg in args:
cond = logical_and(cond,(arr(arg) > 0))
return cond
def _pdf(self,x,*args):
return derivative(self._cdf,x,dx=1e-5,args=args,order=5)
## Could also define any of these
def _logpdf(self, x, *args):
return log(self._pdf(x, *args))
##(return 1-d using self._size to get number)
def _rvs(self, *args):
## Use basic inverse cdf algorithm for RV generation as default.
U = mtrand.sample(self._size)
Y = self._ppf(U,*args)
return Y
def _cdf_single_call(self, x, *args):
return integrate.quad(self._pdf, self.a, x, args=args)[0]
def _cdf(self, x, *args):
return self.veccdf(x,*args)
def _logcdf(self, x, *args):
return log(self._cdf(x, *args))
def _sf(self, x, *args):
return 1.0-self._cdf(x,*args)
def _logsf(self, x, *args):
return log(self._sf(x, *args))
def _ppf(self, q, *args):
return self.vecfunc(q,*args)
def _isf(self, q, *args):
return self._ppf(1.0-q,*args) #use correct _ppf for subclasses
    # The actual calculation functions (no basic checking need be done)
# If these are defined, the others won't be looked at.
# Otherwise, the other set can be defined.
def _stats(self,*args, **kwds):
return None, None, None, None
    # Non-central moments (moments about zero)
def _munp(self,n,*args):
return self.generic_moment(n,*args)
def pdf(self,x,*args,**kwds):
"""
Probability density function at x of the given RV.
Parameters
----------
x : array-like
quantiles
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
Returns
-------
pdf : array-like
Probability density function evaluated at x
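        Examples
        --------
        An illustrative sketch using the ``norm`` instance defined later in
        this module; the values noted in comments are approximate.
        >>> p = norm.pdf(0.0)       # 1/sqrt(2*pi), about 0.3989
        >>> p = norm.pdf([-1.0, 0.0, 1.0], loc=0.0, scale=2.0)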
"""
loc,scale=map(kwds.get,['loc','scale'])
args, loc, scale = self._fix_loc_scale(args, loc, scale)
x,loc,scale = map(arr,(x,loc,scale))
args = tuple(map(arr,args))
x = arr((x-loc)*1.0/scale)
cond0 = self._argcheck(*args) & (scale > 0)
cond1 = (scale > 0) & (x >= self.a) & (x <= self.b)
cond = cond0 & cond1
output = zeros(shape(cond),'d')
putmask(output,(1-cond0)*array(cond1,bool),self.badvalue)
if any(cond):
goodargs = argsreduce(cond, *((x,)+args+(scale,)))
scale, goodargs = goodargs[-1], goodargs[:-1]
output = place(output,cond,self._pdf(*goodargs) / scale)
if output.ndim == 0:
return output[()]
return output
def logpdf(self, x, *args, **kwds):
"""
Log of the probability density function at x of the given RV.
This uses a more numerically accurate calculation if available.
Parameters
----------
x : array-like
quantiles
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
Returns
-------
logpdf : array-like
Log of the probability density function evaluated at x
"""
loc,scale=map(kwds.get,['loc','scale'])
args, loc, scale = self._fix_loc_scale(args, loc, scale)
x,loc,scale = map(arr,(x,loc,scale))
args = tuple(map(arr,args))
x = arr((x-loc)*1.0/scale)
cond0 = self._argcheck(*args) & (scale > 0)
cond1 = (scale > 0) & (x >= self.a) & (x <= self.b)
cond = cond0 & cond1
output = empty(shape(cond),'d')
output.fill(NINF)
putmask(output,(1-cond0)*array(cond1,bool),self.badvalue)
if any(cond):
goodargs = argsreduce(cond, *((x,)+args+(scale,)))
scale, goodargs = goodargs[-1], goodargs[:-1]
output = place(output,cond,self._logpdf(*goodargs) - log(scale))
if output.ndim == 0:
return output[()]
return output
def cdf(self,x,*args,**kwds):
"""
Cumulative distribution function at x of the given RV.
Parameters
----------
x : array-like
quantiles
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
Returns
-------
cdf : array-like
Cumulative distribution function evaluated at x
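        Examples
        --------
        An illustrative sketch using the ``norm`` instance defined later in
        this module; the values noted in comments are approximate.
        >>> p = norm.cdf(0.0)                        # 0.5 for the standard normal
        >>> p = norm.cdf(1.96, loc=0.0, scale=1.0)   # about 0.975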
"""
loc,scale=map(kwds.get,['loc','scale'])
args, loc, scale = self._fix_loc_scale(args, loc, scale)
x,loc,scale = map(arr,(x,loc,scale))
args = tuple(map(arr,args))
x = (x-loc)*1.0/scale
cond0 = self._argcheck(*args) & (scale > 0)
cond1 = (scale > 0) & (x > self.a) & (x < self.b)
cond2 = (x >= self.b) & cond0
cond = cond0 & cond1
output = zeros(shape(cond),'d')
output = place(output,(1-cond0)*(cond1==cond1),self.badvalue)
output = place(output,cond2,1.0)
if any(cond): #call only if at least 1 entry
goodargs = argsreduce(cond, *((x,)+args))
output = place(output,cond,self._cdf(*goodargs))
if output.ndim == 0:
return output[()]
return output
def logcdf(self,x,*args,**kwds):
"""
Log of the cumulative distribution function at x of the given RV.
Parameters
----------
x : array-like
quantiles
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
Returns
-------
logcdf : array-like
Log of the cumulative distribution function evaluated at x
"""
loc,scale=map(kwds.get,['loc','scale'])
args, loc, scale = self._fix_loc_scale(args, loc, scale)
x,loc,scale = map(arr,(x,loc,scale))
args = tuple(map(arr,args))
x = (x-loc)*1.0/scale
cond0 = self._argcheck(*args) & (scale > 0)
cond1 = (scale > 0) & (x > self.a) & (x < self.b)
cond2 = (x >= self.b) & cond0
cond = cond0 & cond1
output = empty(shape(cond),'d')
output.fill(NINF)
output = place(output,(1-cond0)*(cond1==cond1),self.badvalue)
output = place(output,cond2,0.0)
if any(cond): #call only if at least 1 entry
goodargs = argsreduce(cond, *((x,)+args))
output = place(output,cond,self._logcdf(*goodargs))
if output.ndim == 0:
return output[()]
return output
def sf(self,x,*args,**kwds):
"""
Survival function (1-cdf) at x of the given RV.
Parameters
----------
x : array-like
quantiles
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
Returns
-------
sf : array-like
Survival function evaluated at x
"""
loc,scale=map(kwds.get,['loc','scale'])
args, loc, scale = self._fix_loc_scale(args, loc, scale)
x,loc,scale = map(arr,(x,loc,scale))
args = tuple(map(arr,args))
x = (x-loc)*1.0/scale
cond0 = self._argcheck(*args) & (scale > 0)
cond1 = (scale > 0) & (x > self.a) & (x < self.b)
cond2 = cond0 & (x <= self.a)
cond = cond0 & cond1
output = zeros(shape(cond),'d')
output = place(output,(1-cond0)*(cond1==cond1),self.badvalue)
output = place(output,cond2,1.0)
if any(cond):
goodargs = argsreduce(cond, *((x,)+args))
output = place(output,cond,self._sf(*goodargs))
if output.ndim == 0:
return output[()]
return output
def logsf(self,x,*args,**kwds):
"""
        Log of the survival function, log(1-cdf), at x of the given RV.
Parameters
----------
x : array-like
quantiles
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
Returns
-------
logsf : array-like
Log of the survival function evaluated at x
"""
loc,scale=map(kwds.get,['loc','scale'])
args, loc, scale = self._fix_loc_scale(args, loc, scale)
x,loc,scale = map(arr,(x,loc,scale))
args = tuple(map(arr,args))
x = (x-loc)*1.0/scale
cond0 = self._argcheck(*args) & (scale > 0)
cond1 = (scale > 0) & (x > self.a) & (x < self.b)
cond2 = cond0 & (x <= self.a)
cond = cond0 & cond1
output = empty(shape(cond),'d')
output.fill(NINF)
output = place(output,(1-cond0)*(cond1==cond1),self.badvalue)
output = place(output,cond2,0.0)
if any(cond):
goodargs = argsreduce(cond, *((x,)+args))
output = place(output,cond,self._logsf(*goodargs))
if output.ndim == 0:
return output[()]
return output
def ppf(self,q,*args,**kwds):
"""
Percent point function (inverse of cdf) at q of the given RV.
Parameters
----------
q : array-like
lower tail probability
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
Returns
-------
x : array-like
quantile corresponding to the lower tail probability q.
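        Examples
        --------
        An illustrative sketch using the ``norm`` instance defined later in
        this module; ``ppf`` inverts ``cdf``, and the value noted is approximate.
        >>> x = norm.ppf(0.975)              # about 1.96
        >>> q = norm.cdf(norm.ppf(0.3))      # recovers 0.3 up to numerical error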
"""
loc,scale=map(kwds.get,['loc','scale'])
args, loc, scale = self._fix_loc_scale(args, loc, scale)
q,loc,scale = map(arr,(q,loc,scale))
args = tuple(map(arr,args))
cond0 = self._argcheck(*args) & (scale > 0) & (loc==loc)
cond1 = (q > 0) & (q < 1)
cond2 = (q==1) & cond0
cond = cond0 & cond1
output = valarray(shape(cond),value=self.a*scale + loc)
output = place(output,(1-cond0)+(1-cond1)*(q!=0.0), self.badvalue)
output = place(output,cond2,self.b*scale + loc)
if any(cond): #call only if at least 1 entry
goodargs = argsreduce(cond, *((q,)+args+(scale,loc)))
scale, loc, goodargs = goodargs[-2], goodargs[-1], goodargs[:-2]
output = place(output,cond,self._ppf(*goodargs)*scale + loc)
if output.ndim == 0:
return output[()]
return output
def isf(self,q,*args,**kwds):
"""
Inverse survival function at q of the given RV.
Parameters
----------
q : array-like
upper tail probability
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
Returns
-------
x : array-like
quantile corresponding to the upper tail probability q.
"""
loc,scale=map(kwds.get,['loc','scale'])
args, loc, scale = self._fix_loc_scale(args, loc, scale)
q,loc,scale = map(arr,(q,loc,scale))
args = tuple(map(arr,args))
cond0 = self._argcheck(*args) & (scale > 0) & (loc==loc)
cond1 = (q > 0) & (q < 1)
cond2 = (q==1) & cond0
cond = cond0 & cond1
output = valarray(shape(cond),value=self.b)
#output = place(output,(1-cond0)*(cond1==cond1), self.badvalue)
output = place(output,(1-cond0)*(cond1==cond1)+(1-cond1)*(q!=0.0), self.badvalue)
output = place(output,cond2,self.a)
if any(cond): #call only if at least 1 entry
goodargs = argsreduce(cond, *((q,)+args+(scale,loc))) #PB replace 1-q by q
scale, loc, goodargs = goodargs[-2], goodargs[-1], goodargs[:-2]
output = place(output,cond,self._isf(*goodargs)*scale + loc) #PB use _isf instead of _ppf
if output.ndim == 0:
return output[()]
return output
def stats(self,*args,**kwds):
"""
Some statistics of the given RV
Parameters
----------
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
moments : string, optional
composed of letters ['mvsk'] defining which moments to compute:
'm' = mean,
'v' = variance,
's' = (Fisher's) skew,
'k' = (Fisher's) kurtosis.
(default='mv')
Returns
-------
stats : sequence
of requested moments.
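        Examples
        --------
        An illustrative sketch using the ``norm`` instance defined later in
        this module.
        >>> m, v = norm.stats(moments='mv')                            # 0.0, 1.0
        >>> m, v, s, k = norm.stats(loc=2.0, scale=3.0, moments='mvsk')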
"""
loc,scale,moments=map(kwds.get,['loc','scale','moments'])
N = len(args)
if N > self.numargs:
if N == self.numargs + 1 and loc is None:
# loc is given without keyword
loc = args[-1]
if N == self.numargs + 2 and scale is None:
# loc and scale given without keyword
loc, scale = args[-2:]
if N == self.numargs + 3 and moments is None:
# loc, scale, and moments
loc, scale, moments = args[-3:]
args = args[:self.numargs]
if scale is None: scale = 1.0
if loc is None: loc = 0.0
if moments is None: moments = 'mv'
loc,scale = map(arr,(loc,scale))
args = tuple(map(arr,args))
cond = self._argcheck(*args) & (scale > 0) & (loc==loc)
signature = inspect.getargspec(self._stats.im_func)
if (signature[2] is not None) or ('moments' in signature[0]):
mu, mu2, g1, g2 = self._stats(*args,**{'moments':moments})
else:
mu, mu2, g1, g2 = self._stats(*args)
if g1 is None:
mu3 = None
else:
mu3 = g1*np.power(mu2,1.5) #(mu2**1.5) breaks down for nan and inf
default = valarray(shape(cond), self.badvalue)
output = []
# Use only entries that are valid in calculation
if any(cond):
goodargs = argsreduce(cond, *(args+(scale,loc)))
scale, loc, goodargs = goodargs[-2], goodargs[-1], goodargs[:-2]
if 'm' in moments:
if mu is None:
mu = self._munp(1.0,*goodargs)
out0 = default.copy()
out0 = place(out0,cond,mu*scale+loc)
output.append(out0)
if 'v' in moments:
if mu2 is None:
mu2p = self._munp(2.0,*goodargs)
if mu is None:
mu = self._munp(1.0,*goodargs)
mu2 = mu2p - mu*mu
if np.isinf(mu):
#if mean is inf then var is also inf
mu2 = np.inf
out0 = default.copy()
out0 = place(out0,cond,mu2*scale*scale)
output.append(out0)
if 's' in moments:
if g1 is None:
mu3p = self._munp(3.0,*goodargs)
if mu is None:
mu = self._munp(1.0,*goodargs)
if mu2 is None:
mu2p = self._munp(2.0,*goodargs)
mu2 = mu2p - mu*mu
mu3 = mu3p - 3*mu*mu2 - mu**3
g1 = mu3 / mu2**1.5
out0 = default.copy()
out0 = place(out0,cond,g1)
output.append(out0)
if 'k' in moments:
if g2 is None:
mu4p = self._munp(4.0,*goodargs)
if mu is None:
mu = self._munp(1.0,*goodargs)
if mu2 is None:
mu2p = self._munp(2.0,*goodargs)
mu2 = mu2p - mu*mu
if mu3 is None:
mu3p = self._munp(3.0,*goodargs)
mu3 = mu3p - 3*mu*mu2 - mu**3
mu4 = mu4p - 4*mu*mu3 - 6*mu*mu*mu2 - mu**4
g2 = mu4 / mu2**2.0 - 3.0
out0 = default.copy()
out0 = place(out0,cond,g2)
output.append(out0)
else: #no valid args
output = []
for _ in moments:
out0 = default.copy()
output.append(out0)
if len(output) == 1:
return output[0]
else:
return tuple(output)
def moment(self, n, *args, **kwds):
"""
        n-th order non-central moment of the distribution.
Parameters
----------
n: int, n>=1
order of moment
arg1, arg2, arg3,... : float
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : float, optional
location parameter (default=0)
scale : float, optional
scale parameter (default=1)
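        Examples
        --------
        An illustrative sketch using the ``norm`` instance defined later in
        this module; for the standard normal the second non-central moment
        equals the variance.
        >>> m2 = norm.moment(2)                       # 1.0
        >>> m2 = norm.moment(2, loc=0.0, scale=2.0)   # 4.0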
"""
loc = kwds.get('loc', 0)
scale = kwds.get('scale', 1)
if not (self._argcheck(*args) and (scale > 0)):
return nan
if (floor(n) != n):
raise ValueError("Moment must be an integer.")
if (n < 0): raise ValueError("Moment must be positive.")
mu, mu2, g1, g2 = None, None, None, None
if (n > 0) and (n < 5):
signature = inspect.getargspec(self._stats.im_func)
if (signature[2] is not None) or ('moments' in signature[0]):
mdict = {'moments':{1:'m',2:'v',3:'vs',4:'vk'}[n]}
else:
mdict = {}
mu, mu2, g1, g2 = self._stats(*args,**mdict)
val = _moment_from_stats(n, mu, mu2, g1, g2, self._munp, args)
# Convert to transformed X = L + S*Y
# so E[X^n] = E[(L+S*Y)^n] = L^n sum(comb(n,k)*(S/L)^k E[Y^k],k=0...n)
if loc == 0:
return scale**n * val
else:
result = 0
fac = float(scale) / float(loc)
for k in range(n):
valk = _moment_from_stats(k, mu, mu2, g1, g2, self._munp, args)
result += comb(n,k,exact=True)*(fac**k) * valk
result += fac**n * val
return result * loc**n
def _nnlf(self, x, *args):
return -sum(self._logpdf(x, *args),axis=0)
def nnlf(self, theta, x):
# - sum (log pdf(x, theta),axis=0)
# where theta are the parameters (including loc and scale)
#
try:
loc = theta[-2]
scale = theta[-1]
args = tuple(theta[:-2])
except IndexError:
raise ValueError("Not enough input arguments.")
if not self._argcheck(*args) or scale <= 0:
return inf
x = arr((x-loc) / scale)
cond0 = (x <= self.a) | (x >= self.b)
if (any(cond0)):
return inf
else:
N = len(x)
return self._nnlf(x, *args) + N*log(scale)
# return starting point for fit (shape arguments + loc + scale)
def _fitstart(self, data, args=None):
if args is None:
args = (1.0,)*self.numargs
return args + self.fit_loc_scale(data, *args)
# Return the (possibly reduced) function to optimize in order to find MLE
# estimates for the .fit method
def _reduce_func(self, args, kwds):
args = list(args)
Nargs = len(args) - 2
fixedn = []
index = range(Nargs) + [-2, -1]
names = ['f%d' % n for n in range(Nargs)] + ['floc', 'fscale']
x0 = args[:]
for n, key in zip(index, names):
if kwds.has_key(key):
fixedn.append(n)
args[n] = kwds[key]
del x0[n]
if len(fixedn) == 0:
func = self.nnlf
restore = None
else:
if len(fixedn) == len(index):
raise ValueError("All parameters fixed. There is nothing to optimize.")
def restore(args, theta):
# Replace with theta for all numbers not in fixedn
# This allows the non-fixed values to vary, but
# we still call self.nnlf with all parameters.
i = 0
for n in range(Nargs):
if n not in fixedn:
args[n] = theta[i]
i += 1
return args
def func(theta, x):
newtheta = restore(args[:], theta)
return self.nnlf(newtheta, x)
return x0, func, restore, args
def fit(self, data, *args, **kwds):
"""
Return MLEs for shape, location, and scale parameters from data.
MLE stands for Maximum Likelihood Estimate. Starting estimates for
the fit are given by input arguments; for any arguments not provided
with starting estimates, ``self._fitstart(data)`` is called to generate
        them.
One can hold some parameters fixed to specific values by passing in
keyword arguments ``f0``, ``f1``, ..., ``fn`` (for shape parameters)
and ``floc`` and ``fscale`` (for location and scale parameters,
respectively).
Parameters
----------
data : array_like
Data to use in calculating the MLEs
args : floats, optional
Starting value(s) for any shape-characterizing arguments (those not
provided will be determined by a call to ``_fitstart(data)``).
No default value.
kwds : floats, optional
Starting values for the location and scale parameters; no default.
Special keyword arguments are recognized as holding certain
parameters fixed:
f0...fn : hold respective shape parameters fixed.
floc : hold location parameter fixed to specified value.
fscale : hold scale parameter fixed to specified value.
optimizer : The optimizer to use. The optimizer must take func,
and starting position as the first two arguments,
plus args (for extra arguments to pass to the
function to be optimized) and disp=0 to suppress
output as keyword arguments.
Returns
-------
shape, loc, scale : tuple of floats
MLEs for any shape statistics, followed by those for location and
scale.
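        Examples
        --------
        An illustrative sketch using the ``norm`` instance defined later in
        this module; the generated data, and hence the fitted values, are random.
        >>> data = norm.rvs(loc=3.0, scale=2.0, size=1000)
        >>> loc_hat, scale_hat = norm.fit(data)
        >>> loc_hat, scale_hat = norm.fit(data, floc=0.0)   # hold loc fixed at 0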
"""
Narg = len(args)
if Narg > self.numargs:
raise ValueError("Too many input arguments.")
start = [None]*2
if (Narg < self.numargs) or not (kwds.has_key('loc') and
kwds.has_key('scale')):
start = self._fitstart(data) # get distribution specific starting locations
args += start[Narg:-2]
loc = kwds.get('loc', start[-2])
scale = kwds.get('scale', start[-1])
args += (loc, scale)
x0, func, restore, args = self._reduce_func(args, kwds)
optimizer = kwds.get('optimizer', optimize.fmin)
# convert string to function in scipy.optimize
if not callable(optimizer) and isinstance(optimizer, (str, unicode)):
if not optimizer.startswith('fmin_'):
optimizer = "fmin_"+optimizer
if optimizer == 'fmin_':
optimizer = 'fmin'
try:
optimizer = getattr(optimize, optimizer)
except AttributeError:
raise ValueError("%s is not a valid optimizer" % optimizer)
vals = optimizer(func,x0,args=(ravel(data),),disp=0)
if restore is not None:
vals = restore(args, vals)
vals = tuple(vals)
return vals
def fit_loc_scale(self, data, *args):
"""
Estimate loc and scale parameters from data using 1st and 2nd moments
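        Examples
        --------
        An illustrative sketch of the moment matching, using the ``norm``
        instance defined later in this module.
        >>> data = norm.rvs(loc=5.0, scale=2.0, size=1000)
        >>> loc_hat, scale_hat = norm.fit_loc_scale(data)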
"""
mu, mu2 = self.stats(*args,**{'moments':'mv'})
muhat = arr(data).mean()
mu2hat = arr(data).var()
Shat = sqrt(mu2hat / mu2)
Lhat = muhat - Shat*mu
return Lhat, Shat
@np.deprecate
def est_loc_scale(self, data, *args):
"""This function is deprecated, use self.fit_loc_scale(data) instead. """
return self.fit_loc_scale(data, *args)
def freeze(self,*args,**kwds):
return rv_frozen(self,*args,**kwds)
def __call__(self, *args, **kwds):
return self.freeze(*args, **kwds)
def _entropy(self, *args):
def integ(x):
val = self._pdf(x, *args)
return val*log(val)
entr = -integrate.quad(integ,self.a,self.b)[0]
if not np.isnan(entr):
return entr
else: # try with different limits if integration problems
low,upp = self.ppf([0.001,0.999],*args)
if np.isinf(self.b):
upper = upp
else:
upper = self.b
if np.isinf(self.a):
lower = low
else:
lower = self.a
return -integrate.quad(integ,lower,upper)[0]
def entropy(self, *args, **kwds):
"""
Differential entropy of the RV.
Parameters
----------
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale : array-like, optional
scale parameter (default=1)
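        Examples
        --------
        An illustrative sketch using the ``norm`` instance defined later in
        this module; the differential entropy of the standard normal is
        0.5*(log(2*pi)+1), roughly 1.42, and scaling by s adds log(s).
        >>> h = norm.entropy()
        >>> h = norm.entropy(loc=0.0, scale=2.0)   # larger by log(2)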
"""
loc,scale=map(kwds.get,['loc','scale'])
args, loc, scale = self._fix_loc_scale(args, loc, scale)
args = tuple(map(arr,args))
cond0 = self._argcheck(*args) & (scale > 0) & (loc==loc)
output = zeros(shape(cond0),'d')
output = place(output,(1-cond0),self.badvalue)
goodargs = argsreduce(cond0, *args)
#I don't know when or why vecentropy got broken when numargs == 0
if self.numargs == 0:
output = place(output,cond0,self._entropy()+log(scale))
else:
output = place(output,cond0,self.vecentropy(*goodargs)+log(scale))
return output
def expect(self, func=None, args=(), loc=0, scale=1, lb=None, ub=None,
conditional=False, **kwds):
"""calculate expected value of a function with respect to the distribution
        Location and scale handling has only been tested on a few examples.
Parameters
----------
all parameters are keyword parameters
func : function (default: identity mapping)
Function for which integral is calculated. Takes only one argument.
args : tuple
argument (parameters) of the distribution
lb, ub : numbers
lower and upper bound for integration, default is set to the support
of the distribution
conditional : boolean (False)
If true then the integral is corrected by the conditional probability
of the integration interval. The return value is the expectation
of the function, conditional on being in the given interval.
Additional keyword arguments are passed to the integration routine.
Returns
-------
expected value : float
Notes
-----
        This function has not been checked for its behavior when the integral is
not finite. The integration behavior is inherited from integrate.quad.
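        Examples
        --------
        An illustrative sketch using the ``norm`` instance defined later in
        this module; the values noted in comments are approximate.
        >>> v = norm.expect(lambda x: x**2)           # about 1.0 (the variance)
        >>> p = norm.expect(lambda x: 1.0, lb=0.0)    # about 0.5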
"""
lockwds = {'loc': loc,
'scale':scale}
if func is None:
def fun(x, *args):
return x*self.pdf(x, *args, **lockwds)
else:
def fun(x, *args):
return func(x)*self.pdf(x, *args, **lockwds)
if lb is None:
lb = loc + self.a * scale
if ub is None:
ub = loc + self.b * scale
if conditional:
invfac = (self.sf(lb, *args, **lockwds)
- self.sf(ub, *args, **lockwds))
else:
invfac = 1.0
kwds['args'] = args
return integrate.quad(fun, lb, ub, **kwds)[0] / invfac
_EULER = 0.577215664901532860606512090082402431042 # -special.psi(1)
_ZETA3 = 1.202056903159594285399738161511449990765 # special.zeta(3,1) Apery's constant
## Kolmogorov-Smirnov one-sided and two-sided test statistics
class ksone_gen(rv_continuous):
def _cdf(self,x,n):
return 1.0-special.smirnov(n,x)
def _ppf(self,q,n):
return special.smirnovi(n,1.0-q)
ksone = ksone_gen(a=0.0, name='ksone', longname="A Kolmogorov-Smirnov "\
                  "one-sided test statistic", shapes="n",
extradoc="""
General Kolmogorov-Smirnov one-sided test.
"""
)
class kstwobign_gen(rv_continuous):
def _cdf(self,x):
return 1.0-special.kolmogorov(x)
def _sf(self,x):
return special.kolmogorov(x)
def _ppf(self,q):
return special.kolmogi(1.0-q)
kstwobign = kstwobign_gen(a=0.0,name='kstwobign', longname='Kolmogorov-Smirnov two-sided (for large N)', extradoc="""
Kolmogorov-Smirnov two-sided test for large N
"""
)
## Normal distribution
# loc = mu, scale = std
# Keep these implementations out of the class definition so they can be reused
# by other distributions.
_norm_pdf_C = math.sqrt(2*pi)
_norm_pdf_logC = math.log(_norm_pdf_C)
def _norm_pdf(x):
return exp(-x**2/2.0) / _norm_pdf_C
def _norm_logpdf(x):
return -x**2 / 2.0 - _norm_pdf_logC
def _norm_cdf(x):
return special.ndtr(x)
def _norm_logcdf(x):
return log(special.ndtr(x))
def _norm_ppf(q):
return special.ndtri(q)
class norm_gen(rv_continuous):
def _rvs(self):
return mtrand.standard_normal(self._size)
def _pdf(self,x):
return _norm_pdf(x)
def _logpdf(self, x):
return _norm_logpdf(x)
def _cdf(self,x):
return _norm_cdf(x)
def _logcdf(self, x):
return _norm_logcdf(x)
def _sf(self, x):
return _norm_cdf(-x)
def _logsf(self, x):
return _norm_logcdf(-x)
def _ppf(self,q):
return _norm_ppf(q)
def _isf(self,q):
return -_norm_ppf(q)
def _stats(self):
return 0.0, 1.0, 0.0, 0.0
def _entropy(self):
return 0.5*(log(2*pi)+1)
norm = norm_gen(name='norm',longname='A normal',extradoc="""
Normal distribution
The location (loc) keyword specifies the mean.
The scale (scale) keyword specifies the standard deviation.
normal.pdf(x) = exp(-x**2/2)/sqrt(2*pi)
""")
## Alpha distribution
##
class alpha_gen(rv_continuous):
def _pdf(self, x, a):
return 1.0/(x**2)/special.ndtr(a)*_norm_pdf(a-1.0/x)
def _logpdf(self, x, a):
return -2*log(x) + _norm_logpdf(a-1.0/x) - log(special.ndtr(a))
def _cdf(self, x, a):
return special.ndtr(a-1.0/x) / special.ndtr(a)
def _ppf(self, q, a):
return 1.0/arr(a-special.ndtri(q*special.ndtr(a)))
def _stats(self, a):
return [inf]*2 + [nan]*2
alpha = alpha_gen(a=0.0,name='alpha',shapes='a',extradoc="""
Alpha distribution
alpha.pdf(x,a) = 1/(x**2*Phi(a)*sqrt(2*pi)) * exp(-1/2 * (a-1/x)**2)
where Phi(alpha) is the normal CDF, x > 0, and a > 0.
""")
## Anglit distribution
##
class anglit_gen(rv_continuous):
def _pdf(self, x):
return cos(2*x)
def _cdf(self, x):
return sin(x+pi/4)**2.0
def _ppf(self, q):
return (arcsin(sqrt(q))-pi/4)
def _stats(self):
return 0.0, pi*pi/16-0.5, 0.0, -2*(pi**4 - 96)/(pi*pi-8)**2
def _entropy(self):
return 1-log(2)
anglit = anglit_gen(a=-pi/4,b=pi/4,name='anglit', extradoc="""
Anglit distribution
anglit.pdf(x) = sin(2*x+pi/2) = cos(2*x) for -pi/4 <= x <= pi/4
""")
## Arcsine distribution
##
class arcsine_gen(rv_continuous):
def _pdf(self, x):
return 1.0/pi/sqrt(x*(1-x))
def _cdf(self, x):
return 2.0/pi*arcsin(sqrt(x))
def _ppf(self, q):
return sin(pi/2.0*q)**2.0
def _stats(self):
#mup = 0.5, 3.0/8.0, 15.0/48.0, 35.0/128.0
mu = 0.5
mu2 = 1.0/8
g1 = 0
g2 = -3.0/2.0
return mu, mu2, g1, g2
def _entropy(self):
return -0.24156447527049044468
arcsine = arcsine_gen(a=0.0,b=1.0,name='arcsine',extradoc="""
Arcsine distribution
arcsine.pdf(x) = 1/(pi*sqrt(x*(1-x)))
for 0 < x < 1.
""")
## Beta distribution
##
class beta_gen(rv_continuous):
def _rvs(self, a, b):
return mtrand.beta(a,b,self._size)
def _pdf(self, x, a, b):
Px = (1.0-x)**(b-1.0) * x**(a-1.0)
Px /= special.beta(a,b)
return Px
def _logpdf(self, x, a, b):
lPx = (b-1.0)*log(1.0-x) + (a-1.0)*log(x)
lPx -= log(special.beta(a,b))
return lPx
def _cdf(self, x, a, b):
return special.btdtr(a,b,x)
def _ppf(self, q, a, b):
return special.btdtri(a,b,q)
def _stats(self, a, b):
mn = a *1.0 / (a + b)
var = (a*b*1.0)/(a+b+1.0)/(a+b)**2.0
g1 = 2.0*(b-a)*sqrt((1.0+a+b)/(a*b)) / (2+a+b)
g2 = 6.0*(a**3 + a**2*(1-2*b) + b**2*(1+b) - 2*a*b*(2+b))
g2 /= a*b*(a+b+2)*(a+b+3)
return mn, var, g1, g2
def _fitstart(self, data):
g1 = _skew(data)
g2 = _kurtosis(data)
def func(x):
a, b = x
sk = 2*(b-a)*sqrt(a + b + 1) / (a + b + 2) / sqrt(a*b)
ku = a**3 - a**2*(2*b-1) + b**2*(b+1) - 2*a*b*(b+2)
ku /= a*b*(a+b+2)*(a+b+3)
ku *= 6
return [sk-g1, ku-g2]
a, b = optimize.fsolve(func, (1.0, 1.0))
return super(beta_gen, self)._fitstart(data, args=(a,b))
def fit(self, data, *args, **kwds):
floc = kwds.get('floc', None)
fscale = kwds.get('fscale', None)
if floc is not None and fscale is not None:
# special case
data = (ravel(data)-floc)/fscale
xbar = data.mean()
v = data.var(ddof=0)
fac = xbar*(1-xbar)/v - 1
a = xbar * fac
b = (1-xbar) * fac
return a, b, floc, fscale
else: # do general fit
return super(beta_gen, self).fit(data, *args, **kwds)
beta = beta_gen(a=0.0, b=1.0, name='beta',shapes='a, b',extradoc="""
Beta distribution
beta.pdf(x, a, b) = gamma(a+b)/(gamma(a)*gamma(b)) * x**(a-1) * (1-x)**(b-1)
for 0 < x < 1, a, b > 0.
""")
## Beta Prime
class betaprime_gen(rv_continuous):
def _rvs(self, a, b):
u1 = gamma.rvs(a,size=self._size)
u2 = gamma.rvs(b,size=self._size)
return (u1 / u2)
def _pdf(self, x, a, b):
return 1.0/special.beta(a,b)*x**(a-1.0)/(1+x)**(a+b)
def _logpdf(self, x, a, b):
return (a-1.0)*log(x) - (a+b)*log(1+x) - log(special.beta(a,b))
def _cdf_skip(self, x, a, b):
# remove for now: special.hyp2f1 is incorrect for large a
x = where(x==1.0, 1.0-1e-6,x)
return pow(x,a)*special.hyp2f1(a+b,a,1+a,-x)/a/special.beta(a,b)
def _munp(self, n, a, b):
if (n == 1.0):
return where(b > 1, a/(b-1.0), inf)
elif (n == 2.0):
return where(b > 2, a*(a+1.0)/((b-2.0)*(b-1.0)), inf)
elif (n == 3.0):
return where(b > 3, a*(a+1.0)*(a+2.0)/((b-3.0)*(b-2.0)*(b-1.0)),
inf)
elif (n == 4.0):
return where(b > 4,
a*(a+1.0)*(a+2.0)*(a+3.0)/((b-4.0)*(b-3.0) \
*(b-2.0)*(b-1.0)), inf)
else:
raise NotImplementedError
betaprime = betaprime_gen(a=0.0, b=500.0, name='betaprime', shapes='a, b',
extradoc="""
Beta prime distribution
betaprime.pdf(x, a, b) = gamma(a+b)/(gamma(a)*gamma(b))
                     * x**(a-1) * (1+x)**(-a-b)
for x > 0, a, b > 0.
""")
## Bradford
##
class bradford_gen(rv_continuous):
def _pdf(self, x, c):
return c / (c*x + 1.0) / log(1.0+c)
def _cdf(self, x, c):
return log(1.0+c*x) / log(c+1.0)
def _ppf(self, q, c):
return ((1.0+c)**q-1)/c
def _stats(self, c, moments='mv'):
k = log(1.0+c)
mu = (c-k)/(c*k)
mu2 = ((c+2.0)*k-2.0*c)/(2*c*k*k)
g1 = None
g2 = None
if 's' in moments:
g1 = sqrt(2)*(12*c*c-9*c*k*(c+2)+2*k*k*(c*(c+3)+3))
g1 /= sqrt(c*(c*(k-2)+2*k))*(3*c*(k-2)+6*k)
if 'k' in moments:
g2 = c**3*(k-3)*(k*(3*k-16)+24)+12*k*c*c*(k-4)*(k-3) \
+ 6*c*k*k*(3*k-14) + 12*k**3
g2 /= 3*c*(c*(k-2)+2*k)**2
return mu, mu2, g1, g2
def _entropy(self, c):
k = log(1+c)
return k/2.0 - log(c/k)
bradford = bradford_gen(a=0.0, b=1.0, name='bradford', longname="A Bradford",
shapes='c', extradoc="""
Bradford distribution
bradford.pdf(x,c) = c/(k*(1+c*x))
for 0 < x < 1, c > 0 and k = log(1+c).
""")
## Burr
# burr with d=1 is called the fisk distribution
class burr_gen(rv_continuous):
def _pdf(self, x, c, d):
return c*d*(x**(-c-1.0))*((1+x**(-c*1.0))**(-d-1.0))
def _cdf(self, x, c, d):
        return (1+x**(-c*1.0))**(-d*1.0)
def _ppf(self, q, c, d):
return (q**(-1.0/d)-1)**(-1.0/c)
def _stats(self, c, d, moments='mv'):
g2c, g2cd = gam(1-2.0/c), gam(2.0/c+d)
g1c, g1cd = gam(1-1.0/c), gam(1.0/c+d)
gd = gam(d)
k = gd*g2c*g2cd - g1c**2 * g1cd**2
mu = g1c*g1cd / gd
mu2 = k / gd**2.0
g1, g2 = None, None
g3c, g3cd = None, None
if 's' in moments:
g3c, g3cd = gam(1-3.0/c), gam(3.0/c+d)
g1 = 2*g1c**3 * g1cd**3 + gd*gd*g3c*g3cd - 3*gd*g2c*g1c*g1cd*g2cd
g1 /= sqrt(k**3)
if 'k' in moments:
if g3c is None:
g3c = gam(1-3.0/c)
if g3cd is None:
g3cd = gam(3.0/c+d)
g4c, g4cd = gam(1-4.0/c), gam(4.0/c+d)
g2 = 6*gd*g2c*g2cd * g1c**2 * g1cd**2 + gd**3 * g4c*g4cd
g2 -= 3*g1c**4 * g1cd**4 -4*gd**2*g3c*g1c*g1cd*g3cd
return mu, mu2, g1, g2
burr = burr_gen(a=0.0, name='burr', longname="Burr",
shapes="c, d", extradoc="""
Burr distribution
burr.pdf(x,c,d) = c*d * x**(-c-1) * (1+x**(-c))**(-d-1)
for x > 0.
""")
# Fisk distribution
# burr is a generalization
class fisk_gen(burr_gen):
def _pdf(self, x, c):
return burr_gen._pdf(self, x, c, 1.0)
def _cdf(self, x, c):
return burr_gen._cdf(self, x, c, 1.0)
def _ppf(self, x, c):
return burr_gen._ppf(self, x, c, 1.0)
def _stats(self, c):
return burr_gen._stats(self, c, 1.0)
def _entropy(self, c):
return 2 - log(c)
fisk = fisk_gen(a=0.0, name='fisk', longname="Fisk",
shapes='c', extradoc="""
Fisk distribution.
Also known as the log-logistic distribution.
Burr distribution with d=1.
"""
)
## Cauchy
# median = loc
class cauchy_gen(rv_continuous):
def _pdf(self, x):
return 1.0/pi/(1.0+x*x)
def _cdf(self, x):
return 0.5 + 1.0/pi*arctan(x)
def _ppf(self, q):
return tan(pi*q-pi/2.0)
def _sf(self, x):
return 0.5 - 1.0/pi*arctan(x)
def _isf(self, q):
return tan(pi/2.0-pi*q)
def _stats(self):
return inf, inf, nan, nan
def _entropy(self):
return log(4*pi)
cauchy = cauchy_gen(name='cauchy',longname='Cauchy',extradoc="""
Cauchy distribution
cauchy.pdf(x) = 1/(pi*(1+x**2))
This is the t distribution with one degree of freedom.
"""
)
## Chi
## (positive square-root of chi-square)
## chi(1, loc, scale) = halfnormal
## chi(2, 0, scale) = Rayleigh
## chi(3, 0, scale) = Maxwell
class chi_gen(rv_continuous):
def _rvs(self, df):
return sqrt(chi2.rvs(df,size=self._size))
def _pdf(self, x, df):
return x**(df-1.)*exp(-x*x*0.5)/(2.0)**(df*0.5-1)/gam(df*0.5)
def _cdf(self, x, df):
return special.gammainc(df*0.5,0.5*x*x)
def _ppf(self, q, df):
return sqrt(2*special.gammaincinv(df*0.5,q))
def _stats(self, df):
mu = sqrt(2)*special.gamma(df/2.0+0.5)/special.gamma(df/2.0)
mu2 = df - mu*mu
g1 = (2*mu**3.0 + mu*(1-2*df))/arr(mu2**1.5)
g2 = 2*df*(1.0-df)-6*mu**4 + 4*mu**2 * (2*df-1)
g2 /= arr(mu2**2.0)
return mu, mu2, g1, g2
chi = chi_gen(a=0.0,name='chi',shapes='df',extradoc="""
Chi distribution
chi.pdf(x,df) = x**(df-1)*exp(-x**2/2)/(2**(df/2-1)*gamma(df/2))
for x > 0.
"""
)
## Chi-squared (gamma-distributed with loc=0 and scale=2 and shape=df/2)
class chi2_gen(rv_continuous):
def _rvs(self, df):
return mtrand.chisquare(df,self._size)
def _pdf(self, x, df):
return exp(self._logpdf(x, df))
def _logpdf(self, x, df):
#term1 = (df/2.-1)*log(x)
#term1[(df==2)*(x==0)] = 0
#avoid 0*log(0)==nan
return (df/2.-1)*log(x+1e-300) - x/2. - gamln(df/2.) - (log(2)*df)/2.
## Px = x**(df/2.0-1)*exp(-x/2.0)
## Px /= special.gamma(df/2.0)* 2**(df/2.0)
## return log(Px)
def _cdf(self, x, df):
return special.chdtr(df, x)
def _sf(self, x, df):
return special.chdtrc(df, x)
def _isf(self, p, df):
return special.chdtri(df, p)
def _ppf(self, p, df):
return self._isf(1.0-p, df)
def _stats(self, df):
mu = df
mu2 = 2*df
g1 = 2*sqrt(2.0/df)
g2 = 12.0/df
return mu, mu2, g1, g2
chi2 = chi2_gen(a=0.0,name='chi2',longname='A chi-squared',shapes='df',
extradoc="""
Chi-squared distribution
chi2.pdf(x,df) = 1/(2*gamma(df/2)) * (x/2)**(df/2-1) * exp(-x/2)
"""
)
## Cosine (Approximation to the Normal)
class cosine_gen(rv_continuous):
def _pdf(self, x):
return 1.0/2/pi*(1+cos(x))
def _cdf(self, x):
return 1.0/2/pi*(pi + x + sin(x))
def _stats(self):
return 0.0, pi*pi/3.0-2.0, 0.0, -6.0*(pi**4-90)/(5.0*(pi*pi-6)**2)
def _entropy(self):
return log(4*pi)-1.0
cosine = cosine_gen(a=-pi,b=pi,name='cosine',extradoc="""
Cosine distribution (approximation to the normal)
cosine.pdf(x) = 1/(2*pi) * (1+cos(x))
for -pi <= x <= pi.
""")
## Double Gamma distribution
class dgamma_gen(rv_continuous):
def _rvs(self, a):
u = random(size=self._size)
return (gamma.rvs(a,size=self._size)*where(u>=0.5,1,-1))
def _pdf(self, x, a):
ax = abs(x)
return 1.0/(2*special.gamma(a))*ax**(a-1.0) * exp(-ax)
def _logpdf(self, x, a):
ax = abs(x)
return (a-1.0)*log(ax) - ax - log(2) - gamln(a)
def _cdf(self, x, a):
fac = 0.5*special.gammainc(a,abs(x))
return where(x>0,0.5+fac,0.5-fac)
def _sf(self, x, a):
fac = 0.5*special.gammainc(a,abs(x))
#return where(x>0,0.5-0.5*fac,0.5+0.5*fac)
return where(x>0,0.5-fac,0.5+fac)
def _ppf(self, q, a):
fac = special.gammainccinv(a,1-abs(2*q-1))
return where(q>0.5, fac, -fac)
def _stats(self, a):
mu2 = a*(a+1.0)
return 0.0, mu2, 0.0, (a+2.0)*(a+3.0)/mu2-3.0
dgamma = dgamma_gen(name='dgamma',longname="A double gamma",
shapes='a',extradoc="""
Double gamma distribution
dgamma.pdf(x,a) = 1/(2*gamma(a))*abs(x)**(a-1)*exp(-abs(x))
for a > 0.
"""
)
## Double Weibull distribution
##
class dweibull_gen(rv_continuous):
def _rvs(self, c):
u = random(size=self._size)
return weibull_min.rvs(c, size=self._size)*(where(u>=0.5,1,-1))
def _pdf(self, x, c):
ax = abs(x)
Px = c/2.0*ax**(c-1.0)*exp(-ax**c)
return Px
def _logpdf(self, x, c):
ax = abs(x)
return log(c) - log(2.0) + (c-1.0)*log(ax) - ax**c
def _cdf(self, x, c):
Cx1 = 0.5*exp(-abs(x)**c)
return where(x > 0, 1-Cx1, Cx1)
def _ppf_skip(self, q, c):
fac = where(q<=0.5,2*q,2*q-1)
fac = pow(arr(log(1.0/fac)),1.0/c)
return where(q>0.5,fac,-fac)
def _stats(self, c):
var = gam(1+2.0/c)
return 0.0, var, 0.0, gam(1+4.0/c)/var
dweibull = dweibull_gen(name='dweibull',longname="A double Weibull",
shapes='c',extradoc="""
Double Weibull distribution
dweibull.pdf(x,c) = c/2*abs(x)**(c-1)*exp(-abs(x)**c)
"""
)
## ERLANG
##
## Special case of the Gamma distribution with shape parameter an integer.
##
class erlang_gen(rv_continuous):
def _rvs(self, n):
return gamma.rvs(n,size=self._size)
    def _argcheck(self, n):
return (n > 0) & (floor(n)==n)
def _pdf(self, x, n):
Px = (x)**(n-1.0)*exp(-x)/special.gamma(n)
return Px
def _logpdf(self, x, n):
return (n-1.0)*log(x) - x - gamln(n)
def _cdf(self, x, n):
return special.gdtr(1.0,n,x)
def _sf(self, x, n):
return special.gdtrc(1.0,n,x)
def _ppf(self, q, n):
return special.gdtrix(1.0, n, q)
def _stats(self, n):
n = n*1.0
return n, n, 2/sqrt(n), 6/n
def _entropy(self, n):
return special.psi(n)*(1-n) + 1 + gamln(n)
erlang = erlang_gen(a=0.0,name='erlang',longname='An Erlang',
shapes='n',extradoc="""
Erlang distribution (Gamma with integer shape parameter)
"""
)
## Exponential (gamma distributed with a=1.0, loc=loc and scale=scale)
## scale == 1.0 / lambda
class expon_gen(rv_continuous):
def _rvs(self):
return mtrand.standard_exponential(self._size)
def _pdf(self, x):
return exp(-x)
def _logpdf(self, x):
return -x
def _cdf(self, x):
return -expm1(-x)
def _ppf(self, q):
return -log1p(-q)
def _sf(self,x):
return exp(-x)
def _logsf(self, x):
return -x
def _isf(self,q):
return -log(q)
def _stats(self):
return 1.0, 1.0, 2.0, 6.0
def _entropy(self):
return 1.0
expon = expon_gen(a=0.0,name='expon',longname="An exponential",
extradoc="""
Exponential distribution
expon.pdf(x) = exp(-x)
for x >= 0.
scale = 1.0 / lambda
"""
)
## Exponentiated Weibull
class exponweib_gen(rv_continuous):
def _pdf(self, x, a, c):
exc = exp(-x**c)
return a*c*(1-exc)**arr(a-1) * exc * x**(c-1)
def _logpdf(self, x, a, c):
exc = exp(-x**c)
return log(a) + log(c) + (a-1.)*log(1-exc) - x**c + (c-1.0)*log(x)
def _cdf(self, x, a, c):
exm1c = -expm1(-x**c)
return arr((exm1c)**a)
def _ppf(self, q, a, c):
return (-log1p(-q**(1.0/a)))**arr(1.0/c)
exponweib = exponweib_gen(a=0.0,name='exponweib',
longname="An exponentiated Weibull",
shapes="a, c",extradoc="""
Exponentiated Weibull distribution
exponweib.pdf(x,a,c) = a*c*(1-exp(-x**c))**(a-1)*exp(-x**c)*x**(c-1)
for x > 0, a, c > 0.
"""
)
## Exponential Power
class exponpow_gen(rv_continuous):
def _pdf(self, x, b):
xbm1 = arr(x**(b-1.0))
xb = xbm1 * x
return exp(1)*b*xbm1 * exp(xb - exp(xb))
def _logpdf(self, x, b):
xb = x**(b-1.0)*x
return 1 + log(b) + (b-1.0)*log(x) + xb - exp(xb)
def _cdf(self, x, b):
xb = arr(x**b)
return -expm1(-expm1(xb))
def _sf(self, x, b):
xb = arr(x**b)
return exp(-expm1(xb))
def _isf(self, x, b):
return (log1p(-log(x)))**(1./b)
def _ppf(self, q, b):
return pow(log1p(-log1p(-q)), 1.0/b)
exponpow = exponpow_gen(a=0.0,name='exponpow',longname="An exponential power",
shapes='b',extradoc="""
Exponential Power distribution
exponpow.pdf(x,b) = b*x**(b-1) * exp(1+x**b - exp(x**b))
for x >= 0, b > 0.
"""
)
## Fatigue-Life (Birnbaum-Saunders)
class fatiguelife_gen(rv_continuous):
def _rvs(self, c):
z = norm.rvs(size=self._size)
x = 0.5*c*z
x2 = x*x
t = 1.0 + 2*x2 + 2*x*sqrt(1 + x2)
return t
def _pdf(self, x, c):
return (x+1)/arr(2*c*sqrt(2*pi*x**3))*exp(-(x-1)**2/arr((2.0*x*c**2)))
def _logpdf(self, x, c):
return log(x+1) - (x-1)**2 / (2.0*x*c**2) - log(2*c) - 0.5*(log(2*pi) + 3*log(x))
def _cdf(self, x, c):
return special.ndtr(1.0/c*(sqrt(x)-1.0/arr(sqrt(x))))
def _ppf(self, q, c):
tmp = c*special.ndtri(q)
return 0.25*(tmp + sqrt(tmp**2 + 4))**2
def _stats(self, c):
c2 = c*c
mu = c2 / 2.0 + 1
den = 5*c2 + 4
mu2 = c2*den /4.0
g1 = 4*c*sqrt(11*c2+6.0)/den**1.5
g2 = 6*c2*(93*c2+41.0) / den**2.0
return mu, mu2, g1, g2
fatiguelife = fatiguelife_gen(a=0.0,name='fatiguelife',
                              longname="A fatigue-life (Birnbaum-Saunders)",
shapes='c',extradoc="""
Fatigue-life (Birnbaum-Saunders) distribution
fatiguelife.pdf(x,c) = (x+1)/(2*c*sqrt(2*pi*x**3)) * exp(-(x-1)**2/(2*x*c**2))
for x > 0.
"""
)
## Folded Cauchy
class foldcauchy_gen(rv_continuous):
def _rvs(self, c):
return abs(cauchy.rvs(loc=c,size=self._size))
def _pdf(self, x, c):
return 1.0/pi*(1.0/(1+(x-c)**2) + 1.0/(1+(x+c)**2))
def _cdf(self, x, c):
return 1.0/pi*(arctan(x-c) + arctan(x+c))
def _stats(self, c):
return inf, inf, nan, nan
# setting xb=1000 makes it possible to calculate the ppf for q up to 0.9993
foldcauchy = foldcauchy_gen(a=0.0, name='foldcauchy',xb=1000,
longname = "A folded Cauchy",
shapes='c',extradoc="""
Folded Cauchy distribution
foldcauchy.pdf(x,c) = 1/(pi*(1+(x-c)**2)) + 1/(pi*(1+(x+c)**2))
for x >= 0.
"""
)
## F
class f_gen(rv_continuous):
def _rvs(self, dfn, dfd):
return mtrand.f(dfn, dfd, self._size)
def _pdf(self, x, dfn, dfd):
# n = arr(1.0*dfn)
# m = arr(1.0*dfd)
# Px = m**(m/2) * n**(n/2) * x**(n/2-1)
# Px /= (m+n*x)**((n+m)/2)*special.beta(n/2,m/2)
return exp(self._logpdf(x, dfn, dfd))
def _logpdf(self, x, dfn, dfd):
n = 1.0*dfn
m = 1.0*dfd
lPx = m/2*log(m) + n/2*log(n) + (n/2-1)*log(x)
lPx -= ((n+m)/2)*log(m+n*x) + special.betaln(n/2,m/2)
return lPx
def _cdf(self, x, dfn, dfd):
return special.fdtr(dfn, dfd, x)
def _sf(self, x, dfn, dfd):
return special.fdtrc(dfn, dfd, x)
def _ppf(self, q, dfn, dfd):
return special.fdtri(dfn, dfd, q)
def _stats(self, dfn, dfd):
v2 = arr(dfd*1.0)
v1 = arr(dfn*1.0)
mu = where (v2 > 2, v2 / arr(v2 - 2), inf)
mu2 = 2*v2*v2*(v2+v1-2)/(v1*(v2-2)**2 * (v2-4))
mu2 = where(v2 > 4, mu2, inf)
g1 = 2*(v2+2*v1-2)/(v2-6)*sqrt((2*v2-4)/(v1*(v2+v1-2)))
g1 = where(v2 > 6, g1, nan)
g2 = 3/(2*v2-16)*(8+g1*g1*(v2-6))
g2 = where(v2 > 8, g2, nan)
return mu, mu2, g1, g2
f = f_gen(a=0.0,name='f',longname='An F',shapes="dfn, dfd",
extradoc="""
F distribution
df2**(df2/2) * df1**(df1/2) * x**(df1/2-1)
F.pdf(x,df1,df2) = --------------------------------------------
(df2+df1*x)**((df1+df2)/2) * B(df1/2, df2/2)
for x > 0.
"""
)
## Folded Normal
## abs(Z) where (Z is normal with mu=L and std=S so that c=abs(L)/S)
##
## note: the Regress+ docs have the scale parameter correct, but the first
## parameter given there is a shape parameter, A = c * scale
## Half-normal is folded normal with shape-parameter c=0.
class foldnorm_gen(rv_continuous):
def _rvs(self, c):
return abs(norm.rvs(loc=c,size=self._size))
def _pdf(self, x, c):
return sqrt(2.0/pi)*cosh(c*x)*exp(-(x*x+c*c)/2.0)
def _cdf(self, x, c,):
return special.ndtr(x-c) + special.ndtr(x+c) - 1.0
def _stats(self, c):
fac = special.erf(c/sqrt(2))
mu = sqrt(2.0/pi)*exp(-0.5*c*c)+c*fac
mu2 = c*c + 1 - mu*mu
c2 = c*c
g1 = sqrt(2/pi)*exp(-1.5*c2)*(4-pi*exp(c2)*(2*c2+1.0))
g1 += 2*c*fac*(6*exp(-c2) + 3*sqrt(2*pi)*c*exp(-c2/2.0)*fac + \
pi*c*(fac*fac-1))
g1 /= pi*mu2**1.5
g2 = c2*c2+6*c2+3+6*(c2+1)*mu*mu - 3*mu**4
g2 -= 4*exp(-c2/2.0)*mu*(sqrt(2.0/pi)*(c2+2)+c*(c2+3)*exp(c2/2.0)*fac)
g2 /= mu2**2.0
return mu, mu2, g1, g2
foldnorm = foldnorm_gen(a=0.0,name='foldnorm',longname='A folded normal',
shapes='c',extradoc="""
Folded normal distribution
foldnorm.pdf(x,c) = sqrt(2/pi) * cosh(c*x) * exp(-(x**2+c**2)/2)
for c >= 0.
"""
)
## Extreme Value Type II or Frechet
## (defined in Regress+ documentation as Extreme LB) as
## a limiting value distribution.
##
class frechet_r_gen(rv_continuous):
def _pdf(self, x, c):
return c*pow(x,c-1)*exp(-pow(x,c))
def _logpdf(self, x, c):
return log(c) + (c-1)*log(x) - pow(x,c)
def _cdf(self, x, c):
return -expm1(-pow(x,c))
def _ppf(self, q, c):
return pow(-log1p(-q),1.0/c)
def _munp(self, n, c):
return special.gamma(1.0+n*1.0/c)
def _entropy(self, c):
return -_EULER / c - log(c) + _EULER + 1
frechet_r = frechet_r_gen(a=0.0,name='frechet_r',longname="A Frechet right",
shapes='c',extradoc="""
A Frechet (right) distribution (also called Weibull minimum)
frechet_r.pdf(x,c) = c*x**(c-1)*exp(-x**c)
for x > 0, c > 0.
"""
)
weibull_min = frechet_r_gen(a=0.0,name='weibull_min',
longname="A Weibull minimum",
shapes='c',extradoc="""
A Weibull minimum distribution (also called a Frechet (right) distribution)
weibull_min.pdf(x,c) = c*x**(c-1)*exp(-x**c)
for x > 0, c > 0.
"""
)
class frechet_l_gen(rv_continuous):
def _pdf(self, x, c):
return c*pow(-x,c-1)*exp(-pow(-x,c))
def _cdf(self, x, c):
return exp(-pow(-x,c))
def _ppf(self, q, c):
return -pow(-log(q),1.0/c)
def _munp(self, n, c):
val = special.gamma(1.0+n*1.0/c)
if (int(n) % 2): sgn = -1
else: sgn = 1
return sgn*val
def _entropy(self, c):
return -_EULER / c - log(c) + _EULER + 1
frechet_l = frechet_l_gen(b=0.0,name='frechet_l',longname="A Frechet left",
shapes='c',extradoc="""
A Frechet (left) distribution (also called Weibull maximum)
frechet_l.pdf(x,c) = c * (-x)**(c-1) * exp(-(-x)**c)
for x < 0, c > 0.
"""
)
weibull_max = frechet_l_gen(b=0.0,name='weibull_max',
longname="A Weibull maximum",
shapes='c',extradoc="""
A Weibull maximum distribution (also called a Frechet (left) distribution)
weibull_max.pdf(x,c) = c * (-x)**(c-1) * exp(-(-x)**c)
for x < 0, c > 0.
"""
)
## Generalized Logistic
##
class genlogistic_gen(rv_continuous):
def _pdf(self, x, c):
Px = c*exp(-x)/(1+exp(-x))**(c+1.0)
return Px
def _logpdf(self, x, c):
return log(c) - x - (c+1.0)*log1p(exp(-x))
def _cdf(self, x, c):
Cx = (1+exp(-x))**(-c)
return Cx
def _ppf(self, q, c):
vals = -log(pow(q,-1.0/c)-1)
return vals
def _stats(self, c):
zeta = special.zeta
mu = _EULER + special.psi(c)
mu2 = pi*pi/6.0 + zeta(2,c)
g1 = -2*zeta(3,c) + 2*_ZETA3
g1 /= mu2**1.5
g2 = pi**4/15.0 + 6*zeta(4,c)
g2 /= mu2**2.0
return mu, mu2, g1, g2
genlogistic = genlogistic_gen(name='genlogistic',
longname="A generalized logistic",
shapes='c',extradoc="""
Generalized logistic distribution
genlogistic.pdf(x,c) = c*exp(-x) / (1+exp(-x))**(c+1)
for x > 0, c > 0.
"""
)
## Generalized Pareto
class genpareto_gen(rv_continuous):
def _argcheck(self, c):
c = arr(c)
self.b = where(c < 0, 1.0/abs(c), inf)
return where(c==0, 0, 1)
def _pdf(self, x, c):
Px = pow(1+c*x,arr(-1.0-1.0/c))
return Px
def _logpdf(self, x, c):
return (-1.0-1.0/c) * np.log1p(c*x)
def _cdf(self, x, c):
return 1.0 - pow(1+c*x,arr(-1.0/c))
def _ppf(self, q, c):
vals = 1.0/c * (pow(1-q, -c)-1)
return vals
def _munp(self, n, c):
k = arange(0,n+1)
val = (-1.0/c)**n * sum(comb(n,k)*(-1)**k / (1.0-c*k),axis=0)
return where(c*n < 1, val, inf)
def _entropy(self, c):
if (c > 0):
return 1+c
else:
self.b = -1.0 / c
return rv_continuous._entropy(self, c)
genpareto = genpareto_gen(a=0.0,name='genpareto',
longname="A generalized Pareto",
shapes='c',extradoc="""
Generalized Pareto distribution
genpareto.pdf(x,c) = (1+c*x)**(-1-1/c)
for c != 0, and for x >= 0 for all c, and x < 1/abs(c) for c < 0.
"""
)
## Generalized Exponential
class genexpon_gen(rv_continuous):
def _pdf(self, x, a, b, c):
return (a+b*(-expm1(-c*x)))*exp((-a-b)*x+b*(-expm1(-c*x))/c)
def _cdf(self, x, a, b, c):
return -expm1((-a-b)*x + b*(-expm1(-c*x))/c)
def _logpdf(self, x, a, b, c):
return np.log(a+b*(-expm1(-c*x))) + (-a-b)*x+b*(-expm1(-c*x))/c
genexpon = genexpon_gen(a=0.0,name='genexpon',
longname='A generalized exponential',
shapes='a, b, c',extradoc="""
Generalized exponential distribution (Ryu 1993)
f(x,a,b,c) = (a+b*(1-exp(-c*x))) * exp(-a*x-b*x+b/c*(1-exp(-c*x)))
for x >= 0, a,b,c > 0.
a, b, c are the first, second and third shape parameters.
References
----------
"The Exponential Distribution: Theory, Methods and Applications",
N. Balakrishnan, Asit P. Basu
"""
)
## Generalized Extreme Value
##  c=0 corresponds to the Gumbel distribution.
##  This version does accept c==0; gumbel_r can also be used directly
##  for that special case.
# new version by Per Brodtkorb, see ticket:767
# also works for c==0, special case is gumbel_r
# increased precision for small c
class genextreme_gen(rv_continuous):
def _argcheck(self, c):
min = np.minimum
max = np.maximum
sml = floatinfo.machar.xmin
#self.b = where(c > 0, 1.0 / c,inf)
#self.a = where(c < 0, 1.0 / c, -inf)
self.b = where(c > 0, 1.0 / max(c, sml),inf)
self.a = where(c < 0, 1.0 / min(c,-sml), -inf)
return where(abs(c)==inf, 0, 1) #True #(c!=0)
def _pdf(self, x, c):
## ex2 = 1-c*x
## pex2 = pow(ex2,1.0/c)
## p2 = exp(-pex2)*pex2/ex2
## return p2
cx = c*x
logex2 = where((c==0)*(x==x),0.0,log1p(-cx))
logpex2 = where((c==0)*(x==x),-x,logex2/c)
pex2 = exp(logpex2)
        # Handle special cases
logpdf = where((cx==1) | (cx==-inf),-inf,-pex2+logpex2-logex2)
putmask(logpdf,(c==1) & (x==1),0.0) # logpdf(c==1 & x==1) = 0; % 0^0 situation
return exp(logpdf)
def _cdf(self, x, c):
#return exp(-pow(1-c*x,1.0/c))
loglogcdf = where((c==0)*(x==x),-x,log1p(-c*x)/c)
return exp(-exp(loglogcdf))
def _ppf(self, q, c):
#return 1.0/c*(1.-(-log(q))**c)
x = -log(-log(q))
return where((c==0)*(x==x),x,-expm1(-c*x)/c)
def _stats(self,c):
g = lambda n : gam(n*c+1)
g1 = g(1)
g2 = g(2)
g3 = g(3);
g4 = g(4)
g2mg12 = where(abs(c)<1e-7,(c*pi)**2.0/6.0,g2-g1**2.0)
gam2k = where(abs(c)<1e-7,pi**2.0/6.0, expm1(gamln(2.0*c+1.0)-2*gamln(c+1.0))/c**2.0);
eps = 1e-14
gamk = where(abs(c)<eps,-_EULER,expm1(gamln(c+1))/c)
m = where(c<-1.0,nan,-gamk)
v = where(c<-0.5,nan,g1**2.0*gam2k)
        # skewness
sk1 = where(c<-1./3,nan,np.sign(c)*(-g3+(g2+2*g2mg12)*g1)/((g2mg12)**(3./2.)));
sk = where(abs(c)<=eps**0.29,12*sqrt(6)*_ZETA3/pi**3,sk1)
        # The kurtosis is:
ku1 = where(c<-1./4,nan,(g4+(-4*g3+3*(g2+g2mg12)*g1)*g1)/((g2mg12)**2))
ku = where(abs(c)<=(eps)**0.23,12.0/5.0,ku1-3.0)
return m,v,sk,ku
def _munp(self, n, c):
k = arange(0,n+1)
vals = 1.0/c**n * sum(comb(n,k) * (-1)**k * special.gamma(c*k + 1),axis=0)
return where(c*n > -1, vals, inf)
genextreme = genextreme_gen(name='genextreme',
longname="A generalized extreme value",
shapes='c',extradoc="""
Generalized extreme value (see gumbel_r for c=0)
genextreme.pdf(x,c) = exp(-exp(-x))*exp(-x) for c==0
genextreme.pdf(x,c) = exp(-(1-c*x)**(1/c))*(1-c*x)**(1/c-1)
for x <= 1/c, c > 0
"""
)
## Gamma (Use MATLAB and MATHEMATICA (b=theta=scale, a=alpha=shape) definition)
## gamma(a, loc, scale) with a an integer is the Erlang distribution
## gamma(1, loc, scale) is the Exponential distribution
## gamma(df/2, 0, 2) is the chi2 distribution with df degrees of freedom.
class gamma_gen(rv_continuous):
def _rvs(self, a):
return mtrand.standard_gamma(a, self._size)
def _pdf(self, x, a):
return x**(a-1)*exp(-x)/special.gamma(a)
def _logpdf(self, x, a):
return (a-1)*log(x) - x - gamln(a)
def _cdf(self, x, a):
return special.gammainc(a, x)
def _ppf(self, q, a):
return special.gammaincinv(a,q)
def _stats(self, a):
return a, a, 2.0/sqrt(a), 6.0/a
def _entropy(self, a):
return special.psi(a)*(1-a) + 1 + gamln(a)
def _fitstart(self, data):
a = 4 / _skew(data)**2
return super(gamma_gen, self)._fitstart(data, args=(a,))
def fit(self, data, *args, **kwds):
floc = kwds.get('floc', None)
if floc == 0:
xbar = ravel(data).mean()
logx_bar = ravel(log(data)).mean()
s = log(xbar) - logx_bar
def func(a):
return log(a) - special.digamma(a) - s
aest = (3-s + math.sqrt((s-3)**2 + 24*s)) / (12*s)
xa = aest*(1-0.4)
xb = aest*(1+0.4)
a = optimize.brentq(func, xa, xb, disp=0)
scale = xbar / a
return a, floc, scale
else:
return super(gamma_gen, self).fit(data, *args, **kwds)
gamma = gamma_gen(a=0.0,name='gamma',longname='A gamma',
shapes='a',extradoc="""
Gamma distribution
For a = integer, this is the Erlang distribution, and for a=1 it is the
exponential distribution.
gamma.pdf(x,a) = x**(a-1)*exp(-x)/gamma(a)
for x >= 0, a > 0.
"""
)
# Generalized Gamma
class gengamma_gen(rv_continuous):
def _argcheck(self, a, c):
return (a > 0) & (c != 0)
def _pdf(self, x, a, c):
return abs(c)* exp((c*a-1)*log(x)-x**c- gamln(a))
def _cdf(self, x, a, c):
val = special.gammainc(a,x**c)
cond = c + 0*val
return where(cond>0,val,1-val)
def _ppf(self, q, a, c):
val1 = special.gammaincinv(a,q)
val2 = special.gammaincinv(a,1.0-q)
ic = 1.0/c
cond = c+0*val1
return where(cond > 0,val1**ic,val2**ic)
def _munp(self, n, a, c):
return special.gamma(a+n*1.0/c) / special.gamma(a)
def _entropy(self, a,c):
val = special.psi(a)
return a*(1-val) + 1.0/c*val + gamln(a)-log(abs(c))
gengamma = gengamma_gen(a=0.0, name='gengamma',
longname='A generalized gamma',
shapes="a, c", extradoc="""
Generalized gamma distribution
gengamma.pdf(x,a,c) = abs(c)*x**(c*a-1)*exp(-x**c)/gamma(a)
for x > 0, a > 0, and c != 0.
"""
)
## Generalized Half-Logistic
##
class genhalflogistic_gen(rv_continuous):
def _argcheck(self, c):
self.b = 1.0 / c
return (c > 0)
def _pdf(self, x, c):
limit = 1.0/c
tmp = arr(1-c*x)
tmp0 = tmp**(limit-1)
tmp2 = tmp0*tmp
return 2*tmp0 / (1+tmp2)**2
def _cdf(self, x, c):
limit = 1.0/c
tmp = arr(1-c*x)
tmp2 = tmp**(limit)
return (1.0-tmp2) / (1+tmp2)
def _ppf(self, q, c):
return 1.0/c*(1-((1.0-q)/(1.0+q))**c)
def _entropy(self,c):
return 2 - (2*c+1)*log(2)
genhalflogistic = genhalflogistic_gen(a=0.0, name='genhalflogistic',
longname="A generalized half-logistic",
shapes='c',extradoc="""
Generalized half-logistic
genhalflogistic.pdf(x,c) = 2*(1-c*x)**(1/c-1) / (1+(1-c*x)**(1/c))**2
for 0 <= x <= 1/c, and c > 0.
"""
)
## Gompertz (Truncated Gumbel)
## Defined for x>=0
class gompertz_gen(rv_continuous):
def _pdf(self, x, c):
ex = exp(x)
return c*ex*exp(-c*(ex-1))
def _cdf(self, x, c):
return 1.0-exp(-c*(exp(x)-1))
def _ppf(self, q, c):
return log(1-1.0/c*log(1-q))
def _entropy(self, c):
return 1.0 - log(c) - exp(c)*special.expn(1,c)
gompertz = gompertz_gen(a=0.0, name='gompertz',
longname="A Gompertz (truncated Gumbel) distribution",
shapes='c',extradoc="""
Gompertz (truncated Gumbel) distribution
gompertz.pdf(x,c) = c*exp(x) * exp(-c*(exp(x)-1))
for x >= 0, c > 0.
"""
)
## Gumbel, Log-Weibull, Fisher-Tippett, Gompertz
## The left-skewed and right-skewed Gumbel distributions
## are available as gumbel_l and gumbel_r.
class gumbel_r_gen(rv_continuous):
def _pdf(self, x):
ex = exp(-x)
return ex*exp(-ex)
def _logpdf(self, x):
return -x - exp(-x)
def _cdf(self, x):
return exp(-exp(-x))
def _logcdf(self, x):
return -exp(-x)
def _ppf(self, q):
return -log(-log(q))
def _stats(self):
return _EULER, pi*pi/6.0, \
12*sqrt(6)/pi**3 * _ZETA3, 12.0/5
def _entropy(self):
return 1.0608407169541684911
gumbel_r = gumbel_r_gen(name='gumbel_r',longname="A (right-skewed) Gumbel",
extradoc="""
Right-skewed Gumbel (Log-Weibull, Fisher-Tippett, Gompertz) distribution
gumbel_r.pdf(x) = exp(-(x+exp(-x)))
"""
)
class gumbel_l_gen(rv_continuous):
def _pdf(self, x):
ex = exp(x)
return ex*exp(-ex)
def _logpdf(self, x):
return x - exp(x)
def _cdf(self, x):
return 1.0-exp(-exp(x))
def _ppf(self, q):
return log(-log(1-q))
def _stats(self):
return -_EULER, pi*pi/6.0, \
-12*sqrt(6)/pi**3 * _ZETA3, 12.0/5
def _entropy(self):
return 1.0608407169541684911
gumbel_l = gumbel_l_gen(name='gumbel_l',longname="A left-skewed Gumbel",
extradoc="""
Left-skewed Gumbel distribution
gumbel_l.pdf(x) = exp(x - exp(x))
"""
)
# Half-Cauchy
class halfcauchy_gen(rv_continuous):
def _pdf(self, x):
return 2.0/pi/(1.0+x*x)
def _logpdf(self, x):
return np.log(2.0/pi) - np.log1p(x*x)
def _cdf(self, x):
return 2.0/pi*arctan(x)
def _ppf(self, q):
return tan(pi/2*q)
def _stats(self):
return inf, inf, nan, nan
def _entropy(self):
return log(2*pi)
halfcauchy = halfcauchy_gen(a=0.0,name='halfcauchy',
longname="A Half-Cauchy",extradoc="""
Half-Cauchy distribution
halfcauchy.pdf(x) = 2/(pi*(1+x**2))
for x >= 0.
"""
)
## Half-Logistic
##
class halflogistic_gen(rv_continuous):
def _pdf(self, x):
return 0.5/(cosh(x/2.0))**2.0
def _cdf(self, x):
return tanh(x/2.0)
def _ppf(self, q):
return 2*arctanh(q)
def _munp(self, n):
if n==1: return 2*log(2)
if n==2: return pi*pi/3.0
if n==3: return 9*_ZETA3
if n==4: return 7*pi**4 / 15.0
return 2*(1-pow(2.0,1-n))*special.gamma(n+1)*special.zeta(n,1)
def _entropy(self):
return 2-log(2)
halflogistic = halflogistic_gen(a=0.0, name='halflogistic',
longname="A half-logistic",
extradoc="""
Half-logistic distribution
halflogistic.pdf(x) = 2*exp(-x)/(1+exp(-x))**2 = 1/2*sech(x/2)**2
for x >= 0.
"""
)
## Half-normal = chi(1, loc, scale)
class halfnorm_gen(rv_continuous):
def _rvs(self):
return abs(norm.rvs(size=self._size))
def _pdf(self, x):
return sqrt(2.0/pi)*exp(-x*x/2.0)
def _logpdf(self, x):
return 0.5 * np.log(2.0/pi) - x*x/2.0
def _cdf(self, x):
return special.ndtr(x)*2-1.0
def _ppf(self, q):
return special.ndtri((1+q)/2.0)
def _stats(self):
return sqrt(2.0/pi), 1-2.0/pi, sqrt(2)*(4-pi)/(pi-2)**1.5, \
8*(pi-3)/(pi-2)**2
def _entropy(self):
return 0.5*log(pi/2.0)+0.5
halfnorm = halfnorm_gen(a=0.0, name='halfnorm',
longname="A half-normal",
extradoc="""
Half-normal distribution
halfnorm.pdf(x) = sqrt(2/pi) * exp(-x**2/2)
for x > 0.
"""
)
## Hyperbolic Secant
class hypsecant_gen(rv_continuous):
def _pdf(self, x):
return 1.0/(pi*cosh(x))
def _cdf(self, x):
return 2.0/pi*arctan(exp(x))
def _ppf(self, q):
return log(tan(pi*q/2.0))
def _stats(self):
return 0, pi*pi/4, 0, 2
def _entropy(self):
return log(2*pi)
hypsecant = hypsecant_gen(name='hypsecant',longname="A hyperbolic secant",
extradoc="""
Hyperbolic secant distribution
hypsecant.pdf(x) = 1/pi * sech(x)
"""
)
## Gauss Hypergeometric
class gausshyper_gen(rv_continuous):
def _argcheck(self, a, b, c, z):
return (a > 0) & (b > 0) & (c==c) & (z==z)
def _pdf(self, x, a, b, c, z):
Cinv = gam(a)*gam(b)/gam(a+b)*special.hyp2f1(c,a,a+b,-z)
return 1.0/Cinv * x**(a-1.0) * (1.0-x)**(b-1.0) / (1.0+z*x)**c
def _munp(self, n, a, b, c, z):
fac = special.beta(n+a,b) / special.beta(a,b)
num = special.hyp2f1(c,a+n,a+b+n,-z)
den = special.hyp2f1(c,a,a+b,-z)
return fac*num / den
gausshyper = gausshyper_gen(a=0.0, b=1.0, name='gausshyper',
longname="A Gauss hypergeometric",
shapes="a, b, c, z",
extradoc="""
Gauss hypergeometric distribution
gausshyper.pdf(x,a,b,c,z) = C * x**(a-1) * (1-x)**(b-1) * (1+z*x)**(-c)
for 0 <= x <= 1, a > 0, b > 0, and
C = 1/(B(a,b)F[2,1](c,a;a+b;-z))
"""
)
## Inverted Gamma
# special case of generalized gamma with c=-1
#
class invgamma_gen(rv_continuous):
def _pdf(self, x, a):
return exp(self._logpdf(x,a))
def _logpdf(self, x, a):
return (-(a+1)*log(x)-gamln(a) - 1.0/x)
def _cdf(self, x, a):
return 1.0-special.gammainc(a, 1.0/x)
def _ppf(self, q, a):
return 1.0/special.gammaincinv(a,1-q)
def _munp(self, n, a):
return exp(gamln(a-n) - gamln(a))
def _entropy(self, a):
return a - (a+1.0)*special.psi(a) + gamln(a)
invgamma = invgamma_gen(a=0.0, name='invgamma',longname="An inverted gamma",
shapes='a',extradoc="""
Inverted gamma distribution
invgamma.pdf(x,a) = x**(-a-1)/gamma(a) * exp(-1/x)
for x > 0, a > 0.
"""
)
## Inverse Normal Distribution
# scale is gamma from DATAPLOT and B from Regress
_invnorm_msg = \
"""The `invnorm` distribution will be renamed to `invgauss` after scipy 0.9"""
class invnorm_gen(rv_continuous):
def _rvs(self, mu):
warnings.warn(_invnorm_msg, DeprecationWarning)
return mtrand.wald(mu, 1.0, size=self._size)
def _pdf(self, x, mu):
warnings.warn(_invnorm_msg, DeprecationWarning)
return 1.0/sqrt(2*pi*x**3.0)*exp(-1.0/(2*x)*((x-mu)/mu)**2)
def _logpdf(self, x, mu):
warnings.warn(_invnorm_msg, DeprecationWarning)
return -0.5*log(2*pi) - 1.5*log(x) - ((x-mu)/mu)**2/(2*x)
def _cdf(self, x, mu):
warnings.warn(_invnorm_msg, DeprecationWarning)
fac = sqrt(1.0/x)
C1 = norm.cdf(fac*(x-mu)/mu)
C1 += exp(2.0/mu)*norm.cdf(-fac*(x+mu)/mu)
return C1
def _stats(self, mu):
warnings.warn(_invnorm_msg, DeprecationWarning)
return mu, mu**3.0, 3*sqrt(mu), 15*mu
invnorm = invnorm_gen(a=0.0, name='invnorm', longname="An inverse normal",
shapes="mu",extradoc="""
Inverse normal distribution
NOTE: `invnorm` will be renamed to `invgauss` after scipy 0.9
invnorm.pdf(x,mu) = 1/sqrt(2*pi*x**3) * exp(-(x-mu)**2/(2*x*mu**2))
for x > 0.
"""
)
## Inverse Gaussian Distribution (used to be called 'invnorm')
# scale is gamma from DATAPLOT and B from Regress
class invgauss_gen(rv_continuous):
def _rvs(self, mu):
return mtrand.wald(mu, 1.0, size=self._size)
def _pdf(self, x, mu):
return 1.0/sqrt(2*pi*x**3.0)*exp(-1.0/(2*x)*((x-mu)/mu)**2)
def _logpdf(self, x, mu):
return -0.5*log(2*pi) - 1.5*log(x) - ((x-mu)/mu)**2/(2*x)
def _cdf(self, x, mu):
fac = sqrt(1.0/x)
C1 = norm.cdf(fac*(x-mu)/mu)
C1 += exp(2.0/mu)*norm.cdf(-fac*(x+mu)/mu)
return C1
def _stats(self, mu):
return mu, mu**3.0, 3*sqrt(mu), 15*mu
invgauss = invgauss_gen(a=0.0, name='invgauss', longname="An inverse Gaussian",
shapes="mu",extradoc="""
Inverse Gaussian distribution
invgauss.pdf(x,mu) = 1/sqrt(2*pi*x**3) * exp(-(x-mu)**2/(2*x*mu**2))
for x > 0.
"""
)
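# Quick consistency sketch (hedged): the first two moments returned by _stats above are mu and
# mu**3, so for mu=0.5 one expects mean 0.5 and variance 0.125.
#   >>> from scipy.stats import invgauss
#   >>> invgauss.stats(0.5, moments='mv')       # (0.5, 0.125)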
## Inverted Weibull
class invweibull_gen(rv_continuous):
def _pdf(self, x, c):
xc1 = x**(-c-1.0)
#xc2 = xc1*x
xc2 = x**(-c)
xc2 = exp(-xc2)
return c*xc1*xc2
def _cdf(self, x, c):
xc1 = x**(-c)
return exp(-xc1)
def _ppf(self, q, c):
return pow(-log(q),arr(-1.0/c))
def _entropy(self, c):
return 1+_EULER + _EULER / c - log(c)
invweibull = invweibull_gen(a=0,name='invweibull',
longname="An inverted Weibull",
shapes='c',extradoc="""
Inverted Weibull distribution
invweibull.pdf(x,c) = c*x**(-c-1)*exp(-x**(-c))
for x > 0, c > 0.
"""
)
## Johnson SB
class johnsonsb_gen(rv_continuous):
def _argcheck(self, a, b):
return (b > 0) & (a==a)
def _pdf(self, x, a, b):
trm = norm.pdf(a+b*log(x/(1.0-x)))
return b*1.0/(x*(1-x))*trm
def _cdf(self, x, a, b):
return norm.cdf(a+b*log(x/(1.0-x)))
def _ppf(self, q, a, b):
return 1.0/(1+exp(-1.0/b*(norm.ppf(q)-a)))
johnsonsb = johnsonsb_gen(a=0.0,b=1.0,name='johnsonsb',
longname="A Johnson SB",
shapes="a, b",extradoc="""
Johnson SB distribution
johnsonsb.pdf(x,a,b) = b/(x*(1-x)) * phi(a + b*log(x/(1-x)))
for 0 < x < 1 and a,b > 0, and phi is the normal pdf.
"""
)
## Johnson SU
class johnsonsu_gen(rv_continuous):
def _argcheck(self, a, b):
return (b > 0) & (a==a)
def _pdf(self, x, a, b):
x2 = x*x
trm = norm.pdf(a+b*log(x+sqrt(x2+1)))
return b*1.0/sqrt(x2+1.0)*trm
def _cdf(self, x, a, b):
return norm.cdf(a+b*log(x+sqrt(x*x+1)))
def _ppf(self, q, a, b):
return sinh((norm.ppf(q)-a)/b)
johnsonsu = johnsonsu_gen(name='johnsonsu',longname="A Johnson SU",
shapes="a, b", extradoc="""
Johnson SU distribution
johnsonsu.pdf(x,a,b) = b/sqrt(x**2+1) * phi(a + b*log(x+sqrt(x**2+1)))
for all x, a,b > 0, and phi is the normal pdf.
"""
)
## Laplace Distribution
class laplace_gen(rv_continuous):
def _rvs(self):
return mtrand.laplace(0, 1, size=self._size)
def _pdf(self, x):
return 0.5*exp(-abs(x))
def _cdf(self, x):
return where(x > 0, 1.0-0.5*exp(-x), 0.5*exp(x))
def _ppf(self, q):
return where(q > 0.5, -log(2*(1-q)), log(2*q))
def _stats(self):
return 0, 2, 0, 3
def _entropy(self):
return log(2)+1
laplace = laplace_gen(name='laplace', longname="A Laplace",
extradoc="""
Laplacian distribution
laplace.pdf(x) = 1/2*exp(-abs(x))
"""
)
## Levy Distribution
class levy_gen(rv_continuous):
def _pdf(self, x):
return 1/sqrt(2*pi*x)/x*exp(-1/(2*x))
def _cdf(self, x):
return 2*(1-norm._cdf(1/sqrt(x)))
def _ppf(self, q):
val = norm._ppf(1-q/2.0)
return 1.0/(val*val)
def _stats(self):
return inf, inf, nan, nan
levy = levy_gen(a=0.0,name="levy", longname = "A Levy", extradoc="""
Levy distribution
levy.pdf(x) = 1/(x*sqrt(2*pi*x)) * exp(-1/(2*x))
for x > 0.
This is the same as the Levy-stable distribution with a=1/2 and b=1.
"""
)
## Left-skewed Levy Distribution
class levy_l_gen(rv_continuous):
def _pdf(self, x):
ax = abs(x)
return 1/sqrt(2*pi*ax)/ax*exp(-1/(2*ax))
def _cdf(self, x):
ax = abs(x)
return 2*norm._cdf(1/sqrt(ax))-1
def _ppf(self, q):
val = norm._ppf((q+1.0)/2)
return -1.0/(val*val)
def _stats(self):
return inf, inf, nan, nan
levy_l = levy_l_gen(b=0.0,name="levy_l", longname = "A left-skewed Levy", extradoc="""
Left-skewed Levy distribution
levy_l.pdf(x) = 1/(abs(x)*sqrt(2*pi*abs(x))) * exp(-1/(2*abs(x)))
for x < 0.
This is the same as the Levy-stable distribution with a=1/2 and b=-1.
"""
)
## Levy-stable Distribution (only random variates)
class levy_stable_gen(rv_continuous):
def _rvs(self, alpha, beta):
sz = self._size
TH = uniform.rvs(loc=-pi/2.0,scale=pi,size=sz)
W = expon.rvs(size=sz)
if alpha==1:
return 2/pi*(pi/2+beta*TH)*tan(TH)-beta*log((pi/2*W*cos(TH))/(pi/2+beta*TH))
# else
ialpha = 1.0/alpha
aTH = alpha*TH
if beta==0:
return W/(cos(TH)/tan(aTH)+sin(TH))*((cos(aTH)+sin(aTH)*tan(TH))/W)**ialpha
# else
val0 = beta*tan(pi*alpha/2)
th0 = arctan(val0)/alpha
val3 = W/(cos(TH)/tan(alpha*(th0+TH))+sin(TH))
res3 = val3*((cos(aTH)+sin(aTH)*tan(TH)-val0*(sin(aTH)-cos(aTH)*tan(TH)))/W)**ialpha
return res3
def _argcheck(self, alpha, beta):
if beta == -1:
self.b = 0.0
elif beta == 1:
self.a = 0.0
return (alpha > 0) & (alpha <= 2) & (beta <= 1) & (beta >= -1)
def _pdf(self, x, alpha, beta):
raise NotImplementedError
levy_stable = levy_stable_gen(name='levy_stable', longname="A Levy-stable",
shapes="alpha, beta", extradoc="""
Levy-stable distribution (only random variates available -- ignore other docs)
"""
)
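# Usage sketch (hedged): only random variates are implemented here; pdf/cdf fall back to the
# unimplemented _pdf above. For example:
#   >>> from scipy.stats import levy_stable
#   >>> levy_stable.rvs(1.5, 0.0, size=5)       # samples via the transformation coded in _rvs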
## Logistic (special case of generalized logistic with c=1)
## Sech-squared
class logistic_gen(rv_continuous):
def _rvs(self):
return mtrand.logistic(size=self._size)
def _pdf(self, x):
ex = exp(-x)
return ex / (1+ex)**2.0
def _cdf(self, x):
return 1.0/(1+exp(-x))
def _ppf(self, q):
return -log(1.0/q-1)
def _stats(self):
return 0, pi*pi/3.0, 0, 6.0/5.0
def _entropy(self):
return 2.0  # differential entropy of the standard logistic
logistic = logistic_gen(name='logistic', longname="A logistic",
extradoc="""
Logistic distribution
logistic.pdf(x) = exp(-x)/(1+exp(-x))**2
"""
)
## Log Gamma
#
class loggamma_gen(rv_continuous):
def _rvs(self, c):
return log(mtrand.gamma(c, size=self._size))
def _pdf(self, x, c):
return exp(c*x-exp(x)-gamln(c))
def _cdf(self, x, c):
return special.gammainc(c, exp(x))
def _ppf(self, q, c):
return log(special.gammaincinv(c,q))
def _munp(self,n,*args):
# use generic moment calculation using ppf
return self._mom0_sc(n,*args)
loggamma = loggamma_gen(name='loggamma', longname="A log gamma", shapes='c',
extradoc="""
Log gamma distribution
loggamma.pdf(x,c) = exp(c*x-exp(x)) / gamma(c)
for all x, c > 0.
"""
)
## Log-Laplace (Log Double Exponential)
##
class loglaplace_gen(rv_continuous):
def _pdf(self, x, c):
cd2 = c/2.0
c = where(x < 1, c, -c)
return cd2*x**(c-1)
def _cdf(self, x, c):
return where(x < 1, 0.5*x**c, 1-0.5*x**(-c))
def _ppf(self, q, c):
return where(q < 0.5, (2.0*q)**(1.0/c), (2*(1.0-q))**(-1.0/c))
def _entropy(self, c):
return log(2.0/c) + 1.0
loglaplace = loglaplace_gen(a=0.0, name='loglaplace',
longname="A log-Laplace",shapes='c',
extradoc="""
Log-Laplace distribution (Log Double Exponential)
loglaplace.pdf(x,c) = c/2*x**(c-1) for 0 < x < 1
= c/2*x**(-c-1) for x >= 1
for c > 0.
"""
)
## Lognormal (Cobb-Douglass)
## std is a shape parameter and is the variance of the underlying
## distribution.
## the mean of the underlying distribution is log(scale)
class lognorm_gen(rv_continuous):
def _rvs(self, s):
return exp(s * norm.rvs(size=self._size))
def _pdf(self, x, s):
Px = exp(-log(x)**2 / (2*s**2))
return Px / (s*x*sqrt(2*pi))
def _cdf(self, x, s):
return norm.cdf(log(x)/s)
def _ppf(self, q, s):
return exp(s*norm._ppf(q))
def _stats(self, s):
p = exp(s*s)
mu = sqrt(p)
mu2 = p*(p-1)
g1 = sqrt((p-1))*(2+p)
g2 = numpy.polyval([1,2,3,0,-6.0],p)
return mu, mu2, g1, g2
def _entropy(self, s):
return 0.5*(1+log(2*pi)+2*log(s))
lognorm = lognorm_gen(a=0.0, name='lognorm',
longname='A lognormal', shapes='s',
extradoc="""
Lognormal distribution
lognorm.pdf(x,s) = 1/(s*x*sqrt(2*pi)) * exp(-1/2*(log(x)/s)**2)
for x > 0, s > 0.
If log x is normally distributed with mean mu and variance sigma**2,
then x is log-normally distributed with shape parameter sigma and scale
parameter exp(mu).
"""
)
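# Parameter-mapping sketch (hedged), following the note in the extradoc: for an underlying
# normal with mean mu and standard deviation sigma, use shape s=sigma and scale=exp(mu).
#   >>> import numpy as np
#   >>> from scipy.stats import lognorm
#   >>> mu, sigma = 1.5, 0.75
#   >>> lognorm.mean(sigma, scale=np.exp(mu))   # equals exp(mu + sigma**2/2)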
# Gibrat's distribution is just lognormal with s=1
class gilbrat_gen(lognorm_gen):
def _rvs(self):
return lognorm_gen._rvs(self, 1.0)
def _pdf(self, x):
return lognorm_gen._pdf(self, x, 1.0)
def _cdf(self, x):
return lognorm_gen._cdf(self, x, 1.0)
def _ppf(self, q):
return lognorm_gen._ppf(self, q, 1.0)
def _stats(self):
return lognorm_gen._stats(self, 1.0)
def _entropy(self):
return 0.5*log(2*pi) + 0.5
gilbrat = gilbrat_gen(a=0.0, name='gilbrat', longname='A Gilbrat',
extradoc="""
Gilbrat distribution
gilbrat.pdf(x) = 1/(x*sqrt(2*pi)) * exp(-1/2*(log(x))**2)
"""
)
# MAXWELL
class maxwell_gen(rv_continuous):
"""A Maxwell continuous random variable.
%(before_notes)s
Notes
-----
A special case of a `chi` distribution, with ``df = 3``, ``loc = 0.0``,
and given ``scale = 1.0 / sqrt(a)``, where a is the parameter used in
the Mathworld description [1]_.
The probability density function is given by :math:`\sqrt{2/\pi}\,x^2 \exp(-x^2/2)`
for ``x > 0``.
References
----------
.. [1] http://mathworld.wolfram.com/MaxwellDistribution.html
%(example)s
"""
def _rvs(self):
return chi.rvs(3.0,size=self._size)
def _pdf(self, x):
return sqrt(2.0/pi)*x*x*exp(-x*x/2.0)
def _cdf(self, x):
return special.gammainc(1.5,x*x/2.0)
def _ppf(self, q):
return sqrt(2*special.gammaincinv(1.5,q))
def _stats(self):
val = 3*pi-8
return 2*sqrt(2.0/pi), 3-8/pi, sqrt(2)*(32-10*pi)/val**1.5, \
(-12*pi*pi + 160*pi - 384) / val**2.0
def _entropy(self):
return _EULER + 0.5*log(2*pi)-0.5
maxwell = maxwell_gen(a=0.0, name='maxwell', extradoc="""
Maxwell distribution
maxwell.pdf(x) = sqrt(2/pi) * x**2 * exp(-x**2/2)
for x > 0.
"""
)
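# Parameter-mapping sketch (hedged), following the class docstring: for the MathWorld
# parameter a, use scale = 1/sqrt(a); with scale=1 the mean is 2*sqrt(2/pi).
#   >>> from scipy.stats import maxwell
#   >>> maxwell.mean(scale=1.0)                 # 2*sqrt(2/pi) ~ 1.596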
# Mielke's Beta-Kappa
class mielke_gen(rv_continuous):
def _pdf(self, x, k, s):
return k*x**(k-1.0) / (1.0+x**s)**(1.0+k*1.0/s)
def _cdf(self, x, k, s):
return x**k / (1.0+x**s)**(k*1.0/s)
def _ppf(self, q, k, s):
qsk = pow(q,s*1.0/k)
return pow(qsk/(1.0-qsk),1.0/s)
mielke = mielke_gen(a=0.0, name='mielke', longname="A Mielke's Beta-Kappa",
shapes="k, s", extradoc="""
Mielke's Beta-Kappa distribution
mielke.pdf(x,k,s) = k*x**(k-1) / (1+x**s)**(1+k/s)
for x > 0.
"""
)
# Nakagami (cf Chi)
class nakagami_gen(rv_continuous):
def _pdf(self, x, nu):
return 2*nu**nu/gam(nu)*(x**(2*nu-1.0))*exp(-nu*x*x)
def _cdf(self, x, nu):
return special.gammainc(nu,nu*x*x)
def _ppf(self, q, nu):
return sqrt(1.0/nu*special.gammaincinv(nu,q))
def _stats(self, nu):
mu = gam(nu+0.5)/gam(nu)/sqrt(nu)
mu2 = 1.0-mu*mu
g1 = mu*(1-4*nu*mu2)/2.0/nu/mu2**1.5
g2 = -6*mu**4*nu + (8*nu-2)*mu**2-2*nu + 1
g2 /= nu*mu2**2.0
return mu, mu2, g1, g2
nakagami = nakagami_gen(a=0.0, name="nakagami", longname="A Nakagami",
shapes='nu', extradoc="""
Nakagami distribution
nakagami.pdf(x,nu) = 2*nu**nu/gamma(nu) * x**(2*nu-1) * exp(-nu*x**2)
for x > 0, nu > 0.
"""
)
# Non-central chi-squared
# nc is lambda of definition, df is nu
class ncx2_gen(rv_continuous):
def _rvs(self, df, nc):
return mtrand.noncentral_chisquare(df,nc,self._size)
def _pdf(self, x, df, nc):
a = arr(df/2.0)
Px = exp(-nc/2.0)*special.hyp0f1(a,nc*x/4.0)
Px *= exp(-x/2.0)*x**(a-1) / arr(2**a * special.gamma(a))
return Px
def _cdf(self, x, df, nc):
return special.chndtr(x,df,nc)
def _ppf(self, q, df, nc):
return special.chndtrix(q,df,nc)
def _stats(self, df, nc):
val = df + 2.0*nc
return df + nc, 2*val, sqrt(8)*(val+nc)/val**1.5, \
12.0*(val+2*nc)/val**2.0
ncx2 = ncx2_gen(a=0.0, name='ncx2', longname="A non-central chi-squared",
shapes="df, nc", extradoc="""
Non-central chi-squared distribution
ncx2.pdf(x,df,nc) = exp(-(nc+x)/2) * 1/2 * (x/nc)**((df-2)/4)
* I[(df-2)/2](sqrt(nc*x))
for x > 0.
"""
)
# Non-central F
class ncf_gen(rv_continuous):
def _rvs(self, dfn, dfd, nc):
return mtrand.noncentral_f(dfn,dfd,nc,self._size)
def _pdf_skip(self, x, dfn, dfd, nc):
n1,n2 = dfn, dfd
term = -nc/2+nc*n1*x/(2*(n2+n1*x)) + gamln(n1/2.)+gamln(1+n2/2.)
term -= gamln((n1+n2)/2.0)
Px = exp(term)
Px *= n1**(n1/2) * n2**(n2/2) * x**(n1/2-1)
Px *= (n2+n1*x)**(-(n1+n2)/2)
Px *= special.assoc_laguerre(-nc*n1*x/(2.0*(n2+n1*x)),n2/2,n1/2-1)
Px /= special.beta(n1/2,n2/2)
#this function does not have a return
# drop it for now, the generic function seems to work ok
def _cdf(self, x, dfn, dfd, nc):
return special.ncfdtr(dfn,dfd,nc,x)
def _ppf(self, q, dfn, dfd, nc):
return special.ncfdtri(dfn, dfd, nc, q)
def _munp(self, n, dfn, dfd, nc):
val = (dfn *1.0/dfd)**n
term = gamln(n+0.5*dfn) + gamln(0.5*dfd-n) - gamln(dfd*0.5)
val *= exp(-nc / 2.0+term)
val *= special.hyp1f1(n+0.5*dfn, 0.5*dfn, 0.5*nc)
return val
def _stats(self, dfn, dfd, nc):
mu = where(dfd <= 2, inf, dfd / (dfd-2.0)*(1+nc*1.0/dfn))
mu2 = where(dfd <=4, inf, 2*(dfd*1.0/dfn)**2.0 * \
((dfn+nc/2.0)**2.0 + (dfn+nc)*(dfd-2.0)) / \
((dfd-2.0)**2.0 * (dfd-4.0)))
return mu, mu2, None, None
ncf = ncf_gen(a=0.0, name='ncf', longname="A non-central F distribution",
shapes="dfn, dfd, nc", extradoc="""
Non-central F distribution
ncf.pdf(x,df1,df2,nc) = exp(nc/2 + nc*df1*x/(2*(df1*x+df2)))
* df1**(df1/2) * df2**(df2/2) * x**(df1/2-1)
* (df2+df1*x)**(-(df1+df2)/2)
* gamma(df1/2)*gamma(1+df2/2)
* L^{v1/2-1}^{v2/2}(-nc*v1*x/(2*(v1*x+v2)))
/ (B(v1/2, v2/2) * gamma((v1+v2)/2))
for df1, df2, nc > 0.
"""
)
## Student t distribution
class t_gen(rv_continuous):
def _rvs(self, df):
return mtrand.standard_t(df, size=self._size)
#Y = f.rvs(df, df, size=self._size)
#sY = sqrt(Y)
#return 0.5*sqrt(df)*(sY-1.0/sY)
def _pdf(self, x, df):
r = arr(df*1.0)
Px = exp(gamln((r+1)/2)-gamln(r/2))
Px /= sqrt(r*pi)*(1+(x**2)/r)**((r+1)/2)
return Px
def _logpdf(self, x, df):
r = df*1.0
lPx = gamln((r+1)/2)-gamln(r/2)
lPx -= 0.5*log(r*pi) + (r+1)/2*log(1+(x**2)/r)
return lPx
def _cdf(self, x, df):
return special.stdtr(df, x)
def _sf(self, x, df):
return special.stdtr(df, -x)
def _ppf(self, q, df):
return special.stdtrit(df, q)
def _isf(self, q, df):
return -special.stdtrit(df, q)
def _stats(self, df):
mu2 = where(df > 2, df / (df-2.0), inf)
g1 = where(df > 3, 0.0, nan)
g2 = where(df > 4, 6.0/(df-4.0), nan)
return 0, mu2, g1, g2
t = t_gen(name='t',longname="Student's T",
shapes="df", extradoc="""
Student's T distribution
gamma((df+1)/2)
t.pdf(x,df) = -----------------------------------------------
sqrt(pi*df)*gamma(df/2)*(1+x**2/df)**((df+1)/2)
for df > 0.
"""
)
## Non-central T distribution
class nct_gen(rv_continuous):
def _rvs(self, df, nc):
return norm.rvs(loc=nc,size=self._size)*sqrt(df) / sqrt(chi2.rvs(df,size=self._size))
def _pdf(self, x, df, nc):
n = df*1.0
nc = nc*1.0
x2 = x*x
ncx2 = nc*nc*x2
fac1 = n + x2
trm1 = n/2.*log(n) + gamln(n+1)
trm1 -= n*log(2)+nc*nc/2.+(n/2.)*log(fac1)+gamln(n/2.)
Px = exp(trm1)
valF = ncx2 / (2*fac1)
trm1 = sqrt(2)*nc*x*special.hyp1f1(n/2+1,1.5,valF)
trm1 /= arr(fac1*special.gamma((n+1)/2))
trm2 = special.hyp1f1((n+1)/2,0.5,valF)
trm2 /= arr(sqrt(fac1)*special.gamma(n/2+1))
Px *= trm1+trm2
return Px
def _cdf(self, x, df, nc):
return special.nctdtr(df, nc, x)
def _ppf(self, q, df, nc):
return special.nctdtrit(df, nc, q)
def _stats(self, df, nc, moments='mv'):
mu, mu2, g1, g2 = None, None, None, None
val1 = gam((df-1.0)/2.0)
val2 = gam(df/2.0)
if 'm' in moments:
mu = nc*sqrt(df/2.0)*val1/val2
if 'v' in moments:
var = (nc*nc+1.0)*df/(df-2.0)
var -= nc*nc*df* val1**2 / 2.0 / val2**2
mu2 = var
if 's' in moments:
g1n = 2*nc*sqrt(df)*val1*((nc*nc*(2*df-7)-3)*val2**2 \
-nc*nc*(df-2)*(df-3)*val1**2)
g1d = (df-3)*sqrt(2*df*(nc*nc+1)/(df-2) - \
nc*nc*df*(val1/val2)**2) * val2 * \
(nc*nc*(df-2)*val1**2 - \
2*(nc*nc+1)*val2**2)
g1 = g1n/g1d
if 'k' in moments:
g2n = 2*(-3*nc**4*(df-2)**2 *(df-3) *(df-4)*val1**4 + \
2**(6-2*df) * nc*nc*(df-2)*(df-4)* \
(nc*nc*(2*df-7)-3)*pi* gam(df+1)**2 - \
4*(nc**4*(df-5)-6*nc*nc-3)*(df-3)*val2**4)
g2d = (df-3)*(df-4)*(nc*nc*(df-2)*val1**2 - \
2*(nc*nc+1)*val2)**2
g2 = g2n / g2d
return mu, mu2, g1, g2
nct = nct_gen(name="nct", longname="A Noncentral T",
shapes="df, nc", extradoc="""
Non-central Student T distribution
df**(df/2) * gamma(df+1)
nct.pdf(x,df,nc) = --------------------------------------------------
2**df*exp(nc**2/2)*(df+x**2)**(df/2) * gamma(df/2)
for df > 0, nc > 0.
"""
)
# Pareto
class pareto_gen(rv_continuous):
def _pdf(self, x, b):
return b * x**(-b-1)
def _cdf(self, x, b):
return 1 - x**(-b)
def _ppf(self, q, b):
return pow(1-q, -1.0/b)
def _stats(self, b, moments='mv'):
mu, mu2, g1, g2 = None, None, None, None
if 'm' in moments:
mask = b > 1
bt = extract(mask,b)
mu = valarray(shape(b),value=inf)
mu = place(mu, mask, bt / (bt-1.0))
if 'v' in moments:
mask = b > 2
bt = extract( mask,b)
mu2 = valarray(shape(b), value=inf)
mu2 = place(mu2, mask, bt / (bt-2.0) / (bt-1.0)**2)
if 's' in moments:
mask = b > 3
bt = extract( mask,b)
g1 = valarray(shape(b), value=nan)
vals = 2*(bt+1.0)*sqrt(bt-2.0)/((bt-3.0)*sqrt(bt))
g1 = place(g1, mask, vals)
if 'k' in moments:
mask = b > 4
bt = extract( mask,b)
g2 = valarray(shape(b), value=nan)
vals = 6.0*polyval([1.0,1.0,-6,-2],bt)/ \
polyval([1.0,-7.0,12.0,0.0],bt)
g2 = place(g2, mask, vals)
return mu, mu2, g1, g2
def _entropy(self, c):
return 1 + 1.0/c - log(c)
pareto = pareto_gen(a=1.0, name="pareto", longname="A Pareto",
shapes="b", extradoc="""
Pareto distribution
pareto.pdf(x,b) = b/x**(b+1)
for x >= 1, b > 0.
"""
)
# LOMAX (Pareto of the second kind.)
# Special case of Pareto of the first kind (location=-1.0)
class lomax_gen(rv_continuous):
def _pdf(self, x, c):
return c*1.0/(1.0+x)**(c+1.0)
def _logpdf(self, x, c):
return log(c) - (c+1)*log(1+x)
def _cdf(self, x, c):
return 1.0-1.0/(1.0+x)**c
def _sf(self, x, c):
return 1.0/(1.0+x)**c
def _logsf(self, x, c):
return -c*log(1+x)
def _ppf(self, q, c):
return pow(1.0-q,-1.0/c)-1
def _stats(self, c):
mu, mu2, g1, g2 = pareto.stats(c, loc=-1.0, moments='mvsk')
return mu, mu2, g1, g2
def _entropy(self, c):
return 1+1.0/c-log(c)
lomax = lomax_gen(a=0.0, name="lomax",
longname="A Lomax (Pareto of the second kind)",
shapes="c", extradoc="""
Lomax (Pareto of the second kind) distribution
lomax.pdf(x,c) = c / (1+x)**(c+1)
for x >= 0, c > 0.
"""
)
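# Relationship sketch (hedged): lomax(c) is pareto(c) shifted left by one, which is exactly how
# _stats above delegates to pareto.stats(c, loc=-1.0). For example:
#   >>> from scipy.stats import lomax, pareto
#   >>> lomax.cdf(1.5, 2.0), pareto.cdf(2.5, 2.0)   # both 0.84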
## Power-function distribution
## Special case of beta dist. with d =1.0
class powerlaw_gen(rv_continuous):
def _pdf(self, x, a):
return a*x**(a-1.0)
def _logpdf(self, x, a):
return log(a) + (a-1)*log(x)
def _cdf(self, x, a):
return x**(a*1.0)
def _logcdf(self, x, a):
return a*log(x)
def _ppf(self, q, a):
return pow(q, 1.0/a)
def _stats(self, a):
return a/(a+1.0), a*(a+2.0)/(a+1.0)**2, \
2*(1.0-a)*sqrt((a+2.0)/(a*(a+3.0))), \
6*polyval([1,-1,-6,2],a)/(a*(a+3.0)*(a+4))
def _entropy(self, a):
return 1 - 1.0/a - log(a)
powerlaw = powerlaw_gen(a=0.0, b=1.0, name="powerlaw",
longname="A power-function",
shapes="a", extradoc="""
Power-function distribution
powerlaw.pdf(x,a) = a*x**(a-1)
for 0 <= x <= 1, a > 0.
"""
)
# Power log normal
class powerlognorm_gen(rv_continuous):
def _pdf(self, x, c, s):
return c/(x*s)*norm.pdf(log(x)/s)*pow(norm.cdf(-log(x)/s),c*1.0-1.0)
def _cdf(self, x, c, s):
return 1.0 - pow(norm.cdf(-log(x)/s),c*1.0)
def _ppf(self, q, c, s):
return exp(-s*norm.ppf(pow(1.0-q,1.0/c)))
powerlognorm = powerlognorm_gen(a=0.0, name="powerlognorm",
longname="A power log-normal",
shapes="c, s", extradoc="""
Power log-normal distribution
powerlognorm.pdf(x,c,s) = c/(x*s) * phi(log(x)/s) * (Phi(-log(x)/s))**(c-1)
where phi is the normal pdf, and Phi is the normal cdf, and x > 0, s,c > 0.
"""
)
# Power Normal
class powernorm_gen(rv_continuous):
def _pdf(self, x, c):
return c*_norm_pdf(x)* \
(_norm_cdf(-x)**(c-1.0))
def _logpdf(self, x, c):
return log(c) + _norm_logpdf(x) + (c-1)*_norm_logcdf(-x)
def _cdf(self, x, c):
return 1.0-_norm_cdf(-x)**(c*1.0)
def _ppf(self, q, c):
return -norm.ppf(pow(1.0-q,1.0/c))
powernorm = powernorm_gen(name='powernorm', longname="A power normal",
shapes="c", extradoc="""
Power normal distribution
powernorm.pdf(x,c) = c * phi(x)*(Phi(-x))**(c-1)
where phi is the normal pdf, and Phi is the normal cdf, and x > 0, c > 0.
"""
)
# R-distribution (a general-purpose distribution with a
# variety of shapes).
# FIXME: PPF does not work.
class rdist_gen(rv_continuous):
def _pdf(self, x, c):
return np.power((1.0-x*x),c/2.0-1) / special.beta(0.5,c/2.0)
def _cdf_skip(self, x, c):
#error in special.hyp2f1 for some values, see tickets 758, 759
return 0.5 + x/special.beta(0.5,c/2.0)* \
special.hyp2f1(0.5,1.0-c/2.0,1.5,x*x)
def _munp(self, n, c):
return (1-(n % 2))*special.beta((n+1.0)/2,c/2.0) / special.beta(0.5,c/2.0)  # odd moments vanish; normalize as in _pdf
rdist = rdist_gen(a=-1.0,b=1.0, name="rdist", longname="An R-distributed",
shapes="c", extradoc="""
R-distribution
rdist.pdf(x,c) = (1-x**2)**(c/2-1) / B(1/2, c/2)
for -1 <= x <= 1, c > 0.
"""
)
# Rayleigh distribution (this is chi with df=2 and loc=0.0)
# scale is the mode.
class rayleigh_gen(rv_continuous):
def _rvs(self):
return chi.rvs(2,size=self._size)
def _pdf(self, r):
return r*exp(-r*r/2.0)
def _cdf(self, r):
return 1.0-exp(-r*r/2.0)
def _ppf(self, q):
return sqrt(-2*log(1-q))
def _stats(self):
val = 4-pi
return np.sqrt(pi/2), val/2, 2*(pi-3)*sqrt(pi)/val**1.5, \
6*pi/val-16/val**2
def _entropy(self):
return _EULER/2.0 + 1 - 0.5*log(2)
rayleigh = rayleigh_gen(a=0.0, name="rayleigh",
longname="A Rayleigh",
extradoc="""
Rayleigh distribution
rayleigh.pdf(r) = r * exp(-r**2/2)
for x >= 0.
"""
)
# Reciprocal Distribution
class reciprocal_gen(rv_continuous):
def _argcheck(self, a, b):
self.a = a
self.b = b
self.d = log(b*1.0 / a)
return (a > 0) & (b > 0) & (b > a)
def _pdf(self, x, a, b):
# argcheck should be called before _pdf
return 1.0/(x*self.d)
def _logpdf(self, x, a, b):
return -log(x) - log(self.d)
def _cdf(self, x, a, b):
return (log(x)-log(a)) / self.d
def _ppf(self, q, a, b):
return a*pow(b*1.0/a,q)
def _munp(self, n, a, b):
return 1.0/self.d / n * (pow(b*1.0,n) - pow(a*1.0,n))
def _entropy(self,a,b):
return 0.5*log(a*b)+log(log(b/a))
reciprocal = reciprocal_gen(name="reciprocal",
longname="A reciprocal",
shapes="a, b", extradoc="""
Reciprocal distribution
reciprocal.pdf(x,a,b) = 1/(x*log(b/a))
for a <= x <= b, a,b > 0.
"""
)
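# Usage sketch (hedged): the reciprocal (log-uniform) CDF is linear in log(x) between a and b.
#   >>> from scipy.stats import reciprocal
#   >>> reciprocal.cdf(10.0, 1.0, 100.0)        # log(10)/log(100) == 0.5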
# Rice distribution
# FIXME: PPF does not work.
class rice_gen(rv_continuous):
def _pdf(self, x, b):
return x*exp(-(x*x+b*b)/2.0)*special.i0(x*b)
def _logpdf(self, x, b):
return log(x) - (x*x + b*b)/2.0 + log(special.i0(x*b))
def _munp(self, n, b):
nd2 = n/2.0
n1 = 1+nd2
b2 = b*b/2.0
return 2.0**(nd2)*exp(-b2)*special.gamma(n1) * \
special.hyp1f1(n1,1,b2)
rice = rice_gen(a=0.0, name="rice", longname="A Rice",
shapes="b", extradoc="""
Rician distribution
rice.pdf(x,b) = x * exp(-(x**2+b**2)/2) * I[0](x*b)
for x > 0, b > 0.
"""
)
# Reciprocal Inverse Gaussian
# FIXME: PPF does not work.
class recipinvgauss_gen(rv_continuous):
def _rvs(self, mu): #added, taken from invgauss
return 1.0/mtrand.wald(mu, 1.0, size=self._size)
def _pdf(self, x, mu):
return 1.0/sqrt(2*pi*x)*exp(-(1-mu*x)**2.0 / (2*x*mu**2.0))
def _logpdf(self, x, mu):
return -(1-mu*x)**2.0 / (2*x*mu**2.0) - 0.5*log(2*pi*x)
def _cdf(self, x, mu):
trm1 = 1.0/mu - x
trm2 = 1.0/mu + x
isqx = 1.0/sqrt(x)
return 1.0-_norm_cdf(isqx*trm1)-exp(2.0/mu)*_norm_cdf(-isqx*trm2)
# xb=50 or something large is necessary for stats to converge without exception
recipinvgauss = recipinvgauss_gen(a=0.0, xb=50, name='recipinvgauss',
longname="A reciprocal inverse Gaussian",
shapes="mu", extradoc="""
Reciprocal inverse Gaussian
recipinvgauss.pdf(x, mu) = 1/sqrt(2*pi*x) * exp(-(1-mu*x)**2/(2*x*mu**2))
for x >= 0.
"""
)
# Semicircular
class semicircular_gen(rv_continuous):
def _pdf(self, x):
return 2.0/pi*sqrt(1-x*x)
def _cdf(self, x):
return 0.5+1.0/pi*(x*sqrt(1-x*x) + arcsin(x))
def _stats(self):
return 0, 0.25, 0, -1.0
def _entropy(self):
return 0.64472988584940017414
semicircular = semicircular_gen(a=-1.0,b=1.0, name="semicircular",
longname="A semicircular",
extradoc="""
Semicircular distribution
semicircular.pdf(x) = 2/pi * sqrt(1-x**2)
for -1 <= x <= 1.
"""
)
# Triangular
# up-sloping line from loc to (loc + c*scale) and then downsloping line from
# loc + c*scale to loc + scale
# _trstr = "Left must be <= mode which must be <= right with left < right"
class triang_gen(rv_continuous):
def _rvs(self, c):
return mtrand.triangular(0, c, 1, self._size)
def _argcheck(self, c):
return (c >= 0) & (c <= 1)
def _pdf(self, x, c):
return where(x < c, 2*x/c, 2*(1-x)/(1-c))
def _cdf(self, x, c):
return where(x < c, x*x/c, (x*x-2*x+c)/(c-1))
def _ppf(self, q, c):
return where(q < c, sqrt(c*q), 1-sqrt((1-c)*(1-q)))
def _stats(self, c):
return (c+1.0)/3.0, (1.0-c+c*c)/18, sqrt(2)*(2*c-1)*(c+1)*(c-2) / \
(5*(1.0-c+c*c)**1.5), -3.0/5.0
def _entropy(self,c):
return 0.5-log(2)
triang = triang_gen(a=0.0, b=1.0, name="triang", longname="A Triangular",
shapes="c", extradoc="""
Triangular distribution
up-sloping line from loc to (loc + c*scale) and then downsloping
for (loc + c*scale) to (loc+scale).
- standard form is in the range [0,1] with c the mode.
- location parameter shifts the start to loc
- scale changes the width from 1 to scale
"""
)
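# Parameterization sketch (hedged): for a triangle with left=2, mode=3, right=6 use loc=left,
# scale=right-left and c=(mode-left)/(right-left), so the CDF at the mode equals c.
#   >>> from scipy.stats import triang
#   >>> triang.cdf(3.0, 0.25, loc=2.0, scale=4.0)   # 0.25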
# Truncated Exponential
class truncexpon_gen(rv_continuous):
def _argcheck(self, b):
self.b = b
return (b > 0)
def _pdf(self, x, b):
return exp(-x)/(1-exp(-b))
def _logpdf(self, x, b):
return -x - log(1-exp(-b))
def _cdf(self, x, b):
return (1.0-exp(-x))/(1-exp(-b))
def _ppf(self, q, b):
return -log(1-q+q*exp(-b))
def _munp(self, n, b):
#wrong answer with formula, same as in continuous.pdf
#return gam(n+1)-special.gammainc(1+n,b)
if n == 1:
return (1-(b+1)*exp(-b))/(-expm1(-b))
elif n == 2:
return 2*(1-0.5*(b*b+2*b+2)*exp(-b))/(-expm1(-b))
else:
#return generic for higher moments
#return rv_continuous._mom1_sc(self,n, b)
return self._mom1_sc(n, b)
def _entropy(self, b):
eB = exp(b)
return log(eB-1)+(1+eB*(b-1.0))/(1.0-eB)
truncexpon = truncexpon_gen(a=0.0, name='truncexpon',
longname="A truncated exponential",
shapes="b", extradoc="""
Truncated exponential distribution
truncexpon.pdf(x,b) = exp(-x)/(1-exp(-b))
for 0 < x < b.
"""
)
# Truncated Normal
class truncnorm_gen(rv_continuous):
def _argcheck(self, a, b):
self.a = a
self.b = b
self._nb = _norm_cdf(b)
self._na = _norm_cdf(a)
self._delta = self._nb - self._na
self._logdelta = log(self._delta)
return (a != b)
# All of these assume that _argcheck is called first
# and no other thread calls _pdf before.
def _pdf(self, x, a, b):
return _norm_pdf(x) / self._delta
def _logpdf(self, x, a, b):
return _norm_logpdf(x) - self._logdelta
def _cdf(self, x, a, b):
return (_norm_cdf(x) - self._na) / self._delta
def _ppf(self, q, a, b):
return norm._ppf(q*self._nb + self._na*(1.0-q))
def _stats(self, a, b):
nA, nB = self._na, self._nb
d = nB - nA
pA, pB = _norm_pdf(a), _norm_pdf(b)
mu = (pA - pB) / d #correction sign
mu2 = 1 + (a*pA - b*pB) / d - mu*mu
return mu, mu2, None, None
truncnorm = truncnorm_gen(name='truncnorm', longname="A truncated normal",
shapes="a, b", extradoc="""
Truncated Normal distribution.
The standard form of this distribution is a standard normal truncated to the
range [a,b] --- notice that a and b are defined over the domain
of the standard normal. To convert clip values for a specific mean and
standard deviation use a,b = (myclip_a-my_mean)/my_std, (myclip_b-my_mean)/my_std
"""
)
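# Clip-value conversion sketch (hedged), following the recipe in the extradoc above:
#   >>> from scipy.stats import truncnorm
#   >>> my_mean, my_std, clip_a, clip_b = 5.0, 2.0, 1.0, 9.0
#   >>> a, b = (clip_a - my_mean) / my_std, (clip_b - my_mean) / my_std
#   >>> truncnorm.rvs(a, b, loc=my_mean, scale=my_std, size=3)   # samples stay within [1, 9]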
# Tukey-Lambda
# A flexible distribution ranging from Cauchy (lam=-1)
# to logistic (lam=0.0)
# to approx Normal (lam=0.14)
# to u-shape (lam = 0.5)
# to Uniform from -1 to 1 (lam = 1)
# FIXME: RVS does not work.
class tukeylambda_gen(rv_continuous):
def _argcheck(self, lam):
# lam in RR.
return np.ones(np.shape(lam), dtype=bool)
def _pdf(self, x, lam):
Fx = arr(special.tklmbda(x,lam))
Px = Fx**(lam-1.0) + (arr(1-Fx))**(lam-1.0)
Px = 1.0/arr(Px)
return where((lam <= 0) | (abs(x) < 1.0/arr(lam)), Px, 0.0)
def _cdf(self, x, lam):
return special.tklmbda(x, lam)
def _ppf(self, q, lam):
q = q*1.0
vals1 = (q**lam - (1-q)**lam)/lam
vals2 = log(q/(1-q))
return where((lam == 0)&(q==q), vals2, vals1)
def _stats(self, lam):
mu2 = 2*gam(lam+1.5)-lam*pow(4,-lam)*sqrt(pi)*gam(lam)*(1-2*lam)
mu2 /= lam*lam*(1+2*lam)*gam(1+1.5)
mu4 = 3*gam(lam)*gam(lam+0.5)*pow(2,-2*lam) / lam**3 / gam(2*lam+1.5)
mu4 += 2.0/lam**4 / (1+4*lam)
mu4 -= 2*sqrt(3)*gam(lam)*pow(2,-6*lam)*pow(3,3*lam) * \
gam(lam+1.0/3)*gam(lam+2.0/3) / (lam**3.0 * gam(2*lam+1.5) * \
gam(lam+0.5))
g2 = mu4 / mu2 / mu2 - 3.0
return 0, mu2, 0, g2
def _entropy(self, lam):
def integ(p):
return log(pow(p,lam-1)+pow(1-p,lam-1))
return integrate.quad(integ,0,1)[0]
tukeylambda = tukeylambda_gen(name='tukeylambda', longname="A Tukey-Lambda",
shapes="lam", extradoc="""
Tukey-Lambda distribution
A flexible distribution ranging from Cauchy (lam=-1)
to logistic (lam=0.0)
to approx Normal (lam=0.14)
to u-shape (lam = 0.5)
to Uniform from -1 to 1 (lam = 1)
"""
)
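# Shape sketch (hedged): at lam=0 the _ppf branch above reduces to the logistic quantile
# function, so the two distributions agree there.
#   >>> from scipy.stats import tukeylambda, logistic
#   >>> tukeylambda.ppf(0.9, 0.0), logistic.ppf(0.9)   # both ~log(9) ~ 2.197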
# Uniform
# loc to loc + scale
class uniform_gen(rv_continuous):
def _rvs(self):
return mtrand.uniform(0.0,1.0,self._size)
def _pdf(self, x):
return 1.0*(x==x)
def _cdf(self, x):
return x
def _ppf(self, q):
return q
def _stats(self):
return 0.5, 1.0/12, 0, -1.2
def _entropy(self):
return 0.0
uniform = uniform_gen(a=0.0,b=1.0, name='uniform', longname="A uniform",
extradoc="""
Uniform distribution
constant between loc and loc+scale
"""
)
# Von-Mises
# if x is not in range or loc is not in range it assumes they are angles
# and converts them to [-pi, pi] equivalents.
eps = numpy.finfo(float).eps
class vonmises_gen(rv_continuous):
def _rvs(self, b):
return mtrand.vonmises(0.0, b, size=self._size)
def _pdf(self, x, b):
return exp(b*cos(x)) / (2*pi*special.i0(b))
def _cdf(self, x, b):
return vonmises_cython.von_mises_cdf(b,x)
def _stats_skip(self, b):
return 0, None, 0, None
vonmises = vonmises_gen(name='vonmises', longname="A Von Mises",
shapes="b", extradoc="""
Von Mises distribution
if x is not in range or loc is not in range it assumes they are angles
and converts them to [-pi, pi] equivalents.
vonmises.pdf(x,b) = exp(b*cos(x)) / (2*pi*I[0](b))
for -pi <= x <= pi, b > 0.
"""
)
## Wald distribution (Inverse Normal with shape parameter mu=1.0)
class wald_gen(invgauss_gen):
"""A Wald continuous random variable.
%(before_notes)s
Notes
-----
The probability density function, `pdf`, is defined by
``1/sqrt(2*pi*x**3) * exp(-(x-1)**2/(2*x))``, for ``x > 0``.
%(example)s
"""
def _rvs(self):
return mtrand.wald(1.0, 1.0, size=self._size)
def _pdf(self, x):
return invgauss._pdf(x, 1.0)
def _logpdf(self, x):
return invgauss._logpdf(x, 1.0)
def _cdf(self, x):
return invgauss._cdf(x, 1.0)
def _stats(self):
return 1.0, 1.0, 3.0, 15.0
wald = wald_gen(a=0.0, name="wald", extradoc="""
Wald distribution
wald.pdf(x) = 1/sqrt(2*pi*x**3) * exp(-(x-1)**2/(2*x))
for x > 0.
"""
)
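# Consistency sketch (hedged): wald is invgauss pinned at mu=1, so the densities coincide.
#   >>> from scipy.stats import wald, invgauss
#   >>> wald.pdf(0.7), invgauss.pdf(0.7, 1.0)   # identical values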
## Weibull
## See Frechet
# Wrapped Cauchy
class wrapcauchy_gen(rv_continuous):
def _argcheck(self, c):
return (c > 0) & (c < 1)
def _pdf(self, x, c):
return (1.0-c*c)/(2*pi*(1+c*c-2*c*cos(x)))
def _cdf(self, x, c):
output = 0.0*x
val = (1.0+c)/(1.0-c)
c1 = x<pi
c2 = 1-c1
xp = extract( c1,x)
#valp = extract(c1,val)
xn = extract( c2,x)
#valn = extract(c2,val)
if (any(xn)):
valn = extract(c2, np.ones_like(x)*val)
xn = 2*pi - xn
yn = tan(xn/2.0)
on = 1.0-1.0/pi*arctan(valn*yn)
output = place(output, c2, on)
if (any(xp)):
valp = extract(c1, np.ones_like(x)*val)
yp = tan(xp/2.0)
op = 1.0/pi*arctan(valp*yp)
output = place(output, c1, op)
return output
def _ppf(self, q, c):
val = (1.0-c)/(1.0+c)
rcq = 2*arctan(val*tan(pi*q))
rcmq = 2*pi-2*arctan(val*tan(pi*(1-q)))
return where(q < 1.0/2, rcq, rcmq)
def _entropy(self, c):
return log(2*pi*(1-c*c))
wrapcauchy = wrapcauchy_gen(a=0.0,b=2*pi, name='wrapcauchy',
longname="A wrapped Cauchy",
shapes="c", extradoc="""
Wrapped Cauchy distribution
wrapcauchy.pdf(x,c) = (1-c**2) / (2*pi*(1+c**2-2*c*cos(x)))
for 0 <= x <= 2*pi, 0 < c < 1.
"""
)
### DISCRETE DISTRIBUTIONS
###
def entropy(pk,qk=None):
"""S = entropy(pk,qk=None)
calculate the entropy of a distribution given the p_k values
S = -sum(pk * log(pk), axis=0)
If qk is not None, then compute a relative entropy
S = sum(pk * log(pk / qk), axis=0)
Routine will normalize pk and qk if they don't sum to 1
"""
pk = arr(pk)
pk = 1.0* pk / sum(pk,axis=0)
if qk is None:
vec = where(pk == 0, 0.0, pk*log(pk))
else:
qk = arr(qk)
if len(qk) != len(pk):
raise ValueError("qk and pk must have same length.")
qk = 1.0*qk / sum(qk,axis=0)
# If qk is zero anywhere, then unless pk is zero at those places
# too, the relative entropy is infinite.
if any(take(pk,nonzero(qk==0.0),axis=0)!=0.0, 0):
return inf
vec = where (pk == 0, 0.0, -pk*log(pk / qk))
return -sum(vec,axis=0)
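# Worked example for the helper above (hedged sketch; natural logarithms throughout):
# a fair coin has entropy log(2), and against a biased reference the same pk gives the
# Kullback-Leibler divergence.
#   >>> entropy([0.5, 0.5])                 # log(2) ~ 0.6931
#   >>> entropy([0.5, 0.5], [0.9, 0.1])     # 0.5*log(0.5/0.9) + 0.5*log(0.5/0.1) ~ 0.5108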
## Handlers for generic case where xk and pk are given
def _drv_pmf(self, xk, *args):
try:
return self.P[xk]
except KeyError:
return 0.0
def _drv_cdf(self, xk, *args):
indx = argmax((self.xk>xk),axis=-1)-1
return self.F[self.xk[indx]]
def _drv_ppf(self, q, *args):
indx = argmax((self.qvals>=q),axis=-1)
return self.Finv[self.qvals[indx]]
def _drv_nonzero(self, k, *args):
return 1
def _drv_moment(self, n, *args):
n = arr(n)
return sum(self.xk**n[newaxis,...] * self.pk, axis=0)
def _drv_moment_gen(self, t, *args):
t = arr(t)
return sum(exp(self.xk * t[newaxis,...]) * self.pk, axis=0)
def _drv2_moment(self, n, *args):
'''non-central moment of discrete distribution'''
#many changes, originally not even a return
tot = 0.0
diff = 1e100
#pos = self.a
pos = max(0.0, 1.0*self.a)
count = 0
#handle cases with infinite support
ulimit = max(1000, (min(self.b,1000) + max(self.a,-1000))/2.0 )
llimit = min(-1000, (min(self.b,1000) + max(self.a,-1000))/2.0 )
while (pos <= self.b) and ((pos <= ulimit) or \
(diff > self.moment_tol)):
diff = np.power(pos, n) * self.pmf(pos,*args)
# use pmf because _pmf does not check support in randint
# and there might be problems with correct self.a, self.b at this stage
tot += diff
pos += self.inc
count += 1
if self.a < 0: #handle case when self.a = -inf
diff = 1e100
pos = -self.inc
while (pos >= self.a) and ((pos >= llimit) or \
(diff > self.moment_tol)):
diff = np.power(pos, n) * self.pmf(pos,*args)
#using pmf instead of _pmf, see above
tot += diff
pos -= self.inc
count += 1
return tot
def _drv2_ppfsingle(self, q, *args): # Use basic bisection algorithm
b = self.invcdf_b
a = self.invcdf_a
if isinf(b): # Be sure ending point is > q
b = max(100*q,10)
while 1:
if b >= self.b: qb = 1.0; break
qb = self._cdf(b,*args)
if (qb < q): b += 10
else: break
else:
qb = 1.0
if isinf(a): # be sure starting point < q
a = min(-100*q,-10)
while 1:
if a <= self.a: qa = 0.0; break
qa = self._cdf(a,*args)
if (qa > q): a -= 10
else: break
else:
qa = self._cdf(a, *args)
while 1:
if (qa == q):
return a
if (qb == q):
return b
if b == a+1:
#testcase: return wrong number at lower index
#python -c "from scipy.stats import zipf;print zipf.ppf(0.01,2)" wrong
#python -c "from scipy.stats import zipf;print zipf.ppf([0.01,0.61,0.77,0.83],2)"
#python -c "from scipy.stats import logser;print logser.ppf([0.1,0.66, 0.86,0.93],0.6)"
if qa > q:
return a
else:
return b
c = int((a+b)/2.0)
qc = self._cdf(c, *args)
if (qc < q):
a = c
qa = qc
elif (qc > q):
b = c
qb = qc
else:
return c
def reverse_dict(dict):
newdict = {}
sorted_keys = copy(dict.keys())
sorted_keys.sort()
for key in sorted_keys[::-1]:
newdict[dict[key]] = key
return newdict
def make_dict(keys, values):
d = {}
for key, value in zip(keys, values):
d[key] = value
return d
# Must over-ride one of _pmf or _cdf or pass in
# x_k, p(x_k) lists in initialization
class rv_discrete(rv_generic):
"""
A generic discrete random variable class meant for subclassing.
`rv_discrete` is a base class to construct specific distribution classes
and instances for discrete random variables. It can also be used
to construct an arbitrary distribution defined by a list of support
points and the corresponding probabilities.
Parameters
----------
a : float, optional
Lower bound of the support of the distribution, default: 0
b : float, optional
Upper bound of the support of the distribution, default: plus infinity
moment_tol : float, optional
The tolerance for the generic calculation of moments
values : tuple of two array_like
(xk, pk) where xk are points (integers) with positive probability pk
with sum(pk) = 1
inc : integer
increment for the support of the distribution, default: 1
other values have not been tested
badvalue : object, optional
The value in (masked) arrays that indicates a value that should be
ignored.
name : str, optional
The name of the instance. This string is used to construct the default
example for distributions.
longname : str, optional
This string is used as part of the first line of the docstring returned
when a subclass has no docstring of its own. Note: `longname` exists
for backwards compatibility, do not use for new subclasses.
shapes : str, optional
The shape of the distribution. For example ``"m, n"`` for a
distribution that takes two integers as the first two arguments for all
its methods.
extradoc : str, optional
This string is used as the last part of the docstring returned when a
subclass has no docstring of its own. Note: `extradoc` exists for
backwards compatibility, do not use for new subclasses.
Methods
-------
generic.rvs(<shape(s)>, loc=0, size=1)
random variates
generic.pmf(x, <shape(s)>, loc=0)
probability mass function
generic.logpmf(x, <shape(s)>, loc=0)
log of the probability mass function
generic.cdf(x, <shape(s)>, loc=0)
cumulative distribution function
generic.logcdf(x, <shape(s)>, loc=0)
log of the cumulative distribution function
generic.sf(x, <shape(s)>, loc=0)
survival function (1-cdf --- sometimes more accurate)
generic.logsf(x, <shape(s)>, loc=0, scale=1)
log of the survival function
generic.ppf(q, <shape(s)>, loc=0)
percent point function (inverse of cdf --- percentiles)
generic.isf(q, <shape(s)>, loc=0)
inverse survival function (inverse of sf)
generic.moment(n, <shape(s)>, loc=0)
non-central n-th moment of the distribution. May not work for array arguments.
generic.stats(<shape(s)>, loc=0, moments='mv')
mean('m', axis=0), variance('v'), skew('s'), and/or kurtosis('k')
generic.entropy(<shape(s)>, loc=0)
entropy of the RV
generic.fit(data, <shape(s)>, loc=0)
Parameter estimates for generic data
generic.expect(func=None, args=(), loc=0, lb=None, ub=None, conditional=False)
Expected value of a function with respect to the distribution.
Additional kwd arguments passed to integrate.quad
generic.median(<shape(s)>, loc=0)
Median of the distribution.
generic.mean(<shape(s)>, loc=0)
Mean of the distribution.
generic.std(<shape(s)>, loc=0)
Standard deviation of the distribution.
generic.var(<shape(s)>, loc=0)
Variance of the distribution.
generic.interval(alpha, <shape(s)>, loc=0)
Interval that with `alpha` percent probability contains a random
realization of this distribution.
generic(<shape(s)>, loc=0)
calling a distribution instance returns a frozen distribution
Notes
-----
Alternatively, the object may be called (as a function) to fix
the shape and location parameters returning a
"frozen" discrete RV object:
myrv = generic(<shape(s)>, loc=0)
- frozen RV object with the same methods but holding the given shape
and location fixed.
You can construct an arbitrary discrete rv where P{X=xk} = pk
by passing to the rv_discrete initialization method (through the
values=keyword) a tuple of sequences (xk, pk) which describes only those
values of X (xk) that occur with nonzero probability (pk).
To create a new discrete distribution, we would do the following::
class poisson_gen(rv_discrete):
#"Poisson distribution"
def _pmf(self, k, mu):
...
and create an instance
poisson = poisson_gen(name="poisson", shapes="mu", longname='A Poisson')
The docstring can be created from a template.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> numargs = generic.numargs
>>> [ <shape(s)> ] = ['Replace with reasonable value', ]*numargs
Display frozen pmf:
>>> rv = generic(<shape(s)>)
>>> x = np.arange(0, np.minimum(rv.dist.b, 3)+1)
>>> h = plt.plot(x, rv.pmf(x))
Check accuracy of cdf and ppf:
>>> prb = generic.cdf(x, <shape(s)>)
>>> h = plt.semilogy(np.abs(x-generic.ppf(prb, <shape(s)>))+1e-20)
Random number generation:
>>> R = generic.rvs(<shape(s)>, size=100)
Custom made discrete distribution:
>>> vals = [arange(7), (0.1, 0.2, 0.3, 0.1, 0.1, 0.1, 0.1)]
>>> custm = rv_discrete(name='custm', values=vals)
>>> h = plt.plot(vals[0], custm.pmf(vals[0]))
"""
def __init__(self, a=0, b=inf, name=None, badvalue=None,
moment_tol=1e-8,values=None,inc=1,longname=None,
shapes=None, extradoc=None):
super(rv_generic,self).__init__()
if badvalue is None:
badvalue = nan
self.badvalue = badvalue
self.a = a
self.b = b
self.invcdf_a = a # what's the difference to self.a, .b
self.invcdf_b = b
self.name = name
self.moment_tol = moment_tol
self.inc = inc
self._cdfvec = sgf(self._cdfsingle,otypes='d')
self.return_integers = 1
self.vecentropy = vectorize(self._entropy)
self.shapes = shapes
self.extradoc = extradoc
if values is not None:
self.xk, self.pk = values
self.return_integers = 0
indx = argsort(ravel(self.xk))
self.xk = take(ravel(self.xk),indx, 0)
self.pk = take(ravel(self.pk),indx, 0)
self.a = self.xk[0]
self.b = self.xk[-1]
self.P = make_dict(self.xk, self.pk)
self.qvals = numpy.cumsum(self.pk,axis=0)
self.F = make_dict(self.xk, self.qvals)
self.Finv = reverse_dict(self.F)
self._ppf = instancemethod(sgf(_drv_ppf,otypes='d'),
self, rv_discrete)
self._pmf = instancemethod(sgf(_drv_pmf,otypes='d'),
self, rv_discrete)
self._cdf = instancemethod(sgf(_drv_cdf,otypes='d'),
self, rv_discrete)
self._nonzero = instancemethod(_drv_nonzero, self, rv_discrete)
self.generic_moment = instancemethod(_drv_moment,
self, rv_discrete)
self.moment_gen = instancemethod(_drv_moment_gen,
self, rv_discrete)
self.numargs=0
else:
cdf_signature = inspect.getargspec(self._cdf.im_func)
numargs1 = len(cdf_signature[0]) - 2
pmf_signature = inspect.getargspec(self._pmf.im_func)
numargs2 = len(pmf_signature[0]) - 2
self.numargs = max(numargs1, numargs2)
#nin correction needs to be after we know numargs
#correct nin for generic moment vectorization
self.vec_generic_moment = sgf(_drv2_moment, otypes='d')
self.vec_generic_moment.nin = self.numargs + 2
self.generic_moment = instancemethod(self.vec_generic_moment,
self, rv_discrete)
#correct nin for ppf vectorization
_vppf = sgf(_drv2_ppfsingle,otypes='d')
_vppf.nin = self.numargs + 2 # +1 is for self
self._vecppf = instancemethod(_vppf,
self, rv_discrete)
#now that self.numargs is defined, we can adjust nin
self._cdfvec.nin = self.numargs + 1
# generate docstring for subclass instances
if longname is None:
if name[0] in 'aeiouAEIOU':
hstr = "An "
else:
hstr = "A "
longname = hstr + name
if self.__doc__ is None:
self._construct_default_doc(longname=longname, extradoc=extradoc)
else:
self._construct_doc()
## This only works for old-style classes...
# self.__class__.__doc__ = self.__doc__
def _construct_default_doc(self, longname=None, extradoc=None):
"""Construct instance docstring from the rv_discrete template."""
if extradoc is None:
extradoc = ''
if extradoc.startswith('\n\n'):
extradoc = extradoc[2:]
self.__doc__ = ''.join(['%s discrete random variable.'%longname,
'\n\n%(before_notes)s\n', docheaders['notes'],
extradoc, '\n%(example)s'])
self._construct_doc()
def _construct_doc(self):
"""Construct the instance docstring with string substitutions."""
tempdict = docdict_discrete.copy()
tempdict['name'] = self.name or 'distname'
tempdict['shapes'] = self.shapes or ''
if self.shapes is None:
# remove shapes from call parameters if there are none
for item in ['callparams', 'default', 'before_notes']:
tempdict[item] = tempdict[item].replace(\
"\n%(shapes)s : array-like\n shape parameters", "")
for i in range(2):
if self.shapes is None:
# necessary because we use %(shapes)s in two forms (w w/o ", ")
self.__doc__ = self.__doc__.replace("%(shapes)s, ", "")
self.__doc__ = doccer.docformat(self.__doc__, tempdict)
def _rvs(self, *args):
return self._ppf(mtrand.random_sample(self._size),*args)
def _nonzero(self, k, *args):
return floor(k)==k
def _argcheck(self, *args):
cond = 1
for arg in args:
cond &= (arg > 0)
return cond
def _pmf(self, k, *args):
return self._cdf(k,*args) - self._cdf(k-1,*args)
def _logpmf(self, k, *args):
return log(self._pmf(k, *args))
def _cdfsingle(self, k, *args):
m = arange(int(self.a),k+1)
return sum(self._pmf(m,*args),axis=0)
def _cdf(self, x, *args):
k = floor(x)
return self._cdfvec(k,*args)
def _logcdf(self, x, *args):
return log(self._cdf(x, *args))
def _sf(self, x, *args):
return 1.0-self._cdf(x,*args)
def _logsf(self, x, *args):
return log(self._sf(x, *args))
def _ppf(self, q, *args):
return self._vecppf(q, *args)
def _isf(self, q, *args):
return self._ppf(1-q,*args)
def _stats(self, *args):
return None, None, None, None
def _munp(self, n, *args):
return self.generic_moment(n, *args)
def rvs(self, *args, **kwargs):
"""
Random variates of given type.
Parameters
----------
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
size : int or tuple of ints, optional
defining number of random variates (default=1)
Returns
-------
rvs : array-like
random variates of given `size`
"""
kwargs['discrete'] = True
return super(rv_discrete, self).rvs(*args, **kwargs)
def pmf(self, k,*args, **kwds):
"""
Probability mass function at k of the given RV.
Parameters
----------
k : array-like
quantiles
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
Returns
-------
pmf : array-like
Probability mass function evaluated at k
"""
loc = kwds.get('loc')
args, loc = self._fix_loc(args, loc)
k,loc = map(arr,(k,loc))
args = tuple(map(arr,args))
k = arr((k-loc))
cond0 = self._argcheck(*args)
cond1 = (k >= self.a) & (k <= self.b) & self._nonzero(k,*args)
cond = cond0 & cond1
output = zeros(shape(cond),'d')
output = place(output,(1-cond0)*(cond1==cond1),self.badvalue)
if any(cond):
goodargs = argsreduce(cond, *((k,)+args))
output = place(output,cond,self._pmf(*goodargs))
if output.ndim == 0:
return output[()]
return output
def logpmf(self, k,*args, **kwds):
"""
Log of the probability mass function at k of the given RV.
Parameters
----------
k : array-like
quantiles
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
Returns
-------
logpmf : array-like
Log of the probability mass function evaluated at k
"""
loc = kwds.get('loc')
args, loc = self._fix_loc(args, loc)
k,loc = map(arr,(k,loc))
args = tuple(map(arr,args))
k = arr((k-loc))
cond0 = self._argcheck(*args)
cond1 = (k >= self.a) & (k <= self.b) & self._nonzero(k,*args)
cond = cond0 & cond1
output = empty(shape(cond),'d')
output.fill(NINF)
output = place(output,(1-cond0)*(cond1==cond1),self.badvalue)
if any(cond):
goodargs = argsreduce(cond, *((k,)+args))
output = place(output,cond,self._logpmf(*goodargs))
if output.ndim == 0:
return output[()]
return output
def cdf(self, k, *args, **kwds):
"""
Cumulative distribution function at k of the given RV
Parameters
----------
k : array-like, int
quantiles
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
Returns
-------
cdf : array-like
Cumulative distribution function evaluated at k
"""
loc = kwds.get('loc')
args, loc = self._fix_loc(args, loc)
k,loc = map(arr,(k,loc))
args = tuple(map(arr,args))
k = arr((k-loc))
cond0 = self._argcheck(*args)
cond1 = (k >= self.a) & (k < self.b)
cond2 = (k >= self.b)
cond = cond0 & cond1
output = zeros(shape(cond),'d')
output = place(output,(1-cond0)*(cond1==cond1),self.badvalue)
output = place(output,cond2*(cond0==cond0), 1.0)
if any(cond):
goodargs = argsreduce(cond, *((k,)+args))
output = place(output,cond,self._cdf(*goodargs))
if output.ndim == 0:
return output[()]
return output
def logcdf(self, k, *args, **kwds):
"""
Log of the cumulative distribution function at k of the given RV
Parameters
----------
k : array-like, int
quantiles
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
Returns
-------
logcdf : array-like
Log of the cumulative distribution function evaluated at k
"""
loc = kwds.get('loc')
args, loc = self._fix_loc(args, loc)
k,loc = map(arr,(k,loc))
args = tuple(map(arr,args))
k = arr((k-loc))
cond0 = self._argcheck(*args)
cond1 = (k >= self.a) & (k < self.b)
cond2 = (k >= self.b)
cond = cond0 & cond1
output = empty(shape(cond),'d')
output.fill(NINF)
output = place(output,(1-cond0)*(cond1==cond1),self.badvalue)
output = place(output,cond2*(cond0==cond0), 0.0)
if any(cond):
goodargs = argsreduce(cond, *((k,)+args))
output = place(output,cond,self._logcdf(*goodargs))
if output.ndim == 0:
return output[()]
return output
def sf(self,k,*args,**kwds):
"""
Survival function (1-cdf) at k of the given RV
Parameters
----------
k : array-like
quantiles
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
Returns
-------
sf : array-like
Survival function evaluated at k
"""
loc= kwds.get('loc')
args, loc = self._fix_loc(args, loc)
k,loc = map(arr,(k,loc))
args = tuple(map(arr,args))
k = arr(k-loc)
cond0 = self._argcheck(*args)
cond1 = (k >= self.a) & (k <= self.b)
cond2 = (k < self.a) & cond0
cond = cond0 & cond1
output = zeros(shape(cond),'d')
output = place(output,(1-cond0)*(cond1==cond1),self.badvalue)
output = place(output,cond2,1.0)
if any(cond):
goodargs = argsreduce(cond, *((k,)+args))
output = place(output,cond,self._sf(*goodargs))
if output.ndim == 0:
return output[()]
return output
def logsf(self,k,*args,**kwds):
"""
Log of the survival function (1-cdf) at k of the given RV
Parameters
----------
k : array-like
quantiles
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
Returns
-------
logsf : array-like
Log of the survival function evaluated at k
"""
loc= kwds.get('loc')
args, loc = self._fix_loc(args, loc)
k,loc = map(arr,(k,loc))
args = tuple(map(arr,args))
k = arr(k-loc)
cond0 = self._argcheck(*args)
cond1 = (k >= self.a) & (k <= self.b)
cond2 = (k < self.a) & cond0
cond = cond0 & cond1
output = empty(shape(cond),'d')
output.fill(NINF)
output = place(output,(1-cond0)*(cond1==cond1),self.badvalue)
output = place(output,cond2,0.0)
if any(cond):
goodargs = argsreduce(cond, *((k,)+args))
output = place(output,cond,self._logsf(*goodargs))
if output.ndim == 0:
return output[()]
return output
def ppf(self,q,*args,**kwds):
"""
Percent point function (inverse of cdf) at q of the given RV
Parameters
----------
q : array-like
lower tail probability
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
scale: array-like, optional
scale parameter (default=1)
Returns
-------
k : array-like
quantile corresponding to the lower tail probability, q.
"""
loc = kwds.get('loc')
args, loc = self._fix_loc(args, loc)
q,loc = map(arr,(q,loc))
args = tuple(map(arr,args))
cond0 = self._argcheck(*args) & (loc == loc)
cond1 = (q > 0) & (q < 1)
cond2 = (q==1) & cond0
cond = cond0 & cond1
output = valarray(shape(cond),value=self.badvalue,typecode='d')
#output type 'd' to handle nan and inf
output = place(output,(q==0)*(cond==cond), self.a-1)
output = place(output,cond2,self.b)
if any(cond):
goodargs = argsreduce(cond, *((q,)+args+(loc,)))
loc, goodargs = goodargs[-1], goodargs[:-1]
output = place(output,cond,self._ppf(*goodargs) + loc)
if output.ndim == 0:
return output[()]
return output
def isf(self,q,*args,**kwds):
"""
Inverse survival function (inverse of sf) at q of the given RV
Parameters
----------
q : array-like
upper tail probability
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
Returns
-------
k : array-like
quantile corresponding to the upper tail probability, q.
"""
loc = kwds.get('loc')
args, loc = self._fix_loc(args, loc)
q,loc = map(arr,(q,loc))
args = tuple(map(arr,args))
cond0 = self._argcheck(*args) & (loc == loc)
cond1 = (q > 0) & (q < 1)
cond2 = (q==1) & cond0
cond = cond0 & cond1
#old:
## output = valarray(shape(cond),value=self.b,typecode='d')
## #typecode 'd' to handle nin and inf
## output = place(output,(1-cond0)*(cond1==cond1), self.badvalue)
## output = place(output,cond2,self.a-1)
#same problem as with ppf
# copied from ppf and changed
output = valarray(shape(cond),value=self.badvalue,typecode='d')
#output type 'd' to handle nan and inf
output = place(output,(q==0)*(cond==cond), self.b)
output = place(output,cond2,self.a-1)
# call place only if at least 1 valid argument
if any(cond):
goodargs = argsreduce(cond, *((q,)+args+(loc,)))
loc, goodargs = goodargs[-1], goodargs[:-1]
output = place(output,cond,self._isf(*goodargs) + loc) #PB same as ticket 766
if output.ndim == 0:
return output[()]
return output
def stats(self, *args, **kwds):
"""
Some statistics of the given discrete RV
Parameters
----------
arg1, arg2, arg3,... : array-like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array-like, optional
location parameter (default=0)
moments : string, optional
composed of letters ['mvsk'] defining which moments to compute:
'm' = mean,
'v' = variance,
's' = (Fisher's) skew,
'k' = (Fisher's) kurtosis.
(default='mv')
Returns
-------
stats : sequence
of requested moments.
"""
loc,moments=map(kwds.get,['loc','moments'])
N = len(args)
if N > self.numargs:
if N == self.numargs + 1 and loc is None: # loc is given without keyword
loc = args[-1]
if N == self.numargs + 2 and moments is None: # loc, scale, and moments
loc, moments = args[-2:]
args = args[:self.numargs]
if loc is None: loc = 0.0
if moments is None: moments = 'mv'
loc = arr(loc)
args = tuple(map(arr,args))
cond = self._argcheck(*args) & (loc==loc)
signature = inspect.getargspec(self._stats.im_func)
if (signature[2] is not None) or ('moments' in signature[0]):
mu, mu2, g1, g2 = self._stats(*args,**{'moments':moments})
else:
mu, mu2, g1, g2 = self._stats(*args)
if g1 is None:
mu3 = None
else:
mu3 = g1*(mu2**1.5)
default = valarray(shape(cond), self.badvalue)
output = []
# Use only entries that are valid in calculation
goodargs = argsreduce(cond, *(args+(loc,)))
loc, goodargs = goodargs[-1], goodargs[:-1]
if 'm' in moments:
if mu is None:
mu = self._munp(1.0,*goodargs)
out0 = default.copy()
out0 = place(out0,cond,mu+loc)
output.append(out0)
if 'v' in moments:
if mu2 is None:
mu2p = self._munp(2.0,*goodargs)
if mu is None:
mu = self._munp(1.0,*goodargs)
mu2 = mu2p - mu*mu
out0 = default.copy()
out0 = place(out0,cond,mu2)
output.append(out0)
if 's' in moments:
if g1 is None:
mu3p = self._munp(3.0,*goodargs)
if mu is None:
mu = self._munp(1.0,*goodargs)
if mu2 is None:
mu2p = self._munp(2.0,*goodargs)
mu2 = mu2p - mu*mu
mu3 = mu3p - 3*mu*mu2 - mu**3
g1 = mu3 / mu2**1.5
out0 = default.copy()
out0 = place(out0,cond,g1)
output.append(out0)
if 'k' in moments:
if g2 is None:
mu4p = self._munp(4.0,*goodargs)
if mu is None:
mu = self._munp(1.0,*goodargs)
if mu2 is None:
mu2p = self._munp(2.0,*goodargs)
mu2 = mu2p - mu*mu
if mu3 is None:
mu3p = self._munp(3.0,*goodargs)
mu3 = mu3p - 3*mu*mu2 - mu**3
mu4 = mu4p - 4*mu*mu3 - 6*mu*mu*mu2 - mu**4
g2 = mu4 / mu2**2.0 - 3.0
out0 = default.copy()
out0 = place(out0,cond,g2)
output.append(out0)
if len(output) == 1:
return output[0]
else:
return tuple(output)
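# Example (sketch, illustrative only): stats() returns the requested moments
# in 'mvsk' order, e.g.
#   >>> mean, var = binom.stats(10, 0.4, moments='mv')
#   >>> skew = binom.stats(10, 0.4, moments='s')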
def moment(self, n, *args, **kwds): # Non-central moments in standard form.
"""
n'th non-central moment of the distribution
Parameters
----------
n: int, n>=1
order of moment
arg1, arg2, arg3,...: float
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : float, optional
location parameter (default=0)
scale : float, optional
scale parameter (default=1)
"""
loc = kwds.get('loc', 0)
scale = kwds.get('scale', 1)
if not (self._argcheck(*args) and (scale > 0)):
return nan
if (floor(n) != n):
raise ValueError("Moment must be an integer.")
if (n < 0): raise ValueError("Moment must be positive.")
mu, mu2, g1, g2 = None, None, None, None
if (n > 0) and (n < 5):
signature = inspect.getargspec(self._stats.im_func)
if (signature[2] is not None) or ('moments' in signature[0]):
mdict = {'moments':{1:'m',2:'v',3:'vs',4:'vk'}[n]}
else:
mdict = {}
mu, mu2, g1, g2 = self._stats(*args,**mdict)
val = _moment_from_stats(n, mu, mu2, g1, g2, self._munp, args)
# Convert to transformed X = L + S*Y
# so E[X^n] = E[(L+S*Y)^n] = L^n sum(comb(n,k)*(S/L)^k E[Y^k],k=0...n)
if loc == 0:
return scale**n * val
else:
result = 0
fac = float(scale) / float(loc)
for k in range(n):
valk = _moment_from_stats(k, mu, mu2, g1, g2, self._munp, args)
result += comb(n,k,exact=True)*(fac**k) * valk
result += fac**n * val
return result * loc**n
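# Example (sketch): the n'th raw (non-central) moment, shifted by loc, e.g.
# the second moment of a Poisson variable with mu=3:
#   >>> poisson.moment(2, 3)   # equals mu + mu**2 = 12 here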
def freeze(self, *args, **kwds):
return rv_frozen(self, *args, **kwds)
def _entropy(self, *args):
if hasattr(self,'pk'):
return entropy(self.pk)
else:
mu = int(self.stats(*args, **{'moments':'m'}))
val = self.pmf(mu,*args)
if (val==0.0): ent = 0.0
else: ent = -val*log(val)
k = 1
term = 1.0
while (abs(term) > eps):
val = self.pmf(mu+k,*args)
if val == 0.0: term = 0.0
else: term = -val * log(val)
val = self.pmf(mu-k,*args)
if val != 0.0: term -= val*log(val)
k += 1
ent += term
return ent
def entropy(self, *args, **kwds):
loc= kwds.get('loc')
args, loc = self._fix_loc(args, loc)
loc = arr(loc)
args = map(arr,args)
cond0 = self._argcheck(*args) & (loc==loc)
output = zeros(shape(cond0),'d')
output = place(output,(1-cond0),self.badvalue)
goodargs = argsreduce(cond0, *args)
output = place(output,cond0,self.vecentropy(*goodargs))
return output
def __call__(self, *args, **kwds):
return self.freeze(*args,**kwds)
def expect(self, func=None, args=(), loc=0, lb=None, ub=None, conditional=False):
"""calculate expected value of a function with respect to the distribution
for discrete distribution
Parameters
----------
func : function (default: identity mapping)
Function for which sum is calculated. Takes only one argument.
args : tuple
argument (parameters) of the distribution
optional keyword parameters
lb, ub : numbers
lower and upper bound for integration, default is set to the support
of the distribution, lb and ub are inclusive (lb<=k<=ub)
conditional : boolean (False)
If true then the expectation is corrected by the conditional
probability of the integration interval. The return value is the
expectation of the function, conditional on being in the given
interval (k such that lb<=k<=ub).
Returns
-------
expected value : float
Notes
-----
* function is not vectorized
* accuracy: uses self.moment_tol as stopping criterion
for heavy tailed distribution e.g. zipf(4), accuracy for
mean, variance in example is only 1e-5,
increasing precision (moment_tol) makes zipf very slow
* suppnmin=100 internal parameter for minimum number of points to evaluate
could be added as keyword parameter, to evaluate functions with
non-monotonic shapes, points include integers in (-suppnmin, suppnmin)
* uses maxcount=1000 to limit the number of points that are evaluated
to break loop for infinite sums
(a maximum of suppnmin+1000 positive plus suppnmin+1000 negative integers
are evaluated)
"""
#moment_tol = 1e-12 # increase compared to self.moment_tol,
# too slow for only small gain in precision for zipf
#avoid endless loop with unbound integral, eg. var of zipf(2)
maxcount = 1000
suppnmin = 100 #minimum number of points to evaluate (+ and -)
if func is None:
def fun(x):
#loc and args from outer scope
return (x+loc)*self._pmf(x, *args)
else:
def fun(x):
#loc and args from outer scope
return func(x+loc)*self._pmf(x, *args)
# used pmf because _pmf does not check support in randint
# and there might be problems(?) with correct self.a, self.b at this stage
# maybe not anymore, seems to work now with _pmf
self._argcheck(*args) # (re)generate scalar self.a and self.b
if lb is None:
lb = (self.a)
else:
lb = lb - loc #convert bound for standardized distribution
if ub is None:
ub = (self.b)
else:
ub = ub - loc #convert bound for standardized distribution
if conditional:
if np.isposinf(ub)[()]:
#work around bug: stats.poisson.sf(stats.poisson.b, 2) is nan
invfac = 1 - self.cdf(lb-1,*args)
else:
invfac = 1 - self.cdf(lb-1,*args) - self.sf(ub,*args)
else:
invfac = 1.0
tot = 0.0
low, upp = self._ppf(0.001, *args), self._ppf(0.999, *args)
low = max(min(-suppnmin, low), lb)
upp = min(max(suppnmin, upp), ub)
supp = np.arange(low, upp+1, self.inc) #check limits
#print 'low, upp', low, upp
tot = np.sum(fun(supp))
diff = 1e100
pos = upp + self.inc
count = 0
#handle cases with infinite support
while (pos <= ub) and (diff > self.moment_tol) and count <= maxcount:
diff = fun(pos)
tot += diff
pos += self.inc
count += 1
if self.a < 0: #handle case when self.a = -inf
diff = 1e100
pos = low - self.inc
while (pos >= lb) and (diff > self.moment_tol) and count <= maxcount:
diff = fun(pos)
tot += diff
pos -= self.inc
count += 1
if count > maxcount:
# fixme: replace with proper warning
print 'sum did not converge'
return tot/invfac
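# Example (sketch, assuming the generic summation above): expected value of
# an arbitrary function under a discrete distribution, e.g. the factorial
# moment of a Poisson variable with mu=3:
#   >>> poisson.expect(lambda k: k*(k-1), args=(3,))  # -> mu**2 = 9, up to the summation tolerance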
# Binomial
class binom_gen(rv_discrete):
def _rvs(self, n, pr):
return mtrand.binomial(n,pr,self._size)
def _argcheck(self, n, pr):
self.b = n
return (n>=0) & (pr >= 0) & (pr <= 1)
def _logpmf(self, x, n, pr):
k = floor(x)
combiln = (gamln(n+1) - (gamln(k+1) +
gamln(n-k+1)))
return combiln + k*np.log(pr) + (n-k)*np.log(1-pr)
def _pmf(self, x, n, pr):
return exp(self._logpmf(x, n, pr))
def _cdf(self, x, n, pr):
k = floor(x)
vals = special.bdtr(k,n,pr)
return vals
def _sf(self, x, n, pr):
k = floor(x)
return special.bdtrc(k,n,pr)
def _ppf(self, q, n, pr):
vals = ceil(special.bdtrik(q,n,pr))
vals1 = vals-1
temp = special.bdtr(vals1,n,pr)
return where(temp >= q, vals1, vals)
def _stats(self, n, pr):
q = 1.0-pr
mu = n * pr
var = n * pr * q
g1 = (q-pr) / sqrt(n*pr*q)
g2 = (1.0-6*pr*q)/(n*pr*q)
return mu, var, g1, g2
def _entropy(self, n, pr):
k = r_[0:n+1]
vals = self._pmf(k,n,pr)
lvals = where(vals==0,0.0,log(vals))
return -sum(vals*lvals,axis=0)
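# Example usage of the binom instance created just below (sketch, not part
# of the original source):
#   >>> binom.pmf(3, 10, 0.5)       # P(X == 3) for n=10, pr=0.5
#   >>> binom.cdf(3, 10, 0.5)       # P(X <= 3)
#   >>> binom.rvs(10, 0.5, size=5)  # five random draws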
binom = binom_gen(name='binom',shapes="n, pr",extradoc="""
Binomial distribution
Counts the number of successes in *n* independent
trials when the probability of success each time is *pr*.
binom.pmf(k,n,p) = choose(n,k)*p**k*(1-p)**(n-k)
for k in {0,1,...,n}
""")
# Bernoulli distribution
class bernoulli_gen(binom_gen):
def _rvs(self, pr):
return binom_gen._rvs(self, 1, pr)
def _argcheck(self, pr):
return (pr >=0 ) & (pr <= 1)
def _logpmf(self, x, pr):
return binom._logpmf(x, 1, pr)
def _pmf(self, x, pr):
return binom._pmf(x, 1, pr)
def _cdf(self, x, pr):
return binom._cdf(x, 1, pr)
def _sf(self, x, pr):
return binom._sf(x, 1, pr)
def _ppf(self, q, pr):
return binom._ppf(q, 1, pr)
def _stats(self, pr):
return binom._stats(1, pr)
def _entropy(self, pr):
return -pr*log(pr)-(1-pr)*log(1-pr)
bernoulli = bernoulli_gen(b=1,name='bernoulli',shapes="pr",extradoc="""
Bernoulli distribution
1 if binary experiment succeeds, 0 otherwise. Experiment
succeeds with probability *pr*.
bernoulli.pmf(k,p) = 1-p if k = 0
= p if k = 1
for k = 0,1
"""
)
# Negative binomial
class nbinom_gen(rv_discrete):
"""A negative binomial discrete random variable.
%(before_notes)s
Notes
-----
Probability mass function, given by
``np.choose(k+n-1, n-1) * p**n * (1-p)**k`` for ``k >= 0``.
%(example)s
"""
def _rvs(self, n, pr):
return mtrand.negative_binomial(n, pr, self._size)
def _argcheck(self, n, pr):
return (n >= 0) & (pr >= 0) & (pr <= 1)
def _pmf(self, x, n, pr):
coeff = exp(gamln(n+x) - gamln(x+1) - gamln(n))
return coeff * power(pr,n) * power(1-pr,x)
def _logpmf(self, x, n, pr):
coeff = gamln(n+x) - gamln(x+1) - gamln(n)
return coeff + n*log(pr) + x*log(1-pr)
def _cdf(self, x, n, pr):
k = floor(x)
return special.betainc(n, k+1, pr)
def _sf_skip(self, x, n, pr):
#skip because special.nbdtrc doesn't work for 0<n<1
k = floor(x)
return special.nbdtrc(k,n,pr)
def _ppf(self, q, n, pr):
vals = ceil(special.nbdtrik(q,n,pr))
vals1 = (vals-1).clip(0.0, np.inf)
temp = self._cdf(vals1,n,pr)
return where(temp >= q, vals1, vals)
def _stats(self, n, pr):
Q = 1.0 / pr
P = Q - 1.0
mu = n*P
var = n*P*Q
g1 = (Q+P)/sqrt(n*P*Q)
g2 = (1.0 + 6*P*Q) / (n*P*Q)
return mu, var, g1, g2
nbinom = nbinom_gen(name='nbinom', shapes="n, pr", extradoc="""
Negative binomial distribution
nbinom.pmf(k,n,p) = choose(k+n-1,n-1) * p**n * (1-p)**k
for k >= 0.
"""
)
## Geometric distribution
class geom_gen(rv_discrete):
def _rvs(self, pr):
return mtrand.geometric(pr,size=self._size)
def _argcheck(self, pr):
return (pr<=1) & (pr >= 0)
def _pmf(self, k, pr):
return (1-pr)**(k-1) * pr
def _logpmf(self, k, pr):
return (k-1)*log(1-pr) + log(pr)  # log of pmf (1-pr)**(k-1) * pr
def _cdf(self, x, pr):
k = floor(x)
return (1.0-(1.0-pr)**k)
def _sf(self, x, pr):
k = floor(x)
return (1.0-pr)**k
def _ppf(self, q, pr):
vals = ceil(log(1.0-q)/log(1-pr))
temp = 1.0-(1.0-pr)**(vals-1)
return where((temp >= q) & (vals > 0), vals-1, vals)
def _stats(self, pr):
mu = 1.0/pr
qr = 1.0-pr
var = qr / pr / pr
g1 = (2.0-pr) / sqrt(qr)
g2 = numpy.polyval([1,-6,6],pr)/(1.0-pr)
return mu, var, g1, g2
geom = geom_gen(a=1,name='geom', longname="A geometric",
shapes="pr", extradoc="""
Geometric distribution
geom.pmf(k,p) = (1-p)**(k-1)*p
for k >= 1
"""
)
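# Example (sketch): geom counts the trial on which the first success occurs,
# so the support starts at k=1:
#   >>> geom.pmf(1, 0.25)   # 0.25
#   >>> geom.pmf(3, 0.25)   # (1-0.25)**2 * 0.25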
## Hypergeometric distribution
class hypergeom_gen(rv_discrete):
def _rvs(self, M, n, N):
return mtrand.hypergeometric(n,M-n,N,size=self._size)
def _argcheck(self, M, n, N):
cond = rv_discrete._argcheck(self,M,n,N)
cond &= (n <= M) & (N <= M)
self.a = N-(M-n)
self.b = min(n,N)
return cond
def _logpmf(self, k, M, n, N):
tot, good = M, n
bad = tot - good
return gamln(good+1) - gamln(good-k+1) - gamln(k+1) + gamln(bad+1) \
- gamln(bad-N+k+1) - gamln(N-k+1) - gamln(tot+1) + gamln(tot-N+1) \
+ gamln(N+1)
def _pmf(self, k, M, n, N):
#same as the following but numerically more precise
#return comb(good,k) * comb(bad,N-k) / comb(tot,N)
return exp(self._logpmf(k, M, n, N))
def _stats(self, M, n, N):
tot, good = M, n
n = good*1.0
m = (tot-good)*1.0
N = N*1.0
tot = m+n
p = n/tot
mu = N*p
var = m*n*N*(tot-N)*1.0/(tot*tot*(tot-1))
g1 = (m - n)*(tot-2*N) / (tot-2.0)*sqrt((tot-1.0)/(m*n*N*(tot-N)))
m2, m3, m4, m5 = m**2, m**3, m**4, m**5
n2, n3, n4, n5 = n**2, n**3, n**4, n**5
g2 = m3 - m5 + n*(3*m2-6*m3+m4) + 3*m*n2 - 12*m2*n2 + 8*m3*n2 + n3 \
- 6*m*n3 + 8*m2*n3 + m*n4 - n5 - 6*m3*N + 6*m4*N + 18*m2*n*N \
- 6*m3*n*N + 18*m*n2*N - 24*m2*n2*N - 6*n3*N - 6*m*n3*N \
+ 6*n4*N + N*N*(6*m2 - 6*m3 - 24*m*n + 12*m2*n + 6*n2 + \
12*m*n2 - 6*n3)
return mu, var, g1, g2
def _entropy(self, M, n, N):
k = r_[N-(M-n):min(n,N)+1]
vals = self.pmf(k,M,n,N)
lvals = where(vals==0.0,0.0,log(vals))
return -sum(vals*lvals,axis=0)
hypergeom = hypergeom_gen(name='hypergeom',longname="A hypergeometric",
shapes="M, n, N", extradoc="""
Hypergeometric distribution
Models drawing objects from a bin.
M is total number of objects, n is total number of Type I objects.
RV counts number of Type I objects in N drawn without replacement from
population.
hypergeom.pmf(k, M, n, N) = choose(n,k)*choose(M-n,N-k)/choose(M,N)
for N - (M-n) <= k <= min(n,N)
"""
)
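# Example (sketch): drawing N=10 items without replacement from M=50 objects
# of which n=5 are Type I; probability of seeing exactly one Type I item:
#   >>> hypergeom.pmf(1, 50, 5, 10)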
## Logarithmic (Log-Series), (Series) distribution
# FIXME: Fails _cdfvec
class logser_gen(rv_discrete):
def _rvs(self, pr):
# looks wrong for pr>0.5, too few k=1
# trying to use generic is worse, no k=1 at all
return mtrand.logseries(pr,size=self._size)
def _argcheck(self, pr):
return (pr > 0) & (pr < 1)
def _pmf(self, k, pr):
return -pr**k * 1.0 / k / log(1-pr)
def _stats(self, pr):
r = log(1-pr)
mu = pr / (pr - 1.0) / r
mu2p = -pr / r / (pr-1.0)**2
var = mu2p - mu*mu
mu3p = -pr / r * (1.0+pr) / (1.0-pr)**3
mu3 = mu3p - 3*mu*mu2p + 2*mu**3
g1 = mu3 / var**1.5
mu4p = -pr / r * (1.0/(pr-1)**2 - 6*pr/(pr-1)**3 + \
6*pr*pr / (pr-1)**4)
mu4 = mu4p - 4*mu3p*mu + 6*mu2p*mu*mu - 3*mu**4
g2 = mu4 / var**2 - 3.0
return mu, var, g1, g2
logser = logser_gen(a=1,name='logser', longname='A logarithmic',
shapes='pr', extradoc="""
Logarithmic (Log-Series, Series) distribution
logser.pmf(k,p) = - p**k / (k*log(1-p))
for k >= 1
"""
)
## Poisson distribution
class poisson_gen(rv_discrete):
def _rvs(self, mu):
return mtrand.poisson(mu, self._size)
def _pmf(self, k, mu):
Pk = k*log(mu)-gamln(k+1) - mu
return exp(Pk)
def _cdf(self, x, mu):
k = floor(x)
return special.pdtr(k,mu)
def _sf(self, x, mu):
k = floor(x)
return special.pdtrc(k,mu)
def _ppf(self, q, mu):
vals = ceil(special.pdtrik(q,mu))
vals1 = vals-1
temp = special.pdtr(vals1,mu)
return where((temp >= q), vals1, vals)
def _stats(self, mu):
var = mu
g1 = 1.0/arr(sqrt(mu))
g2 = 1.0 / arr(mu)
return mu, var, g1, g2
poisson = poisson_gen(name="poisson", longname='A Poisson',
shapes="mu", extradoc="""
Poisson distribution
poisson.pmf(k, mu) = exp(-mu) * mu**k / k!
for k >= 0
"""
)
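# Example (sketch):
#   >>> poisson.pmf(0, 2.0)   # exp(-2) ~= 0.1353
#   >>> poisson.sf(5, 2.0)    # P(X > 5) for mu=2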
## (Planck) Discrete Exponential
class planck_gen(rv_discrete):
def _argcheck(self, lambda_):
if (lambda_ > 0):
self.a = 0
self.b = inf
return 1
elif (lambda_ < 0):
self.a = -inf
self.b = 0
return 1
return 0 # lambda_ = 0
def _pmf(self, k, lambda_):
fact = (1-exp(-lambda_))
return fact*exp(-lambda_*k)
def _cdf(self, x, lambda_):
k = floor(x)
return 1-exp(-lambda_*(k+1))
def _ppf(self, q, lambda_):
vals = ceil(-1.0/lambda_ * log1p(-q)-1)
vals1 = (vals-1).clip(self.a, np.inf)
temp = self._cdf(vals1, lambda_)
return where(temp >= q, vals1, vals)
def _stats(self, lambda_):
mu = 1/(exp(lambda_)-1)
var = exp(-lambda_)/(expm1(-lambda_))**2
g1 = 2*cosh(lambda_/2.0)
g2 = 4+2*cosh(lambda_)
return mu, var, g1, g2
def _entropy(self, lambda_):
l = lambda_
C = (1-exp(-l))
return l*exp(-l)/C - log(C)
planck = planck_gen(name='planck',longname='A discrete exponential ',
shapes="lamda",
extradoc="""
Planck (Discrete Exponential)
planck.pmf(k,b) = (1-exp(-b))*exp(-b*k)
for k*b >= 0
"""
)
class boltzmann_gen(rv_discrete):
def _pmf(self, k, lambda_, N):
fact = (1-exp(-lambda_))/(1-exp(-lambda_*N))
return fact*exp(-lambda_*k)
def _cdf(self, x, lambda_, N):
k = floor(x)
return (1-exp(-lambda_*(k+1)))/(1-exp(-lambda_*N))
def _ppf(self, q, lambda_, N):
qnew = q*(1-exp(-lambda_*N))
vals = ceil(-1.0/lambda_ * log(1-qnew)-1)
vals1 = (vals-1).clip(0.0, np.inf)
temp = self._cdf(vals1, lambda_, N)
return where(temp >= q, vals1, vals)
def _stats(self, lambda_, N):
z = exp(-lambda_)
zN = exp(-lambda_*N)
mu = z/(1.0-z)-N*zN/(1-zN)
var = z/(1.0-z)**2 - N*N*zN/(1-zN)**2
trm = (1-zN)/(1-z)
trm2 = (z*trm**2 - N*N*zN)
g1 = z*(1+z)*trm**3 - N**3*zN*(1+zN)
g1 = g1 / trm2**(1.5)
g2 = z*(1+4*z+z*z)*trm**4 - N**4 * zN*(1+4*zN+zN*zN)
g2 = g2 / trm2 / trm2
return mu, var, g1, g2
boltzmann = boltzmann_gen(name='boltzmann',longname='A truncated discrete exponential ',
shapes="lamda, N",
extradoc="""
Boltzmann (Truncated Discrete Exponential)
boltzmann.pmf(k,b,N) = (1-exp(-b))*exp(-b*k)/(1-exp(-b*N))
for k=0,..,N-1
"""
)
## Discrete Uniform
class randint_gen(rv_discrete):
def _argcheck(self, min, max):
self.a = min
self.b = max-1
return (max > min)
def _pmf(self, k, min, max):
fact = 1.0 / (max - min)
return fact
def _cdf(self, x, min, max):
k = floor(x)
return (k-min+1)*1.0/(max-min)
def _ppf(self, q, min, max):
vals = ceil(q*(max-min)+min)-1
vals1 = (vals-1).clip(min, max)
temp = self._cdf(vals1, min, max)
return where(temp >= q, vals1, vals)
def _stats(self, min, max):
m2, m1 = arr(max), arr(min)
mu = (m2 + m1 - 1.0) / 2
d = m2 - m1
var = (d-1)*(d+1.0)/12.0
g1 = 0.0
g2 = -6.0/5.0*(d*d+1.0)/(d-1.0)/(d+1.0)  # excess kurtosis: -6(d**2+1)/(5(d**2-1))
return mu, var, g1, g2
def _rvs(self, min, max=None):
"""An array of *size* random integers >= min and < max.
If max is None, then range is >=0 and < min
"""
return mtrand.randint(min, max, self._size)
def _entropy(self, min, max):
return log(max-min)
randint = randint_gen(name='randint',longname='A discrete uniform '\
'(random integer)', shapes="min, max",
extradoc="""
Discrete Uniform
Random integers >=min and <max.
randint.pmf(k,min, max) = 1/(max-min)
for min <= k < max.
"""
)
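# Example (sketch): uniform on the integers min <= k < max,
#   >>> randint.pmf(2, 0, 6)        # 1/6, like a zero-based die
#   >>> randint.rvs(0, 6, size=3)   # three random draws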
# Zipf distribution
# FIXME: problems sampling.
class zipf_gen(rv_discrete):
def _rvs(self, a):
return mtrand.zipf(a, size=self._size)
def _argcheck(self, a):
return a > 1
def _pmf(self, k, a):
Pk = 1.0 / arr(special.zeta(a,1) * k**a)
return Pk
def _munp(self, n, a):
return special.zeta(a-n,1) / special.zeta(a,1)
def _stats(self, a):
sv = errp(0)
fac = arr(special.zeta(a,1))
mu = special.zeta(a-1.0,1)/fac
mu2p = special.zeta(a-2.0,1)/fac
var = mu2p - mu*mu
mu3p = special.zeta(a-3.0,1)/fac
mu3 = mu3p - 3*mu*mu2p + 2*mu**3
g1 = mu3 / arr(var**1.5)
mu4p = special.zeta(a-4.0,1)/fac
sv = errp(sv)
mu4 = mu4p - 4*mu3p*mu + 6*mu2p*mu*mu - 3*mu**4
g2 = mu4 / arr(var**2) - 3.0
return mu, var, g1, g2
zipf = zipf_gen(a=1,name='zipf', longname='A Zipf',
shapes="a", extradoc="""
Zipf distribution
zipf.pmf(k,a) = 1/(zeta(a)*k**a)
for k >= 1
"""
)
# Discrete Laplacian
class dlaplace_gen(rv_discrete):
def _pmf(self, k, a):
return tanh(a/2.0)*exp(-a*abs(k))
def _cdf(self, x, a):
k = floor(x)
ind = (k >= 0)
const = exp(a)+1
return where(ind, 1.0-exp(-a*k)/const, exp(a*(k+1))/const)
def _ppf(self, q, a):
const = 1.0/(1+exp(-a))
cons2 = 1+exp(a)
ind = q < const
vals = ceil(where(ind, log(q*cons2)/a-1, -log((1-q)*cons2)/a))
vals1 = (vals-1)
temp = self._cdf(vals1, a)
return where(temp >= q, vals1, vals)
def _stats_skip(self, a):
# variance mu2 does not agree with sample variance,
# nor with direct calculation using pmf
# remove for now because generic calculation works
# except it does not show nice zeros for mean and skew(?)
ea = exp(-a)
e2a = exp(-2*a)
e3a = exp(-3*a)
e4a = exp(-4*a)
mu2 = 2* (e2a + ea) / (1-ea)**3.0
mu4 = 2* (e4a + 11*e3a + 11*e2a + ea) / (1-ea)**5.0
return 0.0, mu2, 0.0, mu4 / mu2**2.0 - 3
def _entropy(self, a):
return a / sinh(a) - log(tanh(a/2.0))
dlaplace = dlaplace_gen(a=-inf,
name='dlaplace', longname='A discrete Laplacian',
shapes="a", extradoc="""
Discrete Laplacian distribution.
dlaplace.pmf(k,a) = tanh(a/2) * exp(-a*abs(k))
for a > 0.
"""
)
class skellam_gen(rv_discrete):
def _rvs(self, mu1, mu2):
n = self._size
return np.random.poisson(mu1, n)-np.random.poisson(mu2, n)
def _pmf(self, x, mu1, mu2):
px = np.where(x < 0, ncx2.pdf(2*mu2, 2*(1-x), 2*mu1)*2,
ncx2.pdf(2*mu1, 2*(x+1), 2*mu2)*2)
#ncx2.pdf() returns nan's for extremely low probabilities
return px
def _cdf(self, x, mu1, mu2):
x = np.floor(x)
px = np.where(x < 0, ncx2.cdf(2*mu2, -2*x, 2*mu1),
1-ncx2.cdf(2*mu1, 2*(x+1), 2*mu2))
return px
# enable later
## def _cf(self, w, mu1, mu2):
## # characteristic function
## poisscf = poisson._cf
## return poisscf(w, mu1) * poisscf(-w, mu2)
def _stats(self, mu1, mu2):
mean = mu1 - mu2
var = mu1 + mu2
g1 = mean / np.sqrt((var)**3)
g2 = 1 / var
return mean, var, g1, g2
skellam = skellam_gen(a=-np.inf, name="skellam", longname='A Skellam',
shapes="mu1,mu2", extradoc="""
Skellam distribution
Probability distribution of the difference of two correlated or
uncorrelated Poisson random variables.
Let k1 and k2 be two Poisson-distributed r.v. with expected values
lam1 and lam2. Then, k1-k2 follows a Skellam distribution with
parameters mu1 = lam1 - rho*sqrt(lam1*lam2) and
mu2 = lam2 - rho*sqrt(lam1*lam2), where rho is the correlation
coefficient between k1 and k2. If the two Poisson-distributed r.v.
are independent then rho = 0.
Parameters mu1 and mu2 must be strictly positive.
For details see: http://en.wikipedia.org/wiki/Skellam_distribution
"""
)
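# Example (sketch): difference of two independent Poisson counts,
#   >>> skellam.pmf(0, 3.0, 2.0)   # P(k1 - k2 == 0) for rates 3 and 2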
| gpl-3.0 |
denisff/python-for-android | python3-alpha/python3-src/Lib/turtledemo/round_dance.py | 164 | 1804 | """ turtle-example-suite:
tdemo_round_dance.py
(Needs version 1.1 of the turtle module that
comes with Python 3.1)
Dancing turtles have a compound shape
consisting of a series of triangles of
decreasing size.
Turtles march along a circle while rotating
pairwise in opposite direction, with one
exception. Does that breaking of symmetry
enhance the attractiveness of the example?
Press any key to stop the animation.
Technically: demonstrates use of compound
shapes, transformation of shapes as well as
cloning turtles. The animation is
controlled through update().
"""
from turtle import *
def stop():
global running
running = False
def main():
global running
clearscreen()
bgcolor("gray10")
tracer(False)
shape("triangle")
f = 0.793402
phi = 9.064678
s = 5
c = 1
# create compound shape
sh = Shape("compound")
for i in range(10):
shapesize(s)
p = get_shapepoly()
s *= f
c *= f
tilt(-phi)
sh.addcomponent(p, (c, 0.25, 1-c), "black")
register_shape("multitri", sh)
# create dancers
shapesize(1)
shape("multitri")
pu()
setpos(0, -200)
dancers = []
for i in range(180):
fd(7)
tilt(-4)
lt(2)
update()
if i % 12 == 0:
dancers.append(clone())
home()
# dance
running = True
onkeypress(stop)
listen()
cs = 1
while running:
ta = -4
for dancer in dancers:
dancer.fd(7)
dancer.lt(2)
dancer.tilt(ta)
ta = -4 if ta > 0 else 2
if cs < 180:
right(4)
shapesize(cs)
cs *= 1.005
update()
return "DONE!"
if __name__=='__main__':
print(main())
mainloop()
| apache-2.0 |
ogenstad/ansible | lib/ansible/modules/cloud/ovirt/ovirt_external_provider_facts.py | 78 | 6127 | #!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016 Red Hat, Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_external_provider_facts
short_description: Retrieve facts about one or more oVirt/RHV external providers
author: "Ondra Machacek (@machacekondra)"
version_added: "2.3"
description:
- "Retrieve facts about one or more oVirt/RHV external providers."
notes:
- "This module creates a new top-level C(ovirt_external_providers) fact, which
contains a list of external_providers."
options:
type:
description:
- "Type of the external provider."
choices: ['os_image', 'os_network', 'os_volume', 'foreman']
required: true
name:
description:
- "Name of the external provider, can be used as glob expression."
extends_documentation_fragment: ovirt_facts
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
# Gather facts about all image external providers named C<glance>:
- ovirt_external_provider_facts:
type: os_image
name: glance
- debug:
var: ovirt_external_providers
'''
RETURN = '''
external_host_providers:
description: "List of dictionaries of all the external_host_provider attributes. External provider attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/external_host_provider."
returned: "On success and if parameter 'type: foreman' is used."
type: list
openstack_image_providers:
description: "List of dictionaries of all the openstack_image_provider attributes. External provider attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/openstack_image_provider."
returned: "On success and if parameter 'type: os_image' is used."
type: list
openstack_volume_providers:
description: "List of dictionaries of all the openstack_volume_provider attributes. External provider attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/openstack_volume_provider."
returned: "On success and if parameter 'type: os_volume' is used."
type: list
openstack_network_providers:
description: "List of dictionaries of all the openstack_network_provider attributes. External provider attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/openstack_network_provider."
returned: "On success and if parameter 'type: os_network' is used."
type: list
'''
import fnmatch
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
check_sdk,
create_connection,
get_dict_of_struct,
ovirt_facts_full_argument_spec,
)
def _external_provider_service(provider_type, system_service):
if provider_type == 'os_image':
return system_service.openstack_image_providers_service()
elif provider_type == 'os_network':
return system_service.openstack_network_providers_service()
elif provider_type == 'os_volume':
return system_service.openstack_volume_providers_service()
elif provider_type == 'foreman':
return system_service.external_host_providers_service()
def main():
argument_spec = ovirt_facts_full_argument_spec(
name=dict(default=None, required=False),
type=dict(
default=None,
required=True,
choices=[
'os_image', 'os_network', 'os_volume', 'foreman',
],
aliases=['provider'],
),
)
module = AnsibleModule(argument_spec)
if module._name == 'ovirt_external_providers_facts':
module.deprecate("The 'ovirt_external_providers_facts' module is being renamed 'ovirt_external_provider_facts'", version=2.8)
check_sdk(module)
try:
auth = module.params.pop('auth')
connection = create_connection(auth)
external_providers_service = _external_provider_service(
provider_type=module.params.pop('type'),
system_service=connection.system_service(),
)
if module.params['name']:
external_providers = [
e for e in external_providers_service.list()
if fnmatch.fnmatch(e.name, module.params['name'])
]
else:
external_providers = external_providers_service.list()
module.exit_json(
changed=False,
ansible_facts=dict(
ovirt_external_providers=[
get_dict_of_struct(
struct=c,
connection=connection,
fetch_nested=module.params.get('fetch_nested'),
attributes=module.params.get('nested_attributes'),
) for c in external_providers
],
),
)
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == '__main__':
main()
| gpl-3.0 |
edx/edx-platform | lms/djangoapps/certificates/services.py | 3 | 1454 | """
Certificate service
"""
import logging
from django.core.exceptions import ObjectDoesNotExist
from opaque_keys.edx.keys import CourseKey
from lms.djangoapps.certificates.generation_handler import is_on_certificate_allowlist
from lms.djangoapps.certificates.models import GeneratedCertificate
from lms.djangoapps.utils import _get_key
log = logging.getLogger(__name__)
class CertificateService:
"""
User Certificate service
"""
def invalidate_certificate(self, user_id, course_key_or_id):
"""
Invalidate the user certificate in a given course if it exists and the user is not on the allowlist for this
course run.
"""
course_key = _get_key(course_key_or_id, CourseKey)
if is_on_certificate_allowlist(user_id, course_key):
log.info(f'User {user_id} is on the allowlist for {course_key}. The certificate will not be invalidated.')
return False
try:
generated_certificate = GeneratedCertificate.objects.get(
user=user_id,
course_id=course_key
)
generated_certificate.invalidate(source='certificate_service')
except ObjectDoesNotExist:
log.warning(
'Invalidation failed because a certificate for user %d in course %s does not exist.',
user_id,
course_key
)
return False
return True
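# Example (sketch; assumes calling code already has a user id and a course
# key or course-id string):
#   service = CertificateService()
#   service.invalidate_certificate(user_id, 'course-v1:edX+DemoX+Demo_Course')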
| agpl-3.0 |
chirilo/phantomjs | src/qt/qtwebkit/Tools/Scripts/webkitpy/common/net/irc/ircproxy_unittest.py | 122 | 2069 | # Copyright (c) 2010 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import unittest2 as unittest
from webkitpy.common.net.irc.ircproxy import IRCProxy
from webkitpy.common.system.outputcapture import OutputCapture
from webkitpy.thirdparty.mock import Mock
class IRCProxyTest(unittest.TestCase):
def test_trivial(self):
def fun():
proxy = IRCProxy(Mock(), Mock())
proxy.post("hello")
proxy.disconnect()
expected_logs = "Connecting to IRC\nDisconnecting from IRC...\n"
OutputCapture().assert_outputs(self, fun, expected_logs=expected_logs)
| bsd-3-clause |
konstruktoid/ansible-upstream | lib/ansible/plugins/inventory/openstack.py | 8 | 12130 | # Copyright (c) 2012, Marco Vito Moscaritolo <[email protected]>
# Copyright (c) 2013, Jesse Keating <[email protected]>
# Copyright (c) 2015, Hewlett-Packard Development Company, L.P.
# Copyright (c) 2016, Rackspace Australia
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: openstack
plugin_type: inventory
authors:
- Marco Vito Moscaritolo <[email protected]>
- Jesse Keating <[email protected]>
short_description: OpenStack inventory source
description:
- Get inventory hosts from OpenStack clouds
- Uses openstack.(yml|yaml) YAML configuration file to configure the inventory plugin
- Uses standard clouds.yaml YAML configuration file to configure cloud credentials
options:
show_all:
description: toggles showing all vms vs only those with a working IP
type: bool
default: 'no'
inventory_hostname:
description: |
What to register as the inventory hostname.
If set to 'uuid' the uuid of the server will be used and a
group will be created for the server name.
If set to 'name' the name of the server will be used unless
there are more than one server with the same name in which
case the 'uuid' logic will be used.
Default is to do 'name', which is the opposite of the old
openstack.py inventory script's option use_hostnames)
type: string
choices:
- name
- uuid
default: "name"
expand_hostvars:
description: |
Run extra commands on each host to fill in additional
information about the host. May interrogate cinder and
neutron and can be expensive for people with many hosts.
(Note, the default value of this is opposite from the default
old openstack.py inventory script's option expand_hostvars)
type: bool
default: 'no'
private:
description: |
Use the private interface of each server, if it has one, as
the host's IP in the inventory. This can be useful if you are
running ansible inside a server in the cloud and would rather
communicate to your servers over the private network.
type: bool
default: 'no'
only_clouds:
description: |
List of clouds from clouds.yaml to use, instead of using
the whole list.
type: list
default: []
fail_on_errors:
description: |
Causes the inventory to fail and return no hosts if one cloud
has failed (for example, bad credentials or being offline).
When set to False, the inventory will return as many hosts as
it can from as many clouds as it can contact. (Note, the
default value of this is opposite from the old openstack.py
inventory script's option fail_on_errors)
type: bool
default: 'no'
clouds_yaml_path:
description: |
Override path to clouds.yaml file. If this value is given it
will be searched first. The default path for the
ansible inventory adds /etc/ansible/openstack.yaml and
/etc/ansible/openstack.yml to the regular locations documented
at https://docs.openstack.org/os-client-config/latest/user/configuration.html#config-files
type: string
compose:
description: Create vars from jinja2 expressions.
type: dictionary
default: {}
groups:
description: Add hosts to group based on Jinja2 conditionals.
type: dictionary
default: {}
'''
EXAMPLES = '''
# file must be named openstack.yaml or openstack.yml
# Make the plugin behave like the default behavior of the old script
plugin: openstack
expand_hostvars: yes
fail_on_errors: yes
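# A second, illustrative configuration (sketch; the metadata key, group name
# and Jinja2 expressions below are invented) showing 'compose' and 'groups':
# plugin: openstack
# compose:
#   ansible_ssh_host: openstack.private_v4
# groups:
#   app_servers: "'app' in (openstack.metadata.group|default(''))"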
'''
import collections
from ansible.errors import AnsibleParserError
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable, Cacheable
try:
# Due to the name shadowing we should import other way
import importlib
sdk = importlib.import_module('openstack')
sdk_inventory = importlib.import_module('openstack.cloud.inventory')
client_config = importlib.import_module('openstack.config.loader')
HAS_SDK = True
except ImportError:
HAS_SDK = False
class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
''' Host inventory provider for ansible using OpenStack clouds. '''
NAME = 'openstack'
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
cache_key = self._get_cache_prefix(path)
# file is config file
self._config_data = self._read_config_data(path)
msg = ''
if not self._config_data:
msg = 'File empty. this is not my config file'
elif 'plugin' in self._config_data and self._config_data['plugin'] != self.NAME:
msg = 'plugin config file, but not for us: %s' % self._config_data['plugin']
elif 'plugin' not in self._config_data and 'clouds' not in self._config_data:
msg = "it's not a plugin configuration nor a clouds.yaml file"
elif not HAS_SDK:
msg = "openstacksdk is required for the OpenStack inventory plugin. OpenStack inventory sources will be skipped."
if msg:
raise AnsibleParserError(msg)
# The user has pointed us at a clouds.yaml file. Use defaults for
# everything.
if 'clouds' in self._config_data:
self._config_data = {}
source_data = None
if cache and cache_key in self._cache:
try:
source_data = self._cache[cache_key]
except KeyError:
pass
if not source_data:
clouds_yaml_path = self._config_data.get('clouds_yaml_path')
if clouds_yaml_path:
config_files = (clouds_yaml_path +
client_config.CONFIG_FILES)
else:
config_files = None
# TODO(mordred) Integrate shade's logging with ansible's logging
sdk.enable_logging()
cloud_inventory = sdk_inventory.OpenStackInventory(
config_files=config_files,
private=self._config_data.get('private', False))
only_clouds = self._config_data.get('only_clouds', [])
if only_clouds and not isinstance(only_clouds, list):
raise ValueError(
'OpenStack Inventory Config Error: only_clouds must be'
' a list')
if only_clouds:
new_clouds = []
for cloud in cloud_inventory.clouds:
if cloud.name in only_clouds:
new_clouds.append(cloud)
cloud_inventory.clouds = new_clouds
expand_hostvars = self._config_data.get('expand_hostvars', False)
fail_on_errors = self._config_data.get('fail_on_errors', False)
source_data = cloud_inventory.list_hosts(
expand=expand_hostvars, fail_on_cloud_config=fail_on_errors)
self._cache[cache_key] = source_data
self._populate_from_source(source_data)
def _populate_from_source(self, source_data):
groups = collections.defaultdict(list)
firstpass = collections.defaultdict(list)
hostvars = {}
use_server_id = (
self._config_data.get('inventory_hostname', 'name') != 'name')
show_all = self._config_data.get('show_all', False)
for server in source_data:
if 'interface_ip' not in server and not show_all:
continue
firstpass[server['name']].append(server)
for name, servers in firstpass.items():
if len(servers) == 1 and not use_server_id:
self._append_hostvars(hostvars, groups, name, servers[0])
else:
server_ids = set()
# Trap for duplicate results
for server in servers:
server_ids.add(server['id'])
if len(server_ids) == 1 and not use_server_id:
self._append_hostvars(hostvars, groups, name, servers[0])
else:
for server in servers:
self._append_hostvars(
hostvars, groups, server['id'], server,
namegroup=True)
self._set_variables(hostvars, groups)
def _set_variables(self, hostvars, groups):
# set vars in inventory from hostvars
for host in hostvars:
# create composite vars
self._set_composite_vars(
self._config_data.get('compose'), hostvars, host)
# actually update inventory
for key in hostvars[host]:
self.inventory.set_variable(host, key, hostvars[host][key])
# constructed groups based on conditionals
self._add_host_to_composed_groups(
self._config_data.get('groups'), hostvars, host)
for group_name, group_hosts in groups.items():
self.inventory.add_group(group_name)
for host in group_hosts:
self.inventory.add_child(group_name, host)
def _get_groups_from_server(self, server_vars, namegroup=True):
groups = []
region = server_vars['region']
cloud = server_vars['cloud']
metadata = server_vars.get('metadata', {})
# Create a group for the cloud
groups.append(cloud)
# Create a group on region
groups.append(region)
# And one by cloud_region
groups.append("%s_%s" % (cloud, region))
# Check if group metadata key in servers' metadata
if 'group' in metadata:
groups.append(metadata['group'])
for extra_group in metadata.get('groups', '').split(','):
if extra_group:
groups.append(extra_group.strip())
groups.append('instance-%s' % server_vars['id'])
if namegroup:
groups.append(server_vars['name'])
for key in ('flavor', 'image'):
if 'name' in server_vars[key]:
groups.append('%s-%s' % (key, server_vars[key]['name']))
for key, value in iter(metadata.items()):
groups.append('meta-%s_%s' % (key, value))
az = server_vars.get('az', None)
if az:
# Make groups for az, region_az and cloud_region_az
groups.append(az)
groups.append('%s_%s' % (region, az))
groups.append('%s_%s_%s' % (cloud, region, az))
return groups
def _append_hostvars(self, hostvars, groups, current_host,
server, namegroup=False):
hostvars[current_host] = dict(
ansible_ssh_host=server['interface_ip'],
ansible_host=server['interface_ip'],
openstack=server)
self.inventory.add_host(current_host)
for group in self._get_groups_from_server(server, namegroup=namegroup):
groups[group].append(current_host)
def verify_file(self, path):
if super(InventoryModule, self).verify_file(path):
for fn in ('openstack', 'clouds'):
for suffix in ('yaml', 'yml'):
maybe = '{fn}.{suffix}'.format(fn=fn, suffix=suffix)
if path.endswith(maybe):
return True
return False
| gpl-3.0 |
cmenard/GB_Bullet | scripts/tracing/draw_functrace.py | 14676 | 3560 | #!/usr/bin/python
"""
Copyright 2008 (c) Frederic Weisbecker <[email protected]>
Licensed under the terms of the GNU GPL License version 2
This script parses a trace provided by the function tracer in
kernel/trace/trace_functions.c
The resulted trace is processed into a tree to produce a more human
view of the call stack by drawing textual but hierarchical tree of
calls. Only the functions's names and the the call time are provided.
Usage:
Be sure that you have CONFIG_FUNCTION_TRACER
# mount -t debugfs nodev /sys/kernel/debug
# echo function > /sys/kernel/debug/tracing/current_tracer
$ cat /sys/kernel/debug/tracing/trace_pipe > ~/raw_trace_func
Wait some times but not too much, the script is a bit slow.
Break the pipe (Ctrl + Z)
$ scripts/draw_functrace.py < raw_trace_func > draw_functrace
Then you have your drawn trace in draw_functrace
"""
import sys, re
class CallTree:
""" This class provides a tree representation of the functions
call stack. If a function has no parent in the kernel (interrupt,
syscall, kernel thread...) then it is attached to a virtual parent
called ROOT.
"""
ROOT = None
def __init__(self, func, time = None, parent = None):
self._func = func
self._time = time
if parent is None:
self._parent = CallTree.ROOT
else:
self._parent = parent
self._children = []
def calls(self, func, calltime):
""" If a function calls another one, call this method to insert it
into the tree at the appropriate place.
@return: A reference to the newly created child node.
"""
child = CallTree(func, calltime, self)
self._children.append(child)
return child
def getParent(self, func):
""" Retrieve the last parent of the current node that
has the name given by func. If this function is not
on a parent, then create it as new child of root
@return: A reference to the parent.
"""
tree = self
while tree != CallTree.ROOT and tree._func != func:
tree = tree._parent
if tree == CallTree.ROOT:
child = CallTree.ROOT.calls(func, None)
return child
return tree
def __repr__(self):
return self.__toString("", True)
def __toString(self, branch, lastChild):
if self._time is not None:
s = "%s----%s (%s)\n" % (branch, self._func, self._time)
else:
s = "%s----%s\n" % (branch, self._func)
i = 0
if lastChild:
branch = branch[:-1] + " "
while i < len(self._children):
if i != len(self._children) - 1:
s += "%s" % self._children[i].__toString(branch +\
" |", False)
else:
s += "%s" % self._children[i].__toString(branch +\
" |", True)
i += 1
return s
class BrokenLineException(Exception):
"""If the last line is not complete because of the pipe breakage,
we want to stop the processing and ignore this line.
"""
pass
class CommentLineException(Exception):
""" If the line is a comment (as in the beginning of the trace file),
just ignore it.
"""
pass
def parseLine(line):
line = line.strip()
if line.startswith("#"):
raise CommentLineException
m = re.match("[^]]+?\\] +([0-9.]+): (\\w+) <-(\\w+)", line)
if m is None:
raise BrokenLineException
return (m.group(1), m.group(2), m.group(3))
def main():
CallTree.ROOT = CallTree("Root (Nowhere)", None, None)
tree = CallTree.ROOT
for line in sys.stdin:
try:
calltime, callee, caller = parseLine(line)
except BrokenLineException:
break
except CommentLineException:
continue
tree = tree.getParent(caller)
tree = tree.calls(callee, calltime)
print CallTree.ROOT
if __name__ == "__main__":
main()
| gpl-2.0 |
davidjb/sqlalchemy | test/aaa_profiling/test_pool.py | 27 | 1673 | from sqlalchemy.testing import fixtures, AssertsExecutionResults, profiling
from sqlalchemy.pool import QueuePool
from sqlalchemy import pool as pool_module
pool = None
class QueuePoolTest(fixtures.TestBase, AssertsExecutionResults):
__requires__ = 'cpython',
class Connection(object):
def rollback(self):
pass
def close(self):
pass
def teardown(self):
# the tests leave some fake connections
# around which don't necessarily
# get gc'ed as quickly as we'd like,
# on backends like pypy, python3.2
pool_module._refs.clear()
def setup(self):
# create a throwaway pool which
# has the effect of initializing
# class-level event listeners on Pool,
# if not present already.
p1 = QueuePool(creator=self.Connection,
pool_size=3, max_overflow=-1,
use_threadlocal=True)
p1.connect()
global pool
pool = QueuePool(creator=self.Connection,
pool_size=3, max_overflow=-1,
use_threadlocal=True)
@profiling.function_call_count()
def test_first_connect(self):
pool.connect()
def test_second_connect(self):
conn = pool.connect()
conn.close()
@profiling.function_call_count()
def go():
conn2 = pool.connect()
return conn2
go()
def test_second_samethread_connect(self):
conn = pool.connect()
conn # strong ref
@profiling.function_call_count()
def go():
return pool.connect()
go()
| mit |
rickmendes/ansible-modules-extras | windows/win_file_version.py | 65 | 2187 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Get DLL or EXE build version
# Copyright © 2015 Sam Liu <[email protected]>
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: win_file_version
version_added: "2.1"
short_description: Get DLL or EXE file build version
description:
- Get DLL or EXE file build version
- changed state will always be false
options:
path:
description:
- File to get version from (provide absolute path)
required: true
aliases: []
author: Sam Liu
'''
EXAMPLES = '''
# get C:\Windows\System32\cmd.exe version in playbook
---
- name: Get acm instance version
win_file_version:
path: 'C:\Windows\System32\cmd.exe'
register: exe_file_version
- debug: msg="{{exe_file_version}}"
'''
RETURN = """
win_file_version.path:
description: file path
returned: always
type: string
win_file_version.file_version:
description: file version number.
returned: no error
type: string
win_file_version.product_version:
description: the version of the product this file is distributed with.
returned: no error
type: string
win_file_version.file_major_part:
description: the major part of the version number.
returned: no error
type: string
win_file_version.file_minor_part:
description: the minor part of the version number of the file.
returned: no error
type: string
win_file_version.file_build_part:
description: build number of the file.
returned: no error
type: string
win_file_version.file_private_part:
description: file private part number.
returned: no error
type: string
"""
| gpl-3.0 |
iandriver/RNA-sequence-tools | FPKM_Parsing/make_align_report_2.py | 2 | 2762 | import fnmatch
import os
import pandas as pd
import cPickle as pickle
from scipy import stats
import matplotlib as mpl
import matplotlib.pyplot as plt
from collections import OrderedDict
path = '/Volumes/Seq_data'
result_file_names = ['results_sca_spc', 'results_spc2_n2']
basename = 'sca_spc_combined'
cell_list =[]
align_dict =OrderedDict()
align_dict['input_reads'] = []
align_dict['num_condcord_0'] = []
align_dict['per_condcord_0'] = []
align_dict['num_condcord_exactly1'] = []
align_dict['per_condcord_exactly1'] = []
align_dict['num_condcord_g1'] = []
align_dict['per_condcord_g1'] = []
align_dict['per_overall'] = []
for rf in result_file_names:
path_to_file = os.path.join(path, rf)
for root, dirnames, filenames in os.walk(path_to_file):
for filename in fnmatch.filter(filenames, '*.o*'):
cell_name = (filename.split('.')[0])
cell_list.append(cell_name)
f = open(os.path.join(root,filename), 'r+')
for l in f:
# Parse the bowtie2-style alignment summary lines into the columns that were
# initialised in align_dict above. The original loop referenced variables
# (side_s, input_L_num, ...) and dict keys that were never defined; this
# version fills only the keys created above and assumes the standard bowtie2
# wording for the ">1 times" and overall-rate lines.
if 'reads; of these:' in l:
align_dict['input_reads'].append(int(l.strip().split(' ')[0]))
elif 'aligned concordantly 0 times' in l:
align_dict['num_condcord_0'].append(int(l.strip().split(' ')[0]))
align_dict['per_condcord_0'].append(float(l.split(')')[0].split('(')[-1].strip('%')))
elif 'aligned concordantly exactly 1 time' in l:
align_dict['num_condcord_exactly1'].append(int(l.strip().split(' ')[0]))
align_dict['per_condcord_exactly1'].append(float(l.split(')')[0].split('(')[-1].strip('%')))
elif 'aligned concordantly >1 times' in l:
align_dict['num_condcord_g1'].append(int(l.strip().split(' ')[0]))
align_dict['per_condcord_g1'].append(float(l.split(')')[0].split('(')[-1].strip('%')))
elif 'overall alignment rate' in l or 'overall read mapping rate' in l:
align_dict['per_overall'].append(float(l.split('%')[0]))
f.close()
align_df = pd.DataFrame(align_dict, index = cell_list)
align_df.to_csv(os.path.join(path,result_file_names[0],'results_'+basename+'_align.txt'), sep = '\t')
plt.hist(align_df['per_overall'])  # 'mapped_L_num' is never collected above; plot the overall alignment rate instead
plt.show()
with open(os.path.join(path,result_file_names[0],'results_'+basename+'_align.p'), 'wb') as fp:
pickle.dump(align_df, fp)
| mit |
revzim/interactive-tutorials | suds/client.py | 5 | 25972 | # This program is free software; you can redistribute it and/or modify
# it under the terms of the (LGPL) GNU Lesser General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library Lesser General Public License for more details at
# ( http://www.gnu.org/licenses/lgpl.html ).
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
# written by: Jeff Ortel ( [email protected] )
"""
The I{2nd generation} service proxy provides access to web services.
See I{README.txt}
"""
import suds
import suds.metrics as metrics
from cookielib import CookieJar
from suds import *
from suds.reader import DefinitionsReader
from suds.transport import TransportError, Request
from suds.transport.https import HttpAuthenticated
from suds.servicedefinition import ServiceDefinition
from suds import sudsobject
from sudsobject import Factory as InstFactory
from sudsobject import Object
from suds.resolver import PathResolver
from suds.builder import Builder
from suds.wsdl import Definitions
from suds.cache import ObjectCache
from suds.sax.document import Document
from suds.sax.parser import Parser
from suds.options import Options
from suds.properties import Unskin
from urlparse import urlparse
from copy import deepcopy
from suds.plugin import PluginContainer
from logging import getLogger
log = getLogger(__name__)
class Client(object):
"""
A lightweight web services client.
I{(2nd generation)} API.
@ivar wsdl: The WSDL object.
@type wsdl:L{Definitions}
@ivar service: The service proxy used to invoke operations.
@type service: L{Service}
@ivar factory: The factory used to create objects.
@type factory: L{Factory}
@ivar sd: The service definition
@type sd: L{ServiceDefinition}
@ivar messages: The last sent/received messages.
@type messages: str[2]
"""
@classmethod
def items(cls, sobject):
"""
Extract the I{items} from a suds object much like the
items() method works on I{dict}.
@param sobject: A suds object
@type sobject: L{Object}
@return: A list of items contained in I{sobject}.
@rtype: [(key, value),...]
"""
return sudsobject.items(sobject)
@classmethod
def dict(cls, sobject):
"""
Convert a sudsobject into a dictionary.
@param sobject: A suds object
@type sobject: L{Object}
@return: A python dictionary containing the
items contained in I{sobject}.
@rtype: dict
"""
return sudsobject.asdict(sobject)
@classmethod
def metadata(cls, sobject):
"""
Extract the metadata from a suds object.
@param sobject: A suds object
@type sobject: L{Object}
@return: The object's metadata
@rtype: L{sudsobject.Metadata}
"""
return sobject.__metadata__
def __init__(self, url, **kwargs):
"""
@param url: The URL for the WSDL.
@type url: str
@param kwargs: keyword arguments.
@see: L{Options}
"""
options = Options()
options.transport = HttpAuthenticated()
self.options = options
#options.cache = ObjectCache(days=1)
self.set_options(**kwargs)
reader = DefinitionsReader(options, Definitions)
self.wsdl = reader.open(url)
plugins = PluginContainer(options.plugins)
plugins.init.initialized(wsdl=self.wsdl)
self.factory = Factory(self.wsdl)
self.service = ServiceSelector(self, self.wsdl.services)
self.sd = []
for s in self.wsdl.services:
sd = ServiceDefinition(self.wsdl, s)
self.sd.append(sd)
self.messages = dict(tx=None, rx=None)
def set_options(self, **kwargs):
"""
Set options.
@param kwargs: keyword arguments.
@see: L{Options}
"""
p = Unskin(self.options)
p.update(kwargs)
def add_prefix(self, prefix, uri):
"""
Add I{static} mapping of an XML namespace prefix to a namespace.
This is useful for cases when a wsdl and referenced schemas make heavy
use of namespaces and those namespaces are subject to change.
@param prefix: An XML namespace prefix.
@type prefix: str
@param uri: An XML namespace URI.
@type uri: str
@raise Exception: when prefix is already mapped.
"""
root = self.wsdl.root
mapped = root.resolvePrefix(prefix, None)
if mapped is None:
root.addPrefix(prefix, uri)
return
if mapped[1] != uri:
raise Exception('"%s" already mapped as "%s"' % (prefix, mapped))
def last_sent(self):
"""
Get last sent I{soap} message.
@return: The last sent I{soap} message.
@rtype: L{Document}
"""
return self.messages.get('tx')
def last_received(self):
"""
Get last received I{soap} message.
@return: The last received I{soap} message.
@rtype: L{Document}
"""
return self.messages.get('rx')
def clone(self):
"""
Get a shallow clone of this object.
The clone only shares the WSDL. All other attributes are
unique to the cloned object including options.
@return: A shallow clone.
@rtype: L{Client}
"""
class Uninitialized(Client):
def __init__(self):
pass
clone = Uninitialized()
clone.options = Options()
cp = Unskin(clone.options)
mp = Unskin(self.options)
cp.update(deepcopy(mp))
clone.wsdl = self.wsdl
clone.factory = self.factory
clone.service = ServiceSelector(clone, self.wsdl.services)
clone.sd = self.sd
clone.messages = dict(tx=None, rx=None)
return clone
def __str__(self):
return unicode(self)
def __unicode__(self):
s = ['\n']
build = suds.__build__.split()
s.append('Suds ( https://fedorahosted.org/suds/ )')
s.append(' version: %s' % suds.__version__)
s.append(' %s build: %s' % (build[0], build[1]))
for sd in self.sd:
s.append('\n\n%s' % unicode(sd))
return ''.join(s)
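# Example usage (sketch, not part of the original module; the URL, operation
# and type names are placeholders):
#   from suds.client import Client
#   client = Client('http://localhost/service?wsdl')
#   print client                                  # show services, ports, methods and types
#   result = client.service.GetSomething('arg')   # invoke an operation on the default port
#   obj = client.factory.create('SomeComplexType')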
class Factory:
"""
A factory for instantiating types defined in the wsdl
@ivar resolver: A schema type resolver.
@type resolver: L{PathResolver}
@ivar builder: A schema object builder.
@type builder: L{Builder}
"""
def __init__(self, wsdl):
"""
@param wsdl: A schema object.
@type wsdl: L{wsdl.Definitions}
"""
self.wsdl = wsdl
self.resolver = PathResolver(wsdl)
self.builder = Builder(self.resolver)
def create(self, name):
"""
create a WSDL type by name
@param name: The name of a type defined in the WSDL.
@type name: str
@return: The requested object.
@rtype: L{Object}
"""
timer = metrics.Timer()
timer.start()
type = self.resolver.find(name)
if type is None:
raise TypeNotFound(name)
if type.enum():
result = InstFactory.object(name)
for e, a in type.children():
setattr(result, e.name, e.name)
else:
try:
result = self.builder.build(type)
except Exception, e:
log.error("create '%s' failed", name, exc_info=True)
raise BuildError(name, e)
timer.stop()
metrics.log.debug('%s created: %s', name, timer)
return result
def separator(self, ps):
"""
Set the path separator.
@param ps: The new path separator.
@type ps: char
"""
self.resolver = PathResolver(self.wsdl, ps)
class ServiceSelector:
"""
The B{service} selector is used to select a web service.
In most cases, the wsdl only defines (1) service in which access
by subscript is passed through to a L{PortSelector}. This is also the
behavior when a I{default} service has been specified. In cases
where multiple services have been defined and no default has been
specified, the service is found by name (or index) and a L{PortSelector}
for the service is returned. In all cases, attribute access is
forwarded to the L{PortSelector} for either the I{first} service or the
I{default} service (when specified).
@ivar __client: A suds client.
@type __client: L{Client}
@ivar __services: A list of I{wsdl} services.
@type __services: list
"""
def __init__(self, client, services):
"""
@param client: A suds client.
@type client: L{Client}
@param services: A list of I{wsdl} services.
@type services: list
"""
self.__client = client
self.__services = services
def __getattr__(self, name):
"""
Request to access an attribute is forwarded to the
L{PortSelector} for either the I{first} service or the
I{default} service (when specified).
@param name: The name of a method.
@type name: str
@return: A L{PortSelector}.
@rtype: L{PortSelector}.
"""
default = self.__ds()
if default is None:
port = self.__find(0)
else:
port = default
return getattr(port, name)
def __getitem__(self, name):
"""
Provides selection of the I{service} by name (string) or
index (integer). In cases where only (1) service is defined
or a I{default} has been specified, the request is forwarded
to the L{PortSelector}.
@param name: The name (or index) of a service.
@type name: (int|str)
@return: A L{PortSelector} for the specified service.
@rtype: L{PortSelector}.
"""
if len(self.__services) == 1:
port = self.__find(0)
return port[name]
default = self.__ds()
if default is not None:
port = default
return port[name]
return self.__find(name)
def __find(self, name):
"""
Find a I{service} by name (string) or index (integer).
@param name: The name (or index) of a service.
@type name: (int|str)
@return: A L{PortSelector} for the found service.
@rtype: L{PortSelector}.
"""
service = None
if not len(self.__services):
raise Exception, 'No services defined'
if isinstance(name, int):
try:
service = self.__services[name]
name = service.name
except IndexError:
raise ServiceNotFound, 'at [%d]' % name
else:
for s in self.__services:
if name == s.name:
service = s
break
if service is None:
raise ServiceNotFound, name
return PortSelector(self.__client, service.ports, name)
def __ds(self):
"""
Get the I{default} service if defined in the I{options}.
@return: A L{PortSelector} for the I{default} service.
@rtype: L{PortSelector}.
"""
ds = self.__client.options.service
if ds is None:
return None
else:
return self.__find(ds)
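# A minimal sketch of how the selector chain above is reached. The service,
# port and method names are illustrative placeholders; set_options() is the
# client call for fixing a default service/port.
def _example_service_selection(client):
    # explicit subscript selection: service -> port -> method
    result = client.service['BLZService']['BLZServiceSOAP11port'].getBank('37050198')
    # or pick defaults once and let plain attribute access fall through
    client.set_options(service='BLZService', port='BLZServiceSOAP11port')
    return result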
class PortSelector:
"""
The B{port} selector is used to select a I{web service} B{port}.
In cases where multiple ports have been defined and no default has been
specified, the port is found by name (or index) and a L{MethodSelector}
for the port is returned. In all cases, attribute access is
forwarded to the L{MethodSelector} for either the I{first} port or the
I{default} port (when specified).
@ivar __client: A suds client.
@type __client: L{Client}
@ivar __ports: A list of I{service} ports.
@type __ports: list
@ivar __qn: The I{qualified} name of the port (used for logging).
@type __qn: str
"""
def __init__(self, client, ports, qn):
"""
@param client: A suds client.
@type client: L{Client}
@param ports: A list of I{service} ports.
@type ports: list
@param qn: The name of the service.
@type qn: str
"""
self.__client = client
self.__ports = ports
self.__qn = qn
def __getattr__(self, name):
"""
Request to access an attribute is forwarded to the
L{MethodSelector} for either the I{first} port or the
I{default} port (when specified).
@param name: The name of a method.
@type name: str
@return: A L{MethodSelector}.
@rtype: L{MethodSelector}.
"""
default = self.__dp()
if default is None:
m = self.__find(0)
else:
m = default
return getattr(m, name)
def __getitem__(self, name):
"""
Provides selection of the I{port} by name (string) or
index (integer). In cases where only (1) port is defined
or a I{default} has been specified, the request is forwarded
to the L{MethodSelector}.
@param name: The name (or index) of a port.
@type name: (int|str)
@return: A L{MethodSelector} for the specified port.
@rtype: L{MethodSelector}.
"""
default = self.__dp()
if default is None:
return self.__find(name)
else:
return default
def __find(self, name):
"""
Find a I{port} by name (string) or index (integer).
@param name: The name (or index) of a port.
@type name: (int|str)
@return: A L{MethodSelector} for the found port.
@rtype: L{MethodSelector}.
"""
port = None
if not len(self.__ports):
raise Exception, 'No ports defined: %s' % self.__qn
if isinstance(name, int):
qn = '%s[%d]' % (self.__qn, name)
try:
port = self.__ports[name]
except IndexError:
raise PortNotFound, qn
else:
qn = '.'.join((self.__qn, name))
for p in self.__ports:
if name == p.name:
port = p
break
if port is None:
raise PortNotFound, qn
qn = '.'.join((self.__qn, port.name))
return MethodSelector(self.__client, port.methods, qn)
def __dp(self):
"""
Get the I{default} port if defined in the I{options}.
@return: A L{MethodSelector} for the I{default} port.
@rtype: L{MethodSelector}.
"""
dp = self.__client.options.port
if dp is None:
return None
else:
return self.__find(dp)
class MethodSelector:
"""
The B{method} selector is used to select a B{method} by name.
@ivar __client: A suds client.
@type __client: L{Client}
@ivar __methods: A dictionary of methods.
@type __methods: dict
@ivar __qn: The I{qualified} name of the method (used for logging).
@type __qn: str
"""
def __init__(self, client, methods, qn):
"""
@param client: A suds client.
@type client: L{Client}
@param methods: A dictionary of methods.
@type methods: dict
@param qn: The I{qualified} name of the port.
@type qn: str
"""
self.__client = client
self.__methods = methods
self.__qn = qn
def __getattr__(self, name):
"""
Get a method by name and return it in an I{execution wrapper}.
@param name: The name of a method.
@type name: str
@return: An I{execution wrapper} for the specified method name.
@rtype: L{Method}
"""
return self[name]
def __getitem__(self, name):
"""
Get a method by name and return it in an I{execution wrapper}.
@param name: The name of a method.
@type name: str
@return: An I{execution wrapper} for the specified method name.
@rtype: L{Method}
"""
m = self.__methods.get(name)
if m is None:
qn = '.'.join((self.__qn, name))
raise MethodNotFound, qn
return Method(self.__client, m)
class Method:
"""
The I{method} (namespace) object.
@ivar client: A client object.
@type client: L{Client}
@ivar method: A I{wsdl} method.
    @type method: I{wsdl} Method.
"""
def __init__(self, client, method):
"""
@param client: A client object.
@type client: L{Client}
@param method: A I{raw} method.
        @type method: I{raw} Method.
"""
self.client = client
self.method = method
def __call__(self, *args, **kwargs):
"""
Invoke the method.
"""
clientclass = self.clientclass(kwargs)
client = clientclass(self.client, self.method)
if not self.faults():
try:
return client.invoke(args, kwargs)
except WebFault, e:
return (500, e)
else:
return client.invoke(args, kwargs)
def faults(self):
""" get faults option """
return self.client.options.faults
def clientclass(self, kwargs):
""" get soap client class """
if SimClient.simulation(kwargs):
return SimClient
else:
return SoapClient
class SoapClient:
"""
A lightweight soap based web client B{**not intended for external use}
@ivar service: The target method.
@type service: L{Service}
@ivar method: A target method.
@type method: L{Method}
    @ivar options: A dictionary of options.
@type options: dict
@ivar cookiejar: A cookie jar.
@type cookiejar: libcookie.CookieJar
"""
def __init__(self, client, method):
"""
@param client: A suds client.
@type client: L{Client}
@param method: A target method.
@type method: L{Method}
"""
self.client = client
self.method = method
self.options = client.options
self.cookiejar = CookieJar()
def invoke(self, args, kwargs):
"""
Send the required soap message to invoke the specified method
@param args: A list of args for the method invoked.
@type args: list
@param kwargs: Named (keyword) args for the method invoked.
@type kwargs: dict
@return: The result of the method invocation.
@rtype: I{builtin}|I{subclass of} L{Object}
"""
timer = metrics.Timer()
timer.start()
result = None
binding = self.method.binding.input
soapenv = binding.get_message(self.method, args, kwargs)
timer.stop()
metrics.log.debug(
"message for '%s' created: %s",
self.method.name,
timer)
timer.start()
result = self.send(soapenv)
timer.stop()
metrics.log.debug(
"method '%s' invoked: %s",
self.method.name,
timer)
return result
def send(self, soapenv):
"""
Send soap message.
@param soapenv: A soap envelope to send.
@type soapenv: L{Document}
@return: The reply to the sent message.
@rtype: I{builtin} or I{subclass of} L{Object}
"""
result = None
location = self.location()
binding = self.method.binding.input
transport = self.options.transport
retxml = self.options.retxml
prettyxml = self.options.prettyxml
log.debug('sending to (%s)\nmessage:\n%s', location, soapenv)
try:
self.last_sent(soapenv)
plugins = PluginContainer(self.options.plugins)
plugins.message.marshalled(envelope=soapenv.root())
if prettyxml:
soapenv = soapenv.str()
else:
soapenv = soapenv.plain()
soapenv = soapenv.encode('utf-8')
plugins.message.sending(envelope=soapenv)
request = Request(location, soapenv)
request.headers = self.headers()
reply = transport.send(request)
ctx = plugins.message.received(reply=reply.message)
reply.message = ctx.reply
if retxml:
result = reply.message
else:
result = self.succeeded(binding, reply.message)
except TransportError, e:
if e.httpcode in (202,204):
result = None
else:
log.error(self.last_sent())
result = self.failed(binding, e)
return result
def headers(self):
"""
        Get http headers for the http/https request.
@return: A dictionary of header/values.
@rtype: dict
"""
action = self.method.soap.action
stock = { 'Content-Type' : 'text/xml; charset=utf-8', 'SOAPAction': action }
result = dict(stock, **self.options.headers)
log.debug('headers = %s', result)
return result
def succeeded(self, binding, reply):
"""
Request succeeded, process the reply
@param binding: The binding to be used to process the reply.
@type binding: L{bindings.binding.Binding}
@param reply: The raw reply text.
@type reply: str
@return: The method result.
@rtype: I{builtin}, L{Object}
@raise WebFault: On server.
"""
log.debug('http succeeded:\n%s', reply)
plugins = PluginContainer(self.options.plugins)
if len(reply) > 0:
reply, result = binding.get_reply(self.method, reply)
self.last_received(reply)
else:
result = None
ctx = plugins.message.unmarshalled(reply=result)
result = ctx.reply
if self.options.faults:
return result
else:
return (200, result)
def failed(self, binding, error):
"""
Request failed, process reply based on reason
@param binding: The binding to be used to process the reply.
@type binding: L{suds.bindings.binding.Binding}
@param error: The http error message
@type error: L{transport.TransportError}
"""
status, reason = (error.httpcode, tostr(error))
reply = error.fp.read()
log.debug('http failed:\n%s', reply)
if status == 500:
if len(reply) > 0:
r, p = binding.get_fault(reply)
self.last_received(r)
return (status, p)
else:
return (status, None)
if self.options.faults:
raise Exception((status, reason))
else:
return (status, None)
def location(self):
p = Unskin(self.options)
return p.get('location', self.method.location)
def last_sent(self, d=None):
key = 'tx'
messages = self.client.messages
if d is None:
return messages.get(key)
else:
messages[key] = d
def last_received(self, d=None):
key = 'rx'
messages = self.client.messages
if d is None:
return messages.get(key)
else:
messages[key] = d
class SimClient(SoapClient):
"""
Loopback client used for message/reply simulation.
"""
injkey = '__inject'
@classmethod
def simulation(cls, kwargs):
""" get whether loopback has been specified in the I{kwargs}. """
return kwargs.has_key(SimClient.injkey)
def invoke(self, args, kwargs):
"""
Send the required soap message to invoke the specified method
@param args: A list of args for the method invoked.
@type args: list
@param kwargs: Named (keyword) args for the method invoked.
@type kwargs: dict
@return: The result of the method invocation.
@rtype: I{builtin} or I{subclass of} L{Object}
"""
simulation = kwargs[self.injkey]
msg = simulation.get('msg')
reply = simulation.get('reply')
fault = simulation.get('fault')
if msg is None:
if reply is not None:
return self.__reply(reply, args, kwargs)
if fault is not None:
return self.__fault(fault)
raise Exception('(reply|fault) expected when msg=None')
sax = Parser()
msg = sax.parse(string=msg)
return self.send(msg)
def __reply(self, reply, args, kwargs):
""" simulate the reply """
binding = self.method.binding.input
msg = binding.get_message(self.method, args, kwargs)
log.debug('inject (simulated) send message:\n%s', msg)
binding = self.method.binding.output
return self.succeeded(binding, reply)
def __fault(self, reply):
""" simulate the (fault) reply """
binding = self.method.binding.output
if self.options.faults:
r, p = binding.get_fault(reply)
self.last_received(r)
return (500, p)
else:
return (500, None)
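# A minimal sketch of the loopback hook above: passing the reserved '__inject'
# keyword routes the call through SimClient instead of the network. The method
# name and XML payload are illustrative placeholders.
def _example_message_injection(client, reply_xml):
    # simulate the server reply for someMethod without sending anything
    return client.service.someMethod(__inject={'reply': reply_xml})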
| apache-2.0 |
iu5team/rms | app/views/alekseyl/active_record/employee.py | 1 | 1039 | from app.views.alekseyl.active_record.model import Model
from app.utils.db_utils import *
from app.views.alekseyl.active_record.task import Task
class Employee(Model):
__table__ = 'app_employee'
def __init__(self, **kwargs):
self.id = None
self.name = None
self.position_id = None
self.salary = None
self.manager_id = None
super(Employee, self).__init__(**kwargs)
@classmethod
def find_by_name(cls, name):
conn = Connection.get_connection()
cursor = conn.cursor()
res = cursor.execute(
'SELECT * FROM app_employee as e ' +
'WHERE name LIKE \'%{}%\''.format(name))
desc = Connection.get_cursor_description(res)
employees = []
for row in res:
data = Connection.row_to_dict(row, desc)
employee = cls(**data)
employees.append(employee)
return employees
def get_tasks(self, date_from, date_to):
return Task.find(date_from, date_to, self.id)
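# A minimal usage sketch, assuming a configured Connection and matching rows in
# app_employee; the name and the date-range strings are illustrative only.
def _example_find_employees():
    report = []
    for employee in Employee.find_by_name('Alice'):
        tasks = employee.get_tasks('2017-01-01', '2017-12-31')
        report.append((employee.name, len(tasks)))
    return report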
| mit |
caspartse/QQ-Groups-Spider | vendor/chardet/charsetgroupprober.py | 270 | 3787 | ######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
from .enums import ProbingState
from .charsetprober import CharSetProber
class CharSetGroupProber(CharSetProber):
def __init__(self, lang_filter=None):
super(CharSetGroupProber, self).__init__(lang_filter=lang_filter)
self._active_num = 0
self.probers = []
self._best_guess_prober = None
def reset(self):
super(CharSetGroupProber, self).reset()
self._active_num = 0
for prober in self.probers:
if prober:
prober.reset()
prober.active = True
self._active_num += 1
self._best_guess_prober = None
@property
def charset_name(self):
if not self._best_guess_prober:
self.get_confidence()
if not self._best_guess_prober:
return None
return self._best_guess_prober.charset_name
@property
def language(self):
if not self._best_guess_prober:
self.get_confidence()
if not self._best_guess_prober:
return None
return self._best_guess_prober.language
def feed(self, byte_str):
for prober in self.probers:
if not prober:
continue
if not prober.active:
continue
state = prober.feed(byte_str)
if not state:
continue
if state == ProbingState.FOUND_IT:
self._best_guess_prober = prober
return self.state
elif state == ProbingState.NOT_ME:
prober.active = False
self._active_num -= 1
if self._active_num <= 0:
self._state = ProbingState.NOT_ME
return self.state
return self.state
def get_confidence(self):
state = self.state
if state == ProbingState.FOUND_IT:
return 0.99
elif state == ProbingState.NOT_ME:
return 0.01
best_conf = 0.0
self._best_guess_prober = None
for prober in self.probers:
if not prober:
continue
if not prober.active:
self.logger.debug('%s not active', prober.charset_name)
continue
conf = prober.get_confidence()
self.logger.debug('%s %s confidence = %s', prober.charset_name, prober.language, conf)
if best_conf < conf:
best_conf = conf
self._best_guess_prober = prober
if not self._best_guess_prober:
return 0.0
return best_conf
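# A minimal sketch of how a group prober is driven. CharSetGroupProber itself
# relies on subclasses to populate self.probers, so this assumes the concrete
# SBCSGroupProber from the same package; the sample bytes are arbitrary.
def _example_group_prober():
    from chardet.sbcsgroupprober import SBCSGroupProber
    prober = SBCSGroupProber()
    prober.feed(b'exemple de texte accentu\xc3\xa9')
    return prober.charset_name, prober.get_confidence()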
| mit |
jounex/hue | desktop/core/ext-py/Django-1.6.10/django/contrib/sessions/backends/cached_db.py | 103 | 2723 | """
Cached, database-backed sessions.
"""
import logging
from django.contrib.sessions.backends.db import SessionStore as DBStore
from django.core.cache import cache
from django.core.exceptions import SuspiciousOperation
from django.utils import timezone
from django.utils.encoding import force_text
KEY_PREFIX = "django.contrib.sessions.cached_db"
class SessionStore(DBStore):
"""
Implements cached, database backed sessions.
"""
def __init__(self, session_key=None):
super(SessionStore, self).__init__(session_key)
@property
def cache_key(self):
return KEY_PREFIX + self._get_or_create_session_key()
def load(self):
try:
data = cache.get(self.cache_key, None)
except Exception:
# Some backends (e.g. memcache) raise an exception on invalid
# cache keys. If this happens, reset the session. See #17810.
data = None
if data is None:
# Duplicate DBStore.load, because we need to keep track
# of the expiry date to set it properly in the cache.
try:
s = Session.objects.get(
session_key=self.session_key,
expire_date__gt=timezone.now()
)
data = self.decode(s.session_data)
cache.set(self.cache_key, data,
self.get_expiry_age(expiry=s.expire_date))
except (Session.DoesNotExist, SuspiciousOperation) as e:
if isinstance(e, SuspiciousOperation):
logger = logging.getLogger('django.security.%s' %
e.__class__.__name__)
logger.warning(force_text(e))
self.create()
data = {}
return data
def exists(self, session_key):
if (KEY_PREFIX + session_key) in cache:
return True
return super(SessionStore, self).exists(session_key)
def save(self, must_create=False):
super(SessionStore, self).save(must_create)
cache.set(self.cache_key, self._session, self.get_expiry_age())
def delete(self, session_key=None):
super(SessionStore, self).delete(session_key)
if session_key is None:
if self.session_key is None:
return
session_key = self.session_key
cache.delete(KEY_PREFIX + session_key)
def flush(self):
"""
Removes the current session data from the database and regenerates the
key.
"""
self.clear()
self.delete(self.session_key)
self.create()
# At bottom to avoid circular import
from django.contrib.sessions.models import Session
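# A minimal configuration sketch (the value belongs in the project's settings,
# shown here as a comment): pointing SESSION_ENGINE at this module enables the
# cached+database backend; a working cache and the sessions app are assumed.
#
#   SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"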
| apache-2.0 |
cogmission/nupic | src/nupic/encoders/multi.py | 15 | 7688 | # ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2013, Numenta, Inc. Unless you have an agreement
# with Numenta, Inc., for a separate license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Affero Public License for more details.
#
# You should have received a copy of the GNU Affero Public License
# along with this program. If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------
from nupic.encoders.base import Encoder
from nupic.encoders.scalar import ScalarEncoder
from nupic.encoders.adaptivescalar import AdaptiveScalarEncoder
from nupic.encoders.date import DateEncoder
from nupic.encoders.logenc import LogEncoder
from nupic.encoders.category import CategoryEncoder
from nupic.encoders.sdrcategory import SDRCategoryEncoder
from nupic.encoders.delta import DeltaEncoder
from nupic.encoders.scalarspace import ScalarSpaceEncoder
from nupic.encoders.pass_through_encoder import PassThroughEncoder
from nupic.encoders.sparse_pass_through_encoder import SparsePassThroughEncoder
from nupic.encoders.coordinate import CoordinateEncoder
from nupic.encoders.geospatial_coordinate import GeospatialCoordinateEncoder
# multiencoder must be imported last because it imports * from this module!
from nupic.encoders.utils import bitsToString
from nupic.encoders.random_distributed_scalar import RandomDistributedScalarEncoder
# Map class to Cap'n Proto schema union attribute
_CLASS_ATTR_MAP = {
ScalarEncoder: "scalarEncoder",
AdaptiveScalarEncoder: "adaptiveScalarEncoder",
DateEncoder: "dateEncoder",
LogEncoder: "logEncoder",
CategoryEncoder: "categoryEncoder",
CoordinateEncoder: "coordinateEncoder",
SDRCategoryEncoder: "sdrCategoryEncoder",
DeltaEncoder: "deltaEncoder",
PassThroughEncoder: "passThroughEncoder",
SparsePassThroughEncoder: "sparsePassThroughEncoder",
RandomDistributedScalarEncoder: "randomDistributedScalarEncoder"
}
# Invert for fast lookup in MultiEncoder.read()
_ATTR_CLASS_MAP = {value:key for key, value in _CLASS_ATTR_MAP.items()}
class MultiEncoder(Encoder):
"""A MultiEncoder encodes a dictionary or object with
  multiple components. A MultiEncoder contains a number
of sub-encoders, each of which encodes a separate component."""
# TODO expand this docstring to explain how the multiple encoders are combined
def __init__(self, encoderDescriptions=None):
self.width = 0
self.encoders = []
self.description = []
self.name = ''
if encoderDescriptions is not None:
self.addMultipleEncoders(encoderDescriptions)
def setFieldStats(self, fieldName, fieldStatistics ):
for (name, encoder, offset) in self.encoders:
encoder.setFieldStats(name, fieldStatistics)
def addEncoder(self, name, encoder):
self.encoders.append((name, encoder, self.width))
for d in encoder.getDescription():
self.description.append((d[0], d[1] + self.width))
self.width += encoder.getWidth()
self._flattenedEncoderList = None
self._flattenedFieldTypeList = None
def encodeIntoArray(self, obj, output):
for name, encoder, offset in self.encoders:
encoder.encodeIntoArray(self._getInputValue(obj, name), output[offset:])
def getDescription(self):
return self.description
def getWidth(self):
"""Represents the sum of the widths of each fields encoding."""
return self.width
def setLearning(self,learningEnabled):
encoders = self.getEncoderList()
for encoder in encoders:
encoder.setLearning(learningEnabled)
return
def encodeField(self, fieldName, value):
for name, encoder, offset in self.encoders:
if name == fieldName:
return encoder.encode(value)
def encodeEachField(self, inputRecord):
encodings = []
for name, encoder, offset in self.encoders:
encodings.append(encoder.encode(getattr(inputRecord, name)))
return encodings
def addMultipleEncoders(self, fieldEncodings):
"""
fieldEncodings -- a dict of dicts, mapping field names to the field params
dict.
Each field params dict has the following keys
1) data fieldname that matches the key ('fieldname')
2) an encoder type ('type')
3) and the encoder params (all other keys)
For example,
fieldEncodings={
'dateTime': dict(fieldname='dateTime', type='DateEncoder',
timeOfDay=(5,5)),
'attendeeCount': dict(fieldname='attendeeCount', type='ScalarEncoder',
name='attendeeCount', minval=0, maxval=250,
clipInput=True, w=5, resolution=10),
'consumption': dict(fieldname='consumption',type='ScalarEncoder',
name='consumption', minval=0,maxval=110,
clipInput=True, w=5, resolution=5),
}
would yield a vector with a part encoded by the DateEncoder,
    and two parts separately taken care of by the ScalarEncoder with the specified parameters.
    The three separate encodings are then merged into the final vector, in such a way that
they are always at the same location within the vector.
"""
# Sort the encoders so that they end up in a controlled order
encoderList = sorted(fieldEncodings.items())
for key, fieldParams in encoderList:
if ':' not in key and fieldParams is not None:
fieldParams = fieldParams.copy()
fieldName = fieldParams.pop('fieldname')
encoderName = fieldParams.pop('type')
try:
self.addEncoder(fieldName, eval(encoderName)(**fieldParams))
except TypeError, e:
print ("#### Error in constructing %s encoder. Possibly missing "
"some required constructor parameters. Parameters "
"that were provided are: %s" % (encoderName, fieldParams))
raise
@classmethod
def read(cls, proto):
encoder = object.__new__(cls)
encoder.encoders = [None] * len(proto.encoders)
encoder.width = 0
for index, encoderProto in enumerate(proto.encoders):
# Identify which attr is set in union
encoderType = encoderProto.which()
encoderDetails = getattr(encoderProto, encoderType)
encoder.encoders[index] = (
encoderProto.name,
# Call class.read() where class is determined by _ATTR_CLASS_MAP
_ATTR_CLASS_MAP.get(encoderType).read(encoderDetails),
encoderProto.offset
)
encoder.width += encoder.encoders[index][1].getWidth()
# Derive description from encoder list
encoder.description = [(enc[1].name, enc[2]) for enc in encoder.encoders]
encoder.name = proto.name
return encoder
def write(self, proto):
proto.init("encoders", len(self.encoders))
for index, (name, encoder, offset) in enumerate(self.encoders):
encoderProto = proto.encoders[index]
encoderType = _CLASS_ATTR_MAP.get(encoder.__class__)
encoderProto.init(encoderType)
encoderDetails = getattr(encoderProto, encoderType)
encoder.write(encoderDetails)
encoderProto.name = name
encoderProto.offset = offset
proto.name = self.name
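# A minimal usage sketch. The 'consumption' field parameters mirror the example
# in the addMultipleEncoders docstring above; the input value 42.0 is arbitrary.
def _exampleMultiEncoderUsage():
  encoder = MultiEncoder()
  encoder.addMultipleEncoders({
    'consumption': dict(fieldname='consumption', type='ScalarEncoder',
                        name='consumption', minval=0, maxval=110,
                        clipInput=True, w=5, resolution=5),
  })
  # encodeField() runs a single named sub-encoder on one value
  return encoder.encodeField('consumption', 42.0)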
| agpl-3.0 |
amatotech/p2pool | wstools/UserTuple.py | 295 | 4047 | """
A more or less complete user-defined wrapper around tuple objects.
Adapted version of the standard library's UserList.
Taken from Stefan Schwarzer's ftputil library, available at
<http://www.ndh.net/home/sschwarzer/python/python_software.html>, and used under this license:
Copyright (C) 1999, Stefan Schwarzer
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
- Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
- Neither the name of the above author nor the names of the
contributors to the software may be used to endorse or promote
products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
# $Id$
#XXX tuple instances (in Python 2.2) contain also:
# __class__, __delattr__, __getattribute__, __hash__, __new__,
# __reduce__, __setattr__, __str__
# What about these?
class UserTuple:
def __init__(self, inittuple=None):
self.data = ()
if inittuple is not None:
# XXX should this accept an arbitrary sequence?
if type(inittuple) == type(self.data):
self.data = inittuple
elif isinstance(inittuple, UserTuple):
# this results in
# self.data is inittuple.data
# but that's ok for tuples because they are
# immutable. (Builtin tuples behave the same.)
self.data = inittuple.data[:]
else:
# the same applies here; (t is tuple(t)) == 1
self.data = tuple(inittuple)
def __repr__(self): return repr(self.data)
def __lt__(self, other): return self.data < self.__cast(other)
def __le__(self, other): return self.data <= self.__cast(other)
def __eq__(self, other): return self.data == self.__cast(other)
def __ne__(self, other): return self.data != self.__cast(other)
def __gt__(self, other): return self.data > self.__cast(other)
def __ge__(self, other): return self.data >= self.__cast(other)
def __cast(self, other):
if isinstance(other, UserTuple): return other.data
else: return other
def __cmp__(self, other):
return cmp(self.data, self.__cast(other))
def __contains__(self, item): return item in self.data
def __len__(self): return len(self.data)
def __getitem__(self, i): return self.data[i]
def __getslice__(self, i, j):
i = max(i, 0); j = max(j, 0)
return self.__class__(self.data[i:j])
def __add__(self, other):
if isinstance(other, UserTuple):
return self.__class__(self.data + other.data)
elif isinstance(other, type(self.data)):
return self.__class__(self.data + other)
else:
return self.__class__(self.data + tuple(other))
# dir( () ) contains no __radd__ (at least in Python 2.2)
def __mul__(self, n):
return self.__class__(self.data*n)
__rmul__ = __mul__
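# A minimal sketch of the tuple-like behaviour provided above; the literal
# values are arbitrary.
def _example_user_tuple():
    t = UserTuple((1, 2, 3))
    assert len(t) == 3 and t[0] == 1
    assert t + (4,) == UserTuple((1, 2, 3, 4))
    return t * 2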
| gpl-3.0 |
Fogapod/ChatBot | chatbot/textdata.py | 1 | 15555 | # Copyright 2015 Conchylicultor. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Loads the dialogue corpus, builds the vocabulary
"""
import numpy as np
import nltk # For tokenize
from tqdm import tqdm # Progress bar
import pickle # Saving the data
import math # For float comparison
import os # Checking file existance
import random
#from chatbot.cornelldata import CornellData
class Batch:
"""Struct containing batches info
"""
def __init__(self):
self.encoderSeqs = []
self.decoderSeqs = []
self.targetSeqs = []
self.weights = []
class TextData:
"""Dataset class
Warning: No vocabulary limit
"""
def __init__(self, args):
"""Load all conversations
Args:
args: parameters of the model
"""
# Model parameters
self.args = args
# Path variables
#self.corpusDir = os.path.join(self.args.rootDir, 'data/cornell/')
self.corpusDir = os.path.join(self.args.rootDir, 'data/')
self.samplesDir = os.path.join(self.args.rootDir, 'data/samples/')
self.samplesName = self._constructName()
self.padToken = -1 # Padding
self.goToken = -1 # Start of sequence
self.eosToken = -1 # End of sequence
self.unknownToken = -1 # Word dropped from vocabulary
self.trainingSamples = [] # 2d array containing each question and his answer [[input,target]]
self.word2id = {}
self.id2word = {} # For a rapid conversion
self.loadCorpus(self.samplesDir)
# Plot some stats:
print('Loaded: {} words, {} QA'.format(len(self.word2id), len(self.trainingSamples)))
if self.args.playDataset:
self.playDataset()
def _constructName(self):
"""Return the name of the dataset that the program should use with the current parameters.
        Computed from the base name, the given tag (self.args.datasetTag) and the sentence length
"""
baseName = 'dataset'
if self.args.datasetTag:
baseName += '-' + self.args.datasetTag
return baseName + '-' + str(self.args.maxLength) + '.pkl'
def makeLighter(self, ratioDataset):
"""Only keep a small fraction of the dataset, given by the ratio
"""
#if not math.isclose(ratioDataset, 1.0):
# self.shuffle() # Really ?
# print('WARNING: Ratio feature not implemented !!!')
pass
def shuffle(self):
"""Shuffle the training samples
"""
print("Shuffling the dataset...")
random.shuffle(self.trainingSamples)
def _createBatch(self, samples):
"""Create a single batch from the list of sample. The batch size is automatically defined by the number of
samples given.
The inputs should already be inverted. The target should already have <go> and <eos>
Warning: This function should not make direct calls to args.batchSize !!!
Args:
samples (list<Obj>): a list of samples, each sample being on the form [input, target]
Return:
            Batch: a batch object
"""
batch = Batch()
batchSize = len(samples)
# Create the batch tensor
for i in range(batchSize):
# Unpack the sample
sample = samples[i]
if not self.args.test and self.args.watsonMode: # Watson mode: invert question and answer
sample = list(reversed(sample))
batch.encoderSeqs.append(list(reversed(sample[0]))) # Reverse inputs (and not outputs), little trick as defined on the original seq2seq paper
batch.decoderSeqs.append([self.goToken] + sample[1] + [self.eosToken]) # Add the <go> and <eos> tokens
batch.targetSeqs.append(batch.decoderSeqs[-1][1:]) # Same as decoder, but shifted to the left (ignore the <go>)
# Long sentences should have been filtered during the dataset creation
assert len(batch.encoderSeqs[i]) <= self.args.maxLengthEnco
assert len(batch.decoderSeqs[i]) <= self.args.maxLengthDeco
# Add padding & define weight
batch.encoderSeqs[i] = [self.padToken] * (self.args.maxLengthEnco - len(batch.encoderSeqs[i])) + batch.encoderSeqs[i] # Left padding for the input
batch.weights.append([1.0] * len(batch.targetSeqs[i]) + [0.0] * (self.args.maxLengthDeco - len(batch.targetSeqs[i])))
batch.decoderSeqs[i] = batch.decoderSeqs[i] + [self.padToken] * (self.args.maxLengthDeco - len(batch.decoderSeqs[i]))
batch.targetSeqs[i] = batch.targetSeqs[i] + [self.padToken] * (self.args.maxLengthDeco - len(batch.targetSeqs[i]))
# Simple hack to reshape the batch
encoderSeqsT = [] # Corrected orientation
for i in range(self.args.maxLengthEnco):
encoderSeqT = []
for j in range(batchSize):
encoderSeqT.append(batch.encoderSeqs[j][i])
encoderSeqsT.append(encoderSeqT)
batch.encoderSeqs = encoderSeqsT
decoderSeqsT = []
targetSeqsT = []
weightsT = []
for i in range(self.args.maxLengthDeco):
decoderSeqT = []
targetSeqT = []
weightT = []
for j in range(batchSize):
decoderSeqT.append(batch.decoderSeqs[j][i])
targetSeqT.append(batch.targetSeqs[j][i])
weightT.append(batch.weights[j][i])
decoderSeqsT.append(decoderSeqT)
targetSeqsT.append(targetSeqT)
weightsT.append(weightT)
batch.decoderSeqs = decoderSeqsT
batch.targetSeqs = targetSeqsT
batch.weights = weightsT
# # Debug
# self.printBatch(batch) # Input inverted, padding should be correct
# print(self.sequence2str(samples[0][0]))
# print(self.sequence2str(samples[0][1])) # Check we did not modified the original sample
return batch
def getBatches(self):
"""Prepare the batches for the current epoch
Return:
list<Batch>: Get a list of the batches for the next epoch
"""
self.shuffle()
batches = []
def genNextSamples():
""" Generator over the mini-batch training samples
"""
for i in range(0, self.getSampleSize(), self.args.batchSize):
yield self.trainingSamples[i:min(i + self.args.batchSize, self.getSampleSize())]
for samples in genNextSamples():
batch = self._createBatch(samples)
batches.append(batch)
return batches
def getSampleSize(self):
"""Return the size of the dataset
Return:
int: Number of training samples
"""
return len(self.trainingSamples)
def getVocabularySize(self):
"""Return the number of words present in the dataset
Return:
int: Number of word on the loader corpus
"""
return len(self.word2id)
def loadCorpus(self, dirName):
"""Load/create the conversations data
Args:
dirName (str): The directory where to load/save the model
"""
datasetExist = False
if os.path.exists(os.path.join(dirName, self.samplesName)):
datasetExist = True
if not datasetExist: # First time we load the database: creating all files
print('Training samples not found. Creating dataset...')
# Corpus creation
#cornellData = CornellData(self.corpusDir)
#self.createCorpus(cornellData.getConversations())
conversations = []
convObj = {}
convObj["lines"] = []
with open(self.corpusDir + 'message_dump.txt', 'r') as f:
lines = f.readlines()
for line in lines:
convObj["lines"].append({'text': line[:-1]})
conversations.append(convObj)
self.createCorpus(conversations)
# Saving
print('Saving dataset...')
self.saveDataset(dirName) # Saving tf samples
else:
print('Loading dataset from {}...'.format(dirName))
self.loadDataset(dirName)
assert self.padToken == 0
def saveDataset(self, dirName):
"""Save samples to file
Args:
dirName (str): The directory where to load/save the model
"""
with open(os.path.join(dirName, self.samplesName), 'wb') as handle:
data = { # Warning: If adding something here, also modifying loadDataset
"word2id": self.word2id,
"id2word": self.id2word,
"trainingSamples": self.trainingSamples
}
pickle.dump(data, handle, -1) # Using the highest protocol available
def loadDataset(self, dirName):
"""Load samples from file
Args:
dirName (str): The directory where to load the model
"""
with open(os.path.join(dirName, self.samplesName), 'rb') as handle:
data = pickle.load(handle) # Warning: If adding something here, also modifying saveDataset
self.word2id = data["word2id"]
self.id2word = data["id2word"]
self.trainingSamples = data["trainingSamples"]
self.padToken = self.word2id["<pad>"]
self.goToken = self.word2id["<go>"]
self.eosToken = self.word2id["<eos>"]
self.unknownToken = self.word2id["<unknown>"] # Restore special words
def createCorpus(self, conversations):
"""Extract all data from the given vocabulary
"""
# Add standard tokens
self.padToken = self.getWordId("<pad>") # Padding (Warning: first things to add > id=0 !!)
self.goToken = self.getWordId("<go>") # Start of sequence
self.eosToken = self.getWordId("<eos>") # End of sequence
self.unknownToken = self.getWordId("<unknown>") # Word dropped from vocabulary
# Preprocessing data
for conversation in tqdm(conversations, desc="Extract conversations"):
self.extractConversation(conversation)
# The dataset will be saved in the same order it has been extracted
def extractConversation(self, conversation):
"""Extract the sample lines from the conversations
Args:
conversation (Obj): a conversation object containing the lines to extract
"""
# Iterate over all the lines of the conversation
for i in range(len(conversation["lines"]) - 1): # We ignore the last line (no answer for it)
inputLine = conversation["lines"][i]
targetLine = conversation["lines"][i+1]
inputWords = self.extractText(inputLine["text"])
targetWords = self.extractText(targetLine["text"], True)
if inputWords and targetWords: # Filter wrong samples (if one of the list is empty)
self.trainingSamples.append([inputWords, targetWords])
def extractText(self, line, isTarget=False):
"""Extract the words from a sample lines
Args:
line (str): a line containing the text to extract
            isTarget (bool): defines whether the line is the answer (target) rather than the question (input)
Return:
list<int>: the list of the word ids of the sentence
"""
words = []
# Extract sentences
sentencesToken = nltk.sent_tokenize(line)
# We add sentence by sentence until we reach the maximum length
for i in range(len(sentencesToken)):
# If question: we only keep the last sentences
# If answer: we only keep the first sentences
if not isTarget:
i = len(sentencesToken)-1 - i
tokens = nltk.word_tokenize(sentencesToken[i])
# If the total length is not too big, we still can add one more sentence
if len(words) + len(tokens) <= self.args.maxLength:
tempWords = []
for token in tokens:
tempWords.append(self.getWordId(token)) # Create the vocabulary and the training sentences
if isTarget:
words = words + tempWords
else:
words = tempWords + words
else:
break # We reach the max length already
return words
def getWordId(self, word, create=True):
"""Get the id of the word (and add it to the dictionary if not existing). If the word does not exist and
create is set to False, the function will return the unknownToken value
Args:
word (str): word to add
            create (Bool): if True and the word does not exist already, the word will be added
Return:
int: the id of the word created
"""
# Should we Keep only words with more than one occurrence ?
word = word.lower() # Ignore case
# Get the id if the word already exist
wordId = self.word2id.get(word, -1)
# If not, we create a new entry
if wordId == -1:
if create:
wordId = len(self.word2id)
self.word2id[word] = wordId
self.id2word[wordId] = word
else:
wordId = self.unknownToken
return wordId
def printBatch(self, batch):
"""Print a complete batch, useful for debugging
Args:
batch (Batch): a batch object
"""
print('----- Print batch -----')
for i in range(len(batch.encoderSeqs[0])): # Batch size
print('Encoder: {}'.format(self.batchSeq2str(batch.encoderSeqs, seqId=i)))
print('Decoder: {}'.format(self.batchSeq2str(batch.decoderSeqs, seqId=i)))
print('Targets: {}'.format(self.batchSeq2str(batch.targetSeqs, seqId=i)))
print('Weights: {}'.format(' '.join([str(weight) for weight in [batchWeight[i] for batchWeight in batch.weights]])))
def sequence2str(self, sequence, clean=False, reverse=False):
"""Convert a list of integer into a human readable string
Args:
sequence (list<int>): the sentence to print
clean (Bool): if set, remove the <go>, <pad> and <eos> tokens
reverse (Bool): for the input, option to restore the standard order
Return:
str: the sentence
"""
if not sequence:
return ''
if not clean:
return ' '.join([self.id2word[idx] for idx in sequence])
sentence = []
for wordId in sequence:
if wordId == self.eosToken: # End of generated sentence
break
elif wordId != self.padToken and wordId != self.goToken:
sentence.append(self.id2word[wordId])
if reverse: # Reverse means input so no <eos> (otherwise pb with previous early stop)
sentence.reverse()
return ' '.join(sentence)
def batchSeq2str(self, batchSeq, seqId=0, **kwargs):
"""Convert a list of integer into a human readable string.
The difference between the previous function is that on a batch object, the values have been reorganized as
batch instead of sentence.
Args:
batchSeq (list<list<int>>): the sentence(s) to print
seqId (int): the position of the sequence inside the batch
kwargs: the formatting options( See sequence2str() )
Return:
str: the sentence
"""
sequence = []
for i in range(len(batchSeq)): # Sequence length
sequence.append(batchSeq[i][seqId])
return self.sequence2str(sequence, **kwargs)
def sentence2enco(self, sentence):
"""Encode a sequence and return a batch as an input for the model
Return:
Batch: a batch object containing the sentence, or none if something went wrong
"""
if sentence == '':
return None
# First step: Divide the sentence in token
tokens = nltk.word_tokenize(sentence)
if len(tokens) > self.args.maxLength:
return None
# Second step: Convert the token in word ids
wordIds = []
for token in tokens:
wordIds.append(self.getWordId(token, create=False)) # Create the vocabulary and the training sentences
# Third step: creating the batch (add padding, reverse)
batch = self._createBatch([[wordIds, []]]) # Mono batch, no target output
return batch
def deco2sentence(self, decoderOutputs):
"""Decode the output of the decoder and return a human friendly sentence
decoderOutputs (list<np.array>):
"""
sequence = []
# Choose the words with the highest prediction score
for out in decoderOutputs:
sequence.append(np.argmax(out)) # Adding each predicted word ids
return sequence # We return the raw sentence. Let the caller do some cleaning eventually
def playDataset(self):
"""Print a random dialogue from the dataset
"""
print('Randomly play samples:')
for i in range(self.args.playDataset):
idSample = random.randint(0, len(self.trainingSamples))
print('Q: {}'.format(self.sequence2str(self.trainingSamples[idSample][0])))
print('A: {}'.format(self.sequence2str(self.trainingSamples[idSample][1])))
print()
pass
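# A minimal sketch of driving this class. It assumes the corpus files (or a
# previously saved pickle) exist under rootDir, and the hyper-parameter values
# below are arbitrary placeholders matching the attributes used above.
def _example_build_batches():
    import argparse
    args = argparse.Namespace(
        rootDir='.', datasetTag='', maxLength=10, playDataset=False,
        watsonMode=False, test=False, batchSize=10,
        maxLengthEnco=10, maxLengthDeco=12)  # deco leaves room for <go>/<eos>
    data = TextData(args)       # loads data/samples/dataset-10.pkl or builds it
    return data.getBatches()    # shuffled mini-batches for one epoch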
| mit |
kesuki/DexHunter | dalvik/vm/mterp/gen-mterp.py | 37 | 20423 | #!/usr/bin/env python
#
# Copyright (C) 2007 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Using instructions from an architecture-specific config file, generate C
# and assembly source files for the Dalvik interpreter.
#
import sys, string, re, time
from string import Template
interp_defs_file = "../../libdex/DexOpcodes.h" # need opcode list
kNumPackedOpcodes = 256 # TODO: Derive this from DexOpcodes.h.
splitops = False
verbose = False
handler_size_bits = -1000
handler_size_bytes = -1000
in_op_start = 0 # 0=not started, 1=started, 2=ended
in_alt_op_start = 0 # 0=not started, 1=started, 2=ended
default_op_dir = None
default_alt_stub = None
opcode_locations = {}
alt_opcode_locations = {}
asm_stub_text = []
label_prefix = ".L" # use ".L" to hide labels from gdb
alt_label_prefix = ".L_ALT" # use ".L" to hide labels from gdb
style = None # interpreter style
generate_alt_table = False
# Exception class.
class DataParseError(SyntaxError):
"Failure when parsing data file"
#
# Set any omnipresent substitution values.
#
def getGlobalSubDict():
return { "handler_size_bits":handler_size_bits,
"handler_size_bytes":handler_size_bytes }
#
# Parse arch config file --
# Set interpreter style.
#
def setHandlerStyle(tokens):
global style
if len(tokens) != 2:
raise DataParseError("handler-style requires one argument")
style = tokens[1]
if style != "computed-goto" and style != "jump-table" and style != "all-c":
raise DataParseError("handler-style (%s) invalid" % style)
#
# Parse arch config file --
# Set handler_size_bytes to the value of tokens[1], and handler_size_bits to
# log2(handler_size_bytes). Throws an exception if "bytes" is not 0 or
# a power of two.
#
def setHandlerSize(tokens):
global handler_size_bits, handler_size_bytes
if style != "computed-goto":
print "Warning: handler-size valid only for computed-goto interpreters"
if len(tokens) != 2:
raise DataParseError("handler-size requires one argument")
if handler_size_bits != -1000:
raise DataParseError("handler-size may only be set once")
# compute log2(n), and make sure n is 0 or a power of 2
handler_size_bytes = bytes = int(tokens[1])
bits = -1
while bytes > 0:
bytes //= 2 # halve with truncating division
bits += 1
if handler_size_bytes == 0 or handler_size_bytes != (1 << bits):
raise DataParseError("handler-size (%d) must be power of 2" \
                % handler_size_bytes)
handler_size_bits = bits
#
# Parse arch config file --
# Copy a file in to the C or asm output file.
#
def importFile(tokens):
if len(tokens) != 2:
raise DataParseError("import requires one argument")
source = tokens[1]
if source.endswith(".cpp"):
appendSourceFile(tokens[1], getGlobalSubDict(), c_fp, None)
elif source.endswith(".S"):
appendSourceFile(tokens[1], getGlobalSubDict(), asm_fp, None)
else:
raise DataParseError("don't know how to import %s (expecting .cpp/.S)"
% source)
#
# Parse arch config file --
# Copy a file in to the C or asm output file.
#
def setAsmStub(tokens):
global asm_stub_text
if style == "all-c":
print "Warning: asm-stub ignored for all-c interpreter"
if len(tokens) != 2:
raise DataParseError("import requires one argument")
try:
stub_fp = open(tokens[1])
asm_stub_text = stub_fp.readlines()
except IOError, err:
stub_fp.close()
raise DataParseError("unable to load asm-stub: %s" % str(err))
stub_fp.close()
#
# Parse arch config file --
# Record location of default alt stub
#
def setAsmAltStub(tokens):
global default_alt_stub, generate_alt_table
if style == "all-c":
print "Warning: asm-alt-stub ingored for all-c interpreter"
if len(tokens) != 2:
raise DataParseError("import requires one argument")
default_alt_stub = tokens[1]
generate_alt_table = True
#
# Parse arch config file --
# Start of opcode list.
#
def opStart(tokens):
global in_op_start
global default_op_dir
if len(tokens) != 2:
raise DataParseError("opStart takes a directory name argument")
if in_op_start != 0:
raise DataParseError("opStart can only be specified once")
default_op_dir = tokens[1]
in_op_start = 1
#
# Parse arch config file --
# Set location of a single alt opcode's source file.
#
def altEntry(tokens):
global generate_alt_table
if len(tokens) != 3:
raise DataParseError("alt requires exactly two arguments")
if in_op_start != 1:
raise DataParseError("alt statements must be between opStart/opEnd")
try:
index = opcodes.index(tokens[1])
except ValueError:
raise DataParseError("unknown opcode %s" % tokens[1])
if alt_opcode_locations.has_key(tokens[1]):
print "Note: alt overrides earlier %s (%s -> %s)" \
% (tokens[1], alt_opcode_locations[tokens[1]], tokens[2])
alt_opcode_locations[tokens[1]] = tokens[2]
generate_alt_table = True
#
# Parse arch config file --
# Set location of a single opcode's source file.
#
def opEntry(tokens):
#global opcode_locations
if len(tokens) != 3:
raise DataParseError("op requires exactly two arguments")
if in_op_start != 1:
raise DataParseError("op statements must be between opStart/opEnd")
try:
index = opcodes.index(tokens[1])
except ValueError:
raise DataParseError("unknown opcode %s" % tokens[1])
if opcode_locations.has_key(tokens[1]):
print "Note: op overrides earlier %s (%s -> %s)" \
% (tokens[1], opcode_locations[tokens[1]], tokens[2])
opcode_locations[tokens[1]] = tokens[2]
#
# Emit jump table
#
def emitJmpTable(start_label, prefix):
asm_fp.write("\n .global %s\n" % start_label)
asm_fp.write(" .text\n")
asm_fp.write("%s:\n" % start_label)
for i in xrange(kNumPackedOpcodes):
op = opcodes[i]
dict = getGlobalSubDict()
dict.update({ "opcode":op, "opnum":i })
asm_fp.write(" .long " + prefix + \
"_%(opcode)s /* 0x%(opnum)02x */\n" % dict)
#
# Parse arch config file --
# End of opcode list; emit instruction blocks.
#
def opEnd(tokens):
global in_op_start
if len(tokens) != 1:
raise DataParseError("opEnd takes no arguments")
if in_op_start != 1:
raise DataParseError("opEnd must follow opStart, and only appear once")
in_op_start = 2
loadAndEmitOpcodes()
if splitops == False:
if generate_alt_table:
loadAndEmitAltOpcodes()
if style == "jump-table":
emitJmpTable("dvmAsmInstructionStart", label_prefix);
emitJmpTable("dvmAsmAltInstructionStart", alt_label_prefix);
def genaltop(tokens):
if in_op_start != 2:
raise DataParseError("alt-op can be specified only after op-end")
if len(tokens) != 1:
raise DataParseError("opEnd takes no arguments")
if generate_alt_table:
loadAndEmitAltOpcodes()
if style == "jump-table":
emitJmpTable("dvmAsmInstructionStart", label_prefix);
emitJmpTable("dvmAsmAltInstructionStart", alt_label_prefix);
#
# Extract an ordered list of instructions from the VM sources. We use the
# "goto table" definition macro, which has exactly kNumPackedOpcodes
# entries.
#
def getOpcodeList():
opcodes = []
opcode_fp = open(interp_defs_file)
opcode_re = re.compile(r"^\s*H\(OP_(\w+)\),.*", re.DOTALL)
for line in opcode_fp:
match = opcode_re.match(line)
if not match:
continue
opcodes.append("OP_" + match.group(1))
opcode_fp.close()
if len(opcodes) != kNumPackedOpcodes:
print "ERROR: found %d opcodes in Interp.h (expected %d)" \
% (len(opcodes), kNumPackedOpcodes)
raise SyntaxError, "bad opcode count"
return opcodes
def emitAlign():
if style == "computed-goto":
asm_fp.write(" .balign %d\n" % handler_size_bytes)
#
# Load and emit opcodes for all kNumPackedOpcodes instructions.
#
def loadAndEmitOpcodes():
sister_list = []
assert len(opcodes) == kNumPackedOpcodes
need_dummy_start = False
if style == "jump-table":
start_label = "dvmAsmInstructionStartCode"
end_label = "dvmAsmInstructionEndCode"
else:
start_label = "dvmAsmInstructionStart"
end_label = "dvmAsmInstructionEnd"
# point dvmAsmInstructionStart at the first handler or stub
asm_fp.write("\n .global %s\n" % start_label)
asm_fp.write(" .type %s, %%function\n" % start_label)
asm_fp.write("%s = " % start_label + label_prefix + "_OP_NOP\n")
asm_fp.write(" .text\n\n")
for i in xrange(kNumPackedOpcodes):
op = opcodes[i]
if opcode_locations.has_key(op):
location = opcode_locations[op]
else:
location = default_op_dir
if location == "c":
loadAndEmitC(location, i)
if len(asm_stub_text) == 0:
need_dummy_start = True
else:
loadAndEmitAsm(location, i, sister_list)
# For a 100% C implementation, there are no asm handlers or stubs. We
# need to have the dvmAsmInstructionStart label point at OP_NOP, and it's
    # too annoying to try to slide it in after the alignment pseudo-op, so
# we take the low road and just emit a dummy OP_NOP here.
if need_dummy_start:
emitAlign()
asm_fp.write(label_prefix + "_OP_NOP: /* dummy */\n");
emitAlign()
asm_fp.write(" .size %s, .-%s\n" % (start_label, start_label))
asm_fp.write(" .global %s\n" % end_label)
asm_fp.write("%s:\n" % end_label)
if style == "computed-goto":
emitSectionComment("Sister implementations", asm_fp)
asm_fp.write(" .global dvmAsmSisterStart\n")
asm_fp.write(" .type dvmAsmSisterStart, %function\n")
asm_fp.write(" .text\n")
asm_fp.write(" .balign 4\n")
asm_fp.write("dvmAsmSisterStart:\n")
asm_fp.writelines(sister_list)
asm_fp.write("\n .size dvmAsmSisterStart, .-dvmAsmSisterStart\n")
asm_fp.write(" .global dvmAsmSisterEnd\n")
asm_fp.write("dvmAsmSisterEnd:\n\n")
#
# Load an alternate entry stub
#
def loadAndEmitAltStub(source, opindex):
op = opcodes[opindex]
if verbose:
print " alt emit %s --> stub" % source
dict = getGlobalSubDict()
dict.update({ "opcode":op, "opnum":opindex })
emitAsmHeader(asm_fp, dict, alt_label_prefix)
appendSourceFile(source, dict, asm_fp, None)
#
# Load and emit alternate opcodes for all kNumPackedOpcodes instructions.
#
def loadAndEmitAltOpcodes():
assert len(opcodes) == kNumPackedOpcodes
if style == "jump-table":
start_label = "dvmAsmAltInstructionStartCode"
end_label = "dvmAsmAltInstructionEndCode"
else:
start_label = "dvmAsmAltInstructionStart"
end_label = "dvmAsmAltInstructionEnd"
# point dvmAsmInstructionStart at the first handler or stub
asm_fp.write("\n .global %s\n" % start_label)
asm_fp.write(" .type %s, %%function\n" % start_label)
asm_fp.write(" .text\n\n")
asm_fp.write("%s = " % start_label + label_prefix + "_ALT_OP_NOP\n")
for i in xrange(kNumPackedOpcodes):
op = opcodes[i]
if alt_opcode_locations.has_key(op):
source = "%s/ALT_%s.S" % (alt_opcode_locations[op], op)
else:
source = default_alt_stub
loadAndEmitAltStub(source, i)
emitAlign()
asm_fp.write(" .size %s, .-%s\n" % (start_label, start_label))
asm_fp.write(" .global %s\n" % end_label)
asm_fp.write("%s:\n" % end_label)
#
# Load a C fragment and emit it, then output an assembly stub.
#
def loadAndEmitC(location, opindex):
op = opcodes[opindex]
source = "%s/%s.cpp" % (location, op)
if verbose:
print " emit %s --> C++" % source
dict = getGlobalSubDict()
dict.update({ "opcode":op, "opnum":opindex })
appendSourceFile(source, dict, c_fp, None)
if len(asm_stub_text) != 0:
emitAsmStub(asm_fp, dict)
#
# Load an assembly fragment and emit it.
#
def loadAndEmitAsm(location, opindex, sister_list):
op = opcodes[opindex]
source = "%s/%s.S" % (location, op)
dict = getGlobalSubDict()
dict.update({ "opcode":op, "opnum":opindex })
if verbose:
print " emit %s --> asm" % source
emitAsmHeader(asm_fp, dict, label_prefix)
appendSourceFile(source, dict, asm_fp, sister_list)
#
# Output the alignment directive and label for an assembly piece.
#
def emitAsmHeader(outfp, dict, prefix):
outfp.write("/* ------------------------------ */\n")
# The alignment directive ensures that the handler occupies
# at least the correct amount of space. We don't try to deal
# with overflow here.
emitAlign()
# Emit a label so that gdb will say the right thing. We prepend an
# underscore so the symbol name doesn't clash with the Opcode enum.
outfp.write(prefix + "_%(opcode)s: /* 0x%(opnum)02x */\n" % dict)
#
# Output a generic instruction stub that updates the "glue" struct and
# calls the C implementation.
#
def emitAsmStub(outfp, dict):
emitAsmHeader(outfp, dict, label_prefix)
for line in asm_stub_text:
templ = Template(line)
outfp.write(templ.substitute(dict))
#
# Append the file specified by "source" to the open "outfp". Each line will
# be template-replaced using the substitution dictionary "dict".
#
# If the first line of the file starts with "%" it is taken as a directive.
# A "%include" line contains a filename and, optionally, a Python-style
# dictionary declaration with substitution strings. (This is implemented
# with recursion.)
#
# If "sister_list" is provided, and we find a line that contains only "&",
# all subsequent lines from the file will be appended to sister_list instead
# of copied to the output.
#
# This may modify "dict".
#
def appendSourceFile(source, dict, outfp, sister_list):
outfp.write("/* File: %s */\n" % source)
infp = open(source, "r")
in_sister = False
for line in infp:
if line.startswith("%include"):
# Parse the "include" line
tokens = line.strip().split(' ', 2)
if len(tokens) < 2:
raise DataParseError("malformed %%include in %s" % source)
alt_source = tokens[1].strip("\"")
if alt_source == source:
raise DataParseError("self-referential %%include in %s"
% source)
new_dict = dict.copy()
if len(tokens) == 3:
new_dict.update(eval(tokens[2]))
#print " including src=%s dict=%s" % (alt_source, new_dict)
appendSourceFile(alt_source, new_dict, outfp, sister_list)
continue
elif line.startswith("%default"):
# copy keywords into dictionary
tokens = line.strip().split(' ', 1)
if len(tokens) < 2:
raise DataParseError("malformed %%default in %s" % source)
defaultValues = eval(tokens[1])
for entry in defaultValues:
dict.setdefault(entry, defaultValues[entry])
continue
elif line.startswith("%verify"):
# more to come, someday
continue
elif line.startswith("%break") and sister_list != None:
# allow more than one %break, ignoring all following the first
if style == "computed-goto" and not in_sister:
in_sister = True
sister_list.append("\n/* continuation for %(opcode)s */\n"%dict)
continue
# perform keyword substitution if a dictionary was provided
if dict != None:
templ = Template(line)
try:
subline = templ.substitute(dict)
except KeyError, err:
raise DataParseError("keyword substitution failed in %s: %s"
% (source, str(err)))
except:
print "ERROR: substitution failed: " + line
raise
else:
subline = line
# write output to appropriate file
if in_sister:
sister_list.append(subline)
else:
outfp.write(subline)
outfp.write("\n")
infp.close()
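#
# Illustrative sketch (not part of the generator itself): the directive
# handling in appendSourceFile() boils down to string.Template substitution
# plus a few "%"-prefixed control lines. _exampleSubstitute shows the same
# idea in isolation on an in-memory fragment; the keys used here ("barrier",
# "width") and the fragment text are hypothetical, not taken from any real
# config or handler source.
#
def _exampleSubstitute():
    from string import Template
    fragment = [
        "%default {'barrier': 'dmb'}\n",
        "    .balign ${width}\n",
        "    ${barrier}\n",
    ]
    subst = {"width": 64}
    out = []
    for ln in fragment:
        if ln.startswith("%default"):
            # copy default keywords into the dictionary, as %default does above
            for k, v in eval(ln.split(' ', 1)[1]).items():
                subst.setdefault(k, v)
            continue
        out.append(Template(ln).substitute(subst))
    return out   # ['    .balign 64\n', '    dmb\n']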
#
# Emit a C-style section header comment.
#
def emitSectionComment(str, fp):
equals = "========================================" \
"==================================="
fp.write("\n/*\n * %s\n * %s\n * %s\n */\n" %
(equals, str, equals))
#
# ===========================================================================
# "main" code
#
#
# Check args.
#
if len(sys.argv) != 3:
print "Usage: %s target-arch output-dir" % sys.argv[0]
sys.exit(2)
target_arch = sys.argv[1]
output_dir = sys.argv[2]
#
# Extract opcode list.
#
opcodes = getOpcodeList()
#for op in opcodes:
# print " %s" % op
#
# Open config file.
#
try:
config_fp = open("config-%s" % target_arch)
except:
print "Unable to open config file 'config-%s'" % target_arch
sys.exit(1)
#
# Open and prepare output files.
#
try:
c_fp = open("%s/InterpC-%s.cpp" % (output_dir, target_arch), "w")
asm_fp = open("%s/InterpAsm-%s.S" % (output_dir, target_arch), "w")
except:
print "Unable to open output files"
print "Make sure directory '%s' exists and existing files are writable" \
% output_dir
# Ideally we'd remove the files to avoid confusing "make", but if they
# failed to open we probably won't be able to remove them either.
sys.exit(1)
print "Generating %s, %s" % (c_fp.name, asm_fp.name)
file_header = """/*
* This file was generated automatically by gen-mterp.py for '%s'.
*
* --> DO NOT EDIT <--
*/
""" % (target_arch)
c_fp.write(file_header)
asm_fp.write(file_header)
#
# Process the config file.
#
failed = False
try:
for line in config_fp:
line = line.strip() # remove CRLF, leading spaces
tokens = line.split(' ') # tokenize
#print "%d: %s" % (len(tokens), tokens)
if len(tokens[0]) == 0:
#print " blank"
pass
elif tokens[0][0] == '#':
#print " comment"
pass
else:
if tokens[0] == "handler-size":
setHandlerSize(tokens)
elif tokens[0] == "import":
importFile(tokens)
elif tokens[0] == "asm-stub":
setAsmStub(tokens)
elif tokens[0] == "asm-alt-stub":
setAsmAltStub(tokens)
elif tokens[0] == "op-start":
opStart(tokens)
elif tokens[0] == "op-end":
opEnd(tokens)
elif tokens[0] == "alt":
altEntry(tokens)
elif tokens[0] == "op":
opEntry(tokens)
elif tokens[0] == "handler-style":
setHandlerStyle(tokens)
elif tokens[0] == "alt-ops":
genaltop(tokens)
elif tokens[0] == "split-ops":
splitops = True
else:
raise DataParseError, "unrecognized command '%s'" % tokens[0]
if style == None:
print "tokens[0] = %s" % tokens[0]
raise DataParseError, "handler-style must be first command"
except DataParseError, err:
print "Failed: " + str(err)
# TODO: remove output files so "make" doesn't get confused
failed = True
c_fp.close()
asm_fp.close()
c_fp = asm_fp = None
config_fp.close()
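#
# For reference, a minimal config file accepted by the loop above could look
# like the following (purely illustrative; the directories and opcode target
# names are made up, and "handler-style" must remain the first command):
#
#   handler-style computed-goto
#   handler-size 64
#   import c/header.cpp
#   asm-stub armv5te/stub.S
#   asm-alt-stub armv5te/alt_stub.S
#   op-start armv5te
#       op OP_NOP armv7-a
#   op-end
#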
#
# Done!
#
if c_fp:
c_fp.close()
if asm_fp:
asm_fp.close()
sys.exit(failed)
| apache-2.0 |
bright-sparks/chromium-spacewalk | tools/deep_memory_profiler/visualizer/app_unittest.py | 99 | 2830 | # Copyright 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# This file is expected to be used from another directory,
# so we disable checking the import path of GAE tools from this directory.
# pylint: disable=F0401,E0611
import json
import unittest
from google.appengine.api import files
from google.appengine.ext import ndb
from google.appengine.ext import testbed
from google.appengine.ext.blobstore import BlobInfo
import services
class ServicesTest(unittest.TestCase):
@staticmethod
def CreateBlob(path):
# Initialize blob dictionary to return.
blob = {}
# Read sample file.
blob['json_str'] = open(path, 'r').read()
# Create file in blobstore according to sample file.
file_name = files.blobstore.create(mime_type='text/plain')
with files.open(file_name, 'a') as f:
f.write(blob['json_str'])
files.finalize(file_name)
# Get BlobInfo of sample file.
blob['blob_info'] = BlobInfo.get(files.blobstore.get_blob_key(file_name))
return blob
def setUp(self):
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.init_all_stubs()
# Read sample file.
self.correct_blob = ServicesTest.CreateBlob('testdata/sample.json')
self.error_blob = ServicesTest.CreateBlob('testdata/error_sample.json')
def tearDown(self):
self.testbed.deactivate()
def testProfiler(self):
correct_blob = self.correct_blob
# Call services function to create Profiler entity.
run_id = services.CreateProfiler(correct_blob['blob_info'])
# Test GetProfiler
self.assertEqual(services.GetProfiler(run_id), correct_blob['json_str'])
# Create Profiler entity with the same file again and check uniqueness.
services.CreateProfiler(correct_blob['blob_info'])
self.assertEqual(services.Profiler.query().count(), 1)
def testTemplate(self):
correct_blob = self.correct_blob
# Call services function to create template entities.
services.CreateTemplates(correct_blob['blob_info'])
# Test templates being stored in database correctly.
json_obj = json.loads(correct_blob['json_str'])
for content in json_obj['templates'].values():
template_entity = ndb.Key('Template', json.dumps(content)).get()
self.assertEqual(template_entity.content, content)
# Create template entities with the same file again and check uniqueness.
services.CreateTemplates(correct_blob['blob_info'])
self.assertEqual(services.Template.query().count(), 2)
def testErrorBlob(self):
error_blob = self.error_blob
# Expect None when the default template is not indicated or not found in templates.
dflt_tmpl = services.CreateTemplates(error_blob['blob_info'])
self.assertIsNone(dflt_tmpl)
| bsd-3-clause |
gVallverdu/pymatgen | pymatgen/symmetry/groups.py | 2 | 18974 | # coding: utf-8
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
"""
Defines SymmetryGroup parent class and PointGroup and SpaceGroup classes.
Shyue Ping Ong thanks Marc De Graef for his generous sharing of his
SpaceGroup data as published in his textbook "Structure of Materials".
"""
import os
from itertools import product
from fractions import Fraction
from abc import ABCMeta, abstractmethod
from collections.abc import Sequence
import numpy as np
import warnings
import re
from monty.serialization import loadfn
from pymatgen.core.operations import SymmOp
from monty.design_patterns import cached_class
SYMM_DATA = None
def _get_symm_data(name):
global SYMM_DATA
if SYMM_DATA is None:
SYMM_DATA = loadfn(os.path.join(os.path.dirname(__file__),
"symm_data.json"))
return SYMM_DATA[name]
class SymmetryGroup(Sequence, metaclass=ABCMeta):
"""
Abstract class representing a symmetry group.
"""
@property
@abstractmethod
def symmetry_ops(self):
"""
:return: List of symmetry operations
"""
pass
def __contains__(self, item):
for i in self.symmetry_ops:
if np.allclose(i.affine_matrix, item.affine_matrix):
return True
return False
def __hash__(self):
return self.__len__()
def __getitem__(self, item):
return self.symmetry_ops[item]
def __len__(self):
return len(self.symmetry_ops)
def is_subgroup(self, supergroup):
"""
True if this group is a subgroup of the supplied group.
Args:
supergroup (SymmetryGroup): Supergroup to test.
Returns:
True if this group is a subgroup of the supplied group.
"""
warnings.warn("This is not fully functional. Only trivial subsets are tested right now. ")
return set(self.symmetry_ops).issubset(supergroup.symmetry_ops)
def is_supergroup(self, subgroup):
"""
True if this group is a supergroup of the supplied group.
Args:
subgroup (SymmetryGroup): Subgroup to test.
Returns:
True if this group is a supergroup of the supplied group.
"""
warnings.warn("This is not fully functional. Only trivial subsets are "
"tested right now. ")
return set(subgroup.symmetry_ops).issubset(self.symmetry_ops)
@cached_class
class PointGroup(SymmetryGroup):
"""
Class representing a Point Group, with generators and symmetry operations.
.. attribute:: symbol
Full International or Hermann-Mauguin Symbol.
.. attribute:: generators
List of generator matrices. Note that 3x3 matrices are used for Point
Groups.
.. attribute:: symmetry_ops
Full set of symmetry operations as matrices.
"""
def __init__(self, int_symbol):
"""
Initializes a Point Group from its international symbol.
Args:
int_symbol (str): International or Hermann-Mauguin Symbol.
"""
self.symbol = int_symbol
self.generators = [_get_symm_data("generator_matrices")[c]
for c in _get_symm_data("point_group_encoding")[int_symbol]]
self._symmetry_ops = set([SymmOp.from_rotation_and_translation(m)
for m in self._generate_full_symmetry_ops()])
self.order = len(self._symmetry_ops)
@property
def symmetry_ops(self):
"""
:return: List of symmetry operations for SpaceGroup
"""
return self._symmetry_ops
def _generate_full_symmetry_ops(self):
symm_ops = list(self.generators)
new_ops = self.generators
while len(new_ops) > 0:
gen_ops = []
for g1, g2 in product(new_ops, symm_ops):
op = np.dot(g1, g2)
if not in_array_list(symm_ops, op):
gen_ops.append(op)
symm_ops.append(op)
new_ops = gen_ops
return symm_ops
def get_orbit(self, p, tol=1e-5):
"""
Returns the orbit for a point.
Args:
p: Point as a 3x1 array.
tol: Tolerance for determining if sites are the same. 1e-5 should
be sufficient for most purposes. Set to 0 for exact matching
(and also needed for symbolic orbits).
Returns:
([array]) Orbit for point.
"""
orbit = []
for o in self.symmetry_ops:
pp = o.operate(p)
if not in_array_list(orbit, pp, tol=tol):
orbit.append(pp)
return orbit
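# Illustrative usage sketch (not an addition to the public API above): build a
# point group from its Hermann-Mauguin symbol and expand the orbit of a
# general fractional coordinate. The symbol "4/mmm" and the test point are
# arbitrary examples.
def _example_point_group_orbit():
    pg = PointGroup("4/mmm")
    # For a general position the orbit has one image per symmetry operation,
    # so its length equals the group order (16 for 4/mmm).
    orbit = pg.get_orbit([0.1, 0.2, 0.3])
    return pg.order, len(orbit)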
@cached_class
class SpaceGroup(SymmetryGroup):
"""
Class representing a SpaceGroup.
.. attribute:: symbol
Full International or Hermann-Mauguin Symbol.
.. attribute:: int_number
International number
.. attribute:: generators
List of generator matrices. Note that 4x4 matrices are used for Space
Groups.
.. attribute:: order
Order of Space Group
"""
SYMM_OPS = loadfn(os.path.join(os.path.dirname(__file__),
"symm_ops.json"))
SG_SYMBOLS = set(_get_symm_data("space_group_encoding").keys())
for op in SYMM_OPS:
op["hermann_mauguin"] = re.sub(r" ", "", op["hermann_mauguin"])
op["universal_h_m"] = re.sub(r" ", "", op["universal_h_m"])
SG_SYMBOLS.add(op["hermann_mauguin"])
SG_SYMBOLS.add(op["universal_h_m"])
gen_matrices = _get_symm_data("generator_matrices")
# POINT_GROUP_ENC = SYMM_DATA["point_group_encoding"]
sgencoding = _get_symm_data("space_group_encoding")
abbrev_sg_mapping = _get_symm_data("abbreviated_spacegroup_symbols")
translations = {k: Fraction(v) for k, v in _get_symm_data(
"translations").items()}
full_sg_mapping = {
v["full_symbol"]: k
for k, v in _get_symm_data("space_group_encoding").items()}
def __init__(self, int_symbol):
"""
Initializes a Space Group from its full or abbreviated international
symbol. Only standard settings are supported.
Args:
int_symbol (str): Full International (e.g., "P2/m2/m2/m") or
Hermann-Mauguin Symbol ("Pmmm") or abbreviated symbol. The
notation is a LaTeX-like string, with screw axes being
represented by an underscore. For example, "P6_3/mmc".
Alternative settings can be accessed by adding a ":identifier".
For example, the hexagonal setting for rhombohedral cells can be
accessed by adding a ":H", e.g., "R-3m:H". To find out all
possible settings for a spacegroup, use the get_settings
classmethod. Alternative origin choices can be indicated by a
translation vector, e.g., 'Fm-3m(a-1/4,b-1/4,c-1/4)'.
"""
int_symbol = re.sub(r" ", "", int_symbol)
if int_symbol in SpaceGroup.abbrev_sg_mapping:
int_symbol = SpaceGroup.abbrev_sg_mapping[int_symbol]
elif int_symbol in SpaceGroup.full_sg_mapping:
int_symbol = SpaceGroup.full_sg_mapping[int_symbol]
for spg in SpaceGroup.SYMM_OPS:
if int_symbol in [spg["hermann_mauguin"], spg["universal_h_m"]]:
ops = [SymmOp.from_xyz_string(s) for s in spg["symops"]]
self.symbol = re.sub(r":", "",
re.sub(r" ", "", spg["universal_h_m"]))
if int_symbol in SpaceGroup.sgencoding:
self.full_symbol = SpaceGroup.sgencoding[int_symbol]["full_symbol"]
self.point_group = SpaceGroup.sgencoding[int_symbol]["point_group"]
else:
self.full_symbol = re.sub(r" ", "",
spg["universal_h_m"])
self.point_group = spg["schoenflies"]
self.int_number = spg["number"]
self.order = len(ops)
self._symmetry_ops = ops
break
else:
if int_symbol not in SpaceGroup.sgencoding:
raise ValueError("Bad international symbol %s" % int_symbol)
data = SpaceGroup.sgencoding[int_symbol]
self.symbol = int_symbol
# TODO: Support different origin choices.
enc = list(data["enc"])
inversion = int(enc.pop(0))
ngen = int(enc.pop(0))
symm_ops = [np.eye(4)]
if inversion:
symm_ops.append(np.array(
[[-1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0],
[0, 0, 0, 1]]))
for i in range(ngen):
m = np.eye(4)
m[:3, :3] = SpaceGroup.gen_matrices[enc.pop(0)]
m[0, 3] = SpaceGroup.translations[enc.pop(0)]
m[1, 3] = SpaceGroup.translations[enc.pop(0)]
m[2, 3] = SpaceGroup.translations[enc.pop(0)]
symm_ops.append(m)
self.generators = symm_ops
self.full_symbol = data["full_symbol"]
self.point_group = data["point_group"]
self.int_number = data["int_number"]
self.order = data["order"]
self._symmetry_ops = None
def _generate_full_symmetry_ops(self):
symm_ops = np.array(self.generators)
for op in symm_ops:
op[0:3, 3] = np.mod(op[0:3, 3], 1)
new_ops = symm_ops
while len(new_ops) > 0 and len(symm_ops) < self.order:
gen_ops = []
for g in new_ops:
temp_ops = np.einsum('ijk,kl', symm_ops, g)
for op in temp_ops:
op[0:3, 3] = np.mod(op[0:3, 3], 1)
ind = np.where(np.abs(1 - op[0:3, 3]) < 1e-5)
op[ind, 3] = 0
if not in_array_list(symm_ops, op):
gen_ops.append(op)
symm_ops = np.append(symm_ops, [op], axis=0)
new_ops = gen_ops
assert len(symm_ops) == self.order
return symm_ops
@classmethod
def get_settings(cls, int_symbol):
"""
Returns all the settings for a particular international symbol.
Args:
int_symbol (str): Full International (e.g., "P2/m2/m2/m") or
Hermann-Mauguin Symbol ("Pmmm") or abbreviated symbol. The
notation is a LaTeX-like string, with screw axes being
represented by an underscore. For example, "P6_3/mmc".
"""
symbols = []
if int_symbol in SpaceGroup.abbrev_sg_mapping:
symbols.append(SpaceGroup.abbrev_sg_mapping[int_symbol])
int_number = SpaceGroup.sgencoding[int_symbol]["int_number"]
elif int_symbol in SpaceGroup.full_sg_mapping:
symbols.append(SpaceGroup.full_sg_mapping[int_symbol])
int_number = SpaceGroup.sgencoding[int_symbol]["int_number"]
else:
for spg in SpaceGroup.SYMM_OPS:
if int_symbol in [re.split(r"\(|:", spg["hermann_mauguin"])[0],
re.split(r"\(|:", spg["universal_h_m"])[0]]:
int_number = spg["number"]
break
for spg in SpaceGroup.SYMM_OPS:
if int_number == spg["number"]:
symbols.append(spg["hermann_mauguin"])
symbols.append(spg["universal_h_m"])
return set(symbols)
@property
def symmetry_ops(self):
"""
Full set of symmetry operations as matrices. Lazily initialized as
generation sometimes takes a bit of time.
"""
if self._symmetry_ops is None:
self._symmetry_ops = [
SymmOp(m) for m in self._generate_full_symmetry_ops()]
return self._symmetry_ops
def get_orbit(self, p, tol=1e-5):
"""
Returns the orbit for a point.
Args:
p: Point as a 3x1 array.
tol: Tolerance for determining if sites are the same. 1e-5 should
be sufficient for most purposes. Set to 0 for exact matching
(and also needed for symbolic orbits).
Returns:
([array]) Orbit for point.
"""
orbit = []
for o in self.symmetry_ops:
pp = o.operate(p)
pp = np.mod(np.round(pp, decimals=10), 1)
if not in_array_list(orbit, pp, tol=tol):
orbit.append(pp)
return orbit
def is_compatible(self, lattice, tol=1e-5, angle_tol=5):
"""
Checks whether a particular lattice is compatible with the
*conventional* unit cell.
Args:
lattice (Lattice): A Lattice.
tol (float): The tolerance to check for equality of lengths.
angle_tol (float): The tolerance to check for equality of angles
in degrees.
"""
abc = lattice.lengths
angles = lattice.angles
crys_system = self.crystal_system
def check(param, ref, tolerance):
return all([abs(i - j) < tolerance for i, j in zip(param, ref)
if j is not None])
if crys_system == "cubic":
a = abc[0]
return check(abc, [a, a, a], tol) and check(angles, [90, 90, 90], angle_tol)
elif crys_system == "hexagonal" or (
crys_system == "trigonal" and (
self.symbol.endswith("H") or
self.int_number in [143, 144, 145, 147, 149, 150, 151, 152,
153, 154, 156, 157, 158, 159, 162, 163,
164, 165])):
a = abc[0]
return check(abc, [a, a, None], tol) and check(angles, [90, 90, 120], angle_tol)
elif crys_system == "trigonal":
a = abc[0]
alpha = angles[0]
return check(abc, [a, a, a], tol) and check(angles, [alpha, alpha, alpha], angle_tol)
elif crys_system == "tetragonal":
a = abc[0]
return check(abc, [a, a, None], tol) and check(angles, [90, 90, 90], angle_tol)
elif crys_system == "orthorhombic":
return check(angles, [90, 90, 90], angle_tol)
elif crys_system == "monoclinic":
return check(angles, [90, None, 90], angle_tol)
return True
@property
def crystal_system(self):
"""
:return: Crystal system for space group.
"""
i = self.int_number
if i <= 2:
return "triclinic"
elif i <= 15:
return "monoclinic"
elif i <= 74:
return "orthorhombic"
elif i <= 142:
return "tetragonal"
elif i <= 167:
return "trigonal"
elif i <= 194:
return "hexagonal"
else:
return "cubic"
def is_subgroup(self, supergroup):
"""
True if this space group is a subgroup of the supplied group.
Args:
supergroup (SpaceGroup): Supergroup to test.
Returns:
True if this space group is a subgroup of the supplied group.
"""
if len(supergroup.symmetry_ops) < len(self.symmetry_ops):
return False
groups = [[supergroup.int_number]]
all_groups = [supergroup.int_number]
max_subgroups = {int(k): v
for k, v in _get_symm_data("maximal_subgroups").items()}
while True:
new_sub_groups = set()
for i in groups[-1]:
new_sub_groups.update([j for j in max_subgroups[i] if j
not in all_groups])
if self.int_number in new_sub_groups:
return True
elif len(new_sub_groups) == 0:
break
else:
groups.append(new_sub_groups)
all_groups.extend(new_sub_groups)
return False
def is_supergroup(self, subgroup):
"""
True if this space group is a supergroup of the supplied group.
Args:
subgroup (SpaceGroup): Subgroup to test.
Returns:
True if this space group is a supergroup of the supplied group.
"""
return subgroup.is_subgroup(self)
@classmethod
def from_int_number(cls, int_number, hexagonal=True):
"""
Obtains a SpaceGroup from its international number.
Args:
int_number (int): International number.
hexagonal (bool): For rhombohedral groups, whether to return the
hexagonal setting (default) or rhombohedral setting.
Returns:
(SpaceGroup)
"""
sym = sg_symbol_from_int_number(int_number, hexagonal=hexagonal)
if not hexagonal and int_number in [146, 148, 155, 160, 161, 166, 167]:
sym += ':R'
return SpaceGroup(sym)
def __str__(self):
return "Spacegroup %s with international number %d and order %d" % (
self.symbol, self.int_number, len(self.symmetry_ops))
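# Illustrative usage sketch: a SpaceGroup can be constructed either from a
# symbol or from its international number; the groups used here ("Pnma" and
# 225) are arbitrary examples.
def _example_space_group_usage():
    sg = SpaceGroup("Pnma")                  # orthorhombic, number 62
    fcc = SpaceGroup.from_int_number(225)    # full symbol "Fm-3m"
    return sg.crystal_system, sg.int_number, fcc.symbol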
def sg_symbol_from_int_number(int_number, hexagonal=True):
"""
Obtains a SpaceGroup name from its international number.
Args:
int_number (int): International number.
hexagonal (bool): For rhombohedral groups, whether to return the
hexagonal setting (default) or rhombohedral setting.
Returns:
(str) Spacegroup symbol
"""
syms = []
for n, v in _get_symm_data("space_group_encoding").items():
if v["int_number"] == int_number:
syms.append(n)
if len(syms) == 0:
raise ValueError("Invalid international number!")
if len(syms) == 2:
for sym in syms:
if "e" in sym:
return sym
if hexagonal:
syms = list(filter(lambda s: s.endswith("H"), syms))
else:
syms = list(filter(lambda s: not s.endswith("H"), syms))
return syms.pop()
def in_array_list(array_list, a, tol=1e-5):
"""
Extremely efficient nd-array comparison using numpy's broadcasting. This
function checks whether a particular array, a, is present in a list of arrays.
It works for arrays of any size, e.g., even matrix searches.
Args:
array_list ([array]): A list of arrays to compare to.
a (array): The test array for comparison.
tol (float): The tolerance. Defaults to 1e-5. If 0, an exact match is
done.
Returns:
(bool)
"""
if len(array_list) == 0:
return False
axes = tuple(range(1, a.ndim + 1))
if not tol:
return np.any(np.all(np.equal(array_list, a[None, :]), axes))
else:
return np.any(np.sum(np.abs(array_list - a[None, :]), axes) < tol)
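# Illustrative sketch of the broadcasting used by in_array_list: the probe
# array is compared against the whole stack of arrays in one vectorized
# reduction instead of a Python loop. The matrices below are arbitrary.
def _example_in_array_list():
    ops = np.array([np.eye(3), -np.eye(3)])
    probe = np.eye(3) + 1e-7
    # Within the default tolerance the probe matches the identity; with an
    # exact (tol=0) comparison it does not.
    return in_array_list(ops, probe), in_array_list(ops, probe, tol=0)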
| mit |
DroneQuest/drone-quest | leap_motion/Leap.py | 1 | 89494 | # This file was automatically generated by SWIG (http://www.swig.org).
# Version 3.0.3
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info
if version_info >= (2, 6, 0):
def swig_import_helper():
from os.path import dirname
import imp
fp = None
try:
fp, pathname, description = imp.find_module('LeapPython', [dirname(__file__)])
except ImportError:
import LeapPython
return LeapPython
if fp is not None:
try:
_mod = imp.load_module('LeapPython', fp, pathname, description)
finally:
fp.close()
return _mod
LeapPython = swig_import_helper()
del swig_import_helper
else:
import LeapPython
del version_info
try:
_swig_property = property
except NameError:
pass # Python < 2.2 doesn't have 'property'.
def _swig_setattr_nondynamic(self, class_type, name, value, static=1):
if (name == "thisown"):
return self.this.own(value)
if (name == "this"):
if type(value).__name__ == 'SwigPyObject':
self.__dict__[name] = value
return
method = class_type.__swig_setmethods__.get(name, None)
if method:
return method(self, value)
if (not static):
object.__setattr__(self, name, value)
else:
raise AttributeError("You cannot add attributes to %s" % self)
def _swig_setattr(self, class_type, name, value):
return _swig_setattr_nondynamic(self, class_type, name, value, 0)
def _swig_getattr_nondynamic(self, class_type, name, static=1):
if (name == "thisown"):
return self.this.own()
method = class_type.__swig_getmethods__.get(name, None)
if method:
return method(self)
if (not static):
return object.__getattr__(self, name)
else:
raise AttributeError(name)
def _swig_getattr(self, class_type, name):
return _swig_getattr_nondynamic(self, class_type, name, 0)
def _swig_repr(self):
try:
strthis = "proxy of " + self.this.__repr__()
except:
strthis = ""
return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)
try:
_object = object
_newclass = 1
except AttributeError:
class _object:
pass
_newclass = 0
try:
import weakref
weakref_proxy = weakref.proxy
except:
weakref_proxy = lambda x: x
class SwigPyIterator(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, SwigPyIterator, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, SwigPyIterator, name)
def __init__(self, *args, **kwargs):
raise AttributeError("No constructor defined - class is abstract")
__repr__ = _swig_repr
__swig_destroy__ = LeapPython.delete_SwigPyIterator
__del__ = lambda self: None
def value(self):
return LeapPython.SwigPyIterator_value(self)
def incr(self, n=1):
return LeapPython.SwigPyIterator_incr(self, n)
def decr(self, n=1):
return LeapPython.SwigPyIterator_decr(self, n)
def distance(self, x):
return LeapPython.SwigPyIterator_distance(self, x)
def equal(self, x):
return LeapPython.SwigPyIterator_equal(self, x)
def copy(self):
return LeapPython.SwigPyIterator_copy(self)
def next(self):
return LeapPython.SwigPyIterator_next(self)
def __next__(self):
return LeapPython.SwigPyIterator___next__(self)
def previous(self):
return LeapPython.SwigPyIterator_previous(self)
def advance(self, n):
return LeapPython.SwigPyIterator_advance(self, n)
def __eq__(self, x):
return LeapPython.SwigPyIterator___eq__(self, x)
def __ne__(self, x):
return LeapPython.SwigPyIterator___ne__(self, x)
def __iadd__(self, n):
return LeapPython.SwigPyIterator___iadd__(self, n)
def __isub__(self, n):
return LeapPython.SwigPyIterator___isub__(self, n)
def __add__(self, n):
return LeapPython.SwigPyIterator___add__(self, n)
def __sub__(self, *args):
return LeapPython.SwigPyIterator___sub__(self, *args)
def __iter__(self):
return self
SwigPyIterator_swigregister = LeapPython.SwigPyIterator_swigregister
SwigPyIterator_swigregister(SwigPyIterator)
class byte_array(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, byte_array, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, byte_array, name)
__repr__ = _swig_repr
def __init__(self, nelements):
this = LeapPython.new_byte_array(nelements)
try:
self.this.append(this)
except:
self.this = this
__swig_destroy__ = LeapPython.delete_byte_array
__del__ = lambda self: None
def __getitem__(self, index):
return LeapPython.byte_array___getitem__(self, index)
def __setitem__(self, index, value):
return LeapPython.byte_array___setitem__(self, index, value)
def cast(self):
return LeapPython.byte_array_cast(self)
__swig_getmethods__["frompointer"] = lambda x: LeapPython.byte_array_frompointer
if _newclass:
frompointer = staticmethod(LeapPython.byte_array_frompointer)
byte_array_swigregister = LeapPython.byte_array_swigregister
byte_array_swigregister(byte_array)
def byte_array_frompointer(t):
return LeapPython.byte_array_frompointer(t)
byte_array_frompointer = LeapPython.byte_array_frompointer
class float_array(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, float_array, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, float_array, name)
__repr__ = _swig_repr
def __init__(self, nelements):
this = LeapPython.new_float_array(nelements)
try:
self.this.append(this)
except:
self.this = this
__swig_destroy__ = LeapPython.delete_float_array
__del__ = lambda self: None
def __getitem__(self, index):
return LeapPython.float_array___getitem__(self, index)
def __setitem__(self, index, value):
return LeapPython.float_array___setitem__(self, index, value)
def cast(self):
return LeapPython.float_array_cast(self)
__swig_getmethods__["frompointer"] = lambda x: LeapPython.float_array_frompointer
if _newclass:
frompointer = staticmethod(LeapPython.float_array_frompointer)
float_array_swigregister = LeapPython.float_array_swigregister
float_array_swigregister(float_array)
def float_array_frompointer(t):
return LeapPython.float_array_frompointer(t)
float_array_frompointer = LeapPython.float_array_frompointer
class Vector(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, Vector, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, Vector, name)
__repr__ = _swig_repr
def __init__(self, *args):
this = LeapPython.new_Vector(*args)
try:
self.this.append(this)
except:
self.this = this
def distance_to(self, other):
return LeapPython.Vector_distance_to(self, other)
def angle_to(self, other):
return LeapPython.Vector_angle_to(self, other)
def dot(self, other):
return LeapPython.Vector_dot(self, other)
def cross(self, other):
return LeapPython.Vector_cross(self, other)
def __neg__(self):
return LeapPython.Vector___neg__(self)
def __add__(self, other):
return LeapPython.Vector___add__(self, other)
def __sub__(self, other):
return LeapPython.Vector___sub__(self, other)
def __mul__(self, scalar):
return LeapPython.Vector___mul__(self, scalar)
def __div__(self, scalar):
return LeapPython.Vector___div__(self, scalar)
def __iadd__(self, other):
return LeapPython.Vector___iadd__(self, other)
def __isub__(self, other):
return LeapPython.Vector___isub__(self, other)
def __imul__(self, scalar):
return LeapPython.Vector___imul__(self, scalar)
def __idiv__(self, scalar):
return LeapPython.Vector___idiv__(self, scalar)
def __str__(self):
return LeapPython.Vector___str__(self)
def __eq__(self, other):
return LeapPython.Vector___eq__(self, other)
def __ne__(self, other):
return LeapPython.Vector___ne__(self, other)
def is_valid(self):
return LeapPython.Vector_is_valid(self)
def __getitem__(self, index):
return LeapPython.Vector___getitem__(self, index)
__swig_setmethods__["x"] = LeapPython.Vector_x_set
__swig_getmethods__["x"] = LeapPython.Vector_x_get
if _newclass:
x = _swig_property(LeapPython.Vector_x_get, LeapPython.Vector_x_set)
__swig_setmethods__["y"] = LeapPython.Vector_y_set
__swig_getmethods__["y"] = LeapPython.Vector_y_get
if _newclass:
y = _swig_property(LeapPython.Vector_y_get, LeapPython.Vector_y_set)
__swig_setmethods__["z"] = LeapPython.Vector_z_set
__swig_getmethods__["z"] = LeapPython.Vector_z_get
if _newclass:
z = _swig_property(LeapPython.Vector_z_get, LeapPython.Vector_z_set)
__swig_getmethods__["magnitude"] = LeapPython.Vector_magnitude_get
if _newclass:
magnitude = _swig_property(LeapPython.Vector_magnitude_get)
__swig_getmethods__["magnitude_squared"] = LeapPython.Vector_magnitude_squared_get
if _newclass:
magnitude_squared = _swig_property(LeapPython.Vector_magnitude_squared_get)
__swig_getmethods__["pitch"] = LeapPython.Vector_pitch_get
if _newclass:
pitch = _swig_property(LeapPython.Vector_pitch_get)
__swig_getmethods__["roll"] = LeapPython.Vector_roll_get
if _newclass:
roll = _swig_property(LeapPython.Vector_roll_get)
__swig_getmethods__["yaw"] = LeapPython.Vector_yaw_get
if _newclass:
yaw = _swig_property(LeapPython.Vector_yaw_get)
__swig_getmethods__["normalized"] = LeapPython.Vector_normalized_get
if _newclass:
normalized = _swig_property(LeapPython.Vector_normalized_get)
def to_float_array(self): return [self.x, self.y, self.z]
def to_tuple(self): return (self.x, self.y, self.z)
__swig_destroy__ = LeapPython.delete_Vector
__del__ = lambda self: None
Vector_swigregister = LeapPython.Vector_swigregister
Vector_swigregister(Vector)
cvar = LeapPython.cvar
PI = cvar.PI
DEG_TO_RAD = cvar.DEG_TO_RAD
RAD_TO_DEG = cvar.RAD_TO_DEG
EPSILON = cvar.EPSILON
Vector.zero = LeapPython.cvar.Vector_zero
Vector.x_axis = LeapPython.cvar.Vector_x_axis
Vector.y_axis = LeapPython.cvar.Vector_y_axis
Vector.z_axis = LeapPython.cvar.Vector_z_axis
Vector.forward = LeapPython.cvar.Vector_forward
Vector.backward = LeapPython.cvar.Vector_backward
Vector.left = LeapPython.cvar.Vector_left
Vector.right = LeapPython.cvar.Vector_right
Vector.up = LeapPython.cvar.Vector_up
Vector.down = LeapPython.cvar.Vector_down
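# Illustrative usage sketch (requires the native LeapPython extension this
# module wraps): Vector supports standard 3-D math plus the pitch/roll/yaw
# and normalized properties exposed above. The component values are arbitrary.
def _example_vector_usage():
    v = Vector(1.0, 2.0, 3.0)
    unit = v.normalized                  # same direction, magnitude ~1
    angle = v.angle_to(Vector.y_axis)    # angle to the +y axis, in radians
    return v.to_float_array(), unit.magnitude, angle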
class Matrix(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, Matrix, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, Matrix, name)
__repr__ = _swig_repr
def __init__(self, *args):
this = LeapPython.new_Matrix(*args)
try:
self.this.append(this)
except:
self.this = this
def set_rotation(self, axis, angleRadians):
return LeapPython.Matrix_set_rotation(self, axis, angleRadians)
def transform_point(self, arg2):
return LeapPython.Matrix_transform_point(self, arg2)
def transform_direction(self, arg2):
return LeapPython.Matrix_transform_direction(self, arg2)
def rigid_inverse(self):
return LeapPython.Matrix_rigid_inverse(self)
def __mul__(self, other):
return LeapPython.Matrix___mul__(self, other)
def __imul__(self, other):
return LeapPython.Matrix___imul__(self, other)
def __eq__(self, other):
return LeapPython.Matrix___eq__(self, other)
def __ne__(self, other):
return LeapPython.Matrix___ne__(self, other)
def __str__(self):
return LeapPython.Matrix___str__(self)
__swig_setmethods__["x_basis"] = LeapPython.Matrix_x_basis_set
__swig_getmethods__["x_basis"] = LeapPython.Matrix_x_basis_get
if _newclass:
x_basis = _swig_property(LeapPython.Matrix_x_basis_get, LeapPython.Matrix_x_basis_set)
__swig_setmethods__["y_basis"] = LeapPython.Matrix_y_basis_set
__swig_getmethods__["y_basis"] = LeapPython.Matrix_y_basis_get
if _newclass:
y_basis = _swig_property(LeapPython.Matrix_y_basis_get, LeapPython.Matrix_y_basis_set)
__swig_setmethods__["z_basis"] = LeapPython.Matrix_z_basis_set
__swig_getmethods__["z_basis"] = LeapPython.Matrix_z_basis_get
if _newclass:
z_basis = _swig_property(LeapPython.Matrix_z_basis_get, LeapPython.Matrix_z_basis_set)
__swig_setmethods__["origin"] = LeapPython.Matrix_origin_set
__swig_getmethods__["origin"] = LeapPython.Matrix_origin_get
if _newclass:
origin = _swig_property(LeapPython.Matrix_origin_get, LeapPython.Matrix_origin_set)
def to_array_3x3(self, output = None):
if output is None:
output = [0]*9
output[0], output[1], output[2] = self.x_basis.x, self.x_basis.y, self.x_basis.z
output[3], output[4], output[5] = self.y_basis.x, self.y_basis.y, self.y_basis.z
output[6], output[7], output[8] = self.z_basis.x, self.z_basis.y, self.z_basis.z
return output
def to_array_4x4(self, output = None):
if output is None:
output = [0]*16
output[0], output[1], output[2], output[3] = self.x_basis.x, self.x_basis.y, self.x_basis.z, 0.0
output[4], output[5], output[6], output[7] = self.y_basis.x, self.y_basis.y, self.y_basis.z, 0.0
output[8], output[9], output[10], output[11] = self.z_basis.x, self.z_basis.y, self.z_basis.z, 0.0
output[12], output[13], output[14], output[15] = self.origin.x, self.origin.y, self.origin.z, 1.0
return output
__swig_destroy__ = LeapPython.delete_Matrix
__del__ = lambda self: None
Matrix_swigregister = LeapPython.Matrix_swigregister
Matrix_swigregister(Matrix)
Matrix.identity = LeapPython.cvar.Matrix_identity
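# Illustrative usage sketch: to_array_3x3/to_array_4x4 defined above flatten a
# Matrix into row-major Python lists for consumers that expect plain arrays.
# The rotation axis and angle are arbitrary examples.
def _example_matrix_to_arrays():
    m = Matrix()
    m.set_rotation(Vector.z_axis, 0.5 * PI)   # 90 degrees about the z axis
    return m.to_array_3x3(), m.to_array_4x4()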
class Interface(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, Interface, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, Interface, name)
def __init__(self, *args, **kwargs):
raise AttributeError("No constructor defined")
__repr__ = _swig_repr
Interface_swigregister = LeapPython.Interface_swigregister
Interface_swigregister(Interface)
class Pointable(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Pointable, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Pointable, name)
__repr__ = _swig_repr
ZONE_NONE = LeapPython.Pointable_ZONE_NONE
ZONE_HOVERING = LeapPython.Pointable_ZONE_HOVERING
ZONE_TOUCHING = LeapPython.Pointable_ZONE_TOUCHING
def __init__(self):
this = LeapPython.new_Pointable()
try:
self.this.append(this)
except:
self.this = this
def __eq__(self, arg2):
return LeapPython.Pointable___eq__(self, arg2)
def __ne__(self, arg2):
return LeapPython.Pointable___ne__(self, arg2)
def __str__(self):
return LeapPython.Pointable___str__(self)
__swig_getmethods__["id"] = LeapPython.Pointable_id_get
if _newclass:
id = _swig_property(LeapPython.Pointable_id_get)
__swig_getmethods__["hand"] = LeapPython.Pointable_hand_get
if _newclass:
hand = _swig_property(LeapPython.Pointable_hand_get)
__swig_getmethods__["tip_position"] = LeapPython.Pointable_tip_position_get
if _newclass:
tip_position = _swig_property(LeapPython.Pointable_tip_position_get)
__swig_getmethods__["tip_velocity"] = LeapPython.Pointable_tip_velocity_get
if _newclass:
tip_velocity = _swig_property(LeapPython.Pointable_tip_velocity_get)
__swig_getmethods__["direction"] = LeapPython.Pointable_direction_get
if _newclass:
direction = _swig_property(LeapPython.Pointable_direction_get)
__swig_getmethods__["width"] = LeapPython.Pointable_width_get
if _newclass:
width = _swig_property(LeapPython.Pointable_width_get)
__swig_getmethods__["length"] = LeapPython.Pointable_length_get
if _newclass:
length = _swig_property(LeapPython.Pointable_length_get)
__swig_getmethods__["is_tool"] = LeapPython.Pointable_is_tool_get
if _newclass:
is_tool = _swig_property(LeapPython.Pointable_is_tool_get)
__swig_getmethods__["is_finger"] = LeapPython.Pointable_is_finger_get
if _newclass:
is_finger = _swig_property(LeapPython.Pointable_is_finger_get)
__swig_getmethods__["is_extended"] = LeapPython.Pointable_is_extended_get
if _newclass:
is_extended = _swig_property(LeapPython.Pointable_is_extended_get)
__swig_getmethods__["is_valid"] = LeapPython.Pointable_is_valid_get
if _newclass:
is_valid = _swig_property(LeapPython.Pointable_is_valid_get)
__swig_getmethods__["touch_zone"] = LeapPython.Pointable_touch_zone_get
if _newclass:
touch_zone = _swig_property(LeapPython.Pointable_touch_zone_get)
__swig_getmethods__["touch_distance"] = LeapPython.Pointable_touch_distance_get
if _newclass:
touch_distance = _swig_property(LeapPython.Pointable_touch_distance_get)
__swig_getmethods__["stabilized_tip_position"] = LeapPython.Pointable_stabilized_tip_position_get
if _newclass:
stabilized_tip_position = _swig_property(LeapPython.Pointable_stabilized_tip_position_get)
__swig_getmethods__["time_visible"] = LeapPython.Pointable_time_visible_get
if _newclass:
time_visible = _swig_property(LeapPython.Pointable_time_visible_get)
__swig_getmethods__["frame"] = LeapPython.Pointable_frame_get
if _newclass:
frame = _swig_property(LeapPython.Pointable_frame_get)
__swig_destroy__ = LeapPython.delete_Pointable
__del__ = lambda self: None
Pointable_swigregister = LeapPython.Pointable_swigregister
Pointable_swigregister(Pointable)
Pointable.invalid = LeapPython.cvar.Pointable_invalid
class Arm(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Arm, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Arm, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_Arm()
try:
self.this.append(this)
except:
self.this = this
def __eq__(self, arg2):
return LeapPython.Arm___eq__(self, arg2)
def __ne__(self, arg2):
return LeapPython.Arm___ne__(self, arg2)
def __str__(self):
return LeapPython.Arm___str__(self)
__swig_getmethods__["width"] = LeapPython.Arm_width_get
if _newclass:
width = _swig_property(LeapPython.Arm_width_get)
__swig_getmethods__["center"] = LeapPython.Arm_center_get
if _newclass:
center = _swig_property(LeapPython.Arm_center_get)
__swig_getmethods__["direction"] = LeapPython.Arm_direction_get
if _newclass:
direction = _swig_property(LeapPython.Arm_direction_get)
__swig_getmethods__["basis"] = LeapPython.Arm_basis_get
if _newclass:
basis = _swig_property(LeapPython.Arm_basis_get)
__swig_getmethods__["elbow_position"] = LeapPython.Arm_elbow_position_get
if _newclass:
elbow_position = _swig_property(LeapPython.Arm_elbow_position_get)
__swig_getmethods__["wrist_position"] = LeapPython.Arm_wrist_position_get
if _newclass:
wrist_position = _swig_property(LeapPython.Arm_wrist_position_get)
__swig_getmethods__["is_valid"] = LeapPython.Arm_is_valid_get
if _newclass:
is_valid = _swig_property(LeapPython.Arm_is_valid_get)
__swig_destroy__ = LeapPython.delete_Arm
__del__ = lambda self: None
Arm_swigregister = LeapPython.Arm_swigregister
Arm_swigregister(Arm)
Arm.invalid = LeapPython.cvar.Arm_invalid
class Bone(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Bone, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Bone, name)
__repr__ = _swig_repr
TYPE_METACARPAL = LeapPython.Bone_TYPE_METACARPAL
TYPE_PROXIMAL = LeapPython.Bone_TYPE_PROXIMAL
TYPE_INTERMEDIATE = LeapPython.Bone_TYPE_INTERMEDIATE
TYPE_DISTAL = LeapPython.Bone_TYPE_DISTAL
def __init__(self):
this = LeapPython.new_Bone()
try:
self.this.append(this)
except:
self.this = this
def __eq__(self, arg2):
return LeapPython.Bone___eq__(self, arg2)
def __ne__(self, arg2):
return LeapPython.Bone___ne__(self, arg2)
def __str__(self):
return LeapPython.Bone___str__(self)
__swig_getmethods__["prev_joint"] = LeapPython.Bone_prev_joint_get
if _newclass:
prev_joint = _swig_property(LeapPython.Bone_prev_joint_get)
__swig_getmethods__["next_joint"] = LeapPython.Bone_next_joint_get
if _newclass:
next_joint = _swig_property(LeapPython.Bone_next_joint_get)
__swig_getmethods__["center"] = LeapPython.Bone_center_get
if _newclass:
center = _swig_property(LeapPython.Bone_center_get)
__swig_getmethods__["direction"] = LeapPython.Bone_direction_get
if _newclass:
direction = _swig_property(LeapPython.Bone_direction_get)
__swig_getmethods__["length"] = LeapPython.Bone_length_get
if _newclass:
length = _swig_property(LeapPython.Bone_length_get)
__swig_getmethods__["width"] = LeapPython.Bone_width_get
if _newclass:
width = _swig_property(LeapPython.Bone_width_get)
__swig_getmethods__["type"] = LeapPython.Bone_type_get
if _newclass:
type = _swig_property(LeapPython.Bone_type_get)
__swig_getmethods__["basis"] = LeapPython.Bone_basis_get
if _newclass:
basis = _swig_property(LeapPython.Bone_basis_get)
__swig_getmethods__["is_valid"] = LeapPython.Bone_is_valid_get
if _newclass:
is_valid = _swig_property(LeapPython.Bone_is_valid_get)
__swig_destroy__ = LeapPython.delete_Bone
__del__ = lambda self: None
Bone_swigregister = LeapPython.Bone_swigregister
Bone_swigregister(Bone)
Bone.invalid = LeapPython.cvar.Bone_invalid
class Finger(Pointable):
__swig_setmethods__ = {}
for _s in [Pointable]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Finger, name, value)
__swig_getmethods__ = {}
for _s in [Pointable]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Finger, name)
__repr__ = _swig_repr
JOINT_MCP = LeapPython.Finger_JOINT_MCP
JOINT_PIP = LeapPython.Finger_JOINT_PIP
JOINT_DIP = LeapPython.Finger_JOINT_DIP
JOINT_TIP = LeapPython.Finger_JOINT_TIP
TYPE_THUMB = LeapPython.Finger_TYPE_THUMB
TYPE_INDEX = LeapPython.Finger_TYPE_INDEX
TYPE_MIDDLE = LeapPython.Finger_TYPE_MIDDLE
TYPE_RING = LeapPython.Finger_TYPE_RING
TYPE_PINKY = LeapPython.Finger_TYPE_PINKY
def __init__(self, *args):
this = LeapPython.new_Finger(*args)
try:
self.this.append(this)
except:
self.this = this
def joint_position(self, jointIx):
return LeapPython.Finger_joint_position(self, jointIx)
def bone(self, boneIx):
return LeapPython.Finger_bone(self, boneIx)
def __str__(self):
return LeapPython.Finger___str__(self)
__swig_getmethods__["type"] = LeapPython.Finger_type_get
if _newclass:
type = _swig_property(LeapPython.Finger_type_get)
__swig_destroy__ = LeapPython.delete_Finger
__del__ = lambda self: None
Finger_swigregister = LeapPython.Finger_swigregister
Finger_swigregister(Finger)
Finger.invalid = LeapPython.cvar.Finger_invalid
class Tool(Pointable):
__swig_setmethods__ = {}
for _s in [Pointable]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Tool, name, value)
__swig_getmethods__ = {}
for _s in [Pointable]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Tool, name)
__repr__ = _swig_repr
def __init__(self, *args):
this = LeapPython.new_Tool(*args)
try:
self.this.append(this)
except:
self.this = this
def __str__(self):
return LeapPython.Tool___str__(self)
__swig_destroy__ = LeapPython.delete_Tool
__del__ = lambda self: None
Tool_swigregister = LeapPython.Tool_swigregister
Tool_swigregister(Tool)
Tool.invalid = LeapPython.cvar.Tool_invalid
class Hand(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Hand, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Hand, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_Hand()
try:
self.this.append(this)
except:
self.this = this
def pointable(self, id):
return LeapPython.Hand_pointable(self, id)
def finger(self, id):
return LeapPython.Hand_finger(self, id)
def tool(self, id):
return LeapPython.Hand_tool(self, id)
def translation(self, sinceFrame):
return LeapPython.Hand_translation(self, sinceFrame)
def translation_probability(self, sinceFrame):
return LeapPython.Hand_translation_probability(self, sinceFrame)
def rotation_axis(self, sinceFrame):
return LeapPython.Hand_rotation_axis(self, sinceFrame)
def rotation_angle(self, *args):
return LeapPython.Hand_rotation_angle(self, *args)
def rotation_matrix(self, sinceFrame):
return LeapPython.Hand_rotation_matrix(self, sinceFrame)
def rotation_probability(self, sinceFrame):
return LeapPython.Hand_rotation_probability(self, sinceFrame)
def scale_factor(self, sinceFrame):
return LeapPython.Hand_scale_factor(self, sinceFrame)
def scale_probability(self, sinceFrame):
return LeapPython.Hand_scale_probability(self, sinceFrame)
def __eq__(self, arg2):
return LeapPython.Hand___eq__(self, arg2)
def __ne__(self, arg2):
return LeapPython.Hand___ne__(self, arg2)
def __str__(self):
return LeapPython.Hand___str__(self)
__swig_getmethods__["id"] = LeapPython.Hand_id_get
if _newclass:
id = _swig_property(LeapPython.Hand_id_get)
__swig_getmethods__["pointables"] = LeapPython.Hand_pointables_get
if _newclass:
pointables = _swig_property(LeapPython.Hand_pointables_get)
__swig_getmethods__["fingers"] = LeapPython.Hand_fingers_get
if _newclass:
fingers = _swig_property(LeapPython.Hand_fingers_get)
__swig_getmethods__["tools"] = LeapPython.Hand_tools_get
if _newclass:
tools = _swig_property(LeapPython.Hand_tools_get)
__swig_getmethods__["palm_position"] = LeapPython.Hand_palm_position_get
if _newclass:
palm_position = _swig_property(LeapPython.Hand_palm_position_get)
__swig_getmethods__["palm_velocity"] = LeapPython.Hand_palm_velocity_get
if _newclass:
palm_velocity = _swig_property(LeapPython.Hand_palm_velocity_get)
__swig_getmethods__["palm_normal"] = LeapPython.Hand_palm_normal_get
if _newclass:
palm_normal = _swig_property(LeapPython.Hand_palm_normal_get)
__swig_getmethods__["direction"] = LeapPython.Hand_direction_get
if _newclass:
direction = _swig_property(LeapPython.Hand_direction_get)
__swig_getmethods__["basis"] = LeapPython.Hand_basis_get
if _newclass:
basis = _swig_property(LeapPython.Hand_basis_get)
__swig_getmethods__["is_valid"] = LeapPython.Hand_is_valid_get
if _newclass:
is_valid = _swig_property(LeapPython.Hand_is_valid_get)
__swig_getmethods__["sphere_center"] = LeapPython.Hand_sphere_center_get
if _newclass:
sphere_center = _swig_property(LeapPython.Hand_sphere_center_get)
__swig_getmethods__["sphere_radius"] = LeapPython.Hand_sphere_radius_get
if _newclass:
sphere_radius = _swig_property(LeapPython.Hand_sphere_radius_get)
__swig_getmethods__["grab_strength"] = LeapPython.Hand_grab_strength_get
if _newclass:
grab_strength = _swig_property(LeapPython.Hand_grab_strength_get)
__swig_getmethods__["pinch_strength"] = LeapPython.Hand_pinch_strength_get
if _newclass:
pinch_strength = _swig_property(LeapPython.Hand_pinch_strength_get)
__swig_getmethods__["palm_width"] = LeapPython.Hand_palm_width_get
if _newclass:
palm_width = _swig_property(LeapPython.Hand_palm_width_get)
__swig_getmethods__["stabilized_palm_position"] = LeapPython.Hand_stabilized_palm_position_get
if _newclass:
stabilized_palm_position = _swig_property(LeapPython.Hand_stabilized_palm_position_get)
__swig_getmethods__["wrist_position"] = LeapPython.Hand_wrist_position_get
if _newclass:
wrist_position = _swig_property(LeapPython.Hand_wrist_position_get)
__swig_getmethods__["time_visible"] = LeapPython.Hand_time_visible_get
if _newclass:
time_visible = _swig_property(LeapPython.Hand_time_visible_get)
__swig_getmethods__["confidence"] = LeapPython.Hand_confidence_get
if _newclass:
confidence = _swig_property(LeapPython.Hand_confidence_get)
__swig_getmethods__["is_left"] = LeapPython.Hand_is_left_get
if _newclass:
is_left = _swig_property(LeapPython.Hand_is_left_get)
__swig_getmethods__["is_right"] = LeapPython.Hand_is_right_get
if _newclass:
is_right = _swig_property(LeapPython.Hand_is_right_get)
__swig_getmethods__["frame"] = LeapPython.Hand_frame_get
if _newclass:
frame = _swig_property(LeapPython.Hand_frame_get)
__swig_getmethods__["arm"] = LeapPython.Hand_arm_get
if _newclass:
arm = _swig_property(LeapPython.Hand_arm_get)
__swig_destroy__ = LeapPython.delete_Hand
__del__ = lambda self: None
Hand_swigregister = LeapPython.Hand_swigregister
Hand_swigregister(Hand)
Hand.invalid = LeapPython.cvar.Hand_invalid
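# Illustrative consumption sketch (assumes a Frame obtained from a Controller
# defined later in this module and a connected Leap device): typical code
# walks frame.hands and reads the per-hand and per-finger properties defined
# above.
def _example_describe_hands(frame):
    out = []
    for hand in frame.hands:
        side = "left" if hand.is_left else "right"
        tips = [f.tip_position.to_float_array() for f in hand.fingers]
        out.append((side, hand.palm_position.to_float_array(), tips))
    return out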
class Gesture(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Gesture, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Gesture, name)
__repr__ = _swig_repr
TYPE_INVALID = LeapPython.Gesture_TYPE_INVALID
TYPE_SWIPE = LeapPython.Gesture_TYPE_SWIPE
TYPE_CIRCLE = LeapPython.Gesture_TYPE_CIRCLE
TYPE_SCREEN_TAP = LeapPython.Gesture_TYPE_SCREEN_TAP
TYPE_KEY_TAP = LeapPython.Gesture_TYPE_KEY_TAP
STATE_INVALID = LeapPython.Gesture_STATE_INVALID
STATE_START = LeapPython.Gesture_STATE_START
STATE_UPDATE = LeapPython.Gesture_STATE_UPDATE
STATE_STOP = LeapPython.Gesture_STATE_STOP
def __init__(self, *args):
this = LeapPython.new_Gesture(*args)
try:
self.this.append(this)
except:
self.this = this
def __eq__(self, rhs):
return LeapPython.Gesture___eq__(self, rhs)
def __ne__(self, rhs):
return LeapPython.Gesture___ne__(self, rhs)
def __str__(self):
return LeapPython.Gesture___str__(self)
__swig_getmethods__["type"] = LeapPython.Gesture_type_get
if _newclass:
type = _swig_property(LeapPython.Gesture_type_get)
__swig_getmethods__["state"] = LeapPython.Gesture_state_get
if _newclass:
state = _swig_property(LeapPython.Gesture_state_get)
__swig_getmethods__["id"] = LeapPython.Gesture_id_get
if _newclass:
id = _swig_property(LeapPython.Gesture_id_get)
__swig_getmethods__["duration"] = LeapPython.Gesture_duration_get
if _newclass:
duration = _swig_property(LeapPython.Gesture_duration_get)
__swig_getmethods__["duration_seconds"] = LeapPython.Gesture_duration_seconds_get
if _newclass:
duration_seconds = _swig_property(LeapPython.Gesture_duration_seconds_get)
__swig_getmethods__["frame"] = LeapPython.Gesture_frame_get
if _newclass:
frame = _swig_property(LeapPython.Gesture_frame_get)
__swig_getmethods__["hands"] = LeapPython.Gesture_hands_get
if _newclass:
hands = _swig_property(LeapPython.Gesture_hands_get)
__swig_getmethods__["pointables"] = LeapPython.Gesture_pointables_get
if _newclass:
pointables = _swig_property(LeapPython.Gesture_pointables_get)
__swig_getmethods__["is_valid"] = LeapPython.Gesture_is_valid_get
if _newclass:
is_valid = _swig_property(LeapPython.Gesture_is_valid_get)
__swig_destroy__ = LeapPython.delete_Gesture
__del__ = lambda self: None
Gesture_swigregister = LeapPython.Gesture_swigregister
Gesture_swigregister(Gesture)
Gesture.invalid = LeapPython.cvar.Gesture_invalid
class SwipeGesture(Gesture):
__swig_setmethods__ = {}
for _s in [Gesture]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, SwipeGesture, name, value)
__swig_getmethods__ = {}
for _s in [Gesture]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, SwipeGesture, name)
__repr__ = _swig_repr
__swig_getmethods__["class_type"] = lambda x: LeapPython.SwipeGesture_class_type
if _newclass:
class_type = staticmethod(LeapPython.SwipeGesture_class_type)
def __init__(self, *args):
this = LeapPython.new_SwipeGesture(*args)
try:
self.this.append(this)
except:
self.this = this
__swig_getmethods__["start_position"] = LeapPython.SwipeGesture_start_position_get
if _newclass:
start_position = _swig_property(LeapPython.SwipeGesture_start_position_get)
__swig_getmethods__["position"] = LeapPython.SwipeGesture_position_get
if _newclass:
position = _swig_property(LeapPython.SwipeGesture_position_get)
__swig_getmethods__["direction"] = LeapPython.SwipeGesture_direction_get
if _newclass:
direction = _swig_property(LeapPython.SwipeGesture_direction_get)
__swig_getmethods__["speed"] = LeapPython.SwipeGesture_speed_get
if _newclass:
speed = _swig_property(LeapPython.SwipeGesture_speed_get)
__swig_getmethods__["pointable"] = LeapPython.SwipeGesture_pointable_get
if _newclass:
pointable = _swig_property(LeapPython.SwipeGesture_pointable_get)
__swig_destroy__ = LeapPython.delete_SwipeGesture
__del__ = lambda self: None
SwipeGesture_swigregister = LeapPython.SwipeGesture_swigregister
SwipeGesture_swigregister(SwipeGesture)
def SwipeGesture_class_type():
return LeapPython.SwipeGesture_class_type()
SwipeGesture_class_type = LeapPython.SwipeGesture_class_type
class CircleGesture(Gesture):
__swig_setmethods__ = {}
for _s in [Gesture]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CircleGesture, name, value)
__swig_getmethods__ = {}
for _s in [Gesture]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CircleGesture, name)
__repr__ = _swig_repr
__swig_getmethods__["class_type"] = lambda x: LeapPython.CircleGesture_class_type
if _newclass:
class_type = staticmethod(LeapPython.CircleGesture_class_type)
def __init__(self, *args):
this = LeapPython.new_CircleGesture(*args)
try:
self.this.append(this)
except:
self.this = this
__swig_getmethods__["center"] = LeapPython.CircleGesture_center_get
if _newclass:
center = _swig_property(LeapPython.CircleGesture_center_get)
__swig_getmethods__["normal"] = LeapPython.CircleGesture_normal_get
if _newclass:
normal = _swig_property(LeapPython.CircleGesture_normal_get)
__swig_getmethods__["progress"] = LeapPython.CircleGesture_progress_get
if _newclass:
progress = _swig_property(LeapPython.CircleGesture_progress_get)
__swig_getmethods__["radius"] = LeapPython.CircleGesture_radius_get
if _newclass:
radius = _swig_property(LeapPython.CircleGesture_radius_get)
__swig_getmethods__["pointable"] = LeapPython.CircleGesture_pointable_get
if _newclass:
pointable = _swig_property(LeapPython.CircleGesture_pointable_get)
__swig_destroy__ = LeapPython.delete_CircleGesture
__del__ = lambda self: None
CircleGesture_swigregister = LeapPython.CircleGesture_swigregister
CircleGesture_swigregister(CircleGesture)
def CircleGesture_class_type():
return LeapPython.CircleGesture_class_type()
CircleGesture_class_type = LeapPython.CircleGesture_class_type
class ScreenTapGesture(Gesture):
__swig_setmethods__ = {}
for _s in [Gesture]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, ScreenTapGesture, name, value)
__swig_getmethods__ = {}
for _s in [Gesture]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, ScreenTapGesture, name)
__repr__ = _swig_repr
__swig_getmethods__["class_type"] = lambda x: LeapPython.ScreenTapGesture_class_type
if _newclass:
class_type = staticmethod(LeapPython.ScreenTapGesture_class_type)
def __init__(self, *args):
this = LeapPython.new_ScreenTapGesture(*args)
try:
self.this.append(this)
except:
self.this = this
__swig_getmethods__["position"] = LeapPython.ScreenTapGesture_position_get
if _newclass:
position = _swig_property(LeapPython.ScreenTapGesture_position_get)
__swig_getmethods__["direction"] = LeapPython.ScreenTapGesture_direction_get
if _newclass:
direction = _swig_property(LeapPython.ScreenTapGesture_direction_get)
__swig_getmethods__["progress"] = LeapPython.ScreenTapGesture_progress_get
if _newclass:
progress = _swig_property(LeapPython.ScreenTapGesture_progress_get)
__swig_getmethods__["pointable"] = LeapPython.ScreenTapGesture_pointable_get
if _newclass:
pointable = _swig_property(LeapPython.ScreenTapGesture_pointable_get)
__swig_destroy__ = LeapPython.delete_ScreenTapGesture
__del__ = lambda self: None
ScreenTapGesture_swigregister = LeapPython.ScreenTapGesture_swigregister
ScreenTapGesture_swigregister(ScreenTapGesture)
def ScreenTapGesture_class_type():
return LeapPython.ScreenTapGesture_class_type()
ScreenTapGesture_class_type = LeapPython.ScreenTapGesture_class_type
class KeyTapGesture(Gesture):
__swig_setmethods__ = {}
for _s in [Gesture]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, KeyTapGesture, name, value)
__swig_getmethods__ = {}
for _s in [Gesture]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, KeyTapGesture, name)
__repr__ = _swig_repr
__swig_getmethods__["class_type"] = lambda x: LeapPython.KeyTapGesture_class_type
if _newclass:
class_type = staticmethod(LeapPython.KeyTapGesture_class_type)
def __init__(self, *args):
this = LeapPython.new_KeyTapGesture(*args)
try:
self.this.append(this)
except:
self.this = this
__swig_getmethods__["position"] = LeapPython.KeyTapGesture_position_get
if _newclass:
position = _swig_property(LeapPython.KeyTapGesture_position_get)
__swig_getmethods__["direction"] = LeapPython.KeyTapGesture_direction_get
if _newclass:
direction = _swig_property(LeapPython.KeyTapGesture_direction_get)
__swig_getmethods__["progress"] = LeapPython.KeyTapGesture_progress_get
if _newclass:
progress = _swig_property(LeapPython.KeyTapGesture_progress_get)
__swig_getmethods__["pointable"] = LeapPython.KeyTapGesture_pointable_get
if _newclass:
pointable = _swig_property(LeapPython.KeyTapGesture_pointable_get)
__swig_destroy__ = LeapPython.delete_KeyTapGesture
__del__ = lambda self: None
KeyTapGesture_swigregister = LeapPython.KeyTapGesture_swigregister
KeyTapGesture_swigregister(KeyTapGesture)
def KeyTapGesture_class_type():
return LeapPython.KeyTapGesture_class_type()
KeyTapGesture_class_type = LeapPython.KeyTapGesture_class_type
class Screen(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Screen, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Screen, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_Screen()
try:
self.this.append(this)
except:
self.this = this
def intersect(self, *args):
return LeapPython.Screen_intersect(self, *args)
def project(self, position, normalize, clampRatio=1.0):
return LeapPython.Screen_project(self, position, normalize, clampRatio)
def normal(self):
return LeapPython.Screen_normal(self)
def distance_to_point(self, point):
return LeapPython.Screen_distance_to_point(self, point)
def __eq__(self, arg2):
return LeapPython.Screen___eq__(self, arg2)
def __ne__(self, arg2):
return LeapPython.Screen___ne__(self, arg2)
def __str__(self):
return LeapPython.Screen___str__(self)
__swig_getmethods__["id"] = LeapPython.Screen_id_get
if _newclass:
id = _swig_property(LeapPython.Screen_id_get)
__swig_getmethods__["horizontal_axis"] = LeapPython.Screen_horizontal_axis_get
if _newclass:
horizontal_axis = _swig_property(LeapPython.Screen_horizontal_axis_get)
__swig_getmethods__["vertical_axis"] = LeapPython.Screen_vertical_axis_get
if _newclass:
vertical_axis = _swig_property(LeapPython.Screen_vertical_axis_get)
__swig_getmethods__["bottom_left_corner"] = LeapPython.Screen_bottom_left_corner_get
if _newclass:
bottom_left_corner = _swig_property(LeapPython.Screen_bottom_left_corner_get)
__swig_getmethods__["width_pixels"] = LeapPython.Screen_width_pixels_get
if _newclass:
width_pixels = _swig_property(LeapPython.Screen_width_pixels_get)
__swig_getmethods__["height_pixels"] = LeapPython.Screen_height_pixels_get
if _newclass:
height_pixels = _swig_property(LeapPython.Screen_height_pixels_get)
__swig_getmethods__["is_valid"] = LeapPython.Screen_is_valid_get
if _newclass:
is_valid = _swig_property(LeapPython.Screen_is_valid_get)
__swig_destroy__ = LeapPython.delete_Screen
__del__ = lambda self: None
Screen_swigregister = LeapPython.Screen_swigregister
Screen_swigregister(Screen)
Screen.invalid = LeapPython.cvar.Screen_invalid
class Device(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Device, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Device, name)
__repr__ = _swig_repr
TYPE_PERIPHERAL = LeapPython.Device_TYPE_PERIPHERAL
TYPE_LAPTOP = LeapPython.Device_TYPE_LAPTOP
TYPE_KEYBOARD = LeapPython.Device_TYPE_KEYBOARD
def __init__(self):
this = LeapPython.new_Device()
try:
self.this.append(this)
except:
self.this = this
def distance_to_boundary(self, position):
return LeapPython.Device_distance_to_boundary(self, position)
def __eq__(self, arg2):
return LeapPython.Device___eq__(self, arg2)
def __ne__(self, arg2):
return LeapPython.Device___ne__(self, arg2)
def __str__(self):
return LeapPython.Device___str__(self)
__swig_getmethods__["horizontal_view_angle"] = LeapPython.Device_horizontal_view_angle_get
if _newclass:
horizontal_view_angle = _swig_property(LeapPython.Device_horizontal_view_angle_get)
__swig_getmethods__["vertical_view_angle"] = LeapPython.Device_vertical_view_angle_get
if _newclass:
vertical_view_angle = _swig_property(LeapPython.Device_vertical_view_angle_get)
__swig_getmethods__["range"] = LeapPython.Device_range_get
if _newclass:
range = _swig_property(LeapPython.Device_range_get)
__swig_getmethods__["baseline"] = LeapPython.Device_baseline_get
if _newclass:
baseline = _swig_property(LeapPython.Device_baseline_get)
__swig_getmethods__["is_valid"] = LeapPython.Device_is_valid_get
if _newclass:
is_valid = _swig_property(LeapPython.Device_is_valid_get)
__swig_getmethods__["is_embedded"] = LeapPython.Device_is_embedded_get
if _newclass:
is_embedded = _swig_property(LeapPython.Device_is_embedded_get)
__swig_getmethods__["is_streaming"] = LeapPython.Device_is_streaming_get
if _newclass:
is_streaming = _swig_property(LeapPython.Device_is_streaming_get)
__swig_getmethods__["is_flipped"] = LeapPython.Device_is_flipped_get
if _newclass:
is_flipped = _swig_property(LeapPython.Device_is_flipped_get)
__swig_getmethods__["type"] = LeapPython.Device_type_get
if _newclass:
type = _swig_property(LeapPython.Device_type_get)
__swig_getmethods__["serial_number"] = LeapPython.Device_serial_number_get
if _newclass:
serial_number = _swig_property(LeapPython.Device_serial_number_get)
__swig_getmethods__["position"] = LeapPython.Device_position_get
if _newclass:
position = _swig_property(LeapPython.Device_position_get)
__swig_getmethods__["orientation"] = LeapPython.Device_orientation_get
if _newclass:
orientation = _swig_property(LeapPython.Device_orientation_get)
__swig_destroy__ = LeapPython.delete_Device
__del__ = lambda self: None
Device_swigregister = LeapPython.Device_swigregister
Device_swigregister(Device)
Device.invalid = LeapPython.cvar.Device_invalid
class Image(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Image, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Image, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_Image()
try:
self.this.append(this)
except:
self.this = this
def data(self, dst):
return LeapPython.Image_data(self, dst)
def distortion(self, dst):
return LeapPython.Image_distortion(self, dst)
INFRARED = LeapPython.Image_INFRARED
def rectify(self, uv):
return LeapPython.Image_rectify(self, uv)
def warp(self, xy):
return LeapPython.Image_warp(self, xy)
def __eq__(self, arg2):
return LeapPython.Image___eq__(self, arg2)
def __ne__(self, arg2):
return LeapPython.Image___ne__(self, arg2)
def __str__(self):
return LeapPython.Image___str__(self)
__swig_getmethods__["sequence_id"] = LeapPython.Image_sequence_id_get
if _newclass:
sequence_id = _swig_property(LeapPython.Image_sequence_id_get)
__swig_getmethods__["id"] = LeapPython.Image_id_get
if _newclass:
id = _swig_property(LeapPython.Image_id_get)
__swig_getmethods__["width"] = LeapPython.Image_width_get
if _newclass:
width = _swig_property(LeapPython.Image_width_get)
__swig_getmethods__["height"] = LeapPython.Image_height_get
if _newclass:
height = _swig_property(LeapPython.Image_height_get)
__swig_getmethods__["bytes_per_pixel"] = LeapPython.Image_bytes_per_pixel_get
if _newclass:
bytes_per_pixel = _swig_property(LeapPython.Image_bytes_per_pixel_get)
__swig_getmethods__["format"] = LeapPython.Image_format_get
if _newclass:
format = _swig_property(LeapPython.Image_format_get)
__swig_getmethods__["distortion_width"] = LeapPython.Image_distortion_width_get
if _newclass:
distortion_width = _swig_property(LeapPython.Image_distortion_width_get)
__swig_getmethods__["distortion_height"] = LeapPython.Image_distortion_height_get
if _newclass:
distortion_height = _swig_property(LeapPython.Image_distortion_height_get)
__swig_getmethods__["ray_offset_x"] = LeapPython.Image_ray_offset_x_get
if _newclass:
ray_offset_x = _swig_property(LeapPython.Image_ray_offset_x_get)
__swig_getmethods__["ray_offset_y"] = LeapPython.Image_ray_offset_y_get
if _newclass:
ray_offset_y = _swig_property(LeapPython.Image_ray_offset_y_get)
__swig_getmethods__["ray_scale_x"] = LeapPython.Image_ray_scale_x_get
if _newclass:
ray_scale_x = _swig_property(LeapPython.Image_ray_scale_x_get)
__swig_getmethods__["ray_scale_y"] = LeapPython.Image_ray_scale_y_get
if _newclass:
ray_scale_y = _swig_property(LeapPython.Image_ray_scale_y_get)
__swig_getmethods__["timestamp"] = LeapPython.Image_timestamp_get
if _newclass:
timestamp = _swig_property(LeapPython.Image_timestamp_get)
__swig_getmethods__["is_valid"] = LeapPython.Image_is_valid_get
if _newclass:
is_valid = _swig_property(LeapPython.Image_is_valid_get)
def data(self):
ptr = byte_array(self.width * self.height * self.bytes_per_pixel)
LeapPython.Image_data(self, ptr)
return ptr
def distortion(self):
ptr = float_array(self.distortion_width * self.distortion_height)
LeapPython.Image_distortion(self, ptr)
return ptr
__swig_getmethods__["data"] = data
    if _newclass:
        data = _swig_property(data)
__swig_getmethods__["distortion"] = distortion
    if _newclass:
        distortion = _swig_property(distortion)
__swig_getmethods__["data_pointer"] = LeapPython.Image_data_pointer_get
if _newclass:
data_pointer = _swig_property(LeapPython.Image_data_pointer_get)
__swig_getmethods__["distortion_pointer"] = LeapPython.Image_distortion_pointer_get
if _newclass:
distortion_pointer = _swig_property(LeapPython.Image_distortion_pointer_get)
__swig_destroy__ = LeapPython.delete_Image
__del__ = lambda self: None
Image_swigregister = LeapPython.Image_swigregister
Image_swigregister(Image)
Image.invalid = LeapPython.cvar.Image_invalid
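# Illustrative use of the hand-written data/distortion properties above, given
# an Image instance `img` (for example taken from a frame's `images` list, with
# the images policy enabled):
#   raw = img.data            # width * height * bytes_per_pixel byte buffer
#   calib = img.distortion    # distortion_width * distortion_height floats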
class Mask(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Mask, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Mask, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_Mask()
try:
self.this.append(this)
except:
self.this = this
def data(self, dst):
return LeapPython.Mask_data(self, dst)
__swig_getmethods__["invalid"] = lambda x: LeapPython.Mask_invalid
if _newclass:
invalid = staticmethod(LeapPython.Mask_invalid)
def __eq__(self, arg2):
return LeapPython.Mask___eq__(self, arg2)
def __ne__(self, arg2):
return LeapPython.Mask___ne__(self, arg2)
def __str__(self):
return LeapPython.Mask___str__(self)
__swig_getmethods__["sequence_id"] = LeapPython.Mask_sequence_id_get
if _newclass:
sequence_id = _swig_property(LeapPython.Mask_sequence_id_get)
__swig_getmethods__["id"] = LeapPython.Mask_id_get
if _newclass:
id = _swig_property(LeapPython.Mask_id_get)
__swig_getmethods__["width"] = LeapPython.Mask_width_get
if _newclass:
width = _swig_property(LeapPython.Mask_width_get)
__swig_getmethods__["height"] = LeapPython.Mask_height_get
if _newclass:
height = _swig_property(LeapPython.Mask_height_get)
__swig_getmethods__["offset_x"] = LeapPython.Mask_offset_x_get
if _newclass:
offset_x = _swig_property(LeapPython.Mask_offset_x_get)
__swig_getmethods__["offset_y"] = LeapPython.Mask_offset_y_get
if _newclass:
offset_y = _swig_property(LeapPython.Mask_offset_y_get)
__swig_getmethods__["is_valid"] = LeapPython.Mask_is_valid_get
if _newclass:
is_valid = _swig_property(LeapPython.Mask_is_valid_get)
def data(self):
ptr = byte_array(self.width * self.height)
LeapPython.Mask_data(self, ptr)
return ptr
__swig_getmethods__["data"] = data
    if _newclass:
        data = _swig_property(data)
__swig_getmethods__["data_pointer"] = LeapPython.Mask_data_pointer_get
if _newclass:
data_pointer = _swig_property(LeapPython.Mask_data_pointer_get)
__swig_destroy__ = LeapPython.delete_Mask
__del__ = lambda self: None
Mask_swigregister = LeapPython.Mask_swigregister
Mask_swigregister(Mask)
def Mask_invalid():
return LeapPython.Mask_invalid()
Mask_invalid = LeapPython.Mask_invalid
class PointableList(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, PointableList, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, PointableList, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_PointableList()
try:
self.this.append(this)
except:
self.this = this
def __len__(self):
return LeapPython.PointableList___len__(self)
def __getitem__(self, index):
return LeapPython.PointableList___getitem__(self, index)
def append(self, *args):
return LeapPython.PointableList_append(self, *args)
def extended(self):
return LeapPython.PointableList_extended(self)
__swig_getmethods__["is_empty"] = LeapPython.PointableList_is_empty_get
if _newclass:
is_empty = _swig_property(LeapPython.PointableList_is_empty_get)
__swig_getmethods__["leftmost"] = LeapPython.PointableList_leftmost_get
if _newclass:
leftmost = _swig_property(LeapPython.PointableList_leftmost_get)
__swig_getmethods__["rightmost"] = LeapPython.PointableList_rightmost_get
if _newclass:
rightmost = _swig_property(LeapPython.PointableList_rightmost_get)
__swig_getmethods__["frontmost"] = LeapPython.PointableList_frontmost_get
if _newclass:
frontmost = _swig_property(LeapPython.PointableList_frontmost_get)
def __iter__(self):
_pos = 0
while _pos < len(self):
yield self[_pos]
_pos += 1
__swig_destroy__ = LeapPython.delete_PointableList
__del__ = lambda self: None
PointableList_swigregister = LeapPython.PointableList_swigregister
PointableList_swigregister(PointableList)
class FingerList(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, FingerList, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, FingerList, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_FingerList()
try:
self.this.append(this)
except:
self.this = this
def __len__(self):
return LeapPython.FingerList___len__(self)
def __getitem__(self, index):
return LeapPython.FingerList___getitem__(self, index)
def append(self, other):
return LeapPython.FingerList_append(self, other)
def extended(self):
return LeapPython.FingerList_extended(self)
def finger_type(self, type):
return LeapPython.FingerList_finger_type(self, type)
__swig_getmethods__["is_empty"] = LeapPython.FingerList_is_empty_get
if _newclass:
is_empty = _swig_property(LeapPython.FingerList_is_empty_get)
__swig_getmethods__["leftmost"] = LeapPython.FingerList_leftmost_get
if _newclass:
leftmost = _swig_property(LeapPython.FingerList_leftmost_get)
__swig_getmethods__["rightmost"] = LeapPython.FingerList_rightmost_get
if _newclass:
rightmost = _swig_property(LeapPython.FingerList_rightmost_get)
__swig_getmethods__["frontmost"] = LeapPython.FingerList_frontmost_get
if _newclass:
frontmost = _swig_property(LeapPython.FingerList_frontmost_get)
def __iter__(self):
_pos = 0
while _pos < len(self):
yield self[_pos]
_pos += 1
__swig_destroy__ = LeapPython.delete_FingerList
__del__ = lambda self: None
FingerList_swigregister = LeapPython.FingerList_swigregister
FingerList_swigregister(FingerList)
class ToolList(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, ToolList, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, ToolList, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_ToolList()
try:
self.this.append(this)
except:
self.this = this
def __len__(self):
return LeapPython.ToolList___len__(self)
def __getitem__(self, index):
return LeapPython.ToolList___getitem__(self, index)
def append(self, other):
return LeapPython.ToolList_append(self, other)
__swig_getmethods__["is_empty"] = LeapPython.ToolList_is_empty_get
if _newclass:
is_empty = _swig_property(LeapPython.ToolList_is_empty_get)
__swig_getmethods__["leftmost"] = LeapPython.ToolList_leftmost_get
if _newclass:
leftmost = _swig_property(LeapPython.ToolList_leftmost_get)
__swig_getmethods__["rightmost"] = LeapPython.ToolList_rightmost_get
if _newclass:
rightmost = _swig_property(LeapPython.ToolList_rightmost_get)
__swig_getmethods__["frontmost"] = LeapPython.ToolList_frontmost_get
if _newclass:
frontmost = _swig_property(LeapPython.ToolList_frontmost_get)
def __iter__(self):
_pos = 0
while _pos < len(self):
yield self[_pos]
_pos += 1
__swig_destroy__ = LeapPython.delete_ToolList
__del__ = lambda self: None
ToolList_swigregister = LeapPython.ToolList_swigregister
ToolList_swigregister(ToolList)
class HandList(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, HandList, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, HandList, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_HandList()
try:
self.this.append(this)
except:
self.this = this
def __len__(self):
return LeapPython.HandList___len__(self)
def __getitem__(self, index):
return LeapPython.HandList___getitem__(self, index)
def append(self, other):
return LeapPython.HandList_append(self, other)
__swig_getmethods__["is_empty"] = LeapPython.HandList_is_empty_get
if _newclass:
is_empty = _swig_property(LeapPython.HandList_is_empty_get)
__swig_getmethods__["leftmost"] = LeapPython.HandList_leftmost_get
if _newclass:
leftmost = _swig_property(LeapPython.HandList_leftmost_get)
__swig_getmethods__["rightmost"] = LeapPython.HandList_rightmost_get
if _newclass:
rightmost = _swig_property(LeapPython.HandList_rightmost_get)
__swig_getmethods__["frontmost"] = LeapPython.HandList_frontmost_get
if _newclass:
frontmost = _swig_property(LeapPython.HandList_frontmost_get)
def __iter__(self):
_pos = 0
while _pos < len(self):
yield self[_pos]
_pos += 1
__swig_destroy__ = LeapPython.delete_HandList
__del__ = lambda self: None
HandList_swigregister = LeapPython.HandList_swigregister
HandList_swigregister(HandList)
class GestureList(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, GestureList, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, GestureList, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_GestureList()
try:
self.this.append(this)
except:
self.this = this
def __len__(self):
return LeapPython.GestureList___len__(self)
def __getitem__(self, index):
return LeapPython.GestureList___getitem__(self, index)
def append(self, other):
return LeapPython.GestureList_append(self, other)
__swig_getmethods__["is_empty"] = LeapPython.GestureList_is_empty_get
if _newclass:
is_empty = _swig_property(LeapPython.GestureList_is_empty_get)
def __iter__(self):
_pos = 0
while _pos < len(self):
yield self[_pos]
_pos += 1
__swig_destroy__ = LeapPython.delete_GestureList
__del__ = lambda self: None
GestureList_swigregister = LeapPython.GestureList_swigregister
GestureList_swigregister(GestureList)
class ScreenList(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, ScreenList, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, ScreenList, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_ScreenList()
try:
self.this.append(this)
except:
self.this = this
def __len__(self):
return LeapPython.ScreenList___len__(self)
def __getitem__(self, index):
return LeapPython.ScreenList___getitem__(self, index)
def closest_screen_hit(self, *args):
return LeapPython.ScreenList_closest_screen_hit(self, *args)
def closest_screen(self, position):
return LeapPython.ScreenList_closest_screen(self, position)
__swig_getmethods__["is_empty"] = LeapPython.ScreenList_is_empty_get
if _newclass:
is_empty = _swig_property(LeapPython.ScreenList_is_empty_get)
def __iter__(self):
_pos = 0
while _pos < len(self):
yield self[_pos]
_pos += 1
__swig_destroy__ = LeapPython.delete_ScreenList
__del__ = lambda self: None
ScreenList_swigregister = LeapPython.ScreenList_swigregister
ScreenList_swigregister(ScreenList)
class DeviceList(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, DeviceList, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, DeviceList, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_DeviceList()
try:
self.this.append(this)
except:
self.this = this
def __len__(self):
return LeapPython.DeviceList___len__(self)
def __getitem__(self, index):
return LeapPython.DeviceList___getitem__(self, index)
def append(self, other):
return LeapPython.DeviceList_append(self, other)
__swig_getmethods__["is_empty"] = LeapPython.DeviceList_is_empty_get
if _newclass:
is_empty = _swig_property(LeapPython.DeviceList_is_empty_get)
def __iter__(self):
_pos = 0
while _pos < len(self):
yield self[_pos]
_pos += 1
__swig_destroy__ = LeapPython.delete_DeviceList
__del__ = lambda self: None
DeviceList_swigregister = LeapPython.DeviceList_swigregister
DeviceList_swigregister(DeviceList)
class ImageList(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, ImageList, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, ImageList, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_ImageList()
try:
self.this.append(this)
except:
self.this = this
def __len__(self):
return LeapPython.ImageList___len__(self)
def __getitem__(self, index):
return LeapPython.ImageList___getitem__(self, index)
def append(self, other):
return LeapPython.ImageList_append(self, other)
__swig_getmethods__["is_empty"] = LeapPython.ImageList_is_empty_get
if _newclass:
is_empty = _swig_property(LeapPython.ImageList_is_empty_get)
def __iter__(self):
_pos = 0
while _pos < len(self):
yield self[_pos]
_pos += 1
__swig_destroy__ = LeapPython.delete_ImageList
__del__ = lambda self: None
ImageList_swigregister = LeapPython.ImageList_swigregister
ImageList_swigregister(ImageList)
class TrackedQuad(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, TrackedQuad, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, TrackedQuad, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_TrackedQuad()
try:
self.this.append(this)
except:
self.this = this
def __eq__(self, arg2):
return LeapPython.TrackedQuad___eq__(self, arg2)
def __ne__(self, arg2):
return LeapPython.TrackedQuad___ne__(self, arg2)
def __str__(self):
return LeapPython.TrackedQuad___str__(self)
__swig_getmethods__["width"] = LeapPython.TrackedQuad_width_get
if _newclass:
width = _swig_property(LeapPython.TrackedQuad_width_get)
__swig_getmethods__["height"] = LeapPython.TrackedQuad_height_get
if _newclass:
height = _swig_property(LeapPython.TrackedQuad_height_get)
__swig_getmethods__["resolution_x"] = LeapPython.TrackedQuad_resolution_x_get
if _newclass:
resolution_x = _swig_property(LeapPython.TrackedQuad_resolution_x_get)
__swig_getmethods__["resolution_y"] = LeapPython.TrackedQuad_resolution_y_get
if _newclass:
resolution_y = _swig_property(LeapPython.TrackedQuad_resolution_y_get)
__swig_getmethods__["visible"] = LeapPython.TrackedQuad_visible_get
if _newclass:
visible = _swig_property(LeapPython.TrackedQuad_visible_get)
__swig_getmethods__["orientation"] = LeapPython.TrackedQuad_orientation_get
if _newclass:
orientation = _swig_property(LeapPython.TrackedQuad_orientation_get)
__swig_getmethods__["position"] = LeapPython.TrackedQuad_position_get
if _newclass:
position = _swig_property(LeapPython.TrackedQuad_position_get)
__swig_getmethods__["masks"] = LeapPython.TrackedQuad_masks_get
if _newclass:
masks = _swig_property(LeapPython.TrackedQuad_masks_get)
__swig_getmethods__["images"] = LeapPython.TrackedQuad_images_get
if _newclass:
images = _swig_property(LeapPython.TrackedQuad_images_get)
__swig_getmethods__["is_valid"] = LeapPython.TrackedQuad_is_valid_get
if _newclass:
is_valid = _swig_property(LeapPython.TrackedQuad_is_valid_get)
__swig_destroy__ = LeapPython.delete_TrackedQuad
__del__ = lambda self: None
TrackedQuad_swigregister = LeapPython.TrackedQuad_swigregister
TrackedQuad_swigregister(TrackedQuad)
TrackedQuad.invalid = LeapPython.cvar.TrackedQuad_invalid
class MaskList(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, MaskList, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, MaskList, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_MaskList()
try:
self.this.append(this)
except:
self.this = this
def __len__(self):
return LeapPython.MaskList___len__(self)
def __getitem__(self, index):
return LeapPython.MaskList___getitem__(self, index)
def append(self, other):
return LeapPython.MaskList_append(self, other)
__swig_getmethods__["is_empty"] = LeapPython.MaskList_is_empty_get
if _newclass:
is_empty = _swig_property(LeapPython.MaskList_is_empty_get)
def __iter__(self):
_pos = 0
while _pos < len(self):
yield self[_pos]
_pos += 1
__swig_destroy__ = LeapPython.delete_MaskList
__del__ = lambda self: None
MaskList_swigregister = LeapPython.MaskList_swigregister
MaskList_swigregister(MaskList)
class InteractionBox(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, InteractionBox, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, InteractionBox, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_InteractionBox()
try:
self.this.append(this)
except:
self.this = this
def normalize_point(self, position, clamp=True):
return LeapPython.InteractionBox_normalize_point(self, position, clamp)
def denormalize_point(self, normalizedPosition):
return LeapPython.InteractionBox_denormalize_point(self, normalizedPosition)
def __eq__(self, arg2):
return LeapPython.InteractionBox___eq__(self, arg2)
def __ne__(self, arg2):
return LeapPython.InteractionBox___ne__(self, arg2)
def __str__(self):
return LeapPython.InteractionBox___str__(self)
__swig_getmethods__["center"] = LeapPython.InteractionBox_center_get
if _newclass:
center = _swig_property(LeapPython.InteractionBox_center_get)
__swig_getmethods__["width"] = LeapPython.InteractionBox_width_get
if _newclass:
width = _swig_property(LeapPython.InteractionBox_width_get)
__swig_getmethods__["height"] = LeapPython.InteractionBox_height_get
if _newclass:
height = _swig_property(LeapPython.InteractionBox_height_get)
__swig_getmethods__["depth"] = LeapPython.InteractionBox_depth_get
if _newclass:
depth = _swig_property(LeapPython.InteractionBox_depth_get)
__swig_getmethods__["is_valid"] = LeapPython.InteractionBox_is_valid_get
if _newclass:
is_valid = _swig_property(LeapPython.InteractionBox_is_valid_get)
__swig_destroy__ = LeapPython.delete_InteractionBox
__del__ = lambda self: None
InteractionBox_swigregister = LeapPython.InteractionBox_swigregister
InteractionBox_swigregister(InteractionBox)
InteractionBox.invalid = LeapPython.cvar.InteractionBox_invalid
class Frame(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Frame, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Frame, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_Frame()
try:
self.this.append(this)
except:
self.this = this
def hand(self, id):
return LeapPython.Frame_hand(self, id)
def pointable(self, id):
return LeapPython.Frame_pointable(self, id)
def finger(self, id):
return LeapPython.Frame_finger(self, id)
def tool(self, id):
return LeapPython.Frame_tool(self, id)
def gesture(self, id):
return LeapPython.Frame_gesture(self, id)
def gestures(self, *args):
return LeapPython.Frame_gestures(self, *args)
def translation(self, sinceFrame):
return LeapPython.Frame_translation(self, sinceFrame)
def translation_probability(self, sinceFrame):
return LeapPython.Frame_translation_probability(self, sinceFrame)
def rotation_axis(self, sinceFrame):
return LeapPython.Frame_rotation_axis(self, sinceFrame)
def rotation_angle(self, *args):
return LeapPython.Frame_rotation_angle(self, *args)
def rotation_matrix(self, sinceFrame):
return LeapPython.Frame_rotation_matrix(self, sinceFrame)
def rotation_probability(self, sinceFrame):
return LeapPython.Frame_rotation_probability(self, sinceFrame)
def scale_factor(self, sinceFrame):
return LeapPython.Frame_scale_factor(self, sinceFrame)
def scale_probability(self, sinceFrame):
return LeapPython.Frame_scale_probability(self, sinceFrame)
def __eq__(self, arg2):
return LeapPython.Frame___eq__(self, arg2)
def __ne__(self, arg2):
return LeapPython.Frame___ne__(self, arg2)
def serialize(self, ptr):
return LeapPython.Frame_serialize(self, ptr)
def deserialize(self, ptr, length):
return LeapPython.Frame_deserialize(self, ptr, length)
def __str__(self):
return LeapPython.Frame___str__(self)
__swig_getmethods__["id"] = LeapPython.Frame_id_get
if _newclass:
id = _swig_property(LeapPython.Frame_id_get)
__swig_getmethods__["timestamp"] = LeapPython.Frame_timestamp_get
if _newclass:
timestamp = _swig_property(LeapPython.Frame_timestamp_get)
__swig_getmethods__["current_frames_per_second"] = LeapPython.Frame_current_frames_per_second_get
if _newclass:
current_frames_per_second = _swig_property(LeapPython.Frame_current_frames_per_second_get)
__swig_getmethods__["pointables"] = LeapPython.Frame_pointables_get
if _newclass:
pointables = _swig_property(LeapPython.Frame_pointables_get)
__swig_getmethods__["fingers"] = LeapPython.Frame_fingers_get
if _newclass:
fingers = _swig_property(LeapPython.Frame_fingers_get)
__swig_getmethods__["tools"] = LeapPython.Frame_tools_get
if _newclass:
tools = _swig_property(LeapPython.Frame_tools_get)
__swig_getmethods__["hands"] = LeapPython.Frame_hands_get
if _newclass:
hands = _swig_property(LeapPython.Frame_hands_get)
__swig_getmethods__["images"] = LeapPython.Frame_images_get
if _newclass:
images = _swig_property(LeapPython.Frame_images_get)
__swig_getmethods__["is_valid"] = LeapPython.Frame_is_valid_get
if _newclass:
is_valid = _swig_property(LeapPython.Frame_is_valid_get)
__swig_getmethods__["interaction_box"] = LeapPython.Frame_interaction_box_get
if _newclass:
interaction_box = _swig_property(LeapPython.Frame_interaction_box_get)
__swig_getmethods__["serialize_length"] = LeapPython.Frame_serialize_length_get
if _newclass:
serialize_length = _swig_property(LeapPython.Frame_serialize_length_get)
__swig_getmethods__["tracked_quad"] = LeapPython.Frame_tracked_quad_get
if _newclass:
tracked_quad = _swig_property(LeapPython.Frame_tracked_quad_get)
def serialize(self):
length = self.serialize_length
str = byte_array(length)
LeapPython.Frame_serialize(self, str)
return (str, length)
def deserialize(self, tup):
LeapPython.Frame_deserialize(self, tup[0], tup[1])
__swig_getmethods__["serialize"] = serialize
    if _newclass:
        serialize = _swig_property(serialize)
__swig_destroy__ = LeapPython.delete_Frame
__del__ = lambda self: None
Frame_swigregister = LeapPython.Frame_swigregister
Frame_swigregister(Frame)
Frame.invalid = LeapPython.cvar.Frame_invalid
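# Illustrative round-trip for the hand-written serialize/deserialize overrides
# above (`frame` is assumed to be an existing, valid Frame):
#   blob = frame.serialize        # (byte buffer, length) tuple
#   restored = Frame()
#   restored.deserialize(blob)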
class BugReport(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, BugReport, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, BugReport, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_BugReport()
try:
self.this.append(this)
except:
self.this = this
def begin_recording(self):
return LeapPython.BugReport_begin_recording(self)
def end_recording(self):
return LeapPython.BugReport_end_recording(self)
__swig_getmethods__["is_active"] = LeapPython.BugReport_is_active_get
if _newclass:
is_active = _swig_property(LeapPython.BugReport_is_active_get)
__swig_getmethods__["progress"] = LeapPython.BugReport_progress_get
if _newclass:
progress = _swig_property(LeapPython.BugReport_progress_get)
__swig_getmethods__["duration"] = LeapPython.BugReport_duration_get
if _newclass:
duration = _swig_property(LeapPython.BugReport_duration_get)
__swig_destroy__ = LeapPython.delete_BugReport
__del__ = lambda self: None
BugReport_swigregister = LeapPython.BugReport_swigregister
BugReport_swigregister(BugReport)
class Config(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Config, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Config, name)
__repr__ = _swig_repr
def __init__(self):
this = LeapPython.new_Config()
try:
self.this.append(this)
except:
self.this = this
TYPE_UNKNOWN = LeapPython.Config_TYPE_UNKNOWN
TYPE_BOOLEAN = LeapPython.Config_TYPE_BOOLEAN
TYPE_INT32 = LeapPython.Config_TYPE_INT32
TYPE_FLOAT = LeapPython.Config_TYPE_FLOAT
TYPE_STRING = LeapPython.Config_TYPE_STRING
def save(self):
return LeapPython.Config_save(self)
def get(self, *args):
type = LeapPython.Config_type(self, *args)
if type == LeapPython.Config_TYPE_BOOLEAN:
return LeapPython.Config_get_bool(self, *args)
elif type == LeapPython.Config_TYPE_INT32:
return LeapPython.Config_get_int_32(self, *args)
elif type == LeapPython.Config_TYPE_FLOAT:
return LeapPython.Config_get_float(self, *args)
elif type == LeapPython.Config_TYPE_STRING:
return LeapPython.Config_get_string(self, *args)
return None
def set(self, *args):
type = LeapPython.Config_type(self, *args[:-1]) # Do not pass value through
if type == LeapPython.Config_TYPE_BOOLEAN:
return LeapPython.Config_set_bool(self, *args)
elif type == LeapPython.Config_TYPE_INT32:
return LeapPython.Config_set_int_32(self, *args)
elif type == LeapPython.Config_TYPE_FLOAT:
return LeapPython.Config_set_float(self, *args)
elif type == LeapPython.Config_TYPE_STRING:
return LeapPython.Config_set_string(self, *args)
return False
__swig_destroy__ = LeapPython.delete_Config
__del__ = lambda self: None
Config_swigregister = LeapPython.Config_swigregister
Config_swigregister(Config)
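# Illustrative use of the typed get/set dispatch defined on Config above; the
# key name is only an example and must exist in the connected service's
# configuration for Config_type() to resolve it:
#   config = controller.config                    # Config from a Controller
#   radius = config.get("Gesture.Circle.MinRadius")
#   if config.set("Gesture.Circle.MinRadius", 10.0):
#       config.save()                             # persist the change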
class Controller(Interface):
__swig_setmethods__ = {}
for _s in [Interface]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Controller, name, value)
__swig_getmethods__ = {}
for _s in [Interface]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Controller, name)
__repr__ = _swig_repr
__swig_destroy__ = LeapPython.delete_Controller
__del__ = lambda self: None
def __init__(self, *args):
this = LeapPython.new_Controller(*args)
try:
self.this.append(this)
except:
self.this = this
def is_service_connected(self):
return LeapPython.Controller_is_service_connected(self)
POLICY_DEFAULT = LeapPython.Controller_POLICY_DEFAULT
POLICY_BACKGROUND_FRAMES = LeapPython.Controller_POLICY_BACKGROUND_FRAMES
POLICY_IMAGES = LeapPython.Controller_POLICY_IMAGES
POLICY_OPTIMIZE_HMD = LeapPython.Controller_POLICY_OPTIMIZE_HMD
def set_policy_flags(self, flags):
return LeapPython.Controller_set_policy_flags(self, flags)
def set_policy(self, policy):
return LeapPython.Controller_set_policy(self, policy)
def clear_policy(self, policy):
return LeapPython.Controller_clear_policy(self, policy)
def is_policy_set(self, policy):
return LeapPython.Controller_is_policy_set(self, policy)
def add_listener(self, listener):
return LeapPython.Controller_add_listener(self, listener)
def remove_listener(self, listener):
return LeapPython.Controller_remove_listener(self, listener)
def frame(self, history=0):
return LeapPython.Controller_frame(self, history)
def enable_gesture(self, type, enable=True):
return LeapPython.Controller_enable_gesture(self, type, enable)
def is_gesture_enabled(self, type):
return LeapPython.Controller_is_gesture_enabled(self, type)
def now(self):
return LeapPython.Controller_now(self)
__swig_getmethods__["is_connected"] = LeapPython.Controller_is_connected_get
if _newclass:
is_connected = _swig_property(LeapPython.Controller_is_connected_get)
__swig_getmethods__["has_focus"] = LeapPython.Controller_has_focus_get
if _newclass:
has_focus = _swig_property(LeapPython.Controller_has_focus_get)
__swig_getmethods__["policy_flags"] = LeapPython.Controller_policy_flags_get
if _newclass:
policy_flags = _swig_property(LeapPython.Controller_policy_flags_get)
__swig_getmethods__["config"] = LeapPython.Controller_config_get
if _newclass:
config = _swig_property(LeapPython.Controller_config_get)
__swig_getmethods__["images"] = LeapPython.Controller_images_get
if _newclass:
images = _swig_property(LeapPython.Controller_images_get)
__swig_getmethods__["located_screens"] = LeapPython.Controller_located_screens_get
if _newclass:
located_screens = _swig_property(LeapPython.Controller_located_screens_get)
__swig_getmethods__["devices"] = LeapPython.Controller_devices_get
if _newclass:
devices = _swig_property(LeapPython.Controller_devices_get)
__swig_getmethods__["tracked_quad"] = LeapPython.Controller_tracked_quad_get
if _newclass:
tracked_quad = _swig_property(LeapPython.Controller_tracked_quad_get)
__swig_getmethods__["bug_report"] = LeapPython.Controller_bug_report_get
if _newclass:
bug_report = _swig_property(LeapPython.Controller_bug_report_get)
Controller_swigregister = LeapPython.Controller_swigregister
Controller_swigregister(Controller)
class Listener(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, Listener, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, Listener, name)
__repr__ = _swig_repr
def __init__(self):
if self.__class__ == Listener:
_self = None
else:
_self = self
this = LeapPython.new_Listener(_self, )
try:
self.this.append(this)
except:
self.this = this
__swig_destroy__ = LeapPython.delete_Listener
__del__ = lambda self: None
def on_init(self, arg0):
return LeapPython.Listener_on_init(self, arg0)
def on_connect(self, arg0):
return LeapPython.Listener_on_connect(self, arg0)
def on_disconnect(self, arg0):
return LeapPython.Listener_on_disconnect(self, arg0)
def on_exit(self, arg0):
return LeapPython.Listener_on_exit(self, arg0)
def on_frame(self, arg0):
return LeapPython.Listener_on_frame(self, arg0)
def on_focus_gained(self, arg0):
return LeapPython.Listener_on_focus_gained(self, arg0)
def on_focus_lost(self, arg0):
return LeapPython.Listener_on_focus_lost(self, arg0)
def on_service_connect(self, arg0):
return LeapPython.Listener_on_service_connect(self, arg0)
def on_service_disconnect(self, arg0):
return LeapPython.Listener_on_service_disconnect(self, arg0)
def on_device_change(self, arg0):
return LeapPython.Listener_on_device_change(self, arg0)
def on_images(self, arg0):
return LeapPython.Listener_on_images(self, arg0)
def __disown__(self):
self.this.disown()
LeapPython.disown_Listener(self)
return weakref_proxy(self)
Listener_swigregister = LeapPython.Listener_swigregister
Listener_swigregister(Listener)
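# Minimal usage sketch for the wrapped API (illustrative only; assumes a running
# Leap service and that a reference to the listener is kept alive):
#   class FrameListener(Listener):
#       def on_frame(self, controller):
#           frame = controller.frame()
#           # frame.hands, frame.fingers, frame.gestures() etc. are usable here
#   listener = FrameListener()
#   controller = Controller()
#   controller.add_listener(listener)
#   ...  # keep the process alive while callbacks arrive
#   controller.remove_listener(listener)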
# This file is compatible with both classic and new-style classes.
| mit |
agendaTCC/AgendaTCC | tccweb/apps/departamentos/signals.py | 1 | 1329 | # from django.db.models.signals import post_save, m2m_changed
# from django.contrib.auth.models import Group
# from django.db.models import Q
# from disciplinas.models import Disciplina
# from models import Monitor
# def transformar_monitor(sender,**kwargs):
# monitor_do_grupo = kwargs['instance']
# monitor = monitor_do_grupo.monitor.get_profile()
# disciplinas_ = monitor_do_grupo.grupo.disciplinas.all()
# busca = Q()
# for disciplina in disciplinas_:
# busca.add(Q(titulo = disciplina.id) & Q(semestre = monitor_do_grupo.semestre) & Q(ano = monitor_do_grupo.ano), busca.OR)
# disciplinas = Disciplina.objects.filter(busca)
# if int(monitor.funcao) == 4:
# default_group = Group.objects.get(id = 6)
# print default_group
# elif int(monitor.funcao) == 2 or int(monitor.funcao) == 3:
# default_group = Group.objects.get(id = 7)
# else:
# default_group = Group.objects.get(id = int(monitor.funcao))
# default_group.user_set.add(monitor.id)
# default_group.save()
# for disciplina in disciplinas:
# if monitor not in disciplina.monitores.all():
# disciplina.monitores.add(monitor)
# post_save.connect(transformar_monitor, sender=Monitor, dispatch_uid="bancas.models.Monitor")
# | gpl-2.0 |
sdesbure/rfxcom_collectd | node_modules/rfxcom/node_modules/serialport/node_modules/node-gyp/legacy/tools/gyp/pylib/gyp/generator/gypsh.py | 2779 | 1665 | # Copyright (c) 2011 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""gypsh output module
gypsh is a GYP shell. It's not really a generator per se. All it does is
fire up an interactive Python session with a few local variables set to the
variables passed to the generator. Like gypd, it's intended as a debugging
aid, to facilitate the exploration of .gyp structures after being processed
by the input module.
The expected usage is "gyp -f gypsh -D OS=desired_os".
"""
import code
import sys
# All of this stuff about generator variables was lovingly ripped from gypd.py.
# That module has a much better description of what's going on and why.
_generator_identity_variables = [
'EXECUTABLE_PREFIX',
'EXECUTABLE_SUFFIX',
'INTERMEDIATE_DIR',
'PRODUCT_DIR',
'RULE_INPUT_ROOT',
'RULE_INPUT_DIRNAME',
'RULE_INPUT_EXT',
'RULE_INPUT_NAME',
'RULE_INPUT_PATH',
'SHARED_INTERMEDIATE_DIR',
]
generator_default_variables = {
}
for v in _generator_identity_variables:
generator_default_variables[v] = '<(%s)' % v
def GenerateOutput(target_list, target_dicts, data, params):
locals = {
'target_list': target_list,
'target_dicts': target_dicts,
'data': data,
}
# Use a banner that looks like the stock Python one and like what
# code.interact uses by default, but tack on something to indicate what
# locals are available, and identify gypsh.
banner='Python %s on %s\nlocals.keys() = %s\ngypsh' % \
(sys.version, sys.platform, repr(sorted(locals.keys())))
code.interact(banner, local=locals)
| mit |
mkmelin/bedrock | tests/functional/firefox/test_family_navigation.py | 3 | 1057 | # This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import pytest
from pages.firefox.family_navigation import FirefoxPage
@pytest.mark.nondestructive
@pytest.mark.parametrize(('slug', 'locale'), [
('dnt', None),
('interest-dashboard', None)])
def test_family_navigation_active_nav(slug, locale, base_url, selenium):
locale = locale or 'en-US'
page = FirefoxPage(selenium, base_url, locale, slug=slug).open()
assert page.family_navigation.active_primary_nav_id == 'desktop'
@pytest.mark.nondestructive
@pytest.mark.parametrize(('slug', 'locale'), [
('dnt', None),
('interest-dashboard', None)])
def test_family_navigation_adjunct_menu(slug, locale, base_url, selenium):
locale = locale or 'en-US'
page = FirefoxPage(selenium, base_url, locale, slug=slug).open()
page.family_navigation.open_adjunct_menu()
assert page.family_navigation.is_adjunct_menu_displayed
| mpl-2.0 |
olgabrani/synnefo | snf-astakos-app/astakos/im/migrations/0077_base_projects.py | 10 | 30281 | # encoding: utf-8
import datetime
from dateutil.relativedelta import relativedelta
from south.db import db
from south.v2 import DataMigration
from django.db import models
CLOSED_POLICY = 3
ACTIVATED = 1
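# These values mirror the project policy/state constants used by the models:
# 3 denotes the closed member join/leave policy and 1 the activated project
# state; both are applied to every base project created below.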
class Migration(DataMigration):
def new_chain(self, orm):
return orm.Chain.objects.create()
def base_resources(self, orm, user):
resources = orm.Resource.objects.all()
grants = {}
for resource in resources:
grants[resource] = resource.uplimit
objs = orm.AstakosUserQuota.objects.select_related()
custom_quota = objs.filter(user=user)
for cq in custom_quota:
grants[cq.resource] = cq.capacity
tuples = []
for resource, capacity in grants.iteritems():
tuples.append((resource, capacity, capacity))
return tuples
def set_resources(self, project, grants):
for resource, m_capacity, p_capacity in grants:
g = project.projectresourcequota_set
g.create(resource=resource,
member_capacity=m_capacity,
project_capacity=p_capacity)
def make_base_project(self, orm, user):
chain = self.new_chain(orm)
orm.Project.objects.create(
id=chain.chain,
uuid=user.uuid,
last_application=None,
owner=None,
realname=("system:" + user.uuid),
homepage="",
description=("system project for user " + user.username),
end_date=(datetime.datetime.now() + relativedelta(years=100)),
member_join_policy=CLOSED_POLICY,
member_leave_policy=CLOSED_POLICY,
limit_on_members_number=1,
private=True,
is_base=True)
user.base_project_id = chain.chain
user.save()
def new_membership(self, orm, project, user):
m = orm.ProjectMembership.objects.create(
project=project, person=user, state=1)
now = datetime.datetime.now()
m.log.create(from_state=None, to_state=1, date=now)
def enable_base_project(self, orm, user):
project = user.base_project
project.name = project.realname
project.state = ACTIVATED
project.save()
base_grants = self.base_resources(orm, user)
self.set_resources(project, base_grants)
self.new_membership(orm, project, user)
def forwards(self, orm):
acc_users = orm.AstakosUser.objects.filter(moderated=True,
is_rejected=False)
for user in acc_users:
self.make_base_project(orm, user)
self.enable_base_project(orm, user)
def backwards(self, orm):
"Write your backwards methods here."
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime(2014, 1, 27, 15, 9, 56, 442174)'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime(2014, 1, 27, 15, 9, 56, 442123)'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'im.additionalmail': {
'Meta': {'object_name': 'AdditionalMail'},
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.AstakosUser']"})
},
'im.approvalterms': {
'Meta': {'object_name': 'ApprovalTerms'},
'date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'db_index': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'location': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'im.astakosuser': {
'Meta': {'object_name': 'AstakosUser', '_ormbases': ['auth.User']},
'accepted_email': ('django.db.models.fields.EmailField', [], {'default': 'None', 'max_length': '75', 'null': 'True', 'blank': 'True'}),
'accepted_policy': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'activation_sent': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'affiliation': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'auth_token': ('django.db.models.fields.CharField', [], {'max_length': '64', 'unique': 'True', 'null': 'True', 'blank': 'True'}),
'auth_token_created': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'auth_token_expires': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'base_project': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'base_user'", 'null': 'True', 'to': "orm['im.Project']"}),
'date_signed_terms': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'deactivated_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'deactivated_reason': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True'}),
'disturbed_quota': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'db_index': 'True'}),
'email_verified': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'has_credits': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'has_signed_terms': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'invitations': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'is_rejected': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_verified': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'level': ('django.db.models.fields.IntegerField', [], {'default': '4'}),
'moderated': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'moderated_at': ('django.db.models.fields.DateTimeField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'moderated_data': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['im.Resource']", 'null': 'True', 'through': "orm['im.AstakosUserQuota']", 'symmetrical': 'False'}),
'rejected_reason': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {}),
'user_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['auth.User']", 'unique': 'True', 'primary_key': 'True'}),
'uuid': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'verification_code': ('django.db.models.fields.CharField', [], {'max_length': '255', 'unique': 'True', 'null': 'True'}),
'verified_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'})
},
'im.astakosuserauthprovider': {
'Meta': {'ordering': "('module', 'created')", 'unique_together': "(('identifier', 'module', 'user'),)", 'object_name': 'AstakosUserAuthProvider'},
'active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'affiliation': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'auth_backend': ('django.db.models.fields.CharField', [], {'default': "'astakos'", 'max_length': '255'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'identifier': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'info_data': ('django.db.models.fields.TextField', [], {'default': "''", 'null': 'True', 'blank': 'True'}),
'module': ('django.db.models.fields.CharField', [], {'default': "'local'", 'max_length': '255'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'auth_providers'", 'to': "orm['im.AstakosUser']"})
},
'im.astakosuserquota': {
'Meta': {'unique_together': "(('resource', 'user'),)", 'object_name': 'AstakosUserQuota'},
'capacity': ('django.db.models.fields.BigIntegerField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'resource': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.Resource']"}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.AstakosUser']"})
},
'im.authproviderpolicyprofile': {
'Meta': {'ordering': "['priority']", 'object_name': 'AuthProviderPolicyProfile'},
'active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'authpolicy_profiles'", 'symmetrical': 'False', 'to': "orm['auth.Group']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_exclusive': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'}),
'policy_add': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy_automoderate': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy_create': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy_limit': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'}),
'policy_login': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy_remove': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy_required': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy_switch': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'priority': ('django.db.models.fields.IntegerField', [], {'default': '1'}),
'provider': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'users': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'authpolicy_profiles'", 'symmetrical': 'False', 'to': "orm['im.AstakosUser']"})
},
'im.chain': {
'Meta': {'object_name': 'Chain'},
'chain': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})
},
'im.component': {
'Meta': {'object_name': 'Component'},
'auth_token': ('django.db.models.fields.CharField', [], {'max_length': '64', 'unique': 'True', 'null': 'True', 'blank': 'True'}),
'auth_token_created': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'auth_token_expires': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'base_url': ('django.db.models.fields.CharField', [], {'max_length': '1024', 'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255', 'db_index': 'True'}),
'url': ('django.db.models.fields.CharField', [], {'max_length': '1024', 'null': 'True'})
},
'im.emailchange': {
'Meta': {'object_name': 'EmailChange'},
'activation_key': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '40', 'db_index': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'new_email_address': ('django.db.models.fields.EmailField', [], {'max_length': '75'}),
'requested_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'emailchanges'", 'unique': 'True', 'to': "orm['im.AstakosUser']"})
},
'im.endpoint': {
'Meta': {'object_name': 'Endpoint'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'service': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'endpoints'", 'to': "orm['im.Service']"})
},
'im.endpointdata': {
'Meta': {'unique_together': "(('endpoint', 'key'),)", 'object_name': 'EndpointData'},
'endpoint': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'data'", 'to': "orm['im.Endpoint']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'value': ('django.db.models.fields.CharField', [], {'max_length': '1024'})
},
'im.invitation': {
'Meta': {'object_name': 'Invitation'},
'code': ('django.db.models.fields.BigIntegerField', [], {'db_index': 'True'}),
'consumed': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'inviter': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'invitations_sent'", 'null': 'True', 'to': "orm['im.AstakosUser']"}),
'is_consumed': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'realname': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'})
},
'im.pendingthirdpartyuser': {
'Meta': {'unique_together': "(('provider', 'third_party_identifier'),)", 'object_name': 'PendingThirdPartyUser'},
'affiliation': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'null': 'True', 'blank': 'True'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'null': 'True', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'info': ('django.db.models.fields.TextField', [], {'default': "''", 'null': 'True', 'blank': 'True'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'null': 'True', 'blank': 'True'}),
'provider': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'third_party_identifier': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'token': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'im.project': {
'Meta': {'object_name': 'Project'},
'creation_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'end_date': ('django.db.models.fields.DateTimeField', [], {}),
'homepage': ('django.db.models.fields.URLField', [], {'max_length': '255'}),
'id': ('django.db.models.fields.BigIntegerField', [], {'primary_key': 'True', 'db_column': "'id'"}),
'is_base': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_application': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'last_of_project'", 'null': 'True', 'to': "orm['im.ProjectApplication']"}),
'limit_on_members_number': ('django.db.models.fields.BigIntegerField', [], {}),
'member_join_policy': ('django.db.models.fields.IntegerField', [], {}),
'member_leave_policy': ('django.db.models.fields.IntegerField', [], {}),
'members': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['im.AstakosUser']", 'through': "orm['im.ProjectMembership']", 'symmetrical': 'False'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '80', 'unique': 'True', 'null': 'True', 'db_index': 'True'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'projs_owned'", 'null': 'True', 'to': "orm['im.AstakosUser']"}),
'private': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'realname': ('django.db.models.fields.CharField', [], {'max_length': '80'}),
'resource_grants': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': "orm['im.Resource']", 'null': 'True', 'through': "orm['im.ProjectResourceQuota']", 'blank': 'True'}),
'state': ('django.db.models.fields.IntegerField', [], {'default': '0', 'db_index': 'True'}),
'uuid': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'})
},
'im.projectapplication': {
'Meta': {'unique_together': "(('chain', 'id'),)", 'object_name': 'ProjectApplication'},
'applicant': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'projects_applied'", 'to': "orm['im.AstakosUser']"}),
'chain': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'chained_apps'", 'db_column': "'chain'", 'to': "orm['im.Project']"}),
'comments': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'end_date': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'homepage': ('django.db.models.fields.URLField', [], {'max_length': '255', 'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'issue_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'limit_on_members_number': ('django.db.models.fields.BigIntegerField', [], {'null': 'True'}),
'member_join_policy': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'member_leave_policy': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '80', 'null': 'True'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'projects_owned'", 'null': 'True', 'to': "orm['im.AstakosUser']"}),
'private': ('django.db.models.fields.NullBooleanField', [], {'default': 'False', 'null': 'True', 'blank': 'True'}),
'resource_grants': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': "orm['im.Resource']", 'null': 'True', 'through': "orm['im.ProjectResourceGrant']", 'blank': 'True'}),
'response': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'response_actor': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'responded_apps'", 'null': 'True', 'to': "orm['im.AstakosUser']"}),
'response_date': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'start_date': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'state': ('django.db.models.fields.IntegerField', [], {'default': '0', 'db_index': 'True'}),
'waive_actor': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'waived_apps'", 'null': 'True', 'to': "orm['im.AstakosUser']"}),
'waive_date': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'waive_reason': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'})
},
'im.projectlock': {
'Meta': {'object_name': 'ProjectLock'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})
},
'im.projectlog': {
'Meta': {'object_name': 'ProjectLog'},
'actor': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.AstakosUser']", 'null': 'True'}),
'comments': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'date': ('django.db.models.fields.DateTimeField', [], {}),
'from_state': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'log'", 'to': "orm['im.Project']"}),
'reason': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'to_state': ('django.db.models.fields.IntegerField', [], {})
},
'im.projectmembership': {
'Meta': {'unique_together': "(('person', 'project'),)", 'object_name': 'ProjectMembership'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'person': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.AstakosUser']"}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.Project']"}),
'state': ('django.db.models.fields.IntegerField', [], {'default': '0', 'db_index': 'True'})
},
'im.projectmembershiplog': {
'Meta': {'object_name': 'ProjectMembershipLog'},
'actor': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.AstakosUser']", 'null': 'True'}),
'comments': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'date': ('django.db.models.fields.DateTimeField', [], {}),
'from_state': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'membership': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'log'", 'to': "orm['im.ProjectMembership']"}),
'reason': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'to_state': ('django.db.models.fields.IntegerField', [], {})
},
'im.projectresourcegrant': {
'Meta': {'unique_together': "(('resource', 'project_application'),)", 'object_name': 'ProjectResourceGrant'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'member_capacity': ('django.db.models.fields.BigIntegerField', [], {}),
'project_application': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.ProjectApplication']"}),
'project_capacity': ('django.db.models.fields.BigIntegerField', [], {}),
'resource': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.Resource']"})
},
'im.projectresourcequota': {
'Meta': {'unique_together': "(('resource', 'project'),)", 'object_name': 'ProjectResourceQuota'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'member_capacity': ('django.db.models.fields.BigIntegerField', [], {'default': '0'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.Project']"}),
'project_capacity': ('django.db.models.fields.BigIntegerField', [], {'default': '0'}),
'resource': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.Resource']"})
},
'im.resource': {
'Meta': {'object_name': 'Resource'},
'api_visible': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'desc': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'project_default': ('django.db.models.fields.BigIntegerField', [], {}),
'service_origin': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'}),
'service_type': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'ui_visible': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'unit': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'uplimit': ('django.db.models.fields.BigIntegerField', [], {'default': '0'})
},
'im.service': {
'Meta': {'object_name': 'Service'},
'component': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.Component']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'type': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'im.sessioncatalog': {
'Meta': {'object_name': 'SessionCatalog'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'session_key': ('django.db.models.fields.CharField', [], {'max_length': '40'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'sessions'", 'null': 'True', 'to': "orm['im.AstakosUser']"})
},
'im.usersetting': {
'Meta': {'unique_together': "(('user', 'setting'),)", 'object_name': 'UserSetting'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'setting': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.AstakosUser']"}),
'value': ('django.db.models.fields.IntegerField', [], {})
}
}
complete_apps = ['im']
| gpl-3.0 |
groboclown/whimbrel | modules/core/installer/module_core/schema.py | 1 | 1446 | """
Describes the core database tables.
"""
from whimbrel.install.db import DbTableDef
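# A reading of the definitions below (inferred from the data, not from the
# DbTableDef API itself):
#   pk         - a flat [name, type, name, type] list for the hash and range keys
#   indexes    - extra indexed attributes mapped to their types
#   attributes - remaining attributes mapped to their types
# The one-letter type codes ("S" string, "N" number, "BOOL" boolean) match
# DynamoDB attribute types; "L[N,N,N,N,N,N]" reads like a list of six numbers
# (e.g. a broken-down timestamp).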
CORE_DB_TABLES = {
"workflow_exec": DbTableDef(
version=1,
pk=["workflow_exec_id", "S", "workflow_name", "S"],
indexes={
"state": "S",
"start_time_epoch": "N"
},
attributes={
"workflow_request_id": "S",
"start_time": "L[N,N,N,N,N,N]",
"workflow_version": "N"
},
stream=False
),
"activity_exec": DbTableDef(
version=1,
pk=["activity_exec_id", "S", "workflow_exec_id", "S"],
indexes={
"state": "S",
"start_time_epoch": "N",
"heartbeat_time_epoch": "N"
},
attributes={
"activity_name": "S",
"workflow_name": "S",
"queue_time_epoch": "N",
"end_time_epoch": "N",
"heartbeat_enabled": "BOOL",
"activity_version": "N",
"queue_time": "L[N,N,N,N,N,N]",
"start_time": "L[N,N,N,N,N,N]",
"end_time": "L[N,N,N,N,N,N]"
},
stream=False
),
"activity_exec_dependency": DbTableDef(
version=1,
pk=["activity_exec_dependency_id", "S", "activity_exec_id", "S"],
indexes={
"workflow_exec_id": "S",
"dependent_activity_exec_id": "S"
},
attributes={},
stream=False
)
}
| apache-2.0 |
lxneng/incubator-airflow | airflow/contrib/sensors/sagemaker_tuning_sensor.py | 3 | 2439 | # -*- coding: utf-8 -*-
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from airflow.contrib.hooks.sagemaker_hook import SageMakerHook
from airflow.contrib.sensors.sagemaker_base_sensor import SageMakerBaseSensor
from airflow.utils.decorators import apply_defaults
class SageMakerTuningSensor(SageMakerBaseSensor):
"""
Polls the state of the tuning job until it reaches a terminal state.
If the job fails, the sensor raises an AirflowException containing the
failure reason.
:param job_name: job_name of the tuning instance to check the state of
:type job_name: string
:param region_name: The AWS region_name
:type region_name: string
"""
template_fields = ['job_name']
template_ext = ()
@apply_defaults
def __init__(self,
job_name,
region_name=None,
*args,
**kwargs):
super(SageMakerTuningSensor, self).__init__(*args, **kwargs)
self.job_name = job_name
self.region_name = region_name
def non_terminal_states(self):
return ['InProgress', 'Stopping', 'Stopped']
def failed_states(self):
return ['Failed']
def get_sagemaker_response(self):
sagemaker = SageMakerHook(
aws_conn_id=self.aws_conn_id,
region_name=self.region_name
)
self.log.info('Poking Sagemaker Tuning Job %s', self.job_name)
return sagemaker.describe_tuning_job(self.job_name)
def get_failed_reason_from_response(self, response):
return response['FailureReason']
def state_from_response(self, response):
return response['HyperParameterTuningJobStatus']
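# A minimal usage sketch (illustration only; the task id, job name and `dag`
# object are hypothetical, and poke_interval/timeout come from Airflow's
# BaseSensorOperator):
#
#   wait_for_tuning = SageMakerTuningSensor(
#       task_id='wait_for_tuning_job',
#       job_name='my-tuning-job',
#       region_name='us-east-1',
#       poke_interval=60,
#       dag=dag,
#   )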
| apache-2.0 |
c0defreak/python-for-android | python3-alpha/python3-src/Tools/msi/msi.py | 45 | 62636 | # Python MSI Generator
# (C) 2003 Martin v. Loewis
# See "FOO" in comments refers to MSDN sections with the title FOO.
import msilib, schema, sequence, os, glob, time, re, shutil, zipfile
from msilib import Feature, CAB, Directory, Dialog, Binary, add_data
import uisample
from win32com.client import constants
from distutils.spawn import find_executable
from uuids import product_codes
import tempfile
# Settings can be overridden in config.py below
# 0 for official python.org releases
# 1 for intermediate releases by anybody, with
# a new product code for every package.
snapshot = 1
# 1 means that file extension is px, not py,
# and binaries start with x
testpackage = 0
# Location of build tree
srcdir = os.path.abspath("../..")
# Text to be displayed as the version in dialogs etc.
# goes into file name and ProductCode. Defaults to
# current_version.day for Snapshot, current_version otherwise
full_current_version = None
# Is Tcl available at all?
have_tcl = True
# path to PCbuild directory
PCBUILD="PCbuild"
# msvcrt version
MSVCR = "90"
# Name of certificate in default store to sign MSI with
certname = None
# Make a zip file containing the PDB files for this build?
pdbzip = True
try:
from config import *
except ImportError:
pass
# Extract current version from Include/patchlevel.h
lines = open(srcdir + "/Include/patchlevel.h").readlines()
major = minor = micro = level = serial = None
levels = {
'PY_RELEASE_LEVEL_ALPHA':0xA,
'PY_RELEASE_LEVEL_BETA': 0xB,
'PY_RELEASE_LEVEL_GAMMA':0xC,
'PY_RELEASE_LEVEL_FINAL':0xF
}
for l in lines:
if not l.startswith("#define"):
continue
l = l.split()
if len(l) != 3:
continue
_, name, value = l
if name == 'PY_MAJOR_VERSION': major = value
if name == 'PY_MINOR_VERSION': minor = value
if name == 'PY_MICRO_VERSION': micro = value
if name == 'PY_RELEASE_LEVEL': level = levels[value]
if name == 'PY_RELEASE_SERIAL': serial = value
short_version = major+"."+minor
# See PC/make_versioninfo.c
FIELD3 = 1000*int(micro) + 10*level + int(serial)
current_version = "%s.%d" % (short_version, FIELD3)
# This should never change. The UpgradeCode of this package can be
# used in the Upgrade table of future packages to make the future
# package replace this one. See "UpgradeCode Property".
# upgrade_code gets set to upgrade_code_64 when we have determined
# that the target is Win64.
upgrade_code_snapshot='{92A24481-3ECB-40FC-8836-04B7966EC0D5}'
upgrade_code='{65E6DE48-A358-434D-AA4F-4AF72DB4718F}'
upgrade_code_64='{6A965A0C-6EE6-4E3A-9983-3263F56311EC}'
if snapshot:
current_version = "%s.%s.%s" % (major, minor, int(time.time()/3600/24))
product_code = msilib.gen_uuid()
else:
product_code = product_codes[current_version]
if full_current_version is None:
full_current_version = current_version
extensions = [
'bz2.pyd',
'pyexpat.pyd',
'select.pyd',
'unicodedata.pyd',
'winsound.pyd',
'_elementtree.pyd',
'_socket.pyd',
'_ssl.pyd',
'_testcapi.pyd',
'_tkinter.pyd',
'_msi.pyd',
'_ctypes.pyd',
'_ctypes_test.pyd',
'_sqlite3.pyd',
'_hashlib.pyd',
'_multiprocessing.pyd'
]
# Well-known component UUIDs
# These are needed for SharedDLLs reference counter; if
# a different UUID was used for each incarnation of, say,
# python24.dll, an upgrade would set the reference counter
# from 1 to 2 (due to what I consider a bug in MSI)
# Using the same UUID is fine since these files are versioned,
# so Installer will always keep the newest version.
# NOTE: All uuids are self generated.
pythondll_uuid = {
"24":"{9B81E618-2301-4035-AC77-75D9ABEB7301}",
"25":"{2e41b118-38bd-4c1b-a840-6977efd1b911}",
"26":"{34ebecac-f046-4e1c-b0e3-9bac3cdaacfa}",
"27":"{4fe21c76-1760-437b-a2f2-99909130a175}",
"30":"{6953bc3b-6768-4291-8410-7914ce6e2ca8}",
"31":"{4afcba0b-13e4-47c3-bebe-477428b46913}",
"32":"{3ff95315-1096-4d31-bd86-601d5438ad5e}",
} [major+minor]
# Compute the name that Sphinx gives to the docfile
docfile = ""
if int(micro):
docfile = micro
if level < 0xf:
if level == 0xC:
docfile += "rc%s" % (serial,)
else:
docfile += '%x%s' % (level, serial)
docfile = 'python%s%s%s.chm' % (major, minor, docfile)
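# Worked example (illustration only): for 3.2.1rc2, docfile starts as "1",
# gains "rc2" because level == 0xC, and the result is "python321rc2.chm".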
# Build the mingw import library, libpythonXY.a
# This requires 'nm' and 'dlltool' executables on your PATH
def build_mingw_lib(lib_file, def_file, dll_file, mingw_lib):
warning = "WARNING: %s - libpythonXX.a not built"
nm = find_executable('nm')
dlltool = find_executable('dlltool')
if not nm or not dlltool:
print(warning % "nm and/or dlltool were not found")
return False
nm_command = '%s -Cs %s' % (nm, lib_file)
dlltool_command = "%s --dllname %s --def %s --output-lib %s" % \
(dlltool, dll_file, def_file, mingw_lib)
export_match = re.compile(r"^_imp__(.*) in python\d+\.dll").match
f = open(def_file,'w')
f.write("LIBRARY %s\n" % dll_file)
f.write("EXPORTS\n")
nm_pipe = os.popen(nm_command)
for line in nm_pipe.readlines():
m = export_match(line)
if m:
f.write(m.group(1)+"\n")
f.close()
exit = nm_pipe.close()
if exit:
print(warning % "nm did not run successfully")
return False
if os.system(dlltool_command) != 0:
print(warning % "dlltool did not run successfully")
return False
return True
# Target files (.def and .a) go in PCBuild directory
lib_file = os.path.join(srcdir, PCBUILD, "python%s%s.lib" % (major, minor))
def_file = os.path.join(srcdir, PCBUILD, "python%s%s.def" % (major, minor))
dll_file = "python%s%s.dll" % (major, minor)
mingw_lib = os.path.join(srcdir, PCBUILD, "libpython%s%s.a" % (major, minor))
have_mingw = build_mingw_lib(lib_file, def_file, dll_file, mingw_lib)
# Determine the target architecture
dll_path = os.path.join(srcdir, PCBUILD, dll_file)
msilib.set_arch_from_file(dll_path)
if msilib.pe_type(dll_path) != msilib.pe_type("msisupport.dll"):
raise SystemError("msisupport.dll for incorrect architecture")
if msilib.Win64:
upgrade_code = upgrade_code_64
# Bump the last digit of the code by one, so that 32-bit and 64-bit
# releases get separate product codes
digit = hex((int(product_code[-2],16)+1)%16)[-1]
product_code = product_code[:-2] + digit + '}'
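# Illustration: a product code ending in "...a175}" becomes "...a176}" here,
# since only the final hex digit is incremented (modulo 16).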
if testpackage:
ext = 'px'
testprefix = 'x'
else:
ext = 'py'
testprefix = ''
if msilib.Win64:
SystemFolderName = "[System64Folder]"
registry_component = 4|256
else:
SystemFolderName = "[SystemFolder]"
registry_component = 4
msilib.reset()
# condition in which to install pythonxy.dll in system32:
# a) it is Windows 9x or
# b) it is NT, the user is privileged, and has chosen per-machine installation
sys32cond = "(Windows9x or (Privileged and ALLUSERS))"
def build_database():
"""Generate an empty database, with just the schema and the
Summary information stream."""
if snapshot:
uc = upgrade_code_snapshot
else:
uc = upgrade_code
if msilib.Win64:
productsuffix = " (64-bit)"
else:
productsuffix = ""
# schema represents the installer 2.0 database schema.
# sequence is the set of standard sequences
# (ui/execute, admin/advt/install)
msiname = "python-%s%s.msi" % (full_current_version, msilib.arch_ext)
db = msilib.init_database(msiname,
schema, ProductName="Python "+full_current_version+productsuffix,
ProductCode=product_code,
ProductVersion=current_version,
Manufacturer=u"Python Software Foundation",
request_uac = True)
# The default sequencing of the RemoveExistingProducts action causes
# removal of files that were just installed. Place it after
# InstallInitialize, so we first uninstall everything, but still roll
# back in case the installation is interrupted
msilib.change_sequence(sequence.InstallExecuteSequence,
"RemoveExistingProducts", 1510)
msilib.add_tables(db, sequence)
# We cannot set ALLUSERS in the property table, as this cannot be
# reset if the user chooses a per-user installation. Instead, we
# maintain WhichUsers, which can be "ALL" or "JUSTME". The UI manages
# this property, and when the execution starts, ALLUSERS is set
# accordingly.
add_data(db, "Property", [("UpgradeCode", uc),
("WhichUsers", "ALL"),
("ProductLine", "Python%s%s" % (major, minor)),
])
db.Commit()
return db, msiname
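# For example (inferred, not output of this script): with full_current_version
# "3.2.1122" the package is named "python-3.2.1122.msi", plus whatever suffix
# msilib.arch_ext contributes on 64-bit builds.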
def remove_old_versions(db):
"Fill the upgrade table."
start = "%s.%s.0" % (major, minor)
# This requests that feature selection states of an older
# installation should be forwarded into this one. Upgrading
# requires that both the old and the new installation are
# either both per-machine or per-user.
migrate_features = 1
# See "Upgrade Table". We remove releases with the same major and
# minor version. For a snapshot, we remove all earlier snapshots. For
# a release, we remove all snapshots, and all earlier releases.
if snapshot:
add_data(db, "Upgrade",
[(upgrade_code_snapshot, start,
current_version,
None, # Ignore language
migrate_features,
None, # Migrate ALL features
"REMOVEOLDSNAPSHOT")])
props = "REMOVEOLDSNAPSHOT"
else:
add_data(db, "Upgrade",
[(upgrade_code, start, current_version,
None, migrate_features, None, "REMOVEOLDVERSION"),
(upgrade_code_snapshot, start, "%s.%d.0" % (major, int(minor)+1),
None, migrate_features, None, "REMOVEOLDSNAPSHOT")])
props = "REMOVEOLDSNAPSHOT;REMOVEOLDVERSION"
props += ";TARGETDIR;DLLDIR"
# Installer collects the product codes of the earlier releases in
# these properties. In order to allow modification of the properties,
# they must be declared as secure. See "SecureCustomProperties Property"
add_data(db, "Property", [("SecureCustomProperties", props)])
class PyDialog(Dialog):
"""Dialog class with a fixed layout: controls at the top, then a ruler,
then a list of buttons: back, next, cancel. Optionally a bitmap at the
left."""
def __init__(self, *args, **kw):
"""Dialog(database, name, x, y, w, h, attributes, title, first,
default, cancel, bitmap=true)"""
Dialog.__init__(self, *args)
ruler = self.h - 36
bmwidth = 152*ruler/328
if kw.get("bitmap", True):
self.bitmap("Bitmap", 0, 0, bmwidth, ruler, "PythonWin")
self.line("BottomLine", 0, ruler, self.w, 0)
def title(self, title):
"Set the title text of the dialog at the top."
# name, x, y, w, h, flags=Visible|Enabled|Transparent|NoPrefix,
# text, in VerdanaBold10
self.text("Title", 135, 10, 220, 60, 0x30003,
r"{\VerdanaBold10}%s" % title)
def back(self, title, next, name = "Back", active = 1):
"""Add a back button with a given title, the tab-next button,
its name in the Control table, possibly initially disabled.
Return the button, so that events can be associated"""
if active:
flags = 3 # Visible|Enabled
else:
flags = 1 # Visible
return self.pushbutton(name, 180, self.h-27 , 56, 17, flags, title, next)
def cancel(self, title, next, name = "Cancel", active = 1):
"""Add a cancel button with a given title, the tab-next button,
its name in the Control table, possibly initially disabled.
Return the button, so that events can be associated"""
if active:
flags = 3 # Visible|Enabled
else:
flags = 1 # Visible
return self.pushbutton(name, 304, self.h-27, 56, 17, flags, title, next)
def next(self, title, next, name = "Next", active = 1):
"""Add a Next button with a given title, the tab-next button,
its name in the Control table, possibly initially disabled.
Return the button, so that events can be associated"""
if active:
flags = 3 # Visible|Enabled
else:
flags = 1 # Visible
return self.pushbutton(name, 236, self.h-27, 56, 17, flags, title, next)
def xbutton(self, name, title, next, xpos):
"""Add a button with a given title, the tab-next button,
its name in the Control table, giving its x position; the
y-position is aligned with the other buttons.
Return the button, so that events can be associated"""
return self.pushbutton(name, int(self.w*xpos - 28), self.h-27, 56, 17, 3, title, next)
def add_ui(db):
x = y = 50
w = 370
h = 300
title = "[ProductName] Setup"
# see "Dialog Style Bits"
modal = 3 # visible | modal
modeless = 1 # visible
track_disk_space = 32
add_data(db, 'ActionText', uisample.ActionText)
add_data(db, 'UIText', uisample.UIText)
# Bitmaps
if not os.path.exists(srcdir+r"\PC\python_icon.exe"):
raise RuntimeError("Run icons.mak in PC directory")
add_data(db, "Binary",
[("PythonWin", msilib.Binary(r"%s\PCbuild\installer.bmp" % srcdir)), # 152x328 pixels
("py.ico",msilib.Binary(srcdir+r"\PC\py.ico")),
])
add_data(db, "Icon",
[("python_icon.exe", msilib.Binary(srcdir+r"\PC\python_icon.exe"))])
# Scripts
# CheckDir sets TargetExists if TARGETDIR exists.
# UpdateEditIDLE sets the REGISTRY.tcl component into
# the installed/uninstalled state according to both the
# Extensions and TclTk features.
if os.system("nmake /nologo /c /f msisupport.mak") != 0:
raise RuntimeError("'nmake /f msisupport.mak' failed")
add_data(db, "Binary", [("Script", msilib.Binary("msisupport.dll"))])
# See "Custom Action Type 1"
if msilib.Win64:
CheckDir = "CheckDir"
UpdateEditIDLE = "UpdateEditIDLE"
else:
CheckDir = "_CheckDir@4"
UpdateEditIDLE = "_UpdateEditIDLE@4"
add_data(db, "CustomAction",
[("CheckDir", 1, "Script", CheckDir)])
if have_tcl:
add_data(db, "CustomAction",
[("UpdateEditIDLE", 1, "Script", UpdateEditIDLE)])
# UI customization properties
add_data(db, "Property",
# See "DefaultUIFont Property"
[("DefaultUIFont", "DlgFont8"),
# See "ErrorDialog Style Bit"
("ErrorDialog", "ErrorDlg"),
("Progress1", "Install"), # modified in maintenance type dlg
("Progress2", "installs"),
("MaintenanceForm_Action", "Repair")])
# Fonts, see "TextStyle Table"
add_data(db, "TextStyle",
[("DlgFont8", "Tahoma", 9, None, 0),
("DlgFontBold8", "Tahoma", 8, None, 1), #bold
("VerdanaBold10", "Verdana", 10, None, 1),
("VerdanaRed9", "Verdana", 9, 255, 0),
])
compileargs = r'-Wi "[TARGETDIR]Lib\compileall.py" -f -x "bad_coding|badsyntax|site-packages|py2_|lib2to3\\tests" "[TARGETDIR]Lib"'
lib2to3args = r'-c "import lib2to3.pygram, lib2to3.patcomp;lib2to3.patcomp.PatternCompiler()"'
# See "CustomAction Table"
add_data(db, "CustomAction", [
# msidbCustomActionTypeFirstSequence + msidbCustomActionTypeTextData + msidbCustomActionTypeProperty
# See "Custom Action Type 51",
# "Custom Action Execution Scheduling Options"
("InitialTargetDir", 307, "TARGETDIR",
"[WindowsVolume]Python%s%s" % (major, minor)),
("SetDLLDirToTarget", 307, "DLLDIR", "[TARGETDIR]"),
("SetDLLDirToSystem32", 307, "DLLDIR", SystemFolderName),
# msidbCustomActionTypeExe + msidbCustomActionTypeSourceFile
# See "Custom Action Type 18"
("CompilePyc", 18, "python.exe", compileargs),
("CompilePyo", 18, "python.exe", "-O "+compileargs),
("CompileGrammar", 18, "python.exe", lib2to3args),
])
# UI Sequences, see "InstallUISequence Table", "Using a Sequence Table"
# Numbers indicate sequence; see sequence.py for how these actions integrate
add_data(db, "InstallUISequence",
[("PrepareDlg", "Not Privileged or Windows9x or Installed", 140),
("WhichUsersDlg", "Privileged and not Windows9x and not Installed", 141),
("InitialTargetDir", 'TARGETDIR=""', 750),
# In the user interface, assume all-users installation if privileged.
("SetDLLDirToSystem32", 'DLLDIR="" and ' + sys32cond, 751),
("SetDLLDirToTarget", 'DLLDIR="" and not ' + sys32cond, 752),
("SelectDirectoryDlg", "Not Installed", 1230),
# XXX no support for resume installations yet
#("ResumeDlg", "Installed AND (RESUME OR Preselected)", 1240),
("MaintenanceTypeDlg", "Installed AND NOT RESUME AND NOT Preselected", 1250),
("ProgressDlg", None, 1280)])
add_data(db, "AdminUISequence",
[("InitialTargetDir", 'TARGETDIR=""', 750),
("SetDLLDirToTarget", 'DLLDIR=""', 751),
])
# Execute Sequences
add_data(db, "InstallExecuteSequence",
[("InitialTargetDir", 'TARGETDIR=""', 750),
("SetDLLDirToSystem32", 'DLLDIR="" and ' + sys32cond, 751),
("SetDLLDirToTarget", 'DLLDIR="" and not ' + sys32cond, 752),
("UpdateEditIDLE", None, 1050),
("CompilePyc", "COMPILEALL", 6800),
("CompilePyo", "COMPILEALL", 6801),
("CompileGrammar", "COMPILEALL", 6802),
])
add_data(db, "AdminExecuteSequence",
[("InitialTargetDir", 'TARGETDIR=""', 750),
("SetDLLDirToTarget", 'DLLDIR=""', 751),
("CompilePyc", "COMPILEALL", 6800),
("CompilePyo", "COMPILEALL", 6801),
("CompileGrammar", "COMPILEALL", 6802),
])
#####################################################################
# Standard dialogs: FatalError, UserExit, ExitDialog
fatal=PyDialog(db, "FatalError", x, y, w, h, modal, title,
"Finish", "Finish", "Finish")
fatal.title("[ProductName] Installer ended prematurely")
fatal.back("< Back", "Finish", active = 0)
fatal.cancel("Cancel", "Back", active = 0)
fatal.text("Description1", 135, 70, 220, 80, 0x30003,
"[ProductName] setup ended prematurely because of an error. Your system has not been modified. To install this program at a later time, please run the installation again.")
fatal.text("Description2", 135, 155, 220, 20, 0x30003,
"Click the Finish button to exit the Installer.")
c=fatal.next("Finish", "Cancel", name="Finish")
# See "ControlEvent Table". Parameters are the event, the parameter
# to the action, and optionally the condition for the event, and the order
# of events.
c.event("EndDialog", "Exit")
user_exit=PyDialog(db, "UserExit", x, y, w, h, modal, title,
"Finish", "Finish", "Finish")
user_exit.title("[ProductName] Installer was interrupted")
user_exit.back("< Back", "Finish", active = 0)
user_exit.cancel("Cancel", "Back", active = 0)
user_exit.text("Description1", 135, 70, 220, 80, 0x30003,
"[ProductName] setup was interrupted. Your system has not been modified. "
"To install this program at a later time, please run the installation again.")
user_exit.text("Description2", 135, 155, 220, 20, 0x30003,
"Click the Finish button to exit the Installer.")
c = user_exit.next("Finish", "Cancel", name="Finish")
c.event("EndDialog", "Exit")
exit_dialog = PyDialog(db, "ExitDialog", x, y, w, h, modal, title,
"Finish", "Finish", "Finish")
exit_dialog.title("Completing the [ProductName] Installer")
exit_dialog.back("< Back", "Finish", active = 0)
exit_dialog.cancel("Cancel", "Back", active = 0)
exit_dialog.text("Acknowledgements", 135, 95, 220, 120, 0x30003,
"Special Windows thanks to:\n"
" Mark Hammond, without whose years of freely \n"
" shared Windows expertise, Python for Windows \n"
" would still be Python for DOS.")
c = exit_dialog.text("warning", 135, 200, 220, 40, 0x30003,
"{\\VerdanaRed9}Warning: Python 2.5.x is the last "
"Python release for Windows 9x.")
c.condition("Hide", "NOT Version9X")
exit_dialog.text("Description", 135, 235, 220, 20, 0x30003,
"Click the Finish button to exit the Installer.")
c = exit_dialog.next("Finish", "Cancel", name="Finish")
c.event("EndDialog", "Return")
#####################################################################
# Required dialog: FilesInUse, ErrorDlg
inuse = PyDialog(db, "FilesInUse",
x, y, w, h,
19, # KeepModeless|Modal|Visible
title,
"Retry", "Retry", "Retry", bitmap=False)
inuse.text("Title", 15, 6, 200, 15, 0x30003,
r"{\DlgFontBold8}Files in Use")
inuse.text("Description", 20, 23, 280, 20, 0x30003,
"Some files that need to be updated are currently in use.")
inuse.text("Text", 20, 55, 330, 50, 3,
"The following applications are using files that need to be updated by this setup. Close these applications and then click Retry to continue the installation or Cancel to exit it.")
inuse.control("List", "ListBox", 20, 107, 330, 130, 7, "FileInUseProcess",
None, None, None)
c=inuse.back("Exit", "Ignore", name="Exit")
c.event("EndDialog", "Exit")
c=inuse.next("Ignore", "Retry", name="Ignore")
c.event("EndDialog", "Ignore")
c=inuse.cancel("Retry", "Exit", name="Retry")
c.event("EndDialog","Retry")
# See "Error Dialog". See "ICE20" for the required names of the controls.
error = Dialog(db, "ErrorDlg",
50, 10, 330, 101,
65543, # Error|Minimize|Modal|Visible
title,
"ErrorText", None, None)
error.text("ErrorText", 50,9,280,48,3, "")
error.control("ErrorIcon", "Icon", 15, 9, 24, 24, 5242881, None, "py.ico", None, None)
error.pushbutton("N",120,72,81,21,3,"No",None).event("EndDialog","ErrorNo")
error.pushbutton("Y",240,72,81,21,3,"Yes",None).event("EndDialog","ErrorYes")
error.pushbutton("A",0,72,81,21,3,"Abort",None).event("EndDialog","ErrorAbort")
error.pushbutton("C",42,72,81,21,3,"Cancel",None).event("EndDialog","ErrorCancel")
error.pushbutton("I",81,72,81,21,3,"Ignore",None).event("EndDialog","ErrorIgnore")
error.pushbutton("O",159,72,81,21,3,"Ok",None).event("EndDialog","ErrorOk")
error.pushbutton("R",198,72,81,21,3,"Retry",None).event("EndDialog","ErrorRetry")
#####################################################################
# Global "Query Cancel" dialog
cancel = Dialog(db, "CancelDlg", 50, 10, 260, 85, 3, title,
"No", "No", "No")
cancel.text("Text", 48, 15, 194, 30, 3,
"Are you sure you want to cancel [ProductName] installation?")
cancel.control("Icon", "Icon", 15, 15, 24, 24, 5242881, None,
"py.ico", None, None)
c=cancel.pushbutton("Yes", 72, 57, 56, 17, 3, "Yes", "No")
c.event("EndDialog", "Exit")
c=cancel.pushbutton("No", 132, 57, 56, 17, 3, "No", "Yes")
c.event("EndDialog", "Return")
#####################################################################
# Global "Wait for costing" dialog
costing = Dialog(db, "WaitForCostingDlg", 50, 10, 260, 85, modal, title,
"Return", "Return", "Return")
costing.text("Text", 48, 15, 194, 30, 3,
"Please wait while the installer finishes determining your disk space requirements.")
costing.control("Icon", "Icon", 15, 15, 24, 24, 5242881, None,
"py.ico", None, None)
c = costing.pushbutton("Return", 102, 57, 56, 17, 3, "Return", None)
c.event("EndDialog", "Exit")
#####################################################################
# Preparation dialog: no user input except cancellation
prep = PyDialog(db, "PrepareDlg", x, y, w, h, modeless, title,
"Cancel", "Cancel", "Cancel")
prep.text("Description", 135, 70, 220, 40, 0x30003,
"Please wait while the Installer prepares to guide you through the installation.")
prep.title("Welcome to the [ProductName] Installer")
c=prep.text("ActionText", 135, 110, 220, 20, 0x30003, "Pondering...")
c.mapping("ActionText", "Text")
c=prep.text("ActionData", 135, 135, 220, 30, 0x30003, None)
c.mapping("ActionData", "Text")
prep.back("Back", None, active=0)
prep.next("Next", None, active=0)
c=prep.cancel("Cancel", None)
c.event("SpawnDialog", "CancelDlg")
#####################################################################
# Target directory selection
seldlg = PyDialog(db, "SelectDirectoryDlg", x, y, w, h, modal, title,
"Next", "Next", "Cancel")
seldlg.title("Select Destination Directory")
c = seldlg.text("Existing", 135, 25, 235, 30, 0x30003,
"{\VerdanaRed9}This update will replace your existing [ProductLine] installation.")
c.condition("Hide", 'REMOVEOLDVERSION="" and REMOVEOLDSNAPSHOT=""')
seldlg.text("Description", 135, 50, 220, 40, 0x30003,
"Please select a directory for the [ProductName] files.")
seldlg.back("< Back", None, active=0)
c = seldlg.next("Next >", "Cancel")
c.event("DoAction", "CheckDir", "TargetExistsOk<>1", order=1)
# If the target exists, but we found that we are going to remove old versions, don't bother
# confirming that the target directory exists. Strictly speaking, we should determine that
# the target directory is indeed the target of the product that we are going to remove, but
# I don't know how to do that.
c.event("SpawnDialog", "ExistingDirectoryDlg", 'TargetExists=1 and REMOVEOLDVERSION="" and REMOVEOLDSNAPSHOT=""', 2)
c.event("SetTargetPath", "TARGETDIR", 'TargetExists=0 or REMOVEOLDVERSION<>"" or REMOVEOLDSNAPSHOT<>""', 3)
c.event("SpawnWaitDialog", "WaitForCostingDlg", "CostingComplete=1", 4)
c.event("NewDialog", "SelectFeaturesDlg", 'TargetExists=0 or REMOVEOLDVERSION<>"" or REMOVEOLDSNAPSHOT<>""', 5)
c = seldlg.cancel("Cancel", "DirectoryCombo")
c.event("SpawnDialog", "CancelDlg")
seldlg.control("DirectoryCombo", "DirectoryCombo", 135, 70, 172, 80, 393219,
"TARGETDIR", None, "DirectoryList", None)
seldlg.control("DirectoryList", "DirectoryList", 135, 90, 208, 136, 3, "TARGETDIR",
None, "PathEdit", None)
seldlg.control("PathEdit", "PathEdit", 135, 230, 206, 16, 3, "TARGETDIR", None, "Next", None)
c = seldlg.pushbutton("Up", 306, 70, 18, 18, 3, "Up", None)
c.event("DirectoryListUp", "0")
c = seldlg.pushbutton("NewDir", 324, 70, 30, 18, 3, "New", None)
c.event("DirectoryListNew", "0")
#####################################################################
# SelectFeaturesDlg
features = PyDialog(db, "SelectFeaturesDlg", x, y, w, h, modal|track_disk_space,
title, "Tree", "Next", "Cancel")
features.title("Customize [ProductName]")
features.text("Description", 135, 35, 220, 15, 0x30003,
"Select the way you want features to be installed.")
features.text("Text", 135,45,220,30, 3,
"Click on the icons in the tree below to change the way features will be installed.")
c=features.back("< Back", "Next")
c.event("NewDialog", "SelectDirectoryDlg")
c=features.next("Next >", "Cancel")
c.mapping("SelectionNoItems", "Enabled")
c.event("SpawnDialog", "DiskCostDlg", "OutOfDiskSpace=1", order=1)
c.event("EndDialog", "Return", "OutOfDiskSpace<>1", order=2)
c=features.cancel("Cancel", "Tree")
c.event("SpawnDialog", "CancelDlg")
# The browse property is not used, since we have only a single target path (selected already)
features.control("Tree", "SelectionTree", 135, 75, 220, 95, 7, "_BrowseProperty",
"Tree of selections", "Back", None)
#c=features.pushbutton("Reset", 42, 243, 56, 17, 3, "Reset", "DiskCost")
#c.mapping("SelectionNoItems", "Enabled")
#c.event("Reset", "0")
features.control("Box", "GroupBox", 135, 170, 225, 90, 1, None, None, None, None)
c=features.xbutton("DiskCost", "Disk &Usage", None, 0.10)
c.mapping("SelectionNoItems","Enabled")
c.event("SpawnDialog", "DiskCostDlg")
c=features.xbutton("Advanced", "Advanced", None, 0.30)
c.event("SpawnDialog", "AdvancedDlg")
c=features.text("ItemDescription", 140, 180, 210, 30, 3,
"Multiline description of the currently selected item.")
c.mapping("SelectionDescription","Text")
c=features.text("ItemSize", 140, 210, 210, 45, 3,
"The size of the currently selected item.")
c.mapping("SelectionSize", "Text")
#####################################################################
# Disk cost
cost = PyDialog(db, "DiskCostDlg", x, y, w, h, modal, title,
"OK", "OK", "OK", bitmap=False)
cost.text("Title", 15, 6, 200, 15, 0x30003,
"{\DlgFontBold8}Disk Space Requirements")
cost.text("Description", 20, 20, 280, 20, 0x30003,
"The disk space required for the installation of the selected features.")
cost.text("Text", 20, 53, 330, 60, 3,
"The highlighted volumes (if any) do not have enough disk space "
"available for the currently selected features. You can either "
"remove some files from the highlighted volumes, or choose to "
"install less features onto local drive(s), or select different "
"destination drive(s).")
cost.control("VolumeList", "VolumeCostList", 20, 100, 330, 150, 393223,
None, "{120}{70}{70}{70}{70}", None, None)
cost.xbutton("OK", "Ok", None, 0.5).event("EndDialog", "Return")
#####################################################################
# WhichUsers Dialog. Only available on NT, and for privileged users.
# This must be run before FindRelatedProducts, because that will
# take into account whether the previous installation was per-user
# or per-machine. We currently don't support going back to this
# dialog after "Next" was selected; to support this, we would need to
# find how to reset the ALLUSERS property, and how to re-run
# FindRelatedProducts.
# On Windows9x, the ALLUSERS property is ignored on the command line
# and in the Property table, but installer fails according to the documentation
# if a dialog attempts to set ALLUSERS.
whichusers = PyDialog(db, "WhichUsersDlg", x, y, w, h, modal, title,
"AdminInstall", "Next", "Cancel")
whichusers.title("Select whether to install [ProductName] for all users of this computer.")
# A radio group with two options: allusers, justme
g = whichusers.radiogroup("AdminInstall", 135, 60, 235, 80, 3,
"WhichUsers", "", "Next")
g.condition("Disable", "VersionNT=600") # Not available on Vista and Windows 2008
g.add("ALL", 0, 5, 150, 20, "Install for all users")
g.add("JUSTME", 0, 25, 235, 20, "Install just for me (not available on Windows Vista)")
whichusers.back("Back", None, active=0)
c = whichusers.next("Next >", "Cancel")
c.event("[ALLUSERS]", "1", 'WhichUsers="ALL"', 1)
c.event("EndDialog", "Return", order = 2)
c = whichusers.cancel("Cancel", "AdminInstall")
c.event("SpawnDialog", "CancelDlg")
#####################################################################
# Advanced Dialog.
advanced = PyDialog(db, "AdvancedDlg", x, y, w, h, modal, title,
"CompilePyc", "Ok", "Ok")
advanced.title("Advanced Options for [ProductName]")
# A radio group with two options: allusers, justme
advanced.checkbox("CompilePyc", 135, 60, 230, 50, 3,
"COMPILEALL", "Compile .py files to byte code after installation", "Ok")
c = advanced.cancel("Ok", "CompilePyc", name="Ok") # Button just has location of cancel button.
c.event("EndDialog", "Return")
#####################################################################
# Existing Directory dialog
dlg = Dialog(db, "ExistingDirectoryDlg", 50, 30, 200, 80, modal, title,
"No", "No", "No")
dlg.text("Title", 10, 20, 180, 40, 3,
"[TARGETDIR] exists. Are you sure you want to overwrite existing files?")
c=dlg.pushbutton("Yes", 30, 60, 55, 17, 3, "Yes", "No")
c.event("[TargetExists]", "0", order=1)
c.event("[TargetExistsOk]", "1", order=2)
c.event("EndDialog", "Return", order=3)
c=dlg.pushbutton("No", 115, 60, 55, 17, 3, "No", "Yes")
c.event("EndDialog", "Return")
#####################################################################
# Installation Progress dialog (modeless)
progress = PyDialog(db, "ProgressDlg", x, y, w, h, modeless, title,
"Cancel", "Cancel", "Cancel", bitmap=False)
progress.text("Title", 20, 15, 200, 15, 0x30003,
"{\DlgFontBold8}[Progress1] [ProductName]")
progress.text("Text", 35, 65, 300, 30, 3,
"Please wait while the Installer [Progress2] [ProductName]. "
"This may take several minutes.")
progress.text("StatusLabel", 35, 100, 35, 20, 3, "Status:")
c=progress.text("ActionText", 70, 100, w-70, 20, 3, "Pondering...")
c.mapping("ActionText", "Text")
#c=progress.text("ActionData", 35, 140, 300, 20, 3, None)
#c.mapping("ActionData", "Text")
c=progress.control("ProgressBar", "ProgressBar", 35, 120, 300, 10, 65537,
None, "Progress done", None, None)
c.mapping("SetProgress", "Progress")
progress.back("< Back", "Next", active=False)
progress.next("Next >", "Cancel", active=False)
progress.cancel("Cancel", "Back").event("SpawnDialog", "CancelDlg")
# Maintenance type: repair/uninstall
maint = PyDialog(db, "MaintenanceTypeDlg", x, y, w, h, modal, title,
"Next", "Next", "Cancel")
maint.title("Welcome to the [ProductName] Setup Wizard")
maint.text("BodyText", 135, 63, 230, 42, 3,
"Select whether you want to repair or remove [ProductName].")
g=maint.radiogroup("RepairRadioGroup", 135, 108, 230, 60, 3,
"MaintenanceForm_Action", "", "Next")
g.add("Change", 0, 0, 200, 17, "&Change [ProductName]")
g.add("Repair", 0, 18, 200, 17, "&Repair [ProductName]")
g.add("Remove", 0, 36, 200, 17, "Re&move [ProductName]")
maint.back("< Back", None, active=False)
c=maint.next("Finish", "Cancel")
# Change installation: Change progress dialog to "Change", then ask
# for feature selection
c.event("[Progress1]", "Change", 'MaintenanceForm_Action="Change"', 1)
c.event("[Progress2]", "changes", 'MaintenanceForm_Action="Change"', 2)
# Reinstall: Change progress dialog to "Repair", then invoke reinstall
# Also set list of reinstalled features to "ALL"
c.event("[REINSTALL]", "ALL", 'MaintenanceForm_Action="Repair"', 5)
c.event("[Progress1]", "Repairing", 'MaintenanceForm_Action="Repair"', 6)
c.event("[Progress2]", "repairs", 'MaintenanceForm_Action="Repair"', 7)
c.event("Reinstall", "ALL", 'MaintenanceForm_Action="Repair"', 8)
# Uninstall: Change progress to "Remove", then invoke uninstall
# Also set list of removed features to "ALL"
c.event("[REMOVE]", "ALL", 'MaintenanceForm_Action="Remove"', 11)
c.event("[Progress1]", "Removing", 'MaintenanceForm_Action="Remove"', 12)
c.event("[Progress2]", "removes", 'MaintenanceForm_Action="Remove"', 13)
c.event("Remove", "ALL", 'MaintenanceForm_Action="Remove"', 14)
# Close dialog when maintenance action scheduled
c.event("EndDialog", "Return", 'MaintenanceForm_Action<>"Change"', 20)
c.event("NewDialog", "SelectFeaturesDlg", 'MaintenanceForm_Action="Change"', 21)
maint.cancel("Cancel", "RepairRadioGroup").event("SpawnDialog", "CancelDlg")
# See "Feature Table". The feature level is 1 for all features,
# and the feature attributes are 0 for the DefaultFeature, and
# FollowParent for all other features. The numbers are the Display
# column.
def add_features(db):
# feature attributes:
# msidbFeatureAttributesFollowParent == 2
# msidbFeatureAttributesDisallowAdvertise == 8
# Features that need to be installed together with the main feature
# (i.e. additional Python libraries) need to follow the parent feature.
# Features that have no advertisement trigger (e.g. the test suite)
# must not support advertisement
global default_feature, tcltk, htmlfiles, tools, testsuite, ext_feature, private_crt
default_feature = Feature(db, "DefaultFeature", "Python",
"Python Interpreter and Libraries",
1, directory = "TARGETDIR")
shared_crt = Feature(db, "SharedCRT", "MSVCRT", "C Run-Time (system-wide)", 0,
level=0)
private_crt = Feature(db, "PrivateCRT", "MSVCRT", "C Run-Time (private)", 0,
level=0)
add_data(db, "Condition", [("SharedCRT", 1, sys32cond),
("PrivateCRT", 1, "not "+sys32cond)])
# We don't support advertisement of extensions
ext_feature = Feature(db, "Extensions", "Register Extensions",
"Make this Python installation the default Python installation", 3,
parent = default_feature, attributes=2|8)
if have_tcl:
tcltk = Feature(db, "TclTk", "Tcl/Tk", "Tkinter, IDLE, pydoc", 5,
parent = default_feature, attributes=2)
htmlfiles = Feature(db, "Documentation", "Documentation",
"Python HTMLHelp File", 7, parent = default_feature)
tools = Feature(db, "Tools", "Utility Scripts",
"Python utility scripts (Tools/", 9,
parent = default_feature, attributes=2)
testsuite = Feature(db, "Testsuite", "Test suite",
"Python test suite (Lib/test/)", 11,
parent = default_feature, attributes=2|8)
def extract_msvcr90():
# Find the redistributable files
if msilib.Win64:
arch = "amd64"
else:
arch = "x86"
dir = os.path.join(os.environ['VS90COMNTOOLS'], r"..\..\VC\redist\%s\Microsoft.VC90.CRT" % arch)
result = []
installer = msilib.MakeInstaller()
# omit msvcm90 and msvcp90, as they aren't really needed
files = ["Microsoft.VC90.CRT.manifest", "msvcr90.dll"]
for f in files:
path = os.path.join(dir, f)
kw = {'src':path}
if f.endswith('.dll'):
kw['version'] = installer.FileVersion(path, 0)
kw['language'] = installer.FileVersion(path, 1)
result.append((f, kw))
return result
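# The returned list pairs each file name with the keyword arguments later passed
# to add_file(); roughly (a sketch of the shape, not literal values):
#   [("Microsoft.VC90.CRT.manifest", {"src": ...}),
#    ("msvcr90.dll", {"src": ..., "version": ..., "language": ...})]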
def generate_license():
import shutil, glob
out = open("LICENSE.txt", "w")
shutil.copyfileobj(open(os.path.join(srcdir, "LICENSE")), out)
shutil.copyfileobj(open("crtlicense.txt"), out)
for name, pat, file in (("bzip2","bzip2-*", "LICENSE"),
("openssl", "openssl-*", "LICENSE"),
("Tcl", "tcl8*", "license.terms"),
("Tk", "tk8*", "license.terms"),
("Tix", "tix-*", "license.terms")):
out.write("\nThis copy of Python includes a copy of %s, which is licensed under the following terms:\n\n" % name)
dirs = glob.glob(srcdir+"/../"+pat)
if not dirs:
raise ValueError, "Could not find "+srcdir+"/../"+pat
if len(dirs) > 2:
raise ValueError, "Multiple copies of "+pat
dir = dirs[0]
shutil.copyfileobj(open(os.path.join(dir, file)), out)
out.close()
class PyDirectory(Directory):
"""By default, all components in the Python installer
can run from source."""
def __init__(self, *args, **kw):
if "componentflags" not in kw:
kw['componentflags'] = 2 #msidbComponentAttributesOptional
Directory.__init__(self, *args, **kw)
def check_unpackaged(self):
self.unpackaged_files.discard('__pycache__')
self.unpackaged_files.discard('.svn')
if self.unpackaged_files:
print "Warning: Unpackaged files in %s" % self.absolute
print self.unpackaged_files
# See "File Table", "Component Table", "Directory Table",
# "FeatureComponents Table"
def add_files(db):
cab = CAB("python")
tmpfiles = []
# Add all executables, icons, text files into the TARGETDIR component
root = PyDirectory(db, cab, None, srcdir, "TARGETDIR", "SourceDir")
default_feature.set_current()
if not msilib.Win64:
root.add_file("%s/w9xpopen.exe" % PCBUILD)
root.add_file("README.txt", src="README")
root.add_file("NEWS.txt", src="Misc/NEWS")
generate_license()
root.add_file("LICENSE.txt", src=os.path.abspath("LICENSE.txt"))
root.start_component("python.exe", keyfile="python.exe")
root.add_file("%s/python.exe" % PCBUILD)
root.start_component("pythonw.exe", keyfile="pythonw.exe")
root.add_file("%s/pythonw.exe" % PCBUILD)
# msidbComponentAttributesSharedDllRefCount = 8, see "Component Table"
dlldir = PyDirectory(db, cab, root, srcdir, "DLLDIR", ".")
pydll = "python%s%s.dll" % (major, minor)
pydllsrc = os.path.join(srcdir, PCBUILD, pydll)
dlldir.start_component("DLLDIR", flags = 8, keyfile = pydll, uuid = pythondll_uuid)
installer = msilib.MakeInstaller()
pyversion = installer.FileVersion(pydllsrc, 0)
if not snapshot:
# For releases, the Python DLL has the same version as the
# installer package.
assert pyversion.split(".")[:3] == current_version.split(".")
dlldir.add_file("%s/python%s%s.dll" % (PCBUILD, major, minor),
version=pyversion,
language=installer.FileVersion(pydllsrc, 1))
DLLs = PyDirectory(db, cab, root, srcdir + "/" + PCBUILD, "DLLs", "DLLS|DLLs")
# msvcr90.dll: Need to place the DLL and the manifest into the root directory,
# plus another copy of the manifest in the DLLs directory, with the manifest
# pointing to the root directory
root.start_component("msvcr90", feature=private_crt)
# Results are ID,keyword pairs
manifest, crtdll = extract_msvcr90()
root.add_file(manifest[0], **manifest[1])
root.add_file(crtdll[0], **crtdll[1])
# Copy the manifest
# Actually, don't do that anymore - no DLL in DLLs should have a manifest
# dependency on msvcr90.dll anymore, so this should not be necessary
#manifest_dlls = manifest[0]+".root"
#open(manifest_dlls, "w").write(open(manifest[1]['src']).read().replace("msvcr","../msvcr"))
#DLLs.start_component("msvcr90_dlls", feature=private_crt)
#DLLs.add_file(manifest[0], src=os.path.abspath(manifest_dlls))
# Now start the main component for the DLLs directory;
# no regular files have been added to the directory yet.
DLLs.start_component()
# Check if _ctypes.pyd exists
have_ctypes = os.path.exists(srcdir+"/%s/_ctypes.pyd" % PCBUILD)
if not have_ctypes:
print("WARNING: _ctypes.pyd not found, ctypes will not be included")
extensions.remove("_ctypes.pyd")
# Add all .py files in Lib, except tkinter, test
dirs = []
pydirs = [(root,"Lib")]
while pydirs:
# Commit every now and then, or else installer will complain
db.Commit()
parent, dir = pydirs.pop()
if dir == ".svn" or dir == '__pycache__' or dir.startswith("plat-"):
continue
elif dir in ["tkinter", "idlelib", "Icons"]:
if not have_tcl:
continue
tcltk.set_current()
elif dir in ['test', 'tests', 'data', 'output']:
# test: Lib, Lib/email, Lib/ctypes, Lib/sqlite3
# tests: Lib/distutils
# data: Lib/email/test
# output: Lib/test
testsuite.set_current()
elif not have_ctypes and dir == "ctypes":
continue
else:
default_feature.set_current()
lib = PyDirectory(db, cab, parent, dir, dir, "%s|%s" % (parent.make_short(dir), dir))
# Add additional files
dirs.append(lib)
lib.glob("*.txt")
if dir=='site-packages':
lib.add_file("README.txt", src="README")
continue
files = lib.glob("*.py")
files += lib.glob("*.pyw")
if files:
# Add an entry to the RemoveFile table to remove bytecode files.
lib.remove_pyc()
# package READMEs if present
lib.glob("README")
if dir=='Lib':
lib.add_file('wsgiref.egg-info')
if dir=='test' and parent.physical=='Lib':
lib.add_file("185test.db")
lib.add_file("audiotest.au")
lib.add_file("sgml_input.html")
lib.add_file("testtar.tar")
lib.add_file("test_difflib_expect.html")
lib.add_file("check_soundcard.vbs")
lib.add_file("empty.vbs")
lib.add_file("Sine-1000Hz-300ms.aif")
lib.glob("*.uue")
lib.glob("*.pem")
lib.glob("*.pck")
lib.glob("cfgparser.*")
lib.add_file("zip_cp437_header.zip")
lib.add_file("zipdir.zip")
if dir=='capath':
lib.glob("*.0")
if dir=='tests' and parent.physical=='distutils':
lib.add_file("Setup.sample")
if dir=='decimaltestdata':
lib.glob("*.decTest")
if dir=='xmltestdata':
lib.glob("*.xml")
lib.add_file("test.xml.out")
if dir=='output':
lib.glob("test_*")
if dir=='sndhdrdata':
lib.glob("sndhdr.*")
if dir=='idlelib':
lib.glob("*.def")
lib.add_file("idle.bat")
lib.add_file("ChangeLog")
if dir=="Icons":
lib.glob("*.gif")
lib.add_file("idle.icns")
if dir=="command" and parent.physical=="distutils":
lib.glob("wininst*.exe")
lib.add_file("command_template")
if dir=="lib2to3":
lib.removefile("pickle", "*.pickle")
if dir=="macholib":
lib.add_file("README.ctypes")
lib.glob("fetch_macholib*")
if dir=='turtledemo':
lib.add_file("turtle.cfg")
if dir=="pydoc_data":
lib.add_file("_pydoc.css")
if dir=="data" and parent.physical=="test" and parent.basedir.physical=="email":
# This should contain all non-.svn files listed in subversion
for f in os.listdir(lib.absolute):
if f.endswith(".txt") or f==".svn":continue
if f.endswith(".au") or f.endswith(".gif"):
lib.add_file(f)
else:
print("WARNING: New file %s in email/test/data" % f)
for f in os.listdir(lib.absolute):
if os.path.isdir(os.path.join(lib.absolute, f)):
pydirs.append((lib, f))
for d in dirs:
d.check_unpackaged()
# Add DLLs
default_feature.set_current()
lib = DLLs
lib.add_file("py.ico", src=srcdir+"/PC/py.ico")
lib.add_file("pyc.ico", src=srcdir+"/PC/pyc.ico")
dlls = []
tclfiles = []
for f in extensions:
if f=="_tkinter.pyd":
continue
if not os.path.exists(srcdir + "/" + PCBUILD + "/" + f):
print("WARNING: Missing extension", f)
continue
dlls.append(f)
lib.add_file(f)
lib.add_file('python3.dll')
# Add sqlite
if msilib.msi_type=="Intel64;1033":
sqlite_arch = "/ia64"
elif msilib.msi_type=="x64;1033":
sqlite_arch = "/amd64"
tclsuffix = "64"
else:
sqlite_arch = ""
tclsuffix = ""
lib.add_file("sqlite3.dll")
if have_tcl:
if not os.path.exists("%s/%s/_tkinter.pyd" % (srcdir, PCBUILD)):
print("WARNING: Missing _tkinter.pyd")
else:
lib.start_component("TkDLLs", tcltk)
lib.add_file("_tkinter.pyd")
dlls.append("_tkinter.pyd")
tcldir = os.path.normpath(srcdir+("/../tcltk%s/bin" % tclsuffix))
for f in glob.glob1(tcldir, "*.dll"):
lib.add_file(f, src=os.path.join(tcldir, f))
# check whether there are any unknown extensions
for f in glob.glob1(srcdir+"/"+PCBUILD, "*.pyd"):
if f.endswith("_d.pyd"): continue # debug version
if f in dlls: continue
print("WARNING: Unknown extension", f)
# Add headers
default_feature.set_current()
lib = PyDirectory(db, cab, root, "include", "include", "INCLUDE|include")
lib.glob("*.h")
lib.add_file("pyconfig.h", src="../PC/pyconfig.h")
# Add import libraries
lib = PyDirectory(db, cab, root, PCBUILD, "libs", "LIBS|libs")
for f in dlls:
lib.add_file(f.replace('pyd','lib'))
lib.add_file('python%s%s.lib' % (major, minor))
lib.add_file('python3.lib')
# Add the mingw-format library
if have_mingw:
lib.add_file('libpython%s%s.a' % (major, minor))
if have_tcl:
# Add Tcl/Tk
tcldirs = [(root, '../tcltk%s/lib' % tclsuffix, 'tcl')]
tcltk.set_current()
while tcldirs:
parent, phys, dir = tcldirs.pop()
lib = PyDirectory(db, cab, parent, phys, dir, "%s|%s" % (parent.make_short(dir), dir))
if not os.path.exists(lib.absolute):
continue
for f in os.listdir(lib.absolute):
if os.path.isdir(os.path.join(lib.absolute, f)):
tcldirs.append((lib, f, f))
else:
lib.add_file(f)
# Add tools
tools.set_current()
tooldir = PyDirectory(db, cab, root, "Tools", "Tools", "TOOLS|Tools")
for f in ['i18n', 'pynche', 'Scripts']:
lib = PyDirectory(db, cab, tooldir, f, f, "%s|%s" % (tooldir.make_short(f), f))
lib.glob("*.py")
lib.glob("*.pyw", exclude=['pydocgui.pyw'])
lib.remove_pyc()
lib.glob("*.txt")
if f == "pynche":
x = PyDirectory(db, cab, lib, "X", "X", "X|X")
x.glob("*.txt")
if os.path.exists(os.path.join(lib.absolute, "README")):
lib.add_file("README.txt", src="README")
if f == 'Scripts':
lib.add_file("2to3.py", src="2to3")
if have_tcl:
lib.start_component("pydocgui.pyw", tcltk, keyfile="pydocgui.pyw")
lib.add_file("pydocgui.pyw")
# Add documentation
htmlfiles.set_current()
lib = PyDirectory(db, cab, root, "Doc", "Doc", "DOC|Doc")
lib.start_component("documentation", keyfile=docfile)
lib.add_file(docfile, src="build/htmlhelp/"+docfile)
cab.commit(db)
for f in tmpfiles:
os.unlink(f)
# See "Registry Table", "Component Table"
def add_registry(db):
# File extensions, associated with the REGISTRY.def component
# IDLE verbs depend on the tcltk feature.
# msidbComponentAttributesRegistryKeyPath = 4
# -1 for Root specifies "dependent on ALLUSERS property"
tcldata = []
if have_tcl:
tcldata = [
("REGISTRY.tcl", msilib.gen_uuid(), "TARGETDIR", registry_component, None,
"py.IDLE")]
add_data(db, "Component",
# msidbComponentAttributesRegistryKeyPath = 4
[("REGISTRY", msilib.gen_uuid(), "TARGETDIR", registry_component, None,
"InstallPath"),
("REGISTRY.doc", msilib.gen_uuid(), "TARGETDIR", registry_component, None,
"Documentation"),
("REGISTRY.def", msilib.gen_uuid(), "TARGETDIR", registry_component,
None, None)] + tcldata)
# See "FeatureComponents Table".
# The association between TclTk and pythonw.exe is necessary to make ICE59
# happy, because the installer otherwise believes that the IDLE and PyDoc
    # shortcuts might get installed without pythonw.exe being installed. This
# is not true, since installing TclTk will install the default feature, which
# will cause pythonw.exe to be installed.
# REGISTRY.tcl is not associated with any feature, as it will be requested
# through a custom action
tcldata = []
if have_tcl:
tcldata = [(tcltk.id, "pythonw.exe")]
add_data(db, "FeatureComponents",
[(default_feature.id, "REGISTRY"),
(htmlfiles.id, "REGISTRY.doc"),
(ext_feature.id, "REGISTRY.def")] +
tcldata
)
# Extensions are not advertised. For advertised extensions,
# we would need separate binaries that install along with the
# extension.
pat = r"Software\Classes\%sPython.%sFile\shell\%s\command"
ewi = "Edit with IDLE"
pat2 = r"Software\Classes\%sPython.%sFile\DefaultIcon"
pat3 = r"Software\Classes\%sPython.%sFile"
pat4 = r"Software\Classes\%sPython.%sFile\shellex\DropHandler"
tcl_verbs = []
if have_tcl:
tcl_verbs=[
("py.IDLE", -1, pat % (testprefix, "", ewi), "",
r'"[TARGETDIR]pythonw.exe" "[TARGETDIR]Lib\idlelib\idle.pyw" -e "%1"',
"REGISTRY.tcl"),
("pyw.IDLE", -1, pat % (testprefix, "NoCon", ewi), "",
r'"[TARGETDIR]pythonw.exe" "[TARGETDIR]Lib\idlelib\idle.pyw" -e "%1"',
"REGISTRY.tcl"),
]
add_data(db, "Registry",
[# Extensions
("py.ext", -1, r"Software\Classes\."+ext, "",
"Python.File", "REGISTRY.def"),
("pyw.ext", -1, r"Software\Classes\."+ext+'w', "",
"Python.NoConFile", "REGISTRY.def"),
("pyc.ext", -1, r"Software\Classes\."+ext+'c', "",
"Python.CompiledFile", "REGISTRY.def"),
("pyo.ext", -1, r"Software\Classes\."+ext+'o', "",
"Python.CompiledFile", "REGISTRY.def"),
# MIME types
("py.mime", -1, r"Software\Classes\."+ext, "Content Type",
"text/plain", "REGISTRY.def"),
("pyw.mime", -1, r"Software\Classes\."+ext+'w', "Content Type",
"text/plain", "REGISTRY.def"),
#Verbs
("py.open", -1, pat % (testprefix, "", "open"), "",
r'"[TARGETDIR]python.exe" "%1" %*', "REGISTRY.def"),
("pyw.open", -1, pat % (testprefix, "NoCon", "open"), "",
r'"[TARGETDIR]pythonw.exe" "%1" %*', "REGISTRY.def"),
("pyc.open", -1, pat % (testprefix, "Compiled", "open"), "",
r'"[TARGETDIR]python.exe" "%1" %*', "REGISTRY.def"),
] + tcl_verbs + [
#Icons
("py.icon", -1, pat2 % (testprefix, ""), "",
r'[DLLs]py.ico', "REGISTRY.def"),
("pyw.icon", -1, pat2 % (testprefix, "NoCon"), "",
r'[DLLs]py.ico', "REGISTRY.def"),
("pyc.icon", -1, pat2 % (testprefix, "Compiled"), "",
r'[DLLs]pyc.ico', "REGISTRY.def"),
# Descriptions
("py.txt", -1, pat3 % (testprefix, ""), "",
"Python File", "REGISTRY.def"),
("pyw.txt", -1, pat3 % (testprefix, "NoCon"), "",
"Python File (no console)", "REGISTRY.def"),
("pyc.txt", -1, pat3 % (testprefix, "Compiled"), "",
"Compiled Python File", "REGISTRY.def"),
# Drop Handler
("py.drop", -1, pat4 % (testprefix, ""), "",
"{60254CA5-953B-11CF-8C96-00AA00B8708C}", "REGISTRY.def"),
("pyw.drop", -1, pat4 % (testprefix, "NoCon"), "",
"{60254CA5-953B-11CF-8C96-00AA00B8708C}", "REGISTRY.def"),
("pyc.drop", -1, pat4 % (testprefix, "Compiled"), "",
"{60254CA5-953B-11CF-8C96-00AA00B8708C}", "REGISTRY.def"),
])
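    # Worked expansion of the verb pattern above (testprefix assumed empty,
    # as in a non-test build):
    #     pat % ("", "", "open")      -> Software\Classes\Python.File\shell\open\command
    #     pat % ("", "NoCon", "open") -> Software\Classes\Python.NoConFile\shell\open\command
    # i.e. these are the registry keys that receive the
    # "[TARGETDIR]python.exe" / "[TARGETDIR]pythonw.exe" command lines.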
# Registry keys
prefix = r"Software\%sPython\PythonCore\%s" % (testprefix, short_version)
add_data(db, "Registry",
[("InstallPath", -1, prefix+r"\InstallPath", "", "[TARGETDIR]", "REGISTRY"),
("InstallGroup", -1, prefix+r"\InstallPath\InstallGroup", "",
"Python %s" % short_version, "REGISTRY"),
("PythonPath", -1, prefix+r"\PythonPath", "",
r"[TARGETDIR]Lib;[TARGETDIR]DLLs", "REGISTRY"),
("Documentation", -1, prefix+r"\Help\Main Python Documentation", "",
"[TARGETDIR]Doc\\"+docfile , "REGISTRY.doc"),
("Modules", -1, prefix+r"\Modules", "+", None, "REGISTRY"),
("AppPaths", -1, r"Software\Microsoft\Windows\CurrentVersion\App Paths\Python.exe",
"", r"[TARGETDIR]Python.exe", "REGISTRY.def"),
("DisplayIcon", -1,
r"Software\Microsoft\Windows\CurrentVersion\Uninstall\%s" % product_code,
"DisplayIcon", "[TARGETDIR]python.exe", "REGISTRY")
])
# Shortcuts, see "Shortcut Table"
add_data(db, "Directory",
[("ProgramMenuFolder", "TARGETDIR", "."),
("MenuDir", "ProgramMenuFolder", "PY%s%s|%sPython %s.%s" % (major,minor,testprefix,major,minor))])
add_data(db, "RemoveFile",
[("MenuDir", "TARGETDIR", None, "MenuDir", 2)])
tcltkshortcuts = []
if have_tcl:
tcltkshortcuts = [
("IDLE", "MenuDir", "IDLE|IDLE (Python GUI)", "pythonw.exe",
tcltk.id, r'"[TARGETDIR]Lib\idlelib\idle.pyw"', None, None, "python_icon.exe", 0, None, "TARGETDIR"),
("PyDoc", "MenuDir", "MODDOCS|Module Docs", "pythonw.exe",
tcltk.id, r'"[TARGETDIR]Tools\scripts\pydocgui.pyw"', None, None, "python_icon.exe", 0, None, "TARGETDIR"),
]
add_data(db, "Shortcut",
tcltkshortcuts +
[# Advertised shortcuts: targets are features, not files
("Python", "MenuDir", "PYTHON|Python (command line)", "python.exe",
default_feature.id, None, None, None, "python_icon.exe", 2, None, "TARGETDIR"),
# Advertising the Manual breaks on (some?) Win98, and the shortcut lacks an
# icon first.
#("Manual", "MenuDir", "MANUAL|Python Manuals", "documentation",
# htmlfiles.id, None, None, None, None, None, None, None),
## Non-advertised shortcuts: must be associated with a registry component
("Manual", "MenuDir", "MANUAL|Python Manuals", "REGISTRY.doc",
"[#%s]" % docfile, None,
None, None, None, None, None, None),
("Uninstall", "MenuDir", "UNINST|Uninstall Python", "REGISTRY",
SystemFolderName+"msiexec", "/x%s" % product_code,
None, None, None, None, None, None),
])
db.Commit()
def build_pdbzip():
pdbexclude = ['kill_python.pdb', 'make_buildinfo.pdb',
'make_versioninfo.pdb']
path = "python-%s%s-pdb.zip" % (full_current_version, msilib.arch_ext)
pdbzip = zipfile.ZipFile(path, 'w')
for f in glob.glob1(os.path.join(srcdir, PCBUILD), "*.pdb"):
if f not in pdbexclude and not f.endswith('_d.pdb'):
pdbzip.write(os.path.join(srcdir, PCBUILD, f), f)
pdbzip.close()
db,msiname = build_database()
try:
add_features(db)
add_ui(db)
add_files(db)
add_registry(db)
remove_old_versions(db)
db.Commit()
finally:
del db
# Merge CRT into MSI file. This requires the database to be closed.
mod_dir = os.path.join(os.environ["ProgramFiles"], "Common Files", "Merge Modules")
if msilib.Win64:
modules = ["Microsoft_VC90_CRT_x86_x64.msm", "policy_9_0_Microsoft_VC90_CRT_x86_x64.msm"]
else:
modules = ["Microsoft_VC90_CRT_x86.msm","policy_9_0_Microsoft_VC90_CRT_x86.msm"]
for i, n in enumerate(modules):
modules[i] = os.path.join(mod_dir, n)
def merge(msi, feature, rootdir, modules):
cab_and_filecount = []
# Step 1: Merge databases, extract cabfiles
m = msilib.MakeMerge2()
m.OpenLog("merge.log")
m.OpenDatabase(msi)
for module in modules:
print module
m.OpenModule(module,0)
m.Merge(feature, rootdir)
print "Errors:"
for e in m.Errors:
print e.Type, e.ModuleTable, e.DatabaseTable
print " Modkeys:",
for s in e.ModuleKeys: print s,
print
print " DBKeys:",
for s in e.DatabaseKeys: print s,
print
cabname = tempfile.mktemp(suffix=".cab")
m.ExtractCAB(cabname)
cab_and_filecount.append((cabname, len(m.ModuleFiles)))
m.CloseModule()
m.CloseDatabase(True)
m.CloseLog()
# Step 2: Add CAB files
i = msilib.MakeInstaller()
db = i.OpenDatabase(msi, constants.msiOpenDatabaseModeTransact)
v = db.OpenView("SELECT LastSequence FROM Media")
v.Execute(None)
maxmedia = -1
while 1:
r = v.Fetch()
if not r: break
seq = r.IntegerData(1)
if seq > maxmedia:
maxmedia = seq
print "Start of Media", maxmedia
for cabname, count in cab_and_filecount:
stream = "merged%d" % maxmedia
msilib.add_data(db, "Media",
[(maxmedia+1, maxmedia+count, None, "#"+stream, None, None)])
msilib.add_stream(db, stream, cabname)
os.unlink(cabname)
maxmedia += count
# The merge module sets ALLUSERS to 1 in the property table.
# This is undesired; delete that
v = db.OpenView("DELETE FROM Property WHERE Property='ALLUSERS'")
v.Execute(None)
v.Close()
db.Commit()
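# Worked example of the Media bookkeeping above (numbers assumed for
# illustration): if the original package ends at LastSequence == 1 and a merge
# module contributes 10 files, the loop adds the Media row
#     (2, 11, None, "#merged1", None, None)
# stores the extracted CAB as stream "merged1", and advances maxmedia to 11,
# so a second module would start at sequence 12.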
merge(msiname, "SharedCRT", "TARGETDIR", modules)
# certname (from config.py) should be (a substring of)
# the certificate subject, e.g. "Python Software Foundation"
if certname:
os.system('signtool sign /n "%s" /t http://timestamp.verisign.com/scripts/timestamp.dll %s' % (certname, msiname))
if pdbzip:
build_pdbzip()
| apache-2.0 |
noelbk/neutron-juniper | neutron/db/migration/alembic_migrations/versions/51b4de912379_cisco_nexus_ml2_mech.py | 28 | 2339 | # Copyright 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Cisco Nexus ML2 mechanism driver
Revision ID: 51b4de912379
Revises: 66a59a7f516
Create Date: 2013-08-20 15:31:40.553634
"""
# revision identifiers, used by Alembic.
revision = '51b4de912379'
down_revision = '66a59a7f516'
migration_for_plugins = [
'neutron.plugins.ml2.plugin.Ml2Plugin'
]
from alembic import op
import sqlalchemy as sa
from neutron.db import migration
def upgrade(active_plugins=None, options=None):
if not migration.should_run(active_plugins, migration_for_plugins):
return
op.create_table(
'cisco_ml2_nexusport_bindings',
sa.Column('binding_id', sa.Integer(), nullable=False),
sa.Column('port_id', sa.String(length=255), nullable=True),
sa.Column('vlan_id', sa.Integer(), autoincrement=False,
nullable=False),
sa.Column('switch_ip', sa.String(length=255), nullable=True),
sa.Column('instance_id', sa.String(length=255), nullable=True),
sa.PrimaryKeyConstraint('binding_id'),
)
op.create_table(
'cisco_ml2_credentials',
sa.Column('credential_id', sa.String(length=255), nullable=True),
sa.Column('tenant_id', sa.String(length=255), nullable=False),
sa.Column('credential_name', sa.String(length=255), nullable=False),
sa.Column('user_name', sa.String(length=255), nullable=True),
sa.Column('password', sa.String(length=255), nullable=True),
sa.PrimaryKeyConstraint('tenant_id', 'credential_name'),
)
def downgrade(active_plugins=None, options=None):
if not migration.should_run(active_plugins, migration_for_plugins):
return
op.drop_table('cisco_ml2_credentials')
op.drop_table('cisco_ml2_nexusport_bindings')
| apache-2.0 |
seanli9jan/tensorflow | tensorflow/contrib/model_pruning/examples/cifar10/cifar10_input.py | 46 | 9613 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Routine for decoding the CIFAR-10 binary file format."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow as tf
# Process images of this size. Note that this differs from the original CIFAR
# image size of 32 x 32. If one alters this number, then the entire model
# architecture will change and any model would need to be retrained.
IMAGE_SIZE = 24
# Global constants describing the CIFAR-10 data set.
NUM_CLASSES = 10
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 50000
NUM_EXAMPLES_PER_EPOCH_FOR_EVAL = 10000
def read_cifar10(filename_queue):
"""Reads and parses examples from CIFAR10 data files.
Recommendation: if you want N-way read parallelism, call this function
N times. This will give you N independent Readers reading different
files & positions within those files, which will give better mixing of
examples.
Args:
filename_queue: A queue of strings with the filenames to read from.
Returns:
An object representing a single example, with the following fields:
height: number of rows in the result (32)
width: number of columns in the result (32)
depth: number of color channels in the result (3)
key: a scalar string Tensor describing the filename & record number
for this example.
label: an int32 Tensor with the label in the range 0..9.
uint8image: a [height, width, depth] uint8 Tensor with the image data
"""
class CIFAR10Record(object):
pass
result = CIFAR10Record()
# Dimensions of the images in the CIFAR-10 dataset.
# See http://www.cs.toronto.edu/~kriz/cifar.html for a description of the
# input format.
label_bytes = 1 # 2 for CIFAR-100
result.height = 32
result.width = 32
result.depth = 3
image_bytes = result.height * result.width * result.depth
# Every record consists of a label followed by the image, with a
# fixed number of bytes for each.
record_bytes = label_bytes + image_bytes
# Read a record, getting filenames from the filename_queue. No
# header or footer in the CIFAR-10 format, so we leave header_bytes
# and footer_bytes at their default of 0.
reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
result.key, value = reader.read(filename_queue)
# Convert from a string to a vector of uint8 that is record_bytes long.
record_bytes = tf.decode_raw(value, tf.uint8)
# The first bytes represent the label, which we convert from uint8->int32.
result.label = tf.cast(
tf.strided_slice(record_bytes, [0], [label_bytes]), tf.int32)
# The remaining bytes after the label represent the image, which we reshape
# from [depth * height * width] to [depth, height, width].
depth_major = tf.reshape(
tf.strided_slice(record_bytes, [label_bytes],
[label_bytes + image_bytes]),
[result.depth, result.height, result.width])
# Convert from [depth, height, width] to [height, width, depth].
result.uint8image = tf.transpose(depth_major, [1, 2, 0])
return result
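# A minimal standalone sketch (not part of the input pipeline above) showing
# the on-disk CIFAR-10 record layout described in read_cifar10: one label byte
# followed by 32*32*3 = 3072 depth-major image bytes, i.e. 3073 bytes per
# record. The file path is only an assumed example.
def _read_first_record_with_numpy(path="cifar-10-batches-bin/data_batch_1.bin"):
  import numpy as np
  record_bytes = 1 + 32 * 32 * 3  # label + image
  with open(path, "rb") as f:
    raw = np.frombuffer(f.read(record_bytes), dtype=np.uint8)
  label = int(raw[0])  # 0..9
  depth_major = raw[1:].reshape(3, 32, 32)  # [depth, height, width]
  image = depth_major.transpose(1, 2, 0)  # [height, width, depth]
  return label, image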
def _generate_image_and_label_batch(image, label, min_queue_examples,
batch_size, shuffle):
"""Construct a queued batch of images and labels.
Args:
image: 3-D Tensor of [height, width, 3] of type.float32.
label: 1-D Tensor of type.int32
min_queue_examples: int32, minimum number of samples to retain
      in the queue that provides batches of examples.
batch_size: Number of images per batch.
shuffle: boolean indicating whether to use a shuffling queue.
Returns:
images: Images. 4D tensor of [batch_size, height, width, 3] size.
labels: Labels. 1D tensor of [batch_size] size.
"""
# Create a queue that shuffles the examples, and then
# read 'batch_size' images + labels from the example queue.
num_preprocess_threads = 16
if shuffle:
images, label_batch = tf.train.shuffle_batch(
[image, label],
batch_size=batch_size,
num_threads=num_preprocess_threads,
capacity=min_queue_examples + 3 * batch_size,
min_after_dequeue=min_queue_examples)
else:
images, label_batch = tf.train.batch(
[image, label],
batch_size=batch_size,
num_threads=num_preprocess_threads,
capacity=min_queue_examples + 3 * batch_size)
# Display the training images in the visualizer.
tf.summary.image('images', images)
return images, tf.reshape(label_batch, [batch_size])
def distorted_inputs(data_dir, batch_size):
"""Construct distorted input for CIFAR training using the Reader ops.
Args:
data_dir: Path to the CIFAR-10 data directory.
batch_size: Number of images per batch.
Returns:
images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
labels: Labels. 1D tensor of [batch_size] size.
"""
filenames = [
os.path.join(data_dir, 'data_batch_%d.bin' % i) for i in xrange(1, 6)
]
for f in filenames:
if not tf.gfile.Exists(f):
raise ValueError('Failed to find file: ' + f)
# Create a queue that produces the filenames to read.
filename_queue = tf.train.string_input_producer(filenames)
# Read examples from files in the filename queue.
read_input = read_cifar10(filename_queue)
reshaped_image = tf.cast(read_input.uint8image, tf.float32)
height = IMAGE_SIZE
width = IMAGE_SIZE
# Image processing for training the network. Note the many random
# distortions applied to the image.
# Randomly crop a [height, width] section of the image.
distorted_image = tf.random_crop(reshaped_image, [height, width, 3])
# Randomly flip the image horizontally.
distorted_image = tf.image.random_flip_left_right(distorted_image)
# Because these operations are not commutative, consider randomizing
  # the order of their operations.
distorted_image = tf.image.random_brightness(distorted_image, max_delta=63)
distorted_image = tf.image.random_contrast(
distorted_image, lower=0.2, upper=1.8)
# Subtract off the mean and divide by the variance of the pixels.
float_image = tf.image.per_image_standardization(distorted_image)
# Set the shapes of tensors.
float_image.set_shape([height, width, 3])
read_input.label.set_shape([1])
# Ensure that the random shuffling has good mixing properties.
min_fraction_of_examples_in_queue = 0.4
min_queue_examples = int(
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN * min_fraction_of_examples_in_queue)
print('Filling queue with %d CIFAR images before starting to train. '
'This will take a few minutes.' % min_queue_examples)
# Generate a batch of images and labels by building up a queue of examples.
return _generate_image_and_label_batch(
float_image,
read_input.label,
min_queue_examples,
batch_size,
shuffle=True)
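# Worked numbers for the queue sizing above (batch size assumed for
# illustration): min_queue_examples = int(50000 * 0.4) = 20000, so with
# batch_size = 128 the shuffle queue capacity is 20000 + 3 * 128 = 20384 and
# at least 20000 examples remain queued after each dequeue.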
def inputs(eval_data, data_dir, batch_size):
"""Construct input for CIFAR evaluation using the Reader ops.
Args:
eval_data: bool, indicating if one should use the train or eval data set.
data_dir: Path to the CIFAR-10 data directory.
batch_size: Number of images per batch.
Returns:
images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
labels: Labels. 1D tensor of [batch_size] size.
"""
if not eval_data:
filenames = [
os.path.join(data_dir, 'data_batch_%d.bin' % i) for i in xrange(1, 6)
]
num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN
else:
filenames = [os.path.join(data_dir, 'test_batch.bin')]
num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_EVAL
for f in filenames:
if not tf.gfile.Exists(f):
raise ValueError('Failed to find file: ' + f)
# Create a queue that produces the filenames to read.
filename_queue = tf.train.string_input_producer(filenames)
# Read examples from files in the filename queue.
read_input = read_cifar10(filename_queue)
reshaped_image = tf.cast(read_input.uint8image, tf.float32)
height = IMAGE_SIZE
width = IMAGE_SIZE
# Image processing for evaluation.
# Crop the central [height, width] of the image.
resized_image = tf.image.resize_image_with_crop_or_pad(
reshaped_image, width, height)
# Subtract off the mean and divide by the variance of the pixels.
float_image = tf.image.per_image_standardization(resized_image)
# Set the shapes of tensors.
float_image.set_shape([height, width, 3])
read_input.label.set_shape([1])
# Ensure that the random shuffling has good mixing properties.
min_fraction_of_examples_in_queue = 0.4
min_queue_examples = int(
num_examples_per_epoch * min_fraction_of_examples_in_queue)
# Generate a batch of images and labels by building up a queue of examples.
return _generate_image_and_label_batch(
float_image,
read_input.label,
min_queue_examples,
batch_size,
shuffle=False)
| apache-2.0 |
adfernandes/pcp | src/pcp/pidstat/test/none_handler_printer_decorator_test.py | 6 | 1463 | #!/usr/bin/env pmpython
#
# Copyright (C) 2016 Sitaram Shelke.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation; either version 2 of the License, or (at your
# option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# for more details.
#
import unittest
from mock import Mock
from pcp_pidstat import NoneHandlingPrinterDecorator
class TestNoneHandlingPrinterDecorator(unittest.TestCase):
def test_print_report_without_none_values(self):
printer = Mock()
printer.Print = Mock()
printer_decorator = NoneHandlingPrinterDecorator(printer)
printer_decorator.Print("123\t1000\t1\t2.43\t1.24\t0.0\t3.67\t1\tprocess_1")
printer.Print.assert_called_with("123\t1000\t1\t2.43\t1.24\t0.0\t3.67\t1\tprocess_1")
def test_print_report_with_none_values(self):
printer = Mock()
printer.Print = Mock()
printer_decorator = NoneHandlingPrinterDecorator(printer)
printer_decorator.Print("123\t1000\t1\tNone\t1.24\t0.0\tNone\t1\tprocess_1")
printer.Print.assert_called_with("123\t1000\t1\t?\t1.24\t0.0\t?\t1\tprocess_1")
if __name__ == "__main__":
unittest.main()
| lgpl-2.1 |
solintegra/addons | lunch/wizard/lunch_cancel.py | 440 | 1274 | # -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2012 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import fields, osv
class lunch_cancel(osv.Model):
""" lunch cancel """
_name = 'lunch.cancel'
_description = 'cancel lunch order'
def cancel(self,cr,uid,ids,context=None):
return self.pool.get('lunch.order.line').cancel(cr, uid, ids, context=context)
| agpl-3.0 |
salamer/django | django/forms/models.py | 72 | 54654 | """
Helper functions for creating Form classes from Django models
and database field objects.
"""
from __future__ import unicode_literals
from collections import OrderedDict
from itertools import chain
from django.core.exceptions import (
NON_FIELD_ERRORS, FieldError, ImproperlyConfigured, ValidationError,
)
from django.forms.fields import ChoiceField, Field
from django.forms.forms import BaseForm, DeclarativeFieldsMetaclass
from django.forms.formsets import BaseFormSet, formset_factory
from django.forms.utils import ErrorList
from django.forms.widgets import (
HiddenInput, MultipleHiddenInput, SelectMultiple,
)
from django.utils import six
from django.utils.encoding import force_text, smart_text
from django.utils.text import capfirst, get_text_list
from django.utils.translation import ugettext, ugettext_lazy as _
__all__ = (
'ModelForm', 'BaseModelForm', 'model_to_dict', 'fields_for_model',
'ModelChoiceField', 'ModelMultipleChoiceField', 'ALL_FIELDS',
'BaseModelFormSet', 'modelformset_factory', 'BaseInlineFormSet',
'inlineformset_factory', 'modelform_factory',
)
ALL_FIELDS = '__all__'
def construct_instance(form, instance, fields=None, exclude=None):
"""
Constructs and returns a model instance from the bound ``form``'s
``cleaned_data``, but does not save the returned instance to the
database.
"""
from django.db import models
opts = instance._meta
cleaned_data = form.cleaned_data
file_field_list = []
for f in opts.fields:
if not f.editable or isinstance(f, models.AutoField) \
or f.name not in cleaned_data:
continue
if fields is not None and f.name not in fields:
continue
if exclude and f.name in exclude:
continue
# Defer saving file-type fields until after the other fields, so a
# callable upload_to can use the values from other fields.
if isinstance(f, models.FileField):
file_field_list.append(f)
else:
f.save_form_data(instance, cleaned_data[f.name])
for f in file_field_list:
f.save_form_data(instance, cleaned_data[f.name])
return instance
# ModelForms #################################################################
def model_to_dict(instance, fields=None, exclude=None):
"""
Returns a dict containing the data in ``instance`` suitable for passing as
a Form's ``initial`` keyword argument.
``fields`` is an optional list of field names. If provided, only the named
fields will be included in the returned dict.
``exclude`` is an optional list of field names. If provided, the named
fields will be excluded from the returned dict, even if they are listed in
the ``fields`` argument.
"""
# avoid a circular import
from django.db.models.fields.related import ManyToManyField
opts = instance._meta
data = {}
for f in chain(opts.concrete_fields, opts.virtual_fields, opts.many_to_many):
if not getattr(f, 'editable', False):
continue
if fields and f.name not in fields:
continue
if exclude and f.name in exclude:
continue
if isinstance(f, ManyToManyField):
# If the object doesn't have a primary key yet, just use an empty
# list for its m2m fields. Calling f.value_from_object will raise
# an exception.
if instance.pk is None:
data[f.name] = []
else:
# MultipleChoiceWidget needs a list of pks, not object instances.
qs = f.value_from_object(instance)
if qs._result_cache is not None:
data[f.name] = [item.pk for item in qs]
else:
data[f.name] = list(qs.values_list('pk', flat=True))
else:
data[f.name] = f.value_from_object(instance)
return data
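# A minimal usage sketch for model_to_dict; the Author model and its field
# names are hypothetical and only illustrate the fields/exclude behaviour
# documented above:
#
#     author = Author.objects.get(pk=1)
#     model_to_dict(author)                          # every editable field
#     model_to_dict(author, fields=['name'])         # only 'name'
#     model_to_dict(author, exclude=['created_at'])  # all but 'created_at'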
def fields_for_model(model, fields=None, exclude=None, widgets=None,
formfield_callback=None, localized_fields=None,
labels=None, help_texts=None, error_messages=None,
field_classes=None):
"""
Returns a ``OrderedDict`` containing form fields for the given model.
``fields`` is an optional list of field names. If provided, only the named
fields will be included in the returned fields.
``exclude`` is an optional list of field names. If provided, the named
fields will be excluded from the returned fields, even if they are listed
in the ``fields`` argument.
``widgets`` is a dictionary of model field names mapped to a widget.
``formfield_callback`` is a callable that takes a model field and returns
a form field.
``localized_fields`` is a list of names of fields which should be localized.
``labels`` is a dictionary of model field names mapped to a label.
``help_texts`` is a dictionary of model field names mapped to a help text.
``error_messages`` is a dictionary of model field names mapped to a
dictionary of error messages.
``field_classes`` is a dictionary of model field names mapped to a form
field class.
"""
field_list = []
ignored = []
opts = model._meta
# Avoid circular import
from django.db.models.fields import Field as ModelField
sortable_virtual_fields = [f for f in opts.virtual_fields
if isinstance(f, ModelField)]
for f in sorted(chain(opts.concrete_fields, sortable_virtual_fields, opts.many_to_many)):
if not getattr(f, 'editable', False):
continue
if fields is not None and f.name not in fields:
continue
if exclude and f.name in exclude:
continue
kwargs = {}
if widgets and f.name in widgets:
kwargs['widget'] = widgets[f.name]
if localized_fields == ALL_FIELDS or (localized_fields and f.name in localized_fields):
kwargs['localize'] = True
if labels and f.name in labels:
kwargs['label'] = labels[f.name]
if help_texts and f.name in help_texts:
kwargs['help_text'] = help_texts[f.name]
if error_messages and f.name in error_messages:
kwargs['error_messages'] = error_messages[f.name]
if field_classes and f.name in field_classes:
kwargs['form_class'] = field_classes[f.name]
if formfield_callback is None:
formfield = f.formfield(**kwargs)
elif not callable(formfield_callback):
raise TypeError('formfield_callback must be a function or callable')
else:
formfield = formfield_callback(f, **kwargs)
if formfield:
field_list.append((f.name, formfield))
else:
ignored.append(f.name)
field_dict = OrderedDict(field_list)
if fields:
field_dict = OrderedDict(
[(f, field_dict.get(f)) for f in fields
if ((not exclude) or (exclude and f not in exclude)) and (f not in ignored)]
)
return field_dict
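# A minimal usage sketch for fields_for_model; the Author model and its field
# names are hypothetical:
#
#     from django.forms import Textarea
#     fields_for_model(Author, fields=['name', 'bio'],
#                      widgets={'bio': Textarea()})
#     # -> OrderedDict mapping 'name' and 'bio' to form fields, with 'bio'
#     #    rendered using the Textarea widget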
class ModelFormOptions(object):
def __init__(self, options=None):
self.model = getattr(options, 'model', None)
self.fields = getattr(options, 'fields', None)
self.exclude = getattr(options, 'exclude', None)
self.widgets = getattr(options, 'widgets', None)
self.localized_fields = getattr(options, 'localized_fields', None)
self.labels = getattr(options, 'labels', None)
self.help_texts = getattr(options, 'help_texts', None)
self.error_messages = getattr(options, 'error_messages', None)
self.field_classes = getattr(options, 'field_classes', None)
class ModelFormMetaclass(DeclarativeFieldsMetaclass):
def __new__(mcs, name, bases, attrs):
formfield_callback = attrs.pop('formfield_callback', None)
new_class = super(ModelFormMetaclass, mcs).__new__(mcs, name, bases, attrs)
if bases == (BaseModelForm,):
return new_class
opts = new_class._meta = ModelFormOptions(getattr(new_class, 'Meta', None))
# We check if a string was passed to `fields` or `exclude`,
# which is likely to be a mistake where the user typed ('foo') instead
# of ('foo',)
for opt in ['fields', 'exclude', 'localized_fields']:
value = getattr(opts, opt)
if isinstance(value, six.string_types) and value != ALL_FIELDS:
msg = ("%(model)s.Meta.%(opt)s cannot be a string. "
"Did you mean to type: ('%(value)s',)?" % {
'model': new_class.__name__,
'opt': opt,
'value': value,
})
raise TypeError(msg)
if opts.model:
# If a model is defined, extract form fields from it.
if opts.fields is None and opts.exclude is None:
raise ImproperlyConfigured(
"Creating a ModelForm without either the 'fields' attribute "
"or the 'exclude' attribute is prohibited; form %s "
"needs updating." % name
)
if opts.fields == ALL_FIELDS:
# Sentinel for fields_for_model to indicate "get the list of
# fields from the model"
opts.fields = None
fields = fields_for_model(opts.model, opts.fields, opts.exclude,
opts.widgets, formfield_callback,
opts.localized_fields, opts.labels,
opts.help_texts, opts.error_messages,
opts.field_classes)
# make sure opts.fields doesn't specify an invalid field
none_model_fields = [k for k, v in six.iteritems(fields) if not v]
missing_fields = (set(none_model_fields) -
set(new_class.declared_fields.keys()))
if missing_fields:
message = 'Unknown field(s) (%s) specified for %s'
message = message % (', '.join(missing_fields),
opts.model.__name__)
raise FieldError(message)
# Override default model fields with any custom declared ones
# (plus, include all the other declared fields).
fields.update(new_class.declared_fields)
else:
fields = new_class.declared_fields
new_class.base_fields = fields
return new_class
class BaseModelForm(BaseForm):
def __init__(self, data=None, files=None, auto_id='id_%s', prefix=None,
initial=None, error_class=ErrorList, label_suffix=None,
empty_permitted=False, instance=None):
opts = self._meta
if opts.model is None:
raise ValueError('ModelForm has no model class specified.')
if instance is None:
# if we didn't get an instance, instantiate a new one
self.instance = opts.model()
object_data = {}
else:
self.instance = instance
object_data = model_to_dict(instance, opts.fields, opts.exclude)
# if initial was provided, it should override the values from instance
if initial is not None:
object_data.update(initial)
# self._validate_unique will be set to True by BaseModelForm.clean().
# It is False by default so overriding self.clean() and failing to call
# super will stop validate_unique from being called.
self._validate_unique = False
super(BaseModelForm, self).__init__(data, files, auto_id, prefix, object_data,
error_class, label_suffix, empty_permitted)
# Apply ``limit_choices_to`` to each field.
for field_name in self.fields:
formfield = self.fields[field_name]
if hasattr(formfield, 'queryset') and hasattr(formfield, 'get_limit_choices_to'):
limit_choices_to = formfield.get_limit_choices_to()
if limit_choices_to is not None:
formfield.queryset = formfield.queryset.complex_filter(limit_choices_to)
def _get_validation_exclusions(self):
"""
For backwards-compatibility, several types of fields need to be
excluded from model validation. See the following tickets for
details: #12507, #12521, #12553
"""
exclude = []
# Build up a list of fields that should be excluded from model field
# validation and unique checks.
for f in self.instance._meta.fields:
field = f.name
# Exclude fields that aren't on the form. The developer may be
# adding these values to the model after form validation.
if field not in self.fields:
exclude.append(f.name)
# Don't perform model validation on fields that were defined
# manually on the form and excluded via the ModelForm's Meta
# class. See #12901.
elif self._meta.fields and field not in self._meta.fields:
exclude.append(f.name)
elif self._meta.exclude and field in self._meta.exclude:
exclude.append(f.name)
# Exclude fields that failed form validation. There's no need for
# the model fields to validate them as well.
elif field in self._errors.keys():
exclude.append(f.name)
# Exclude empty fields that are not required by the form, if the
# underlying model field is required. This keeps the model field
# from raising a required error. Note: don't exclude the field from
# validation if the model field allows blanks. If it does, the blank
# value may be included in a unique check, so cannot be excluded
# from validation.
else:
form_field = self.fields[field]
field_value = self.cleaned_data.get(field)
if not f.blank and not form_field.required and field_value in form_field.empty_values:
exclude.append(f.name)
return exclude
def clean(self):
self._validate_unique = True
return self.cleaned_data
def _update_errors(self, errors):
# Override any validation error messages defined at the model level
# with those defined at the form level.
opts = self._meta
for field, messages in errors.error_dict.items():
if (field == NON_FIELD_ERRORS and opts.error_messages and
NON_FIELD_ERRORS in opts.error_messages):
error_messages = opts.error_messages[NON_FIELD_ERRORS]
elif field in self.fields:
error_messages = self.fields[field].error_messages
else:
continue
for message in messages:
if (isinstance(message, ValidationError) and
message.code in error_messages):
message.message = error_messages[message.code]
self.add_error(None, errors)
def _post_clean(self):
opts = self._meta
exclude = self._get_validation_exclusions()
# Foreign Keys being used to represent inline relationships
# are excluded from basic field value validation. This is for two
# reasons: firstly, the value may not be supplied (#12507; the
# case of providing new values to the admin); secondly the
# object being referred to may not yet fully exist (#12749).
# However, these fields *must* be included in uniqueness checks,
# so this can't be part of _get_validation_exclusions().
for name, field in self.fields.items():
if isinstance(field, InlineForeignKeyField):
exclude.append(name)
# Update the model instance with self.cleaned_data.
self.instance = construct_instance(self, self.instance, opts.fields, exclude)
try:
self.instance.full_clean(exclude=exclude, validate_unique=False)
except ValidationError as e:
self._update_errors(e)
# Validate uniqueness if needed.
if self._validate_unique:
self.validate_unique()
def validate_unique(self):
"""
Calls the instance's validate_unique() method and updates the form's
validation errors if any were raised.
"""
exclude = self._get_validation_exclusions()
try:
self.instance.validate_unique(exclude=exclude)
except ValidationError as e:
self._update_errors(e)
def _save_m2m(self):
"""
Save the many-to-many fields and generic relations for this form.
"""
cleaned_data = self.cleaned_data
exclude = self._meta.exclude
fields = self._meta.fields
opts = self.instance._meta
# Note that for historical reasons we want to include also
# virtual_fields here. (GenericRelation was previously a fake
# m2m field).
for f in chain(opts.many_to_many, opts.virtual_fields):
if not hasattr(f, 'save_form_data'):
continue
if fields and f.name not in fields:
continue
if exclude and f.name in exclude:
continue
if f.name in cleaned_data:
f.save_form_data(self.instance, cleaned_data[f.name])
def save(self, commit=True):
"""
Save this form's self.instance object if commit=True. Otherwise, add
a save_m2m() method to the form which can be called after the instance
is saved manually at a later time. Return the model instance.
"""
if self.errors:
raise ValueError(
"The %s could not be %s because the data didn't validate." % (
self.instance._meta.object_name,
'created' if self.instance._state.adding else 'changed',
)
)
if commit:
# If committing, save the instance and the m2m data immediately.
self.instance.save()
self._save_m2m()
else:
# If not committing, add a method to the form to allow deferred
# saving of m2m data.
self.save_m2m = self._save_m2m
return self.instance
save.alters_data = True
class ModelForm(six.with_metaclass(ModelFormMetaclass, BaseModelForm)):
pass
def modelform_factory(model, form=ModelForm, fields=None, exclude=None,
formfield_callback=None, widgets=None, localized_fields=None,
labels=None, help_texts=None, error_messages=None,
field_classes=None):
"""
Returns a ModelForm containing form fields for the given model.
``fields`` is an optional list of field names. If provided, only the named
fields will be included in the returned fields. If omitted or '__all__',
all fields will be used.
``exclude`` is an optional list of field names. If provided, the named
fields will be excluded from the returned fields, even if they are listed
in the ``fields`` argument.
``widgets`` is a dictionary of model field names mapped to a widget.
``localized_fields`` is a list of names of fields which should be localized.
``formfield_callback`` is a callable that takes a model field and returns
a form field.
``labels`` is a dictionary of model field names mapped to a label.
``help_texts`` is a dictionary of model field names mapped to a help text.
``error_messages`` is a dictionary of model field names mapped to a
dictionary of error messages.
``field_classes`` is a dictionary of model field names mapped to a form
field class.
"""
# Create the inner Meta class. FIXME: ideally, we should be able to
# construct a ModelForm without creating and passing in a temporary
# inner class.
# Build up a list of attributes that the Meta object will have.
attrs = {'model': model}
if fields is not None:
attrs['fields'] = fields
if exclude is not None:
attrs['exclude'] = exclude
if widgets is not None:
attrs['widgets'] = widgets
if localized_fields is not None:
attrs['localized_fields'] = localized_fields
if labels is not None:
attrs['labels'] = labels
if help_texts is not None:
attrs['help_texts'] = help_texts
if error_messages is not None:
attrs['error_messages'] = error_messages
if field_classes is not None:
attrs['field_classes'] = field_classes
# If parent form class already has an inner Meta, the Meta we're
# creating needs to inherit from the parent's inner meta.
parent = (object,)
if hasattr(form, 'Meta'):
parent = (form.Meta, object)
Meta = type(str('Meta'), parent, attrs)
# Give this new form class a reasonable name.
class_name = model.__name__ + str('Form')
# Class attributes for the new form class.
form_class_attrs = {
'Meta': Meta,
'formfield_callback': formfield_callback
}
if (getattr(Meta, 'fields', None) is None and
getattr(Meta, 'exclude', None) is None):
raise ImproperlyConfigured(
"Calling modelform_factory without defining 'fields' or "
"'exclude' explicitly is prohibited."
)
# Instantiate type(form) in order to use the same metaclass as form.
return type(form)(class_name, (form,), form_class_attrs)
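# A minimal usage sketch for modelform_factory; the Book model and field names
# are hypothetical:
#
#     BookForm = modelform_factory(Book, fields=['title', 'author'])
#     form = BookForm(data={'title': 'Example', 'author': author.pk})
#     if form.is_valid():
#         book = form.save()
#
# Omitting both 'fields' and 'exclude' raises ImproperlyConfigured, as
# enforced above; fields='__all__' opts in to every editable model field.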
# ModelFormSets ##############################################################
class BaseModelFormSet(BaseFormSet):
"""
A ``FormSet`` for editing a queryset and/or adding new objects to it.
"""
model = None
def __init__(self, data=None, files=None, auto_id='id_%s', prefix=None,
queryset=None, **kwargs):
self.queryset = queryset
self.initial_extra = kwargs.pop('initial', None)
defaults = {'data': data, 'files': files, 'auto_id': auto_id, 'prefix': prefix}
defaults.update(kwargs)
super(BaseModelFormSet, self).__init__(**defaults)
def initial_form_count(self):
"""Returns the number of forms that are required in this FormSet."""
if not (self.data or self.files):
return len(self.get_queryset())
return super(BaseModelFormSet, self).initial_form_count()
def _existing_object(self, pk):
if not hasattr(self, '_object_dict'):
self._object_dict = {o.pk: o for o in self.get_queryset()}
return self._object_dict.get(pk)
def _get_to_python(self, field):
"""
If the field is a related field, fetch the concrete field's (that
is, the ultimate pointed-to field's) to_python.
"""
while field.remote_field is not None:
field = field.remote_field.get_related_field()
return field.to_python
def _construct_form(self, i, **kwargs):
if self.is_bound and i < self.initial_form_count():
pk_key = "%s-%s" % (self.add_prefix(i), self.model._meta.pk.name)
pk = self.data[pk_key]
pk_field = self.model._meta.pk
to_python = self._get_to_python(pk_field)
pk = to_python(pk)
kwargs['instance'] = self._existing_object(pk)
if i < self.initial_form_count() and 'instance' not in kwargs:
kwargs['instance'] = self.get_queryset()[i]
if i >= self.initial_form_count() and self.initial_extra:
# Set initial values for extra forms
try:
kwargs['initial'] = self.initial_extra[i - self.initial_form_count()]
except IndexError:
pass
return super(BaseModelFormSet, self)._construct_form(i, **kwargs)
def get_queryset(self):
if not hasattr(self, '_queryset'):
if self.queryset is not None:
qs = self.queryset
else:
qs = self.model._default_manager.get_queryset()
# If the queryset isn't already ordered we need to add an
# artificial ordering here to make sure that all formsets
# constructed from this queryset have the same form order.
if not qs.ordered:
qs = qs.order_by(self.model._meta.pk.name)
# Removed queryset limiting here. As per discussion re: #13023
# on django-dev, max_num should not prevent existing
# related objects/inlines from being displayed.
self._queryset = qs
return self._queryset
def save_new(self, form, commit=True):
"""Saves and returns a new model instance for the given form."""
return form.save(commit=commit)
def save_existing(self, form, instance, commit=True):
"""Saves and returns an existing model instance for the given form."""
return form.save(commit=commit)
def delete_existing(self, obj, commit=True):
"""Deletes an existing model instance."""
if commit:
obj.delete()
def save(self, commit=True):
"""Saves model instances for every form, adding and changing instances
as necessary, and returns the list of instances.
"""
if not commit:
self.saved_forms = []
def save_m2m():
for form in self.saved_forms:
form.save_m2m()
self.save_m2m = save_m2m
return self.save_existing_objects(commit) + self.save_new_objects(commit)
save.alters_data = True
def clean(self):
self.validate_unique()
def validate_unique(self):
# Collect unique_checks and date_checks to run from all the forms.
all_unique_checks = set()
all_date_checks = set()
forms_to_delete = self.deleted_forms
valid_forms = [form for form in self.forms if form.is_valid() and form not in forms_to_delete]
for form in valid_forms:
exclude = form._get_validation_exclusions()
unique_checks, date_checks = form.instance._get_unique_checks(exclude=exclude)
all_unique_checks = all_unique_checks.union(set(unique_checks))
all_date_checks = all_date_checks.union(set(date_checks))
errors = []
# Do each of the unique checks (unique and unique_together)
for uclass, unique_check in all_unique_checks:
seen_data = set()
for form in valid_forms:
# get data for each field of each of unique_check
row_data = (form.cleaned_data[field]
for field in unique_check if field in form.cleaned_data)
# Reduce Model instances to their primary key values
row_data = tuple(d._get_pk_val() if hasattr(d, '_get_pk_val') else d
for d in row_data)
if row_data and None not in row_data:
# if we've already seen it then we have a uniqueness failure
if row_data in seen_data:
# poke error messages into the right places and mark
# the form as invalid
errors.append(self.get_unique_error_message(unique_check))
form._errors[NON_FIELD_ERRORS] = self.error_class([self.get_form_error()])
# remove the data from the cleaned_data dict since it was invalid
for field in unique_check:
if field in form.cleaned_data:
del form.cleaned_data[field]
# mark the data as seen
seen_data.add(row_data)
# iterate over each of the date checks now
for date_check in all_date_checks:
seen_data = set()
uclass, lookup, field, unique_for = date_check
for form in valid_forms:
# see if we have data for both fields
if (form.cleaned_data and form.cleaned_data[field] is not None
and form.cleaned_data[unique_for] is not None):
# if it's a date lookup we need to get the data for all the fields
if lookup == 'date':
date = form.cleaned_data[unique_for]
date_data = (date.year, date.month, date.day)
# otherwise it's just the attribute on the date/datetime
# object
else:
date_data = (getattr(form.cleaned_data[unique_for], lookup),)
data = (form.cleaned_data[field],) + date_data
# if we've already seen it then we have a uniqueness failure
if data in seen_data:
# poke error messages into the right places and mark
# the form as invalid
errors.append(self.get_date_error_message(date_check))
form._errors[NON_FIELD_ERRORS] = self.error_class([self.get_form_error()])
# remove the data from the cleaned_data dict since it was invalid
del form.cleaned_data[field]
# mark the data as seen
seen_data.add(data)
if errors:
raise ValidationError(errors)
def get_unique_error_message(self, unique_check):
if len(unique_check) == 1:
return ugettext("Please correct the duplicate data for %(field)s.") % {
"field": unique_check[0],
}
else:
return ugettext("Please correct the duplicate data for %(field)s, "
"which must be unique.") % {
"field": get_text_list(unique_check, six.text_type(_("and"))),
}
def get_date_error_message(self, date_check):
return ugettext("Please correct the duplicate data for %(field_name)s "
"which must be unique for the %(lookup)s in %(date_field)s.") % {
'field_name': date_check[2],
'date_field': date_check[3],
'lookup': six.text_type(date_check[1]),
}
def get_form_error(self):
return ugettext("Please correct the duplicate values below.")
def save_existing_objects(self, commit=True):
self.changed_objects = []
self.deleted_objects = []
if not self.initial_forms:
return []
saved_instances = []
forms_to_delete = self.deleted_forms
for form in self.initial_forms:
obj = form.instance
if form in forms_to_delete:
# If the pk is None, it means that the object can't be
# deleted again. Possible reason for this is that the
# object was already deleted from the DB. Refs #14877.
if obj.pk is None:
continue
self.deleted_objects.append(obj)
self.delete_existing(obj, commit=commit)
elif form.has_changed():
self.changed_objects.append((obj, form.changed_data))
saved_instances.append(self.save_existing(form, obj, commit=commit))
if not commit:
self.saved_forms.append(form)
return saved_instances
def save_new_objects(self, commit=True):
self.new_objects = []
for form in self.extra_forms:
if not form.has_changed():
continue
# If someone has marked an add form for deletion, don't save the
# object.
if self.can_delete and self._should_delete_form(form):
continue
self.new_objects.append(self.save_new(form, commit=commit))
if not commit:
self.saved_forms.append(form)
return self.new_objects
def add_fields(self, form, index):
"""Add a hidden field for the object's primary key."""
from django.db.models import AutoField, OneToOneField, ForeignKey
self._pk_field = pk = self.model._meta.pk
# If a pk isn't editable, then it won't be on the form, so we need to
# add it here so we can tell which object is which when we get the
# data back. Generally, pk.editable should be false, but for some
# reason the editable attribute of auto_created pk fields and AutoFields
# is True, so check for that as well.
def pk_is_not_editable(pk):
return ((not pk.editable) or (pk.auto_created or isinstance(pk, AutoField))
or (pk.remote_field and pk.remote_field.parent_link and pk_is_not_editable(pk.remote_field.model._meta.pk)))
if pk_is_not_editable(pk) or pk.name not in form.fields:
if form.is_bound:
# If we're adding the related instance, ignore its primary key
# as it could be an auto-generated default which isn't actually
# in the database.
pk_value = None if form.instance._state.adding else form.instance.pk
else:
try:
if index is not None:
pk_value = self.get_queryset()[index].pk
else:
pk_value = None
except IndexError:
pk_value = None
if isinstance(pk, OneToOneField) or isinstance(pk, ForeignKey):
qs = pk.remote_field.model._default_manager.get_queryset()
else:
qs = self.model._default_manager.get_queryset()
qs = qs.using(form.instance._state.db)
if form._meta.widgets:
widget = form._meta.widgets.get(self._pk_field.name, HiddenInput)
else:
widget = HiddenInput
form.fields[self._pk_field.name] = ModelChoiceField(qs, initial=pk_value, required=False, widget=widget)
super(BaseModelFormSet, self).add_fields(form, index)
def modelformset_factory(model, form=ModelForm, formfield_callback=None,
formset=BaseModelFormSet, extra=1, can_delete=False,
can_order=False, max_num=None, fields=None, exclude=None,
widgets=None, validate_max=False, localized_fields=None,
labels=None, help_texts=None, error_messages=None,
min_num=None, validate_min=False, field_classes=None):
"""
Returns a FormSet class for the given Django model class.
"""
meta = getattr(form, 'Meta', None)
if meta is None:
meta = type(str('Meta'), (object,), {})
if (getattr(meta, 'fields', fields) is None and
getattr(meta, 'exclude', exclude) is None):
raise ImproperlyConfigured(
"Calling modelformset_factory without defining 'fields' or "
"'exclude' explicitly is prohibited."
)
form = modelform_factory(model, form=form, fields=fields, exclude=exclude,
formfield_callback=formfield_callback,
widgets=widgets, localized_fields=localized_fields,
labels=labels, help_texts=help_texts,
error_messages=error_messages, field_classes=field_classes)
FormSet = formset_factory(form, formset, extra=extra, min_num=min_num, max_num=max_num,
can_order=can_order, can_delete=can_delete,
validate_min=validate_min, validate_max=validate_max)
FormSet.model = model
return FormSet
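# A minimal usage sketch of modelformset_factory. The ``Author`` model, app
# path and field names are hypothetical, not part of this module:
#
#   from django.forms import modelformset_factory
#   from myapp.models import Author  # hypothetical model
#
#   AuthorFormSet = modelformset_factory(Author, fields=('name', 'title'), extra=2)
#   formset = AuthorFormSet(queryset=Author.objects.all())
#
# Calling the factory without ``fields`` or ``exclude`` (here or on the form's
# Meta) raises ImproperlyConfigured, as enforced above.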
# InlineFormSets #############################################################
class BaseInlineFormSet(BaseModelFormSet):
"""A formset for child objects related to a parent."""
def __init__(self, data=None, files=None, instance=None,
save_as_new=False, prefix=None, queryset=None, **kwargs):
if instance is None:
self.instance = self.fk.remote_field.model()
else:
self.instance = instance
self.save_as_new = save_as_new
if queryset is None:
queryset = self.model._default_manager
if self.instance.pk is not None:
qs = queryset.filter(**{self.fk.name: self.instance})
else:
qs = queryset.none()
super(BaseInlineFormSet, self).__init__(data, files, prefix=prefix,
queryset=qs, **kwargs)
def initial_form_count(self):
if self.save_as_new:
return 0
return super(BaseInlineFormSet, self).initial_form_count()
def _construct_form(self, i, **kwargs):
form = super(BaseInlineFormSet, self)._construct_form(i, **kwargs)
if self.save_as_new:
# Remove the primary key from the form's data, we are only
# creating new instances
form.data[form.add_prefix(self._pk_field.name)] = None
# Remove the foreign key from the form's data
form.data[form.add_prefix(self.fk.name)] = None
# Set the fk value here so that the form can do its validation.
fk_value = self.instance.pk
if self.fk.remote_field.field_name != self.fk.remote_field.model._meta.pk.name:
fk_value = getattr(self.instance, self.fk.remote_field.field_name)
fk_value = getattr(fk_value, 'pk', fk_value)
setattr(form.instance, self.fk.get_attname(), fk_value)
return form
@classmethod
def get_default_prefix(cls):
return cls.fk.remote_field.get_accessor_name(model=cls.model).replace('+', '')
def save_new(self, form, commit=True):
# Ensure the latest copy of the related instance is present on each
# form (it may have been saved after the formset was originally
# instantiated).
setattr(form.instance, self.fk.name, self.instance)
# Use commit=False so we can assign the parent key afterwards, then
# save the object.
obj = form.save(commit=False)
pk_value = getattr(self.instance, self.fk.remote_field.field_name)
setattr(obj, self.fk.get_attname(), getattr(pk_value, 'pk', pk_value))
if commit:
obj.save()
# form.save_m2m() can be called via the formset later on if commit=False
if commit and hasattr(form, 'save_m2m'):
form.save_m2m()
return obj
def add_fields(self, form, index):
super(BaseInlineFormSet, self).add_fields(form, index)
if self._pk_field == self.fk:
name = self._pk_field.name
kwargs = {'pk_field': True}
else:
# The foreign key field might not be on the form, so we poke at the
# Model field to get the label, since we need that for error messages.
name = self.fk.name
kwargs = {
'label': getattr(form.fields.get(name), 'label', capfirst(self.fk.verbose_name))
}
if self.fk.remote_field.field_name != self.fk.remote_field.model._meta.pk.name:
kwargs['to_field'] = self.fk.remote_field.field_name
# If we're adding a new object, ignore a parent's auto-generated key
# as it will be regenerated on the save request.
if self.instance._state.adding:
if kwargs.get('to_field') is not None:
to_field = self.instance._meta.get_field(kwargs['to_field'])
else:
to_field = self.instance._meta.pk
if to_field.has_default():
setattr(self.instance, to_field.attname, None)
form.fields[name] = InlineForeignKeyField(self.instance, **kwargs)
# Add the generated field to form._meta.fields if it's defined to make
# sure validation isn't skipped on that field.
if form._meta.fields:
if isinstance(form._meta.fields, tuple):
form._meta.fields = list(form._meta.fields)
form._meta.fields.append(self.fk.name)
def get_unique_error_message(self, unique_check):
unique_check = [field for field in unique_check if field != self.fk.name]
return super(BaseInlineFormSet, self).get_unique_error_message(unique_check)
def _get_foreign_key(parent_model, model, fk_name=None, can_fail=False):
"""
Finds and returns the ForeignKey from model to parent if there is one
(returns None if can_fail is True and no such field exists). If fk_name is
provided, assume it is the name of the ForeignKey field. Unless can_fail is
True, an exception is raised if there is no ForeignKey from model to
parent_model.
"""
# avoid circular import
from django.db.models import ForeignKey
opts = model._meta
if fk_name:
fks_to_parent = [f for f in opts.fields if f.name == fk_name]
if len(fks_to_parent) == 1:
fk = fks_to_parent[0]
if not isinstance(fk, ForeignKey) or \
(fk.remote_field.model != parent_model and
fk.remote_field.model not in parent_model._meta.get_parent_list()):
raise ValueError(
"fk_name '%s' is not a ForeignKey to '%s'." % (fk_name, parent_model._meta.label)
)
elif len(fks_to_parent) == 0:
raise ValueError(
"'%s' has no field named '%s'." % (model._meta.label, fk_name)
)
else:
# Try to discover what the ForeignKey from model to parent_model is
fks_to_parent = [
f for f in opts.fields
if isinstance(f, ForeignKey)
and (f.remote_field.model == parent_model
or f.remote_field.model in parent_model._meta.get_parent_list())
]
if len(fks_to_parent) == 1:
fk = fks_to_parent[0]
elif len(fks_to_parent) == 0:
if can_fail:
return
raise ValueError(
"'%s' has no ForeignKey to '%s'." % (
model._meta.label,
parent_model._meta.label,
)
)
else:
raise ValueError(
"'%s' has more than one ForeignKey to '%s'." % (
model._meta.label,
parent_model._meta.label,
)
)
return fk
def inlineformset_factory(parent_model, model, form=ModelForm,
formset=BaseInlineFormSet, fk_name=None,
fields=None, exclude=None, extra=3, can_order=False,
can_delete=True, max_num=None, formfield_callback=None,
widgets=None, validate_max=False, localized_fields=None,
labels=None, help_texts=None, error_messages=None,
min_num=None, validate_min=False, field_classes=None):
"""
Returns an ``InlineFormSet`` for the given kwargs.
You must provide ``fk_name`` if ``model`` has more than one ``ForeignKey``
to ``parent_model``.
"""
fk = _get_foreign_key(parent_model, model, fk_name=fk_name)
# enforce a max_num=1 when the foreign key to the parent model is unique.
if fk.unique:
max_num = 1
kwargs = {
'form': form,
'formfield_callback': formfield_callback,
'formset': formset,
'extra': extra,
'can_delete': can_delete,
'can_order': can_order,
'fields': fields,
'exclude': exclude,
'min_num': min_num,
'max_num': max_num,
'widgets': widgets,
'validate_min': validate_min,
'validate_max': validate_max,
'localized_fields': localized_fields,
'labels': labels,
'help_texts': help_texts,
'error_messages': error_messages,
'field_classes': field_classes,
}
FormSet = modelformset_factory(model, **kwargs)
FormSet.fk = fk
return FormSet
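# A minimal usage sketch of inlineformset_factory. ``Author`` and ``Book`` are
# hypothetical models, with Book holding a single ForeignKey to Author:
#
#   from django.forms import inlineformset_factory
#   from myapp.models import Author, Book  # hypothetical models
#
#   BookFormSet = inlineformset_factory(Author, Book, fields=('title',), extra=1)
#   author = Author.objects.first()
#   formset = BookFormSet(instance=author)
#
# ``fk_name`` only needs to be passed when Book has more than one ForeignKey
# to Author, per the docstring above.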
# Fields #####################################################################
class InlineForeignKeyField(Field):
"""
A basic integer field that deals with validating the given value to a
given parent instance in an inline.
"""
widget = HiddenInput
default_error_messages = {
'invalid_choice': _('The inline foreign key did not match the parent instance primary key.'),
}
def __init__(self, parent_instance, *args, **kwargs):
self.parent_instance = parent_instance
self.pk_field = kwargs.pop("pk_field", False)
self.to_field = kwargs.pop("to_field", None)
if self.parent_instance is not None:
if self.to_field:
kwargs["initial"] = getattr(self.parent_instance, self.to_field)
else:
kwargs["initial"] = self.parent_instance.pk
kwargs["required"] = False
super(InlineForeignKeyField, self).__init__(*args, **kwargs)
def clean(self, value):
if value in self.empty_values:
if self.pk_field:
return None
# if there is no value act as we did before.
return self.parent_instance
# ensure we compare the values as equal types.
if self.to_field:
orig = getattr(self.parent_instance, self.to_field)
else:
orig = self.parent_instance.pk
if force_text(value) != force_text(orig):
raise ValidationError(self.error_messages['invalid_choice'], code='invalid_choice')
return self.parent_instance
def has_changed(self, initial, data):
return False
class ModelChoiceIterator(object):
def __init__(self, field):
self.field = field
self.queryset = field.queryset
def __iter__(self):
if self.field.empty_label is not None:
yield ("", self.field.empty_label)
for obj in self.queryset.iterator():
yield self.choice(obj)
def __len__(self):
return (len(self.queryset) +
(1 if self.field.empty_label is not None else 0))
def choice(self, obj):
return (self.field.prepare_value(obj), self.field.label_from_instance(obj))
class ModelChoiceField(ChoiceField):
"""A ChoiceField whose choices are a model QuerySet."""
# This class is a subclass of ChoiceField for purity, but it doesn't
# actually use any of ChoiceField's implementation.
default_error_messages = {
'invalid_choice': _('Select a valid choice. That choice is not one of'
' the available choices.'),
}
def __init__(self, queryset, empty_label="---------",
required=True, widget=None, label=None, initial=None,
help_text='', to_field_name=None, limit_choices_to=None,
*args, **kwargs):
if required and (initial is not None):
self.empty_label = None
else:
self.empty_label = empty_label
# Call Field instead of ChoiceField __init__() because we don't need
# ChoiceField.__init__().
Field.__init__(self, required, widget, label, initial, help_text,
*args, **kwargs)
self.queryset = queryset
self.limit_choices_to = limit_choices_to # limit the queryset later.
self.to_field_name = to_field_name
def get_limit_choices_to(self):
"""
Returns ``limit_choices_to`` for this form field.
If it is a callable, it will be invoked and the result will be
returned.
"""
if callable(self.limit_choices_to):
return self.limit_choices_to()
return self.limit_choices_to
def __deepcopy__(self, memo):
result = super(ChoiceField, self).__deepcopy__(memo)
# Need to force a new ModelChoiceIterator to be created, bug #11183
result.queryset = result.queryset
return result
def _get_queryset(self):
return self._queryset
def _set_queryset(self, queryset):
self._queryset = queryset
self.widget.choices = self.choices
queryset = property(_get_queryset, _set_queryset)
# This method is used by ModelChoiceIterator to create object labels.
# Override it to customize the label.
def label_from_instance(self, obj):
"""
This method is used to convert objects into strings; it's used to
generate the labels for the choices presented by this object. Subclasses
can override this method to customize the display of the choices.
"""
return smart_text(obj)
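# A hedged example of the customization described above; ``Author`` and its
# field names are hypothetical:
#
#   class AuthorChoiceField(ModelChoiceField):
#       def label_from_instance(self, obj):
#           # Display "Last, First" instead of str(obj).
#           return "%s, %s" % (obj.last_name, obj.first_name)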
def _get_choices(self):
# If self._choices is set, then somebody must have manually set
# the property self.choices. In this case, just return self._choices.
if hasattr(self, '_choices'):
return self._choices
# Otherwise, execute the QuerySet in self.queryset to determine the
# choices dynamically. Return a fresh ModelChoiceIterator that has not been
# consumed. Note that we're instantiating a new ModelChoiceIterator *each*
# time _get_choices() is called (and, thus, each time self.choices is
# accessed) so that we can ensure the QuerySet has not been consumed. This
# construct might look complicated but it allows for lazy evaluation of
# the queryset.
return ModelChoiceIterator(self)
choices = property(_get_choices, ChoiceField._set_choices)
def prepare_value(self, value):
if hasattr(value, '_meta'):
if self.to_field_name:
return value.serializable_value(self.to_field_name)
else:
return value.pk
return super(ModelChoiceField, self).prepare_value(value)
def to_python(self, value):
if value in self.empty_values:
return None
try:
key = self.to_field_name or 'pk'
value = self.queryset.get(**{key: value})
except (ValueError, TypeError, self.queryset.model.DoesNotExist):
raise ValidationError(self.error_messages['invalid_choice'], code='invalid_choice')
return value
def validate(self, value):
return Field.validate(self, value)
def has_changed(self, initial, data):
initial_value = initial if initial is not None else ''
data_value = data if data is not None else ''
return force_text(self.prepare_value(initial_value)) != force_text(data_value)
class ModelMultipleChoiceField(ModelChoiceField):
"""A MultipleChoiceField whose choices are a model QuerySet."""
widget = SelectMultiple
hidden_widget = MultipleHiddenInput
default_error_messages = {
'list': _('Enter a list of values.'),
'invalid_choice': _('Select a valid choice. %(value)s is not one of the'
' available choices.'),
'invalid_pk_value': _('"%(pk)s" is not a valid value for a primary key.')
}
def __init__(self, queryset, required=True, widget=None, label=None,
initial=None, help_text='', *args, **kwargs):
super(ModelMultipleChoiceField, self).__init__(queryset, None,
required, widget, label, initial, help_text, *args, **kwargs)
def to_python(self, value):
if not value:
return []
return list(self._check_values(value))
def clean(self, value):
if self.required and not value:
raise ValidationError(self.error_messages['required'], code='required')
elif not self.required and not value:
return self.queryset.none()
if not isinstance(value, (list, tuple)):
raise ValidationError(self.error_messages['list'], code='list')
qs = self._check_values(value)
# Since this overrides the inherited ModelChoiceField.clean
# we run custom validators here
self.run_validators(value)
return qs
def _check_values(self, value):
"""
Given a list of possible PK values, returns a QuerySet of the
corresponding objects. Raises a ValidationError if a given value is
invalid (not a valid PK, not in the queryset, etc.)
"""
key = self.to_field_name or 'pk'
# deduplicate given values to avoid creating many querysets or
# requiring the database backend to deduplicate efficiently.
try:
value = frozenset(value)
except TypeError:
# list of lists isn't hashable, for example
raise ValidationError(
self.error_messages['list'],
code='list',
)
for pk in value:
try:
self.queryset.filter(**{key: pk})
except (ValueError, TypeError):
raise ValidationError(
self.error_messages['invalid_pk_value'],
code='invalid_pk_value',
params={'pk': pk},
)
qs = self.queryset.filter(**{'%s__in' % key: value})
pks = set(force_text(getattr(o, key)) for o in qs)
for val in value:
if force_text(val) not in pks:
raise ValidationError(
self.error_messages['invalid_choice'],
code='invalid_choice',
params={'value': val},
)
return qs
def prepare_value(self, value):
if (hasattr(value, '__iter__') and
not isinstance(value, six.text_type) and
not hasattr(value, '_meta')):
return [super(ModelMultipleChoiceField, self).prepare_value(v) for v in value]
return super(ModelMultipleChoiceField, self).prepare_value(value)
def has_changed(self, initial, data):
if initial is None:
initial = []
if data is None:
data = []
if len(initial) != len(data):
return True
initial_set = set(force_text(value) for value in self.prepare_value(initial))
data_set = set(force_text(value) for value in data)
return data_set != initial_set
def modelform_defines_fields(form_class):
return (form_class is not None and (
hasattr(form_class, '_meta') and
(form_class._meta.fields is not None or
form_class._meta.exclude is not None)
))
| bsd-3-clause |
proxysh/Safejumper-for-Desktop | buildlinux/env64/lib/python2.7/site-packages/pycparser/c_generator.py | 36 | 13829 | #------------------------------------------------------------------------------
# pycparser: c_generator.py
#
# C code generator from pycparser AST nodes.
#
# Copyright (C) 2008-2015, Eli Bendersky
# License: BSD
#------------------------------------------------------------------------------
from . import c_ast
class CGenerator(object):
""" Uses the same visitor pattern as c_ast.NodeVisitor, but modified to
return a value from each visit method, using string accumulation in
generic_visit.
"""
def __init__(self):
# Statements start with indentation of self.indent_level spaces, using
# the _make_indent method
#
self.indent_level = 0
def _make_indent(self):
return ' ' * self.indent_level
def visit(self, node):
method = 'visit_' + node.__class__.__name__
return getattr(self, method, self.generic_visit)(node)
def generic_visit(self, node):
#~ print('generic:', type(node))
if node is None:
return ''
else:
return ''.join(self.visit(c) for c_name, c in node.children())
def visit_Constant(self, n):
return n.value
def visit_ID(self, n):
return n.name
def visit_Pragma(self, n):
ret = '#pragma'
if n.string:
ret += ' ' + n.string
return ret
def visit_ArrayRef(self, n):
arrref = self._parenthesize_unless_simple(n.name)
return arrref + '[' + self.visit(n.subscript) + ']'
def visit_StructRef(self, n):
sref = self._parenthesize_unless_simple(n.name)
return sref + n.type + self.visit(n.field)
def visit_FuncCall(self, n):
fref = self._parenthesize_unless_simple(n.name)
return fref + '(' + self.visit(n.args) + ')'
def visit_UnaryOp(self, n):
operand = self._parenthesize_unless_simple(n.expr)
if n.op == 'p++':
return '%s++' % operand
elif n.op == 'p--':
return '%s--' % operand
elif n.op == 'sizeof':
# Always parenthesize the argument of sizeof since it can be
# a name.
return 'sizeof(%s)' % self.visit(n.expr)
else:
return '%s%s' % (n.op, operand)
def visit_BinaryOp(self, n):
lval_str = self._parenthesize_if(n.left,
lambda d: not self._is_simple_node(d))
rval_str = self._parenthesize_if(n.right,
lambda d: not self._is_simple_node(d))
return '%s %s %s' % (lval_str, n.op, rval_str)
def visit_Assignment(self, n):
rval_str = self._parenthesize_if(
n.rvalue,
lambda n: isinstance(n, c_ast.Assignment))
return '%s %s %s' % (self.visit(n.lvalue), n.op, rval_str)
def visit_IdentifierType(self, n):
return ' '.join(n.names)
def _visit_expr(self, n):
if isinstance(n, c_ast.InitList):
return '{' + self.visit(n) + '}'
elif isinstance(n, c_ast.ExprList):
return '(' + self.visit(n) + ')'
else:
return self.visit(n)
def visit_Decl(self, n, no_type=False):
# no_type is used when a Decl is part of a DeclList, where the type is
# explicitly only for the first declaration in a list.
#
s = n.name if no_type else self._generate_decl(n)
if n.bitsize: s += ' : ' + self.visit(n.bitsize)
if n.init:
s += ' = ' + self._visit_expr(n.init)
return s
def visit_DeclList(self, n):
s = self.visit(n.decls[0])
if len(n.decls) > 1:
s += ', ' + ', '.join(self.visit_Decl(decl, no_type=True)
for decl in n.decls[1:])
return s
def visit_Typedef(self, n):
s = ''
if n.storage: s += ' '.join(n.storage) + ' '
s += self._generate_type(n.type)
return s
def visit_Cast(self, n):
s = '(' + self._generate_type(n.to_type) + ')'
return s + ' ' + self._parenthesize_unless_simple(n.expr)
def visit_ExprList(self, n):
visited_subexprs = []
for expr in n.exprs:
visited_subexprs.append(self._visit_expr(expr))
return ', '.join(visited_subexprs)
def visit_InitList(self, n):
visited_subexprs = []
for expr in n.exprs:
visited_subexprs.append(self._visit_expr(expr))
return ', '.join(visited_subexprs)
def visit_Enum(self, n):
s = 'enum'
if n.name: s += ' ' + n.name
if n.values:
s += ' {'
for i, enumerator in enumerate(n.values.enumerators):
s += enumerator.name
if enumerator.value:
s += ' = ' + self.visit(enumerator.value)
if i != len(n.values.enumerators) - 1:
s += ', '
s += '}'
return s
def visit_FuncDef(self, n):
decl = self.visit(n.decl)
self.indent_level = 0
body = self.visit(n.body)
if n.param_decls:
knrdecls = ';\n'.join(self.visit(p) for p in n.param_decls)
return decl + '\n' + knrdecls + ';\n' + body + '\n'
else:
return decl + '\n' + body + '\n'
def visit_FileAST(self, n):
s = ''
for ext in n.ext:
if isinstance(ext, c_ast.FuncDef):
s += self.visit(ext)
elif isinstance(ext, c_ast.Pragma):
s += self.visit(ext) + '\n'
else:
s += self.visit(ext) + ';\n'
return s
def visit_Compound(self, n):
s = self._make_indent() + '{\n'
self.indent_level += 2
if n.block_items:
s += ''.join(self._generate_stmt(stmt) for stmt in n.block_items)
self.indent_level -= 2
s += self._make_indent() + '}\n'
return s
def visit_EmptyStatement(self, n):
return ';'
def visit_ParamList(self, n):
return ', '.join(self.visit(param) for param in n.params)
def visit_Return(self, n):
s = 'return'
if n.expr: s += ' ' + self.visit(n.expr)
return s + ';'
def visit_Break(self, n):
return 'break;'
def visit_Continue(self, n):
return 'continue;'
def visit_TernaryOp(self, n):
s = '(' + self._visit_expr(n.cond) + ') ? '
s += '(' + self._visit_expr(n.iftrue) + ') : '
s += '(' + self._visit_expr(n.iffalse) + ')'
return s
def visit_If(self, n):
s = 'if ('
if n.cond: s += self.visit(n.cond)
s += ')\n'
s += self._generate_stmt(n.iftrue, add_indent=True)
if n.iffalse:
s += self._make_indent() + 'else\n'
s += self._generate_stmt(n.iffalse, add_indent=True)
return s
def visit_For(self, n):
s = 'for ('
if n.init: s += self.visit(n.init)
s += ';'
if n.cond: s += ' ' + self.visit(n.cond)
s += ';'
if n.next: s += ' ' + self.visit(n.next)
s += ')\n'
s += self._generate_stmt(n.stmt, add_indent=True)
return s
def visit_While(self, n):
s = 'while ('
if n.cond: s += self.visit(n.cond)
s += ')\n'
s += self._generate_stmt(n.stmt, add_indent=True)
return s
def visit_DoWhile(self, n):
s = 'do\n'
s += self._generate_stmt(n.stmt, add_indent=True)
s += self._make_indent() + 'while ('
if n.cond: s += self.visit(n.cond)
s += ');'
return s
def visit_Switch(self, n):
s = 'switch (' + self.visit(n.cond) + ')\n'
s += self._generate_stmt(n.stmt, add_indent=True)
return s
def visit_Case(self, n):
s = 'case ' + self.visit(n.expr) + ':\n'
for stmt in n.stmts:
s += self._generate_stmt(stmt, add_indent=True)
return s
def visit_Default(self, n):
s = 'default:\n'
for stmt in n.stmts:
s += self._generate_stmt(stmt, add_indent=True)
return s
def visit_Label(self, n):
return n.name + ':\n' + self._generate_stmt(n.stmt)
def visit_Goto(self, n):
return 'goto ' + n.name + ';'
def visit_EllipsisParam(self, n):
return '...'
def visit_Struct(self, n):
return self._generate_struct_union(n, 'struct')
def visit_Typename(self, n):
return self._generate_type(n.type)
def visit_Union(self, n):
return self._generate_struct_union(n, 'union')
def visit_NamedInitializer(self, n):
s = ''
for name in n.name:
if isinstance(name, c_ast.ID):
s += '.' + name.name
elif isinstance(name, c_ast.Constant):
s += '[' + name.value + ']'
s += ' = ' + self._visit_expr(n.expr)
return s
def visit_FuncDecl(self, n):
return self._generate_type(n)
def _generate_struct_union(self, n, name):
""" Generates code for structs and unions. name should be either
'struct' or union.
"""
s = name + ' ' + (n.name or '')
if n.decls:
s += '\n'
s += self._make_indent()
self.indent_level += 2
s += '{\n'
for decl in n.decls:
s += self._generate_stmt(decl)
self.indent_level -= 2
s += self._make_indent() + '}'
return s
def _generate_stmt(self, n, add_indent=False):
""" Generation from a statement node. This method exists as a wrapper
for individual visit_* methods to handle different treatment of
some statements in this context.
"""
typ = type(n)
if add_indent: self.indent_level += 2
indent = self._make_indent()
if add_indent: self.indent_level -= 2
if typ in (
c_ast.Decl, c_ast.Assignment, c_ast.Cast, c_ast.UnaryOp,
c_ast.BinaryOp, c_ast.TernaryOp, c_ast.FuncCall, c_ast.ArrayRef,
c_ast.StructRef, c_ast.Constant, c_ast.ID, c_ast.Typedef,
c_ast.ExprList):
# These can also appear in an expression context so no semicolon
# is added to them automatically
#
return indent + self.visit(n) + ';\n'
elif typ in (c_ast.Compound,):
# No extra indentation required before the opening brace of a
# compound - because it consists of multiple lines it has to
# compute its own indentation.
#
return self.visit(n)
else:
return indent + self.visit(n) + '\n'
def _generate_decl(self, n):
""" Generation from a Decl node.
"""
s = ''
if n.funcspec: s = ' '.join(n.funcspec) + ' '
if n.storage: s += ' '.join(n.storage) + ' '
s += self._generate_type(n.type)
return s
def _generate_type(self, n, modifiers=[]):
""" Recursive generation from a type node. n is the type node.
modifiers collects the PtrDecl, ArrayDecl and FuncDecl modifiers
encountered on the way down to a TypeDecl, to allow proper
generation from it.
"""
typ = type(n)
#~ print(n, modifiers)
if typ == c_ast.TypeDecl:
s = ''
if n.quals: s += ' '.join(n.quals) + ' '
s += self.visit(n.type)
nstr = n.declname if n.declname else ''
# Resolve modifiers.
# Wrap in parens to distinguish pointer to array and pointer to
# function syntax.
#
for i, modifier in enumerate(modifiers):
if isinstance(modifier, c_ast.ArrayDecl):
if (i != 0 and isinstance(modifiers[i - 1], c_ast.PtrDecl)):
nstr = '(' + nstr + ')'
nstr += '[' + self.visit(modifier.dim) + ']'
elif isinstance(modifier, c_ast.FuncDecl):
if (i != 0 and isinstance(modifiers[i - 1], c_ast.PtrDecl)):
nstr = '(' + nstr + ')'
nstr += '(' + self.visit(modifier.args) + ')'
elif isinstance(modifier, c_ast.PtrDecl):
if modifier.quals:
nstr = '* %s %s' % (' '.join(modifier.quals), nstr)
else:
nstr = '*' + nstr
if nstr: s += ' ' + nstr
return s
elif typ == c_ast.Decl:
return self._generate_decl(n.type)
elif typ == c_ast.Typename:
return self._generate_type(n.type)
elif typ == c_ast.IdentifierType:
return ' '.join(n.names) + ' '
elif typ in (c_ast.ArrayDecl, c_ast.PtrDecl, c_ast.FuncDecl):
return self._generate_type(n.type, modifiers + [n])
else:
return self.visit(n)
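# For example, ``int *a[5]`` (an array of pointers) and ``int (*a)[5]`` (a
# pointer to an array) differ only in the order in which the PtrDecl and
# ArrayDecl modifiers are collected; the parenthesization above keeps the two
# generated declarations distinct.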
def _parenthesize_if(self, n, condition):
""" Visits 'n' and returns its string representation, parenthesized
if the condition function applied to the node returns True.
"""
s = self._visit_expr(n)
if condition(n):
return '(' + s + ')'
else:
return s
def _parenthesize_unless_simple(self, n):
""" Common use case for _parenthesize_if
"""
return self._parenthesize_if(n, lambda d: not self._is_simple_node(d))
def _is_simple_node(self, n):
""" Returns True for nodes that are "simple" - i.e. nodes that always
have higher precedence than operators.
"""
return isinstance(n, (c_ast.Constant, c_ast.ID, c_ast.ArrayRef,
c_ast.StructRef, c_ast.FuncCall))
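# A minimal usage sketch, assuming pycparser is importable and given an
# illustrative C snippet:
#
#   from pycparser import c_parser
#   from pycparser.c_generator import CGenerator
#
#   ast = c_parser.CParser().parse('int add(int a, int b) { return a + b; }')
#   print(CGenerator().visit(ast))  # regenerates C source text from the AST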
| gpl-2.0 |
wetneb/django | tests/utils_tests/test_numberformat.py | 307 | 4049 | # -*- encoding: utf-8 -*-
from __future__ import unicode_literals
from decimal import Decimal
from sys import float_info
from unittest import TestCase
from django.utils.numberformat import format as nformat
class TestNumberFormat(TestCase):
def test_format_number(self):
self.assertEqual(nformat(1234, '.'), '1234')
self.assertEqual(nformat(1234.2, '.'), '1234.2')
self.assertEqual(nformat(1234, '.', decimal_pos=2), '1234.00')
self.assertEqual(nformat(1234, '.', grouping=2, thousand_sep=','),
'1234')
self.assertEqual(nformat(1234, '.', grouping=2, thousand_sep=',',
force_grouping=True), '12,34')
self.assertEqual(nformat(-1234.33, '.', decimal_pos=1), '-1234.3')
def test_format_string(self):
self.assertEqual(nformat('1234', '.'), '1234')
self.assertEqual(nformat('1234.2', '.'), '1234.2')
self.assertEqual(nformat('1234', '.', decimal_pos=2), '1234.00')
self.assertEqual(nformat('1234', '.', grouping=2, thousand_sep=','),
'1234')
self.assertEqual(nformat('1234', '.', grouping=2, thousand_sep=',',
force_grouping=True), '12,34')
self.assertEqual(nformat('-1234.33', '.', decimal_pos=1), '-1234.3')
self.assertEqual(nformat('10000', '.', grouping=3,
thousand_sep='comma', force_grouping=True),
'10comma000')
def test_large_number(self):
most_max = ('{}179769313486231570814527423731704356798070567525844996'
'598917476803157260780028538760589558632766878171540458953'
'514382464234321326889464182768467546703537516986049910576'
'551282076245490090389328944075868508455133942304583236903'
'222948165808559332123348274797826204144723168738177180919'
'29988125040402618412485836{}')
most_max2 = ('{}35953862697246314162905484746340871359614113505168999'
'31978349536063145215600570775211791172655337563430809179'
'07028764928468642653778928365536935093407075033972099821'
'15310256415249098018077865788815173701691026788460916647'
'38064458963316171186642466965495956524082894463374763543'
'61838599762500808052368249716736')
int_max = int(float_info.max)
self.assertEqual(nformat(int_max, '.'), most_max.format('', '8'))
self.assertEqual(nformat(int_max + 1, '.'), most_max.format('', '9'))
self.assertEqual(nformat(int_max * 2, '.'), most_max2.format(''))
self.assertEqual(nformat(0 - int_max, '.'), most_max.format('-', '8'))
self.assertEqual(nformat(-1 - int_max, '.'), most_max.format('-', '9'))
self.assertEqual(nformat(-2 * int_max, '.'), most_max2.format('-'))
def test_decimal_numbers(self):
self.assertEqual(nformat(Decimal('1234'), '.'), '1234')
self.assertEqual(nformat(Decimal('1234.2'), '.'), '1234.2')
self.assertEqual(nformat(Decimal('1234'), '.', decimal_pos=2), '1234.00')
self.assertEqual(nformat(Decimal('1234'), '.', grouping=2, thousand_sep=','), '1234')
self.assertEqual(nformat(Decimal('1234'), '.', grouping=2, thousand_sep=',', force_grouping=True), '12,34')
self.assertEqual(nformat(Decimal('-1234.33'), '.', decimal_pos=1), '-1234.3')
self.assertEqual(nformat(Decimal('0.00000001'), '.', decimal_pos=8), '0.00000001')
def test_decimal_subclass(self):
class EuroDecimal(Decimal):
"""
Wrapper for Decimal which prefixes each amount with the € symbol.
"""
def __format__(self, specifier, **kwargs):
amount = super(EuroDecimal, self).__format__(specifier, **kwargs)
return '€ {}'.format(amount)
price = EuroDecimal('1.23')
self.assertEqual(nformat(price, ','), '€ 1,23')
| bsd-3-clause |
dsaraujo/circulante | django/contrib/flatpages/tests/views.py | 152 | 3360 | import os
from django.conf import settings
from django.contrib.auth.models import User
from django.contrib.flatpages.models import FlatPage
from django.test import TestCase
class FlatpageViewTests(TestCase):
fixtures = ['sample_flatpages']
urls = 'django.contrib.flatpages.tests.urls'
def setUp(self):
self.old_MIDDLEWARE_CLASSES = settings.MIDDLEWARE_CLASSES
flatpage_middleware_class = 'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware'
if flatpage_middleware_class in settings.MIDDLEWARE_CLASSES:
settings.MIDDLEWARE_CLASSES = tuple(m for m in settings.MIDDLEWARE_CLASSES if m != flatpage_middleware_class)
self.old_TEMPLATE_DIRS = settings.TEMPLATE_DIRS
settings.TEMPLATE_DIRS = (
os.path.join(
os.path.dirname(__file__),
'templates'
),
)
self.old_LOGIN_URL = settings.LOGIN_URL
settings.LOGIN_URL = '/accounts/login/'
def tearDown(self):
settings.MIDDLEWARE_CLASSES = self.old_MIDDLEWARE_CLASSES
settings.TEMPLATE_DIRS = self.old_TEMPLATE_DIRS
settings.LOGIN_URL = self.old_LOGIN_URL
def test_view_flatpage(self):
"A flatpage can be served through a view"
response = self.client.get('/flatpage_root/flatpage/')
self.assertEqual(response.status_code, 200)
self.assertContains(response, "<p>Isn't it flat!</p>")
def test_view_non_existent_flatpage(self):
"A non-existent flatpage raises 404 when served through a view"
response = self.client.get('/flatpage_root/no_such_flatpage/')
self.assertEqual(response.status_code, 404)
def test_view_authenticated_flatpage(self):
"A flatpage served through a view can require authentication"
response = self.client.get('/flatpage_root/sekrit/')
self.assertRedirects(response, '/accounts/login/?next=/flatpage_root/sekrit/')
User.objects.create_user('testuser', '[email protected]', 's3krit')
self.client.login(username='testuser',password='s3krit')
response = self.client.get('/flatpage_root/sekrit/')
self.assertEqual(response.status_code, 200)
self.assertContains(response, "<p>Isn't it sekrit!</p>")
def test_fallback_flatpage(self):
"A fallback flatpage won't be served if the middleware is disabled"
response = self.client.get('/flatpage/')
self.assertEqual(response.status_code, 404)
def test_fallback_non_existent_flatpage(self):
"A non-existent flatpage won't be served if the fallback middlware is disabled"
response = self.client.get('/no_such_flatpage/')
self.assertEqual(response.status_code, 404)
def test_view_flatpage_special_chars(self):
"A flatpage with special chars in the URL can be served through a view"
fp = FlatPage.objects.create(
url="/some.very_special~chars-here/",
title="A very special page",
content="Isn't it special!",
enable_comments=False,
registration_required=False,
)
fp.sites.add(settings.SITE_ID)
response = self.client.get('/flatpage_root/some.very_special~chars-here/')
self.assertEqual(response.status_code, 200)
self.assertContains(response, "<p>Isn't it special!</p>")
| bsd-3-clause |
ptsneves/ardupilot | mk/PX4/Tools/genmsg/test/test_genmsg_gentools.py | 215 | 9526 | #!/usr/bin/env python
# Software License Agreement (BSD License)
#
# Copyright (c) 2008, Willow Garage, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of Willow Garage, Inc. nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
import os
import sys
TEST_CTX = 'rosgraph_msgs'
def get_test_dir():
return os.path.abspath(os.path.join(os.path.dirname(__file__), 'md5tests'))
def get_test_msg_dir():
return os.path.abspath(os.path.join(os.path.dirname(__file__), 'files'))
def get_search_path():
test_dir = get_test_msg_dir()
search_path = {}
for pkg in ['std_msgs', 'rosgraph_msgs', 'test_ros', 'geometry_msgs']:
search_path[pkg] = [ os.path.join(test_dir, pkg, 'msg') ]
return search_path
def _load_md5_tests(dir_name):
test_dir = os.path.join(get_test_dir(), dir_name)
tests = {}
for f in os.listdir(test_dir):
path = os.path.join(test_dir, f)
if not f.endswith('.txt'):
continue
name = f[:-4]
while name and name[-1].isdigit():
name = name[:-1]
assert bool(name)
if name in tests:
tests[name].append(path)
else:
tests[name] = [path]
return tests
def _compute_md5(msg_context, f):
from genmsg import load_depends, compute_md5
from genmsg.msg_loader import load_msg_from_string
text = open(f, 'r').read()
short_name = os.path.basename(f)[:-len('.msg')]
full_name = "%s/%s"%(TEST_CTX, short_name)
spec = load_msg_from_string(msg_context, text, full_name)
search_path = get_search_path()
load_depends(msg_context, spec, search_path)
return compute_md5(msg_context, spec)
def _compute_md5_text(msg_context, f):
from genmsg import compute_md5_text, load_depends
from genmsg.msg_loader import load_msg_from_string
text = open(f, 'r').read()
short_name = os.path.basename(f)[:-len('.msg')]
full_name = "%s/%s"%(TEST_CTX, short_name)
spec = load_msg_from_string(msg_context, text, full_name)
search_path = get_search_path()
load_depends(msg_context, spec, search_path)
return compute_md5_text(msg_context, spec)
def test_compute_md5_text():
from genmsg import MsgContext
msg_context = MsgContext.create_default()
# this test is just verifying that the md5sum is what it was for cturtle->electric
Header_md5 = "2176decaecbce78abc3b96ef049fabed"
rg_msg_dir = os.path.join(get_test_msg_dir(), TEST_CTX, 'msg')
clock_msg = os.path.join(rg_msg_dir, 'Clock.msg')
# a bit gory, but go ahead and regression test these important messages
assert "time clock" == _compute_md5_text(msg_context, clock_msg)
log_msg = os.path.join(rg_msg_dir, 'Log.msg')
assert "byte DEBUG=1\nbyte INFO=2\nbyte WARN=4\nbyte ERROR=8\nbyte FATAL=16\n%s header\nbyte level\nstring name\nstring msg\nstring file\nstring function\nuint32 line\nstring[] topics"%Header_md5 == _compute_md5_text(msg_context, log_msg)
tests = _load_md5_tests('md5text')
# text file #1 is the reference
for k, files in tests.items():
print("running tests", k)
ref_file = [f for f in files if f.endswith('%s1.txt'%k)]
if not ref_file:
assert False, "failed to load %s"%k
ref_file = ref_file[0]
ref_text = open(ref_file, 'r').read().strip()
print("KEY", k)
files = [f for f in files if not f.endswith('%s1.txt'%k)]
for f in files[1:]:
f_text = _compute_md5_text(msg_context, f)
assert ref_text == f_text, "failed on %s\n%s\n%s: \n[%s]\nvs.\n[%s]\n"%(k, ref_file, f, ref_text, f_text)
def test_md5_equals():
from genmsg import MsgContext
msg_context = MsgContext.create_default()
search_path = get_search_path()
tests = _load_md5_tests('same')
for k, files in tests.items():
print("running tests", k)
md5sum = _compute_md5(msg_context, files[0])
for f in files[1:]:
assert md5sum == _compute_md5(msg_context, f), "failed on %s: \n[%s]\nvs.\n[%s]\n"%(k, _compute_md5_text(msg_context, files[0]), _compute_md5_text(msg_context, f))
def test_md5_not_equals():
from genmsg import MsgContext
msg_context = MsgContext.create_default()
tests = _load_md5_tests('different')
for k, files in tests.items():
print("running tests", k)
md5s = set()
md6md5sum = _compute_md5(msg_context, files[0])
for f in files:
md5s.add(_compute_md5(msg_context, f))
# each md5 should be unique
assert len(md5s) == len(files)
twist_with_covariance_stamped_full_text = """# This represents an estimate twist with reference coordinate frame and timestamp.
Header header
TwistWithCovariance twist
================================================================================
MSG: std_msgs/Header
# Standard metadata for higher-level stamped data types.
# This is generally used to communicate timestamped data
# in a particular coordinate frame.
#
# sequence ID: consecutively increasing ID
uint32 seq
#Two-integer timestamp that is expressed as:
# * stamp.secs: seconds (stamp_secs) since epoch
# * stamp.nsecs: nanoseconds since stamp_secs
# time-handling sugar is provided by the client library
time stamp
#Frame this data is associated with
# 0: no frame
# 1: global frame
string frame_id
================================================================================
MSG: geometry_msgs/TwistWithCovariance
# This expresses velocity in free space with uncertianty.
Twist twist
# Row-major representation of the 6x6 covariance matrix
# The orientation parameters use a fixed-axis representation.
# In order, the parameters are:
# (x, y, z, rotation about X axis, rotation about Y axis, rotation about Z axis)
float64[36] covariance
================================================================================
MSG: geometry_msgs/Twist
# This expresses velocity in free space broken into it's linear and angular parts.
Vector3 linear
Vector3 angular
================================================================================
MSG: geometry_msgs/Vector3
# This represents a vector in free space.
float64 x
float64 y
float64 z"""
log_full_text = """##
## Severity level constants
##
byte DEBUG=1 #debug level
byte INFO=2 #general level
byte WARN=4 #warning level
byte ERROR=8 #error level
byte FATAL=16 #fatal/critical level
##
## Fields
##
Header header
byte level
string name # name of the node
string msg # message
string file # file the message came from
string function # function the message came from
uint32 line # line the message came from
string[] topics # topic names that the node publishes
================================================================================
MSG: std_msgs/Header
# Standard metadata for higher-level stamped data types.
# This is generally used to communicate timestamped data
# in a particular coordinate frame.
#
# sequence ID: consecutively increasing ID
uint32 seq
#Two-integer timestamp that is expressed as:
# * stamp.secs: seconds (stamp_secs) since epoch
# * stamp.nsecs: nanoseconds since stamp_secs
# time-handling sugar is provided by the client library
time stamp
#Frame this data is associated with
# 0: no frame
# 1: global frame
string frame_id
"""
def test_compute_full_text():
from genmsg import MsgContext, compute_full_text, load_msg_by_type, load_depends
msg_context = MsgContext.create_default()
search_path = get_search_path()
# regression test against values used for cturtle-electric
spec = load_msg_by_type(msg_context, 'rosgraph_msgs/Log', search_path)
load_depends(msg_context, spec, search_path)
val = compute_full_text(msg_context, spec)
assert val == log_full_text, "[%s][%s]"%(val, log_full_text)
spec = load_msg_by_type(msg_context, 'geometry_msgs/TwistWithCovarianceStamped', search_path)
load_depends(msg_context, spec, search_path)
val = compute_full_text(msg_context, spec)
assert val == twist_with_covariance_stamped_full_text, "[%s][%s]"%(val, twist_with_covariance_stamped_full_text)
| gpl-3.0 |
Sterncat/opticspy | opticspy/ray_tracing/CodeV_examples/double_gauss/matrix_double_gauss.py | 1 | 3026 | from __future__ import division as __division__
import numpy as np
def T(t,n):
return np.array([[1,t/n],[0,1]])
def R(c,n_left,n_right):
return np.array([[1,0],[-c*(n_right-n_left),1]])
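# T(t, n) is the paraxial translation (transfer) matrix for propagating a
# distance t through a medium of refractive index n; R(c, n_left, n_right) is
# the refraction matrix for a surface of curvature c separating media of
# indices n_left and n_right (surface power = c * (n_right - n_left)).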
c1 = 1/56.20238
c2 = 1/(152.28580)
c3 = 1/(37.68262)
c4 = 1/10000000
c5 = 1/24.23130
c6 = 1/(-28.37731)
c7 = 1/(1000000)
c8 = 1/(-37.92546)
c9 = 1/(177.41176)
c10 = 1/(-79.41143)
n1 = 1.622292
n2 = 1.607379
n3 = 1.603417
n4 = 1.620408
t1 = 8.750000
t2 = 0.500000
t3 = 12.500000
t4 = 3.800000
t5 = 16.369445
t5a = 13.747957
t6 = 3.800000
t7 = 11
t8 = 0.5
t9 = 7
def ABCD(matrix_list):
M = matrix_list.pop()
while matrix_list:
M = np.dot(M,matrix_list.pop())
return M
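# ABCD(matrix_list) composes the system matrix as the product
# M_n * ... * M_2 * M_1 of the listed matrices, so the first element of the
# list acts on the ray first; the lists below are therefore written in the
# order light traverses the surfaces.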
R1 = R(c1,1,n1)
T1 = T(t1,n1)
R2 = R(c2,n1,1)
T2 = T(t2,1)
R3 = R(c3,1,n2)
T3 = T(t3,n2)
R4 = R(c4,n2,n3)
T4 = T(t4,n3)
R5 = R(c5,n3,1)
T5 = T(t5+t5a,1)
R6 = R(c6,1,n3)
T6 = T(t6,n3)
R7 = R(c7,n3,n4)
T7 = T(t7,n4)
R8 = R(c8,n4,1)
T8 = T(t8,1)
R9 = R(c9,1,n4)
T9 = T(t9,n4)
R10 = R(c10,n4,1)
print '-----------------------lens data-----------------------'
ABCD_list = [R1,T1,R2,T2,R3,T3,R4,T4,R5,T5,R6,T6,R7,T7,R8,T8,R9,T9,R10]
M2 = ABCD(ABCD_list)
A = M2[0,0]
B = M2[0,1]
C = M2[1,0]
D = M2[1,1]
print A*D-B*C
print 'Front Focal Point F:',D/C
print 'Rear Focal Point F\':',-A/C
print 'Front Principal Point P:', (D-1)/C
print 'Rear Principal Point P\':',(1-A)/C
print 'Front Nodal Point N:',(D-1)/C
print 'Rear Nodal Point N\':',(1-A)/C
print 'Front Focal Length f:',-1/C
print 'Rear Focal Length f\':',-1/C
F = D/C
Fp = -A/C
f = -1/C
fp = -1/C
z = -10000000
zp = f*fp/z
print 'zp:',zp
print 'image position:',Fp + zp
P = (D-1)/C
Pp = (1-A)/C
phi = -C
l = -10000000
lp = 1/(phi + 1/l)
print 'lp',lp
print 'image position 2 l\' = ',lp+Pp
print
print '-----start finding entrance pupil location-----\n'
front3_ABCD_list = [R1,T1,R2,T2,R3,T3,R4,T4,R5]
M2 = ABCD(front3_ABCD_list)
A = M2[0,0]
B = M2[0,1]
C = M2[1,0]
D = M2[1,1]
print A*D-B*C
print 'Front Focal Point F:',D/C
print 'Rear Focal Point F\':',-A/C
print 'Front Principal Point P:', (D-1)/C
print 'Rear Principal Point P\':',(1-A)/C
print 'Front Nodal Point N:',(D-1)/C
print 'Rear Nodal Point N\':',(1-A)/C
print 'Front Focal Length f:',-1/C
print 'Rear Focal Length f\':',-1/C
P = (D-1)/C
Pp = (1-A)/C
phi = -C
lp = t5 - Pp
l = 1/(1/lp-phi)
print 'P',P
print 'P\'',Pp
print 'lp',lp
print 'entrance pupil position l\' = ',l + P
print
print '-----start finding exit pupil location-----'
back3_ABCD_list = [R6,T6,R7,T7,R8,T8,R9,T9,R10]
M2 = ABCD(back3_ABCD_list)
A = M2[0,0]
B = M2[0,1]
C = M2[1,0]
D = M2[1,1]
print A*D-B*C
print 'Front Focal Point F:',D/C
print 'Rear Focal Point F\':',-A/C
print 'Front Principal Point P:', (D-1)/C
print 'Rear Principal Point P\':',(1-A)/C
print 'Front Nodal Point N:',(D-1)/C
print 'Rear Nodal Point N\':',(1-A)/C
print 'Front Focal Length f:',-1/C
print 'Rear Focal Length f\':',-1/C
phi = -C
P = (D-1)/C
Pp = (1-A)/C
l = -(t5a+P)
print 'power', phi
print 'stop position:',l
lp = 1/(1/l + phi)
print 'exit pupil position l\' = ',lp+Pp
| mit |
eldabbagh/gae-boilerplate | bp_includes/external/requests/models.py | 33 | 25349 | # -*- coding: utf-8 -*-
"""
requests.models
~~~~~~~~~~~~~~~
This module contains the primary objects that power Requests.
"""
import collections
import logging
import datetime
from io import BytesIO, UnsupportedOperation
from .hooks import default_hooks
from .structures import CaseInsensitiveDict
from .auth import HTTPBasicAuth
from .cookies import cookiejar_from_dict, get_cookie_header
from .packages.urllib3.fields import RequestField
from .packages.urllib3.filepost import encode_multipart_formdata
from .packages.urllib3.util import parse_url
from .packages.urllib3.exceptions import DecodeError
from .exceptions import (
HTTPError, RequestException, MissingSchema, InvalidURL,
ChunkedEncodingError, ContentDecodingError)
from .utils import (
guess_filename, get_auth_from_url, requote_uri,
stream_decode_response_unicode, to_key_val_list, parse_header_links,
iter_slices, guess_json_utf, super_len, to_native_string)
from .compat import (
cookielib, urlunparse, urlsplit, urlencode, str, bytes, StringIO,
is_py2, chardet, json, builtin_str, basestring, IncompleteRead)
CONTENT_CHUNK_SIZE = 10 * 1024
ITER_CHUNK_SIZE = 512
log = logging.getLogger(__name__)
class RequestEncodingMixin(object):
@property
def path_url(self):
"""Build the path URL to use."""
url = []
p = urlsplit(self.url)
path = p.path
if not path:
path = '/'
url.append(path)
query = p.query
if query:
url.append('?')
url.append(query)
return ''.join(url)
@staticmethod
def _encode_params(data):
"""Encode parameters in a piece of data.
Will successfully encode parameters when passed as a dict or a list of
2-tuples. Order is retained if data is a list of 2-tuples but arbitrary
if parameters are supplied as a dict.
"""
if isinstance(data, (str, bytes)):
return data
elif hasattr(data, 'read'):
return data
elif hasattr(data, '__iter__'):
result = []
for k, vs in to_key_val_list(data):
if isinstance(vs, basestring) or not hasattr(vs, '__iter__'):
vs = [vs]
for v in vs:
if v is not None:
result.append(
(k.encode('utf-8') if isinstance(k, str) else k,
v.encode('utf-8') if isinstance(v, str) else v))
return urlencode(result, doseq=True)
else:
return data
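# A short illustration of the behaviour documented above (hypothetical data):
#
#   RequestEncodingMixin._encode_params([('q', 'web'), ('page', 2)])
#   # -> 'q=web&page=2'   (order preserved for a list of 2-tuples)
#   RequestEncodingMixin._encode_params({'q': 'web'})
#   # -> 'q=web'          (key order is arbitrary for a dict)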
@staticmethod
def _encode_files(files, data):
"""Build the body for a multipart/form-data request.
Will successfully encode files when passed as a dict or a list of
2-tuples. Order is retained if data is a list of 2-tuples but arbitrary
if parameters are supplied as a dict.
"""
if (not files):
raise ValueError("Files must be provided.")
elif isinstance(data, basestring):
raise ValueError("Data must not be a string.")
new_fields = []
fields = to_key_val_list(data or {})
files = to_key_val_list(files or {})
for field, val in fields:
if isinstance(val, basestring) or not hasattr(val, '__iter__'):
val = [val]
for v in val:
if v is not None:
# Don't call str() on bytestrings: in Py3 it all goes wrong.
if not isinstance(v, bytes):
v = str(v)
new_fields.append(
(field.decode('utf-8') if isinstance(field, bytes) else field,
v.encode('utf-8') if isinstance(v, str) else v))
for (k, v) in files:
# support for explicit filename
ft = None
fh = None
if isinstance(v, (tuple, list)):
if len(v) == 2:
fn, fp = v
elif len(v) == 3:
fn, fp, ft = v
else:
fn, fp, ft, fh = v
else:
fn = guess_filename(v) or k
fp = v
if isinstance(fp, str):
fp = StringIO(fp)
if isinstance(fp, bytes):
fp = BytesIO(fp)
rf = RequestField(name=k, data=fp.read(),
filename=fn, headers=fh)
rf.make_multipart(content_type=ft)
new_fields.append(rf)
body, content_type = encode_multipart_formdata(new_fields)
return body, content_type
class RequestHooksMixin(object):
def register_hook(self, event, hook):
"""Properly register a hook."""
if event not in self.hooks:
raise ValueError('Unsupported event specified, with event name "%s"' % (event))
if isinstance(hook, collections.Callable):
self.hooks[event].append(hook)
elif hasattr(hook, '__iter__'):
self.hooks[event].extend(h for h in hook if isinstance(h, collections.Callable))
def deregister_hook(self, event, hook):
"""Deregister a previously registered hook.
Returns True if the hook existed, False if not.
"""
try:
self.hooks[event].remove(hook)
return True
except ValueError:
return False
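# A hedged sketch of registering a response hook (``log_url`` is a
# hypothetical callback):
#
#   def log_url(response, *args, **kwargs):
#       print(response.url)
#
#   req = Request('GET', 'http://httpbin.org/get', hooks={'response': log_url})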
class Request(RequestHooksMixin):
"""A user-created :class:`Request <Request>` object.
Used to prepare a :class:`PreparedRequest <PreparedRequest>`, which is sent to the server.
:param method: HTTP method to use.
:param url: URL to send.
:param headers: dictionary of headers to send.
:param files: dictionary of {filename: fileobject} files to multipart upload.
:param data: the body to attach to the request. If a dictionary is provided, form-encoding will take place.
:param params: dictionary of URL parameters to append to the URL.
:param auth: Auth handler or (user, pass) tuple.
:param cookies: dictionary or CookieJar of cookies to attach to this request.
:param hooks: dictionary of callback hooks, for internal usage.
Usage::
>>> import requests
>>> req = requests.Request('GET', 'http://httpbin.org/get')
>>> req.prepare()
<PreparedRequest [GET]>
"""
def __init__(self,
method=None,
url=None,
headers=None,
files=None,
data=None,
params=None,
auth=None,
cookies=None,
hooks=None):
# Default empty dicts for dict params.
data = [] if data is None else data
files = [] if files is None else files
headers = {} if headers is None else headers
params = {} if params is None else params
hooks = {} if hooks is None else hooks
self.hooks = default_hooks()
for (k, v) in list(hooks.items()):
self.register_hook(event=k, hook=v)
self.method = method
self.url = url
self.headers = headers
self.files = files
self.data = data
self.params = params
self.auth = auth
self.cookies = cookies
def __repr__(self):
return '<Request [%s]>' % (self.method)
def prepare(self):
"""Constructs a :class:`PreparedRequest <PreparedRequest>` for transmission and returns it."""
p = PreparedRequest()
p.prepare(
method=self.method,
url=self.url,
headers=self.headers,
files=self.files,
data=self.data,
params=self.params,
auth=self.auth,
cookies=self.cookies,
hooks=self.hooks,
)
return p
class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
"""The fully mutable :class:`PreparedRequest <PreparedRequest>` object,
containing the exact bytes that will be sent to the server.
Generated from either a :class:`Request <Request>` object or manually.
Usage::
>>> import requests
>>> req = requests.Request('GET', 'http://httpbin.org/get')
>>> r = req.prepare()
<PreparedRequest [GET]>
>>> s = requests.Session()
>>> s.send(r)
<Response [200]>
"""
def __init__(self):
#: HTTP verb to send to the server.
self.method = None
#: HTTP URL to send the request to.
self.url = None
#: dictionary of HTTP headers.
self.headers = None
# The `CookieJar` used to create the Cookie header will be stored here
# after prepare_cookies is called
self._cookies = None
#: request body to send to the server.
self.body = None
#: dictionary of callback hooks, for internal usage.
self.hooks = default_hooks()
def prepare(self, method=None, url=None, headers=None, files=None,
data=None, params=None, auth=None, cookies=None, hooks=None):
"""Prepares the entire request with the given parameters."""
self.prepare_method(method)
self.prepare_url(url, params)
self.prepare_headers(headers)
self.prepare_cookies(cookies)
self.prepare_body(data, files)
self.prepare_auth(auth, url)
# Note that prepare_auth must be last to enable authentication schemes
# such as OAuth to work on a fully prepared request.
# This MUST go after prepare_auth. Authenticators could add a hook
self.prepare_hooks(hooks)
def __repr__(self):
return '<PreparedRequest [%s]>' % (self.method)
def copy(self):
p = PreparedRequest()
p.method = self.method
p.url = self.url
p.headers = self.headers.copy()
p._cookies = self._cookies.copy()
p.body = self.body
p.hooks = self.hooks
return p
def prepare_method(self, method):
"""Prepares the given HTTP method."""
self.method = method
if self.method is not None:
self.method = self.method.upper()
def prepare_url(self, url, params):
"""Prepares the given HTTP URL."""
#: Accept objects that have string representations.
try:
url = unicode(url)
except NameError:
# We're on Python 3.
url = str(url)
except UnicodeDecodeError:
pass
# Don't do any URL preparation for oddball schemes
if ':' in url and not url.lower().startswith('http'):
self.url = url
return
# Support for unicode domain names and paths.
scheme, auth, host, port, path, query, fragment = parse_url(url)
if not scheme:
raise MissingSchema("Invalid URL {0!r}: No schema supplied. "
"Perhaps you meant http://{0}?".format(url))
if not host:
raise InvalidURL("Invalid URL %r: No host supplied" % url)
# Only want to apply IDNA to the hostname
try:
host = host.encode('idna').decode('utf-8')
except UnicodeError:
raise InvalidURL('URL has an invalid label.')
# Carefully reconstruct the network location
netloc = auth or ''
if netloc:
netloc += '@'
netloc += host
if port:
netloc += ':' + str(port)
# Bare domains aren't valid URLs.
if not path:
path = '/'
if is_py2:
if isinstance(scheme, str):
scheme = scheme.encode('utf-8')
if isinstance(netloc, str):
netloc = netloc.encode('utf-8')
if isinstance(path, str):
path = path.encode('utf-8')
if isinstance(query, str):
query = query.encode('utf-8')
if isinstance(fragment, str):
fragment = fragment.encode('utf-8')
enc_params = self._encode_params(params)
if enc_params:
if query:
query = '%s&%s' % (query, enc_params)
else:
query = enc_params
url = requote_uri(urlunparse([scheme, netloc, path, None, query, fragment]))
self.url = url
def prepare_headers(self, headers):
"""Prepares the given HTTP headers."""
if headers:
self.headers = CaseInsensitiveDict((to_native_string(name), value) for name, value in headers.items())
else:
self.headers = CaseInsensitiveDict()
def prepare_body(self, data, files):
"""Prepares the given HTTP body data."""
# Check if file, fo, generator, iterator.
# If not, run through normal process.
# Nottin' on you.
body = None
content_type = None
length = None
is_stream = all([
hasattr(data, '__iter__'),
not isinstance(data, basestring),
not isinstance(data, list),
not isinstance(data, dict)
])
try:
length = super_len(data)
except (TypeError, AttributeError, UnsupportedOperation):
length = None
if is_stream:
body = data
if files:
raise NotImplementedError('Streamed bodies and files are mutually exclusive.')
if length is not None:
self.headers['Content-Length'] = builtin_str(length)
else:
self.headers['Transfer-Encoding'] = 'chunked'
else:
# Multi-part file uploads.
if files:
(body, content_type) = self._encode_files(files, data)
else:
if data:
body = self._encode_params(data)
if isinstance(data, str) or isinstance(data, builtin_str) or hasattr(data, 'read'):
content_type = None
else:
content_type = 'application/x-www-form-urlencoded'
self.prepare_content_length(body)
# Add content-type if it wasn't explicitly provided.
if (content_type) and (not 'content-type' in self.headers):
self.headers['Content-Type'] = content_type
self.body = body
def prepare_content_length(self, body):
if hasattr(body, 'seek') and hasattr(body, 'tell'):
body.seek(0, 2)
self.headers['Content-Length'] = builtin_str(body.tell())
body.seek(0, 0)
elif body is not None:
l = super_len(body)
if l:
self.headers['Content-Length'] = builtin_str(l)
elif self.method not in ('GET', 'HEAD'):
self.headers['Content-Length'] = '0'
def prepare_auth(self, auth, url=''):
"""Prepares the given HTTP auth data."""
# If no Auth is explicitly provided, extract it from the URL first.
if auth is None:
url_auth = get_auth_from_url(self.url)
auth = url_auth if any(url_auth) else None
if auth:
if isinstance(auth, tuple) and len(auth) == 2:
# special-case basic HTTP auth
auth = HTTPBasicAuth(*auth)
# Allow auth to make its changes.
r = auth(self)
# Update self to reflect the auth changes.
self.__dict__.update(r.__dict__)
# Recompute Content-Length
self.prepare_content_length(self.body)
def prepare_cookies(self, cookies):
"""Prepares the given HTTP cookie data."""
if isinstance(cookies, cookielib.CookieJar):
self._cookies = cookies
else:
self._cookies = cookiejar_from_dict(cookies)
cookie_header = get_cookie_header(self._cookies, self)
if cookie_header is not None:
self.headers['Cookie'] = cookie_header
def prepare_hooks(self, hooks):
"""Prepares the given hooks."""
for event in hooks:
self.register_hook(event, hooks[event])
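# Illustrative sketch (added for this write-up, not part of the original
# module): driving the prepare() pipeline above by hand. The URL, headers and
# form data are made-up placeholders; wrapped in a function so nothing runs at
# import time.
def _demo_prepare_request():
    req = PreparedRequest()
    req.prepare(
        method='get',                       # upper-cased by prepare_method()
        url='http://example.com/search',    # IDNA-encoded/requoted by prepare_url()
        headers={'Accept': 'application/json'},
        params={'q': 'prepared requests'},  # appended to the query string
        data={'key': 'value'},              # form-encoded by prepare_body()
        hooks={},                           # prepare_hooks() iterates this dict
    )
    # The prepared request now carries the final method, URL, headers and body.
    return req.method, req.url, req.headers.get('Content-Type'), req.body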
class Response(object):
"""The :class:`Response <Response>` object, which contains a
server's response to an HTTP request.
"""
__attrs__ = [
'_content',
'status_code',
'headers',
'url',
'history',
'encoding',
'reason',
'cookies',
'elapsed',
'request',
]
def __init__(self):
super(Response, self).__init__()
self._content = False
self._content_consumed = False
#: Integer Code of responded HTTP Status.
self.status_code = None
#: Case-insensitive Dictionary of Response Headers.
#: For example, ``headers['content-encoding']`` will return the
#: value of a ``'Content-Encoding'`` response header.
self.headers = CaseInsensitiveDict()
#: File-like object representation of response (for advanced usage).
#: Requires that ``stream=True`` on the request.
# This requirement does not apply for use internally to Requests.
self.raw = None
#: Final URL location of Response.
self.url = None
#: Encoding to decode with when accessing r.text.
self.encoding = None
#: A list of :class:`Response <Response>` objects from
#: the history of the Request. Any redirect responses will end
#: up here. The list is sorted from the oldest to the most recent request.
self.history = []
self.reason = None
#: A CookieJar of Cookies the server sent back.
self.cookies = cookiejar_from_dict({})
#: The amount of time elapsed between sending the request
#: and the arrival of the response (as a timedelta)
self.elapsed = datetime.timedelta(0)
def __getstate__(self):
# Consume everything; accessing the content attribute makes
# sure the content has been fully read.
if not self._content_consumed:
self.content
return dict(
(attr, getattr(self, attr, None))
for attr in self.__attrs__
)
def __setstate__(self, state):
for name, value in state.items():
setattr(self, name, value)
# pickled objects do not have .raw
setattr(self, '_content_consumed', True)
def __repr__(self):
return '<Response [%s]>' % (self.status_code)
def __bool__(self):
"""Returns true if :attr:`status_code` is 'OK'."""
return self.ok
def __nonzero__(self):
"""Returns true if :attr:`status_code` is 'OK'."""
return self.ok
def __iter__(self):
"""Allows you to use a response as an iterator."""
return self.iter_content(128)
@property
def ok(self):
try:
self.raise_for_status()
except RequestException:
return False
return True
@property
def apparent_encoding(self):
"""The apparent encoding, provided by the lovely Charade library
(Thanks, Ian!)."""
return chardet.detect(self.content)['encoding']
def iter_content(self, chunk_size=1, decode_unicode=False):
"""Iterates over the response data. When stream=True is set on the
request, this avoids reading the content at once into memory for
large responses. The chunk size is the number of bytes it should
read into memory. This is not necessarily the length of each item
returned as decoding can take place.
"""
if self._content_consumed:
# simulate reading small chunks of the content
return iter_slices(self._content, chunk_size)
def generate():
try:
# Special case for urllib3.
try:
for chunk in self.raw.stream(chunk_size,
decode_content=True):
yield chunk
except IncompleteRead as e:
raise ChunkedEncodingError(e)
except DecodeError as e:
raise ContentDecodingError(e)
except AttributeError:
# Standard file-like object.
while True:
chunk = self.raw.read(chunk_size)
if not chunk:
break
yield chunk
self._content_consumed = True
gen = generate()
if decode_unicode:
gen = stream_decode_response_unicode(gen, self)
return gen
def iter_lines(self, chunk_size=ITER_CHUNK_SIZE, decode_unicode=None):
"""Iterates over the response data, one line at a time. When
stream=True is set on the request, this avoids reading the
content at once into memory for large responses.
"""
pending = None
for chunk in self.iter_content(chunk_size=chunk_size,
decode_unicode=decode_unicode):
if pending is not None:
chunk = pending + chunk
lines = chunk.splitlines()
if lines and lines[-1] and chunk and lines[-1][-1] == chunk[-1]:
pending = lines.pop()
else:
pending = None
for line in lines:
yield line
if pending is not None:
yield pending
@property
def content(self):
"""Content of the response, in bytes."""
if self._content is False:
# Read the contents.
try:
if self._content_consumed:
raise RuntimeError(
'The content for this response was already consumed')
if self.status_code == 0:
self._content = None
else:
self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
except AttributeError:
self._content = None
self._content_consumed = True
# don't need to release the connection; that's been handled by urllib3
# since we exhausted the data.
return self._content
@property
def text(self):
"""Content of the response, in unicode.
If Response.encoding is None, encoding will be guessed using
``chardet``.
The encoding of the response content is determined based solely on HTTP
headers, following RFC 2616 to the letter. If you can take advantage of
non-HTTP knowledge to make a better guess at the encoding, you should
set ``r.encoding`` appropriately before accessing this property.
"""
# Try charset from content-type
content = None
encoding = self.encoding
if not self.content:
return str('')
# Fallback to auto-detected encoding.
if self.encoding is None:
encoding = self.apparent_encoding
# Decode unicode from given encoding.
try:
content = str(self.content, encoding, errors='replace')
except (LookupError, TypeError):
# A LookupError is raised if the encoding was not found which could
# indicate a misspelling or similar mistake.
#
# A TypeError can be raised if encoding is None
#
# So we try blindly encoding.
content = str(self.content, errors='replace')
return content
def json(self, **kwargs):
"""Returns the json-encoded content of a response, if any.
:param \*\*kwargs: Optional arguments that ``json.loads`` takes.
"""
if not self.encoding and len(self.content) > 3:
# No encoding set. JSON RFC 4627 section 3 states we should expect
# UTF-8, -16 or -32. Detect which one to use; If the detection or
# decoding fails, fall back to `self.text` (using chardet to make
# a best guess).
encoding = guess_json_utf(self.content)
if encoding is not None:
return json.loads(self.content.decode(encoding), **kwargs)
return json.loads(self.text, **kwargs)
@property
def links(self):
"""Returns the parsed header links of the response, if any."""
header = self.headers.get('link')
# l = MultiDict()
l = {}
if header:
links = parse_header_links(header)
for link in links:
key = link.get('rel') or link.get('url')
l[key] = link
return l
def raise_for_status(self):
"""Raises stored :class:`HTTPError`, if one occurred."""
http_error_msg = ''
if 400 <= self.status_code < 500:
http_error_msg = '%s Client Error: %s' % (self.status_code, self.reason)
elif 500 <= self.status_code < 600:
http_error_msg = '%s Server Error: %s' % (self.status_code, self.reason)
if http_error_msg:
raise HTTPError(http_error_msg, response=self)
def close(self):
"""Closes the underlying file descriptor and releases the connection
back to the pool.
*Note: Should not normally need to be called explicitly.*
"""
return self.raw.release_conn()
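# Illustrative sketch (added for this write-up, not original requests code):
# how the Response accessors above fit together, using a hand-built Response
# whose ``raw`` attribute is an in-memory stream instead of a live urllib3
# response. Assumes the module's usual compat imports (json, str/bytes) are in
# scope as in upstream requests.
def _demo_response_accessors():
    import io
    resp = Response()
    resp.status_code = 200
    resp.headers['Content-Type'] = 'application/json'
    resp.encoding = 'utf-8'
    resp.raw = io.BytesIO(b'{"ok": true}')
    resp.raise_for_status()         # no-op for a 2xx status code
    body_bytes = resp.content       # drains .raw through iter_content()
    body_text = resp.text           # decodes the bytes using .encoding
    return body_bytes, body_text, resp.json()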
| lgpl-3.0 |
arth-co/saleor | saleor/delivery/__init__.py | 13 | 1114 | from __future__ import unicode_literals
from django.conf import settings
from django.utils.encoding import python_2_unicode_compatible
from prices import Price
from satchless.item import ItemSet
class BaseDelivery(ItemSet):
group = None
name = ''
def __iter__(self):
return iter(self.group)
def get_delivery_total(self, **kwargs):
return Price(0, currency=settings.DEFAULT_CURRENCY)
def get_total_with_delivery(self):
return self.group.get_total() + self.get_delivery_total()
@python_2_unicode_compatible
class DummyShipping(BaseDelivery):
name = 'dummy_shipping'
def __str__(self):
return 'Dummy shipping'
def get_delivery_total(self, items, **kwargs):
weight = sum(
line.product.get_weight() * line.quantity for line in items)
return Price(weight, currency=settings.DEFAULT_CURRENCY)
def get_delivery_options_for_items(items, **kwargs):
if 'address' in kwargs:
yield DummyShipping()
else:
raise ValueError('Unknown delivery type')
def get_delivery(name):
return DummyShipping()
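# Illustrative sketch (not part of the Saleor module): exercising the helpers
# above. FakeProduct/FakeLine are stand-ins invented here, and a configured
# Django settings.DEFAULT_CURRENCY is assumed.
def _demo_delivery_options():
    class FakeProduct(object):
        def get_weight(self):
            return 2

    class FakeLine(object):
        product = FakeProduct()
        quantity = 3

    items = [FakeLine()]
    # An 'address' keyword is required, otherwise ValueError is raised.
    options = list(get_delivery_options_for_items(items, address=object()))
    # DummyShipping prices delivery by total weight: 2 * 3 = 6 currency units.
    return options[0].get_delivery_total(items)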
| bsd-3-clause |
jtakayama/ics691-setupbooster | makahiki/apps/managers/team_mgr/tests.py | 7 | 18560 | """Tests for team_manager."""
import datetime
from django.test import TransactionTestCase
from django.contrib.auth.models import User
from apps.managers.team_mgr import team_mgr
from apps.managers.team_mgr.models import Group, Team
from apps.utils import test_utils
class DormUnitTestCase(TransactionTestCase):
"""dorm test"""
def setUp(self):
self.groups = [Group(name="Test Group %d" % i) for i in range(0, 2)]
_ = [d.save() for d in self.groups]
self.teams = [Team(name=str(i), group=self.groups[i % 2]) for i in
range(0, 4)]
_ = [f.save() for f in self.teams]
self.users = [User.objects.create_user("test%d" % i, "[email protected]")
for i in range(0, 4)]
# Assign users to teams.
for index, user in enumerate(self.users):
user.get_profile().team = self.teams[index % 4]
user.get_profile().save()
self.current_round = "Round 1"
test_utils.set_competition_round()
def testTeamPointsInRound(self):
"""Tests calculating the team points leaders in a round."""
profile = self.users[0].get_profile()
profile.add_points(10,
datetime.datetime.today() - datetime.timedelta(minutes=1), "test")
profile.save()
self.assertEqual(self.groups[0].team_points_leaders(round_name=self.current_round)[0],
profile.team,
"The user's team is not leading in the prize.")
# Test that a user in a different team but same dorm changes the
# leader for the original user.
profile2 = self.users[2].get_profile()
profile2.add_points(profile.points() + 1,
datetime.datetime.today() - datetime.timedelta(minutes=1), "test")
profile2.save()
self.assertEqual(self.groups[0].team_points_leaders(round_name=self.current_round)[0],
profile2.team,
"The user's team should have changed.")
# Test that adding points to a user in a different dorm does not
# affect these standings.
profile1 = self.users[1].get_profile()
profile1.add_points(profile.points() + 1,
datetime.datetime.today() -\
datetime.timedelta(minutes=1),
"test")
profile1.save()
self.assertEqual(self.groups[0].team_points_leaders(round_name=self.current_round)[0],
profile2.team,
"The leader of the team should not change.")
self.assertEqual(self.groups[1].team_points_leaders(round_name=self.current_round)[0],
profile1.team,
"The leader in the second dorm should be profile1's "
"team.")
# Test that a tie is handled properly.
profile.add_points(1, datetime.datetime.today(), "test")
profile.save()
self.assertEqual(self.groups[0].team_points_leaders(round_name=self.current_round)[0],
profile.team,
"The leader of the team should have changed back.")
def testTeamPointsOverall(self):
"""Tests calculating the team points leaders in a round."""
profile = self.users[0].get_profile()
profile.add_points(10,
datetime.datetime.today() -\
datetime.timedelta(minutes=1),
"test")
profile.save()
self.assertEqual(self.groups[0].team_points_leaders()[0],
profile.team,
"The user's team is not leading in the prize.")
# Test that a user in a different team but same dorm changes the
# leader for the original user.
profile2 = self.users[2].get_profile()
profile2.add_points(profile.points() + 1,
datetime.datetime.today() -\
datetime.timedelta(minutes=1),
"test")
profile2.save()
self.assertEqual(self.groups[0].team_points_leaders()[0],
profile2.team,
"The user's team should have changed.")
# Test that a tie between two different teams is handled properly.
profile.add_points(1, datetime.datetime.today(), "test")
profile.save()
self.assertEqual(profile.points(), profile2.points(),
"The two profiles should have identical points.")
self.assertEqual(self.groups[0].team_points_leaders()[0],
profile.team,
"The leader of the team should have changed back.")
# Test that adding points to a user in a different dorm does not
# affect these standings.
profile1 = self.users[1].get_profile()
profile1.add_points(profile.points() + 1,
datetime.datetime.today() -
datetime.timedelta(minutes=1),
"test")
profile1.save()
self.assertEqual(self.groups[0].team_points_leaders()[0],
profile.team,
"The leader of the team should not change.")
self.assertEqual(self.groups[1].team_points_leaders()[0],
profile1.team,
"The leader in the second dorm should be profile1's "
"team.")
class TeamLeadersTestCase(TransactionTestCase):
"""test team leader"""
def setUp(self):
self.group = Group(name="Test Group")
self.group.save()
self.teams = [Team(name=str(i), group=self.group) for i in
range(0, 2)]
_ = [f.save() for f in self.teams]
self.users = [User.objects.create_user("test%d" % i, "[email protected]")
for i in range(0, 4)]
# Assign users to teams.
for index, user in enumerate(self.users):
user.get_profile().team = self.teams[index % 2]
user.get_profile().save()
self.current_round = "Round 1"
test_utils.set_competition_round()
def testTeamPointsInRound(self):
"""Tests calculating the team points leaders in a round."""
profile = self.users[0].get_profile()
profile.add_points(10,
datetime.datetime.today() - datetime.timedelta(minutes=1), "test")
profile.save()
self.assertEqual(team_mgr.team_points_leader(round_name=self.current_round),
profile.team,
"The user's team is not leading in the prize.")
# Test that a user in a different team but same dorm changes the
# leader for the original user.
profile2 = self.users[2].get_profile()
profile2.add_points(profile.points() + 1,
datetime.datetime.today() -
datetime.timedelta(minutes=1),
"test")
profile2.save()
self.assertEqual(team_mgr.team_points_leader(round_name=self.current_round),
profile2.team,
"The user's team should have changed.")
# Test that a tie is handled properly.
profile.add_points(1, datetime.datetime.today(), "test")
profile.save()
self.assertEqual(team_mgr.team_points_leader(round_name=self.current_round),
profile.team,
"The leader of the team should have changed back.")
def testIndividualPointsInRound(self):
"""Tests calculating the individual points leaders in a round."""
profile = self.users[0].get_profile()
profile.add_points(10,
datetime.datetime.today() -
datetime.timedelta(minutes=1),
"test")
profile.save()
self.assertEqual(profile.team.points_leaders(round_name=self.current_round)[0],
profile,
"The user should be in the lead in his own team.")
# Test that a user in a different team but same dorm does not change
# the leader for the original team.
profile1 = self.users[1].get_profile()
profile1.add_points(15,
datetime.datetime.today() -
datetime.timedelta(minutes=1),
"test")
profile1.save()
self.assertEqual(profile.team.points_leaders(round_name=self.current_round)[0],
profile,
"The leader for the user's team should not have"
" changed.")
self.assertEqual(profile1.team.points_leaders(round_name=self.current_round)[0],
profile1,
"User 1 should be leading in their own team.")
# Test another user going ahead in the user's team.
profile2 = self.users[2].get_profile()
profile2.add_points(15,
datetime.datetime.today() -
datetime.timedelta(minutes=1),
"test")
profile2.save()
self.assertEqual(profile.team.points_leaders(round_name=self.current_round)[0],
profile2,
"User 2 should be in the lead in the user's team.")
# Test that a tie is handled properly.
profile.add_points(5, datetime.datetime.today(), "test")
profile.save()
self.assertEqual(profile.team.points_leaders(round_name=self.current_round)[0],
profile,
"The leader of the team should have changed back.")
def testTeamPointsOverall(self):
"""Tests calculating the team points leaders in a round."""
profile = self.users[0].get_profile()
profile.add_points(10,
datetime.datetime.today() -
datetime.timedelta(minutes=1),
"test")
profile.save()
self.assertEqual(profile.team.points_leaders()[0],
profile,
"The user should be in the lead in his own team.")
# Test that a user in a different team but same dorm does not change
# the leader for the original team.
profile1 = self.users[1].get_profile()
profile1.add_points(15,
datetime.datetime.today() -
datetime.timedelta(minutes=1),
"test")
profile1.save()
self.assertEqual(profile.team.points_leaders()[0],
profile,
"The leader for the user's team should not have "
"changed.")
self.assertEqual(profile1.team.points_leaders()[0],
profile1,
"User 1 should be leading in their own team.")
# Test another user going ahead in the user's team.
profile2 = self.users[2].get_profile()
profile2.add_points(15,
datetime.datetime.today() -
datetime.timedelta(minutes=1),
"test")
profile2.save()
self.assertEqual(profile.team.points_leaders()[0],
profile2,
"User 2 should be in the lead in the user's team.")
# Test that a tie is handled properly.
profile.add_points(5, datetime.datetime.today(), "test")
profile.save()
self.assertEqual(profile.team.points_leaders()[0],
profile,
"The leader of the team should have changed back.")
class TeamsUnitTestCase(TransactionTestCase):
"""team tests"""
def setUp(self):
self.group = Group(name="Test group")
self.group.save()
self.test_team = Team(name="A", group=self.group)
self.test_team.save()
def testOverallPoints(self):
"""Check that retrieving the points for the team is correct."""
# Create a test user.
user = User(username="test_user", password="test_password")
user.save()
user_points = 10
user.get_profile().team = self.test_team
self.assertEqual(self.test_team.points(),
0,
"Check that the team does not have any points yet.")
user.get_profile().add_points(user_points, datetime.datetime.today(),
"test")
user.get_profile().save()
self.assertEqual(self.test_team.points(),
user_points,
"Check that the number of points are equal for "
"one user.")
# Create another test user and check again.
user = User(username="test_user1", password="test_password")
user.save()
user.get_profile().team = self.test_team
user.get_profile().add_points(user_points,
datetime.datetime.today(),
"test")
user.get_profile().save()
self.assertEqual(self.test_team.points(), 2 * user_points,
"Check that the number of points are equal for two users.")
def testPointsInRound(self):
"""Tests that we can accurately compute the amount of points in a
round."""
test_utils.set_competition_round()
user = User(username="test_user", password="test_password")
user.save()
profile = user.get_profile()
profile.team = self.test_team
profile.save()
self.assertEqual(self.test_team.current_round_points(),
0,
"Check that the team does not have any points yet.")
profile.add_points(10, datetime.datetime.today(), "test")
profile.save()
self.assertEqual(self.test_team.current_round_points(),
10,
"Check that the number of points are correct in "
"this round.")
def testOverallRankWithPoints(self):
"""Check that calculating the rank is correct based on point value."""
# Create a test user.
user = User(username="test_user", password="test_password")
user.save()
user_points = 10
user.get_profile().team = self.test_team
# Test the team is ranked last if they haven't done anything yet.
team_rank = 1
self.assertEqual(self.test_team.rank(), team_rank,
"Check the team is ranked last.")
user.get_profile().add_points(user_points,
datetime.datetime.today(),
"test")
user.get_profile().save()
self.assertEqual(self.test_team.rank(),
1,
"Check the team is now ranked number 1.")
# Create a test user on a different team.
test_team2 = Team(name="B", group=self.group)
test_team2.save()
user2 = User(username="test_user1", password="test_password")
user2.save()
user2.get_profile().team = test_team2
user2.get_profile().add_points(user_points + 1,
datetime.datetime.today(),
"test")
user2.get_profile().save()
self.assertEqual(self.test_team.rank(),
2,
"Check that the team is now ranked number 2.")
def testRoundRank(self):
"""Check that the rank calculation is correct for the current round."""
# Save the round information and set up a test round.
test_utils.set_competition_round()
# Create a test user.
user = User(username="test_user", password="test_password")
user.save()
user_points = 10
user.get_profile().team = self.test_team
user.get_profile().save()
self.assertEqual(self.test_team.current_round_rank(),
1,
"Check the calculation works even if there's "
"no submission.")
user.get_profile().add_points(user_points,
datetime.datetime.today(),
"test")
user.get_profile().save()
self.assertEqual(self.test_team.current_round_rank(),
1,
"Check the team is now ranked number 1.")
test_team2 = Team(name="B", group=self.group)
test_team2.save()
user2 = User(username="test_user1", password="test_password")
user2.save()
user2.get_profile().team = test_team2
user2.get_profile().add_points(user_points + 1,
datetime.datetime.today(),
"test")
user2.get_profile().save()
self.assertEqual(self.test_team.current_round_rank(),
2,
"Check the team is now ranked number 2.")
def testOverallRankWithSubmissionDate(self):
"""Check that rank calculation is correct in the case of ties."""
# Create a test user.
user = User(username="test_user", password="test_password")
user.save()
user_points = 10
user.get_profile().team = self.test_team
user.get_profile().add_points(user_points,
datetime.datetime.today(),
"test")
user.get_profile().save()
# Create a test user on a different team.
test_team2 = Team(name="B", group=self.group)
test_team2.save()
user = User(username="test_user1", password="test_password")
user.save()
user.get_profile().team = test_team2
user.get_profile().add_points(user_points,
datetime.datetime.today() + datetime.timedelta(days=1),
"test")
user.get_profile().save()
self.assertEqual(self.test_team.rank(),
2,
"Check that the team is ranked second.")
| mit |
regini/inSquare | inSquareBackend/cloud.insquare/node_modules/node-forge/tests/forge_ssl/forge/ssl.py | 169 | 16598 | # Wrapper module for _ssl, providing some additional facilities
# implemented in Python. Written by Bill Janssen.
"""\
This module provides some more Pythonic support for SSL.
Object types:
SSLSocket -- subtype of socket.socket which does SSL over the socket
Exceptions:
SSLError -- exception raised for I/O errors
Functions:
cert_time_to_seconds -- convert time string used for certificate
notBefore and notAfter functions to integer
seconds past the Epoch (the time values
returned from time.time())
fetch_server_certificate (HOST, PORT) -- fetch the certificate provided
by the server running on HOST at port PORT. No
validation of the certificate is performed.
Integer constants:
SSL_ERROR_ZERO_RETURN
SSL_ERROR_WANT_READ
SSL_ERROR_WANT_WRITE
SSL_ERROR_WANT_X509_LOOKUP
SSL_ERROR_SYSCALL
SSL_ERROR_SSL
SSL_ERROR_WANT_CONNECT
SSL_ERROR_EOF
SSL_ERROR_INVALID_ERROR_CODE
The following group define certificate requirements that one side is
allowing/requiring from the other side:
CERT_NONE - no certificates from the other side are required (or will
be looked at if provided)
CERT_OPTIONAL - certificates are not required, but if provided will be
validated, and if validation fails, the connection will
also fail
CERT_REQUIRED - certificates are required, and will be validated, and
if validation fails, the connection will also fail
The following constants identify various SSL protocol variants:
PROTOCOL_SSLv2
PROTOCOL_SSLv3
PROTOCOL_SSLv23
PROTOCOL_TLSv1
The following constants identify various SSL session caching modes:
SESS_CACHE_OFF
SESS_CACHE_CLIENT
SESS_CACHE_SERVER
SESS_CACHE_BOTH
"""
import textwrap
import _forge_ssl # if we can't import it, let the error propagate
from _forge_ssl import SSLError
from _forge_ssl import CERT_NONE, CERT_OPTIONAL, CERT_REQUIRED
from _forge_ssl import PROTOCOL_SSLv2, PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1
from _forge_ssl import SESS_CACHE_OFF, SESS_CACHE_CLIENT, SESS_CACHE_SERVER, SESS_CACHE_BOTH
from _forge_ssl import RAND_status, RAND_egd, RAND_add
from _forge_ssl import \
SSL_ERROR_ZERO_RETURN, \
SSL_ERROR_WANT_READ, \
SSL_ERROR_WANT_WRITE, \
SSL_ERROR_WANT_X509_LOOKUP, \
SSL_ERROR_SYSCALL, \
SSL_ERROR_SSL, \
SSL_ERROR_WANT_CONNECT, \
SSL_ERROR_EOF, \
SSL_ERROR_INVALID_ERROR_CODE
from socket import socket, _fileobject, _delegate_methods
from socket import error as socket_error
from socket import getnameinfo as _getnameinfo
import base64 # for DER-to-PEM translation
import errno
class SSLSocket(socket):
"""This class implements a subtype of socket.socket that wraps
the underlying OS socket in an SSL context when necessary, and
provides read and write methods over that channel."""
def __init__(self, parent_socket, sock, keyfile=None, certfile=None,
server_side=False, cert_reqs=CERT_NONE,
ssl_version=PROTOCOL_SSLv23,
sess_cache_mode=SESS_CACHE_SERVER,
sess_id_ctx=None,
ca_certs=None,
do_handshake_on_connect=True,
suppress_ragged_eofs=True):
socket.__init__(self, _sock=sock._sock)
# The initializer for socket overrides the methods send(), recv(), etc.
# in the instance, which we don't need -- but we want to provide the
# methods defined in SSLSocket.
for attr in _delegate_methods:
try:
delattr(self, attr)
except AttributeError:
pass
if certfile and not keyfile:
keyfile = certfile
create = True
connected = False
if not server_side:
# see if it's connected
try:
socket.getpeername(self)
connected = True
except socket_error, e:
if e.errno != errno.ENOTCONN:
raise
# no, no connection yet
self._sslobj = None
create = False
if create:
# yes, create the SSL object
if parent_socket == None:
self._sslobj = _forge_ssl.sslwrap(
self._sock,
server_side,
keyfile, certfile,
cert_reqs, ssl_version,
sess_cache_mode, sess_id_ctx,
ca_certs)
else:
self._sslobj = parent_socket._sslobj.wrap_accepted(self._sock)
if connected and do_handshake_on_connect:
self.do_handshake()
self.keyfile = keyfile
self.certfile = certfile
self.cert_reqs = cert_reqs
self.ssl_version = ssl_version
self.sess_cache_mode = sess_cache_mode
self.sess_id_ctx = sess_id_ctx
self.ca_certs = ca_certs
self.do_handshake_on_connect = do_handshake_on_connect
self.suppress_ragged_eofs = suppress_ragged_eofs
self._makefile_refs = 0
def read(self, len=1024):
"""Read up to LEN bytes and return them.
Return zero-length string on EOF."""
try:
return self._sslobj.read(len)
except SSLError, x:
if x.args[0] == SSL_ERROR_EOF and self.suppress_ragged_eofs:
return ''
else:
raise
def write(self, data):
"""Write DATA to the underlying SSL channel. Returns
number of bytes of DATA actually transmitted."""
return self._sslobj.write(data)
def getpeercert(self, binary_form=False):
"""Returns a formatted version of the data in the
certificate provided by the other end of the SSL channel.
Return None if no certificate was provided, {} if a
certificate was provided, but not validated."""
return self._sslobj.peer_certificate(binary_form)
def cipher(self):
if not self._sslobj:
return None
else:
return self._sslobj.cipher()
def send(self, data, flags=0):
if self._sslobj:
if flags != 0:
raise ValueError(
"non-zero flags not allowed in calls to send() on %s" %
self.__class__)
while True:
try:
v = self._sslobj.write(data)
except SSLError, x:
if x.args[0] == SSL_ERROR_WANT_READ:
return 0
elif x.args[0] == SSL_ERROR_WANT_WRITE:
return 0
else:
raise
else:
return v
else:
return socket.send(self, data, flags)
def sendto(self, data, addr, flags=0):
if self._sslobj:
raise ValueError("sendto not allowed on instances of %s" %
self.__class__)
else:
return socket.sendto(self, data, addr, flags)
def sendall(self, data, flags=0):
if self._sslobj:
if flags != 0:
raise ValueError(
"non-zero flags not allowed in calls to sendall() on %s" %
self.__class__)
amount = len(data)
count = 0
while (count < amount):
v = self.send(data[count:])
count += v
return amount
else:
return socket.sendall(self, data, flags)
def recv(self, buflen=1024, flags=0):
if self._sslobj:
if flags != 0:
raise ValueError(
"non-zero flags not allowed in calls to recv() on %s" %
self.__class__)
return self.read(buflen)
else:
return socket.recv(self, buflen, flags)
def recv_into(self, buffer, nbytes=None, flags=0):
if buffer and (nbytes is None):
nbytes = len(buffer)
elif nbytes is None:
nbytes = 1024
if self._sslobj:
if flags != 0:
raise ValueError(
"non-zero flags not allowed in calls to recv_into() on %s" %
self.__class__)
tmp_buffer = self.read(nbytes)
v = len(tmp_buffer)
buffer[:v] = tmp_buffer
return v
else:
return socket.recv_into(self, buffer, nbytes, flags)
def recvfrom(self, addr, buflen=1024, flags=0):
if self._sslobj:
raise ValueError("recvfrom not allowed on instances of %s" %
self.__class__)
else:
return socket.recvfrom(self, addr, buflen, flags)
def recvfrom_into(self, buffer, nbytes=None, flags=0):
if self._sslobj:
raise ValueError("recvfrom_into not allowed on instances of %s" %
self.__class__)
else:
return socket.recvfrom_into(self, buffer, nbytes, flags)
def pending(self):
if self._sslobj:
return self._sslobj.pending()
else:
return 0
def unwrap(self):
if self._sslobj:
try:
# if connected then shutdown
self.getpeername()
s = self._sslobj.shutdown()
except:
s = self._sock
self._sslobj = None
return s
else:
raise ValueError("No SSL wrapper around " + str(self))
def shutdown(self, how):
self._sslobj = None
socket.shutdown(self, how)
def close(self):
if self._makefile_refs < 1:
if self._sslobj:
self.unwrap()
socket.close(self)
else:
self._makefile_refs -= 1
def do_handshake(self):
"""Perform a TLS/SSL handshake."""
self._sslobj.do_handshake()
def connect(self, addr):
"""Connects to remote ADDR, and then wraps the connection in
an SSL channel."""
# Here we assume that the socket is client-side, and not
# connected at the time of the call. We connect it, then wrap it.
if self._sslobj:
raise ValueError("attempt to connect already-connected SSLSocket!")
socket.connect(self, addr)
self._sslobj = _forge_ssl.sslwrap(self._sock, False,
self.keyfile, self.certfile,
self.cert_reqs, self.ssl_version,
self.sess_cache_mode,
self.sess_id_ctx,
self.ca_certs)
if self.do_handshake_on_connect:
self.do_handshake()
def accept(self):
"""Accepts a new connection from a remote client, and returns
a tuple containing that new connection wrapped with a server-side
SSL channel, and the address of the remote client."""
newsock, addr = socket.accept(self)
return (SSLSocket(self,
newsock,
keyfile=self.keyfile,
certfile=self.certfile,
server_side=True,
cert_reqs=self.cert_reqs,
ssl_version=self.ssl_version,
sess_cache_mode=self.sess_cache_mode,
sess_id_ctx=self.sess_id_ctx,
ca_certs=self.ca_certs,
do_handshake_on_connect=self.do_handshake_on_connect,
suppress_ragged_eofs=self.suppress_ragged_eofs),
addr)
def makefile(self, mode='r', bufsize=-1):
"""Make and return a file-like object that
works with the SSL connection. Just use the code
from the socket module."""
self._makefile_refs += 1
# close=True so as to decrement the reference count when done with
# the file-like object.
return _fileobject(self, mode, bufsize, close=True)
def wrap_socket(sock, parent_socket=None, keyfile=None, certfile=None,
server_side=False, cert_reqs=CERT_NONE,
ssl_version=PROTOCOL_SSLv23,
sess_cache_mode=SESS_CACHE_SERVER,
sess_id_ctx=None,
ca_certs=None,
do_handshake_on_connect=True,
suppress_ragged_eofs=True):
return SSLSocket(parent_socket,
sock, keyfile=keyfile, certfile=certfile,
server_side=server_side, cert_reqs=cert_reqs,
ssl_version=ssl_version,
sess_cache_mode=sess_cache_mode,
sess_id_ctx=sess_id_ctx,
ca_certs=ca_certs,
do_handshake_on_connect=do_handshake_on_connect,
suppress_ragged_eofs=suppress_ragged_eofs)
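# Illustrative sketch (not part of the original module): a typical client-side
# use of wrap_socket() above. Host, port and CA bundle are placeholders, and
# the _forge_ssl extension must be importable for this to run.
def _demo_client_handshake(host='example.com', port=443, ca_file=None):
    plain = socket()
    ssl_sock = wrap_socket(plain,
                           ssl_version=PROTOCOL_TLSv1,
                           cert_reqs=CERT_REQUIRED if ca_file else CERT_NONE,
                           ca_certs=ca_file)
    ssl_sock.connect((host, port))   # connect() also performs the TLS handshake
    ssl_sock.sendall(b'GET / HTTP/1.0\r\n\r\n')
    reply = ssl_sock.recv(4096)
    ssl_sock.close()
    return reply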
# some utility functions
def cert_time_to_seconds(cert_time):
"""Takes a date-time string in standard ASN1_print form
("MON DAY 24HOUR:MINUTE:SEC YEAR TIMEZONE") and return
a Python time value in seconds past the epoch."""
import time
return time.mktime(time.strptime(cert_time, "%b %d %H:%M:%S %Y GMT"))
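# Illustrative sketch (not part of the original module): cert_time_to_seconds()
# expects the ASN1-print style timestamp shown below; the date is made up.
def _demo_cert_time_to_seconds():
    # e.g. a certificate's notAfter value as printed by OpenSSL
    return cert_time_to_seconds("Jun 26 21:41:46 2021 GMT")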
PEM_HEADER = "-----BEGIN CERTIFICATE-----"
PEM_FOOTER = "-----END CERTIFICATE-----"
def DER_cert_to_PEM_cert(der_cert_bytes):
"""Takes a certificate in binary DER format and returns the
PEM version of it as a string."""
if hasattr(base64, 'standard_b64encode'):
# preferred because older API gets line-length wrong
f = base64.standard_b64encode(der_cert_bytes)
return (PEM_HEADER + '\n' +
textwrap.fill(f, 64) + '\n' +
PEM_FOOTER + '\n')
else:
return (PEM_HEADER + '\n' +
base64.encodestring(der_cert_bytes) +
PEM_FOOTER + '\n')
def PEM_cert_to_DER_cert(pem_cert_string):
"""Takes a certificate in ASCII PEM format and returns the
DER-encoded version of it as a byte sequence"""
if not pem_cert_string.startswith(PEM_HEADER):
raise ValueError("Invalid PEM encoding; must start with %s"
% PEM_HEADER)
if not pem_cert_string.strip().endswith(PEM_FOOTER):
raise ValueError("Invalid PEM encoding; must end with %s"
% PEM_FOOTER)
d = pem_cert_string.strip()[len(PEM_HEADER):-len(PEM_FOOTER)]
return base64.decodestring(d)
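# Illustrative sketch (not original code): the two converters above are
# inverses, so a DER blob survives a round trip. The bytes here are a
# meaningless placeholder, not a real certificate.
def _demo_pem_der_round_trip():
    fake_der = b'\x30\x82\x01\x0a' + b'\x00' * 16   # placeholder DER bytes
    pem = DER_cert_to_PEM_cert(fake_der)
    assert pem.startswith(PEM_HEADER) and pem.rstrip().endswith(PEM_FOOTER)
    return PEM_cert_to_DER_cert(pem) == fake_der    # True for any byte string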
def get_server_certificate(addr, ssl_version=PROTOCOL_SSLv3, ca_certs=None):
"""Retrieve the certificate from the server at the specified address,
and return it as a PEM-encoded string.
If 'ca_certs' is specified, validate the server cert against it.
If 'ssl_version' is specified, use it in the connection attempt."""
host, port = addr
if (ca_certs is not None):
cert_reqs = CERT_REQUIRED
else:
cert_reqs = CERT_NONE
s = wrap_socket(socket(), ssl_version=ssl_version,
cert_reqs=cert_reqs, ca_certs=ca_certs)
s.connect(addr)
dercert = s.getpeercert(True)
s.close()
return DER_cert_to_PEM_cert(dercert)
def get_protocol_name(protocol_code):
if protocol_code == PROTOCOL_TLSv1:
return "TLSv1"
elif protocol_code == PROTOCOL_SSLv23:
return "SSLv23"
elif protocol_code == PROTOCOL_SSLv2:
return "SSLv2"
elif protocol_code == PROTOCOL_SSLv3:
return "SSLv3"
else:
return "<unknown>"
# a replacement for the old socket.ssl function
def sslwrap_simple(sock, keyfile=None, certfile=None):
"""A replacement for the old socket.ssl function. Designed
for compatibility with Python 2.5 and earlier. Will disappear in
Python 3.0."""
if hasattr(sock, "_sock"):
sock = sock._sock
ssl_sock = _forge_ssl.sslwrap(sock, 0, keyfile, certfile,
CERT_NONE, PROTOCOL_SSLv23,
SESS_CACHE_SERVER, None, None)
try:
sock.getpeername()
except:
# no, no connection yet
pass
else:
# yes, do the handshake
ssl_sock.do_handshake()
return ssl_sock
| mit |
garbersc/keras-galaxies | tests/MLP.py | 1 | 14490 | """
This tutorial introduces the multilayer perceptron using Theano.
A multilayer perceptron is a logistic regressor where
instead of feeding the input to the logistic regression you insert an
intermediate layer, called the hidden layer, that has a nonlinear
activation function (usually tanh or sigmoid). One can use many such
hidden layers making the architecture deep. The tutorial will also tackle
the problem of MNIST digit classification.
.. math::
f(x) = G( b^{(2)} + W^{(2)}( s( b^{(1)} + W^{(1)} x))),
References:
- textbooks: "Pattern Recognition and Machine Learning" -
Christopher M. Bishop, section 5
"""
from __future__ import print_function
__docformat__ = 'restructedtext en'
import six.moves.cPickle as pickle
import os
import sys
import timeit
import numpy
import theano
import theano.tensor as T
from loregTut import LogisticRegression, load_data
# start-snippet-1
class HiddenLayer(object):
def __init__(self, rng, input, n_in, n_out, W=None, b=None,
activation=T.tanh):
"""
Typical hidden layer of an MLP: units are fully-connected and have
sigmoidal activation function. Weight matrix W is of shape (n_in,n_out)
and the bias vector b is of shape (n_out,).
NOTE : The nonlinearity used here is tanh
Hidden unit activation is given by: tanh(dot(input,W) + b)
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dmatrix
:param input: a symbolic tensor of shape (n_examples, n_in)
:type n_in: int
:param n_in: dimensionality of input
:type n_out: int
:param n_out: number of hidden units
:type activation: theano.Op or function
:param activation: Non linearity to be applied in the hidden
layer
"""
self.input = input
# end-snippet-1
# `W` is initialized with `W_values` which is uniformly sampled
# from sqrt(-6./(n_in+n_hidden)) and sqrt(6./(n_in+n_hidden))
# for tanh activation function
# the output of uniform is converted using asarray to dtype
# theano.config.floatX so that the code is runnable on GPU
# Note : optimal initialization of weights is dependent on the
# activation function used (among other things).
# For example, results presented in [Xavier10] suggest that you
# should use 4 times larger initial weights for sigmoid
# compared to tanh
# We have no info for other functions, so we use the same as
# tanh.
if W is None:
W_values = numpy.asarray(
rng.uniform(
low=-numpy.sqrt(6. / (n_in + n_out)),
high=numpy.sqrt(6. / (n_in + n_out)),
size=(n_in, n_out)
),
dtype=theano.config.floatX
)
if activation == theano.tensor.nnet.sigmoid:
W_values *= 4
W = theano.shared(value=W_values, name='W', borrow=True)
if b is None:
b_values = numpy.zeros((n_out,), dtype=theano.config.floatX)
b = theano.shared(value=b_values, name='b', borrow=True)
self.W = W
self.b = b
lin_output = T.dot(input, self.W) + self.b
self.output = (
lin_output if activation is None
else activation(lin_output)
)
# parameters of the model
self.params = [self.W, self.b]
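# Illustrative sketch (not part of the tutorial code): building a single
# HiddenLayer on a symbolic minibatch. The sizes are arbitrary and Theano
# (plus the imports at the top of this file) is assumed to be available.
def _demo_hidden_layer():
    rng = numpy.random.RandomState(0)
    x = T.matrix('x')                     # (n_examples, n_in) minibatch
    layer = HiddenLayer(rng=rng, input=x, n_in=784, n_out=500,
                        activation=T.tanh)
    # W is drawn uniformly from +/- sqrt(6 / (n_in + n_out)), as described in
    # __init__ above; for 784 -> 500 units that bound is roughly 0.068.
    f = theano.function([x], layer.output)
    out = f(numpy.zeros((20, 784), dtype=theano.config.floatX))
    return out.shape                      # (20, 500)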
# start-snippet-2
class MLP(object):
"""Multi-Layer Perceptron Class
A multilayer perceptron is a feedforward artificial neural network model
that has one or more layers of hidden units and nonlinear activations.
Intermediate layers usually have as activation function tanh or the
sigmoid function (defined here by a ``HiddenLayer`` class) while the
top layer is a softmax layer (defined here by a ``LogisticRegression``
class).
"""
def __init__(self, rng, input, n_in, n_hidden, n_out):
"""Initialize the parameters for the multilayer perceptron
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.TensorType
:param input: symbolic variable that describes the input of the
architecture (one minibatch)
:type n_in: int
:param n_in: number of input units, the dimension of the space in
which the datapoints lie
:type n_hidden: int
:param n_hidden: number of hidden units
:type n_out: int
:param n_out: number of output units, the dimension of the space in
which the labels lie
"""
# Since we are dealing with a one hidden layer MLP, this will translate
# into a HiddenLayer with a tanh activation function connected to the
# LogisticRegression layer; the activation function can be replaced by
# sigmoid or any other nonlinear function
self.hiddenLayer = HiddenLayer(
rng=rng,
input=input,
n_in=n_in,
n_out=n_hidden,
activation=T.tanh
)
# The logistic regression layer gets as input the hidden units
# of the hidden layer
self.logRegressionLayer = LogisticRegression(
input=self.hiddenLayer.output,
n_in=n_hidden,
n_out=n_out
)
# end-snippet-2 start-snippet-3
# L1 norm ; one regularization option is to enforce L1 norm to
# be small
self.L1 = (
abs(self.hiddenLayer.W).sum()
+ abs(self.logRegressionLayer.W).sum()
)
# square of L2 norm ; one regularization option is to enforce
# square of L2 norm to be small
self.L2_sqr = (
(self.hiddenLayer.W ** 2).sum()
+ (self.logRegressionLayer.W ** 2).sum()
)
# negative log likelihood of the MLP is given by the negative
# log likelihood of the output of the model, computed in the
# logistic regression layer
self.negative_log_likelihood = (
self.logRegressionLayer.negative_log_likelihood
)
# same holds for the function computing the number of errors
self.errors = self.logRegressionLayer.errors
# the parameters of the model are the parameters of the two layer it is
# made out of
self.params = self.hiddenLayer.params + self.logRegressionLayer.params
# end-snippet-3
# keep track of model input
self.input = input
def test_mlp(learning_rate=0.01, L1_reg=0.00, L2_reg=0.0001, n_epochs=1000,
dataset='mnist.pkl.gz', batch_size=20, n_hidden=500):
"""
Demonstrate stochastic gradient descent optimization for a multilayer
perceptron
This is demonstrated on MNIST.
:type learning_rate: float
:param learning_rate: learning rate used (factor for the stochastic
gradient)
:type L1_reg: float
:param L1_reg: L1-norm's weight when added to the cost (see
regularization)
:type L2_reg: float
:param L2_reg: L2-norm's weight when added to the cost (see
regularization)
:type n_epochs: int
:param n_epochs: maximal number of epochs to run the optimizer
:type dataset: string
:param dataset: the path of the MNIST dataset file from
http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz
"""
datasets = load_data(dataset)
train_set_x, train_set_y = datasets[0]
valid_set_x, valid_set_y = datasets[1]
test_set_x, test_set_y = datasets[2]
# compute number of minibatches for training, validation and testing
n_train_batches = train_set_x.get_value(borrow=True).shape[0] // batch_size
n_valid_batches = valid_set_x.get_value(borrow=True).shape[0] // batch_size
n_test_batches = test_set_x.get_value(borrow=True).shape[0] // batch_size
######################
# BUILD ACTUAL MODEL #
######################
print('... building the model')
# allocate symbolic variables for the data
index = T.lscalar() # index to a [mini]batch
x = T.matrix('x') # the data is presented as rasterized images
y = T.ivector('y') # the labels are presented as 1D vector of
# [int] labels
rng = numpy.random.RandomState(1234)
# construct the MLP class
classifier = MLP(
rng=rng,
input=x,
n_in=28 * 28,
n_hidden=n_hidden,
n_out=10
)
# start-snippet-4
# the cost we minimize during training is the negative log likelihood of
# the model plus the regularization terms (L1 and L2); cost is expressed
# here symbolically
cost = (
classifier.negative_log_likelihood(y)
+ L1_reg * classifier.L1
+ L2_reg * classifier.L2_sqr
)
# end-snippet-4
# compiling a Theano function that computes the mistakes that are made
# by the model on a minibatch
test_model = theano.function(
inputs=[index],
outputs=classifier.errors(y),
givens={
x: test_set_x[index * batch_size:(index + 1) * batch_size],
y: test_set_y[index * batch_size:(index + 1) * batch_size]
}
)
validate_model = theano.function(
inputs=[index],
outputs=classifier.errors(y),
givens={
x: valid_set_x[index * batch_size:(index + 1) * batch_size],
y: valid_set_y[index * batch_size:(index + 1) * batch_size]
}
)
# start-snippet-5
# compute the gradient of cost with respect to theta (sorted in params)
# the resulting gradients will be stored in a list gparams
gparams = [T.grad(cost, param) for param in classifier.params]
# specify how to update the parameters of the model as a list of
# (variable, update expression) pairs
# given two lists of the same length, A = [a1, a2, a3, a4] and
# B = [b1, b2, b3, b4], zip generates a list C of same size, where each
# element is a pair formed from the two lists :
# C = [(a1, b1), (a2, b2), (a3, b3), (a4, b4)]
updates = [
(param, param - learning_rate * gparam)
for param, gparam in zip(classifier.params, gparams)
]
# compiling a Theano function `train_model` that returns the cost, but
# in the same time updates the parameter of the model based on the rules
# defined in `updates`
train_model = theano.function(
inputs=[index],
outputs=cost,
updates=updates,
givens={
x: train_set_x[index * batch_size: (index + 1) * batch_size],
y: train_set_y[index * batch_size: (index + 1) * batch_size]
}
)
# end-snippet-5
###############
# TRAIN MODEL #
###############
print('... training')
# early-stopping parameters
patience = 10000 # look as this many examples regardless
patience_increase = 2 # wait this much longer when a new best is
# found
improvement_threshold = 0.995 # a relative improvement of this much is
# considered significant
validation_frequency = min(n_train_batches, patience // 2)
# go through this many
# minibatches before checking the network
# on the validation set; in this case we
# check every epoch
best_validation_loss = numpy.inf
best_iter = 0
test_score = 0.
start_time = timeit.default_timer()
epoch = 0
done_looping = False
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
for minibatch_index in range(n_train_batches):
minibatch_avg_cost = train_model(minibatch_index)
# iteration number
iter = (epoch - 1) * n_train_batches + minibatch_index
if (iter + 1) % validation_frequency == 0:
# compute zero-one loss on validation set
validation_losses = [validate_model(i) for i
in range(n_valid_batches)]
this_validation_loss = numpy.mean(validation_losses)
print(
'epoch %i, minibatch %i/%i, validation error %f %%' %
(
epoch,
minibatch_index + 1,
n_train_batches,
this_validation_loss * 100.
)
)
# if we got the best validation score until now
if this_validation_loss < best_validation_loss:
# improve patience if loss improvement is good enough
if (
this_validation_loss < best_validation_loss *
improvement_threshold
):
patience = max(patience, iter * patience_increase)
best_validation_loss = this_validation_loss
best_iter = iter
# test it on the test set
test_losses = [test_model(i) for i
in range(n_test_batches)]
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of '
'best model %f %%') %
(epoch, minibatch_index + 1, n_train_batches,
test_score * 100.))
#added by garbers
# save the best model
# with open('best_MLP_model.pkl', 'w') as fBest:
# pickle.dump(classifier, fBest)
if patience <= iter:
done_looping = True
break
end_time = timeit.default_timer()
print(('Optimization complete. Best validation score of %f %% '
'obtained at iteration %i, with test performance %f %%') %
(best_validation_loss * 100., best_iter + 1, test_score * 100.))
print(('The code for file ' +
os.path.split(__file__)[1] +
' ran for %.2fm' % ((end_time - start_time) / 60.)), file=sys.stderr)
if __name__ == '__main__':
test_mlp()
| bsd-3-clause |
shhui/nova | nova/tests/virt/xenapi/client/test_session.py | 10 | 2692 | # Copyright (c) 2014 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.tests.virt.xenapi import stubs
from nova.virt.xenapi.client import session
class ApplySessionHelpersTestCase(stubs.XenAPITestBaseNoDB):
def setUp(self):
super(ApplySessionHelpersTestCase, self).setUp()
self.session = mock.Mock()
session.apply_session_helpers(self.session)
def test_apply_session_helpers_add_VM(self):
self.session.VM.get_X("ref")
self.session.call_xenapi.assert_called_once_with("VM.get_X", "ref")
def test_apply_session_helpers_add_SR(self):
self.session.SR.get_X("ref")
self.session.call_xenapi.assert_called_once_with("SR.get_X", "ref")
def test_apply_session_helpers_add_VDI(self):
self.session.VDI.get_X("ref")
self.session.call_xenapi.assert_called_once_with("VDI.get_X", "ref")
def test_apply_session_helpers_add_VBD(self):
self.session.VBD.get_X("ref")
self.session.call_xenapi.assert_called_once_with("VBD.get_X", "ref")
def test_apply_session_helpers_add_PBD(self):
self.session.PBD.get_X("ref")
self.session.call_xenapi.assert_called_once_with("PBD.get_X", "ref")
def test_apply_session_helpers_add_PIF(self):
self.session.PIF.get_X("ref")
self.session.call_xenapi.assert_called_once_with("PIF.get_X", "ref")
def test_apply_session_helpers_add_VLAN(self):
self.session.VLAN.get_X("ref")
self.session.call_xenapi.assert_called_once_with("VLAN.get_X", "ref")
def test_apply_session_helpers_add_host(self):
self.session.host.get_X("ref")
self.session.call_xenapi.assert_called_once_with("host.get_X", "ref")
def test_apply_session_helpers_add_network(self):
self.session.network.get_X("ref")
self.session.call_xenapi.assert_called_once_with("network.get_X",
"ref")
def test_apply_session_helpers_add_pool(self):
self.session.pool.get_X("ref")
self.session.call_xenapi.assert_called_once_with("pool.get_X", "ref")
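# Illustrative sketch (not the real nova implementation): the behaviour these
# tests assert could be provided by a small proxy like the class below, which
# turns session.VM.get_X("ref") into session.call_xenapi("VM.get_X", "ref").
class _XenAPINamespaceSketch(object):
    def __init__(self, session, prefix):
        self._session = session
        self._prefix = prefix

    def __getattr__(self, name):
        def _call(*args):
            return self._session.call_xenapi(
                "%s.%s" % (self._prefix, name), *args)
        return _call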
| apache-2.0 |
kiwifb/numpy | numpy/ma/extras.py | 1 | 53181 | """
Masked arrays add-ons.
A collection of utilities for `numpy.ma`.
:author: Pierre Gerard-Marchant
:contact: pierregm_at_uga_dot_edu
:version: $Id: extras.py 3473 2007-10-29 15:18:13Z jarrod.millman $
"""
from __future__ import division, absolute_import, print_function
__all__ = [
'apply_along_axis', 'apply_over_axes', 'atleast_1d', 'atleast_2d',
'atleast_3d', 'average', 'clump_masked', 'clump_unmasked',
'column_stack', 'compress_cols', 'compress_nd', 'compress_rowcols',
'compress_rows', 'count_masked', 'corrcoef', 'cov', 'diagflat', 'dot',
'dstack', 'ediff1d', 'flatnotmasked_contiguous', 'flatnotmasked_edges',
'hsplit', 'hstack', 'in1d', 'intersect1d', 'mask_cols', 'mask_rowcols',
'mask_rows', 'masked_all', 'masked_all_like', 'median', 'mr_',
'notmasked_contiguous', 'notmasked_edges', 'polyfit', 'row_stack',
'setdiff1d', 'setxor1d', 'unique', 'union1d', 'vander', 'vstack',
]
import itertools
import warnings
from . import core as ma
from .core import (
MaskedArray, MAError, add, array, asarray, concatenate, filled, count,
getmask, getmaskarray, make_mask_descr, masked, masked_array, mask_or,
nomask, ones, sort, zeros, getdata, get_masked_subclass, dot,
mask_rowcols
)
import numpy as np
from numpy import ndarray, array as nxarray
import numpy.core.umath as umath
from numpy.lib.function_base import _ureduce
from numpy.lib.index_tricks import AxisConcatenator
def issequence(seq):
"""
Is seq a sequence (ndarray, list or tuple)?
"""
if isinstance(seq, (ndarray, tuple, list)):
return True
return False
def count_masked(arr, axis=None):
"""
Count the number of masked elements along the given axis.
Parameters
----------
arr : array_like
An array with (possibly) masked elements.
axis : int, optional
Axis along which to count. If None (default), a flattened
version of the array is used.
Returns
-------
count : int, ndarray
The total number of masked elements (axis=None) or the number
of masked elements along each slice of the given axis.
See Also
--------
MaskedArray.count : Count non-masked elements.
Examples
--------
>>> import numpy.ma as ma
>>> a = np.arange(9).reshape((3,3))
>>> a = ma.array(a)
>>> a[1, 0] = ma.masked
>>> a[1, 2] = ma.masked
>>> a[2, 1] = ma.masked
>>> a
masked_array(data =
[[0 1 2]
[-- 4 --]
[6 -- 8]],
mask =
[[False False False]
[ True False True]
[False True False]],
fill_value=999999)
>>> ma.count_masked(a)
3
When the `axis` keyword is used an array is returned.
>>> ma.count_masked(a, axis=0)
array([1, 1, 1])
>>> ma.count_masked(a, axis=1)
array([0, 2, 1])
"""
m = getmaskarray(arr)
return m.sum(axis)
def masked_all(shape, dtype=float):
"""
Empty masked array with all elements masked.
Return an empty masked array of the given shape and dtype, where all the
data are masked.
Parameters
----------
shape : tuple
Shape of the required MaskedArray.
dtype : dtype, optional
Data type of the output.
Returns
-------
a : MaskedArray
A masked array with all data masked.
See Also
--------
masked_all_like : Empty masked array modelled on an existing array.
Examples
--------
>>> import numpy.ma as ma
>>> ma.masked_all((3, 3))
masked_array(data =
[[-- -- --]
[-- -- --]
[-- -- --]],
mask =
[[ True True True]
[ True True True]
[ True True True]],
fill_value=1e+20)
The `dtype` parameter defines the underlying data type.
>>> a = ma.masked_all((3, 3))
>>> a.dtype
dtype('float64')
>>> a = ma.masked_all((3, 3), dtype=np.int32)
>>> a.dtype
dtype('int32')
"""
a = masked_array(np.empty(shape, dtype),
mask=np.ones(shape, make_mask_descr(dtype)))
return a
def masked_all_like(arr):
"""
Empty masked array with the properties of an existing array.
Return an empty masked array of the same shape and dtype as
the array `arr`, where all the data are masked.
Parameters
----------
arr : ndarray
An array describing the shape and dtype of the required MaskedArray.
Returns
-------
a : MaskedArray
A masked array with all data masked.
Raises
------
AttributeError
If `arr` doesn't have a shape attribute (i.e. not an ndarray)
See Also
--------
masked_all : Empty masked array with all elements masked.
Examples
--------
>>> import numpy.ma as ma
>>> arr = np.zeros((2, 3), dtype=np.float32)
>>> arr
array([[ 0., 0., 0.],
[ 0., 0., 0.]], dtype=float32)
>>> ma.masked_all_like(arr)
masked_array(data =
[[-- -- --]
[-- -- --]],
mask =
[[ True True True]
[ True True True]],
fill_value=1e+20)
The dtype of the masked array matches the dtype of `arr`.
>>> arr.dtype
dtype('float32')
>>> ma.masked_all_like(arr).dtype
dtype('float32')
"""
a = np.empty_like(arr).view(MaskedArray)
a._mask = np.ones(a.shape, dtype=make_mask_descr(a.dtype))
return a
#####--------------------------------------------------------------------------
#---- --- Standard functions ---
#####--------------------------------------------------------------------------
class _fromnxfunction:
"""
Defines a wrapper to adapt NumPy functions to masked arrays.
An instance of `_fromnxfunction` can be called with the same parameters
as the wrapped NumPy function. The docstring of `newfunc` is adapted from
the wrapped function as well, see `getdoc`.
Parameters
----------
funcname : str
The name of the function to be adapted. The function should be
in the NumPy namespace (i.e. ``np.funcname``).
"""
def __init__(self, funcname):
self.__name__ = funcname
self.__doc__ = self.getdoc()
def getdoc(self):
"""
Retrieve the docstring and signature from the function.
The ``__doc__`` attribute of the function is used as the docstring for
the new masked array version of the function. A note on application
of the function to the mask is appended.
.. warning::
If the function docstring already contained a Notes section, the
new docstring will have two Notes sections instead of appending a note
to the existing section.
Parameters
----------
None
"""
npfunc = getattr(np, self.__name__, None)
doc = getattr(npfunc, '__doc__', None)
if doc:
sig = self.__name__ + ma.get_object_signature(npfunc)
locdoc = "Notes\n-----\nThe function is applied to both the _data"\
" and the _mask, if any."
return '\n'.join((sig, doc, locdoc))
return
def __call__(self, *args, **params):
func = getattr(np, self.__name__)
if len(args) == 1:
x = args[0]
if isinstance(x, ndarray):
_d = func(x.__array__(), **params)
_m = func(getmaskarray(x), **params)
return masked_array(_d, mask=_m)
elif isinstance(x, tuple) or isinstance(x, list):
_d = func(tuple([np.asarray(a) for a in x]), **params)
_m = func(tuple([getmaskarray(a) for a in x]), **params)
return masked_array(_d, mask=_m)
else:
_d = func(np.asarray(x), **params)
_m = func(getmaskarray(x), **params)
return masked_array(_d, mask=_m)
else:
arrays = []
args = list(args)
while len(args) > 0 and issequence(args[0]):
arrays.append(args.pop(0))
res = []
for x in arrays:
_d = func(np.asarray(x), *args, **params)
_m = func(getmaskarray(x), *args, **params)
res.append(masked_array(_d, mask=_m))
return res
atleast_1d = _fromnxfunction('atleast_1d')
atleast_2d = _fromnxfunction('atleast_2d')
atleast_3d = _fromnxfunction('atleast_3d')
#atleast_1d = np.atleast_1d
#atleast_2d = np.atleast_2d
#atleast_3d = np.atleast_3d
vstack = row_stack = _fromnxfunction('vstack')
hstack = _fromnxfunction('hstack')
column_stack = _fromnxfunction('column_stack')
dstack = _fromnxfunction('dstack')
hsplit = _fromnxfunction('hsplit')
diagflat = _fromnxfunction('diagflat')
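# Illustrative sketch (not part of the original source): the wrappers above
# apply the underlying NumPy routine to the data and the mask separately, so
# masks survive stacking. Assumes ``import numpy.ma as ma``; exact repr
# formatting varies across NumPy versions.
# >>> x = ma.array([1, 2, 3], mask=[0, 1, 0])
# >>> ma.vstack((x, x)).mask
# array([[False,  True, False],
#        [False,  True, False]])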
#####--------------------------------------------------------------------------
#----
#####--------------------------------------------------------------------------
def flatten_inplace(seq):
"""Flatten a sequence in place."""
k = 0
while (k != len(seq)):
while hasattr(seq[k], '__iter__'):
seq[k:(k + 1)] = seq[k]
k += 1
return seq
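# Illustrative sketch (not part of the original source): nested sequences are
# expanded in place, left to right, and the same (mutated) list is returned.
# >>> flatten_inplace([1, [2, 3], [4, [5]]])
# [1, 2, 3, 4, 5]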
def apply_along_axis(func1d, axis, arr, *args, **kwargs):
"""
(This docstring should be overwritten)
"""
arr = array(arr, copy=False, subok=True)
nd = arr.ndim
if axis < 0:
axis += nd
if (axis >= nd):
raise ValueError("axis must be less than arr.ndim; axis=%d, rank=%d."
% (axis, nd))
ind = [0] * (nd - 1)
i = np.zeros(nd, 'O')
indlist = list(range(nd))
indlist.remove(axis)
i[axis] = slice(None, None)
outshape = np.asarray(arr.shape).take(indlist)
i.put(indlist, ind)
j = i.copy()
res = func1d(arr[tuple(i.tolist())], *args, **kwargs)
# if res is a number, then we have a smaller output array
asscalar = np.isscalar(res)
if not asscalar:
try:
len(res)
except TypeError:
asscalar = True
# Note: we shouldn't set the dtype of the output from the first result
# so we force the type to object, and build a list of dtypes. We'll
# just take the largest, to avoid some downcasting
dtypes = []
if asscalar:
dtypes.append(np.asarray(res).dtype)
outarr = zeros(outshape, object)
outarr[tuple(ind)] = res
Ntot = np.product(outshape)
k = 1
while k < Ntot:
# increment the index
ind[-1] += 1
n = -1
while (ind[n] >= outshape[n]) and (n > (1 - nd)):
ind[n - 1] += 1
ind[n] = 0
n -= 1
i.put(indlist, ind)
res = func1d(arr[tuple(i.tolist())], *args, **kwargs)
outarr[tuple(ind)] = res
dtypes.append(asarray(res).dtype)
k += 1
else:
res = array(res, copy=False, subok=True)
j = i.copy()
j[axis] = ([slice(None, None)] * res.ndim)
j.put(indlist, ind)
Ntot = np.product(outshape)
holdshape = outshape
outshape = list(arr.shape)
outshape[axis] = res.shape
dtypes.append(asarray(res).dtype)
outshape = flatten_inplace(outshape)
outarr = zeros(outshape, object)
outarr[tuple(flatten_inplace(j.tolist()))] = res
k = 1
while k < Ntot:
# increment the index
ind[-1] += 1
n = -1
while (ind[n] >= holdshape[n]) and (n > (1 - nd)):
ind[n - 1] += 1
ind[n] = 0
n -= 1
i.put(indlist, ind)
j.put(indlist, ind)
res = func1d(arr[tuple(i.tolist())], *args, **kwargs)
outarr[tuple(flatten_inplace(j.tolist()))] = res
dtypes.append(asarray(res).dtype)
k += 1
max_dtypes = np.dtype(np.asarray(dtypes).max())
if not hasattr(arr, '_mask'):
result = np.asarray(outarr, dtype=max_dtypes)
else:
result = asarray(outarr, dtype=max_dtypes)
result.fill_value = ma.default_fill_value(result)
return result
apply_along_axis.__doc__ = np.apply_along_axis.__doc__
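# Illustrative sketch (not part of the original source): the 1-D function is
# applied to each slice along the requested axis, and masked entries are
# honoured as long as the function itself is mask-aware (e.g. ``ma.sum``).
# Assumes ``import numpy.ma as ma``; repr abbreviated.
# >>> x = ma.array([[1, 2, 3], [4, 5, 6]], mask=[[0, 0, 1], [0, 0, 0]])
# >>> ma.apply_along_axis(ma.sum, 1, x)
# masked_array(data=[3, 15], ...)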
def apply_over_axes(func, a, axes):
"""
(This docstring will be overwritten)
"""
val = asarray(a)
N = a.ndim
if array(axes).ndim == 0:
axes = (axes,)
for axis in axes:
if axis < 0:
axis = N + axis
args = (val, axis)
res = func(*args)
if res.ndim == val.ndim:
val = res
else:
res = ma.expand_dims(res, axis)
if res.ndim == val.ndim:
val = res
else:
raise ValueError("function is not returning "
"an array of the correct shape")
return val
if apply_over_axes.__doc__ is not None:
apply_over_axes.__doc__ = np.apply_over_axes.__doc__[
:np.apply_over_axes.__doc__.find('Notes')].rstrip() + \
"""
Examples
--------
>>> a = ma.arange(24).reshape(2,3,4)
>>> a[:,0,1] = ma.masked
>>> a[:,1,:] = ma.masked
>>> print(a)
[[[0 -- 2 3]
[-- -- -- --]
[8 9 10 11]]
[[12 -- 14 15]
[-- -- -- --]
[20 21 22 23]]]
>>> print(ma.apply_over_axes(ma.sum, a, [0,2]))
[[[46]
[--]
[124]]]
Tuple axis arguments to ufuncs are equivalent:
>>> print(ma.sum(a, axis=(0,2)).reshape((1,-1,1)))
[[[46]
[--]
[124]]]
"""
def average(a, axis=None, weights=None, returned=False):
"""
Return the weighted average of array over the given axis.
Parameters
----------
a : array_like
Data to be averaged.
Masked entries are not taken into account in the computation.
axis : int, optional
Axis along which to average `a`. If `None`, averaging is done over
the flattened array.
weights : array_like, optional
The importance that each element has in the computation of the average.
The weights array can either be 1-D (in which case its length must be
the size of `a` along the given axis) or of the same shape as `a`.
If ``weights=None``, then all data in `a` are assumed to have a
weight equal to one. If `weights` is complex, the imaginary parts
are ignored.
returned : bool, optional
Flag indicating whether a tuple ``(result, sum of weights)``
should be returned as output (True), or just the result (False).
Default is False.
Returns
-------
average, [sum_of_weights] : (tuple of) scalar or MaskedArray
The average along the specified axis. When returned is `True`,
return a tuple with the average as the first element and the sum
of the weights as the second element. The return type is `np.float64`
if `a` is of integer type and floats smaller than `float64`, or the
input data-type, otherwise. If returned, `sum_of_weights` is always
`float64`.
Examples
--------
>>> a = np.ma.array([1., 2., 3., 4.], mask=[False, False, True, True])
>>> np.ma.average(a, weights=[3, 1, 0, 0])
1.25
>>> x = np.ma.arange(6.).reshape(3, 2)
>>> print(x)
[[ 0. 1.]
[ 2. 3.]
[ 4. 5.]]
>>> avg, sumweights = np.ma.average(x, axis=0, weights=[1, 2, 3],
... returned=True)
>>> print(avg)
[2.66666666667 3.66666666667]
"""
a = asarray(a)
m = getmask(a)
# inspired by 'average' in numpy/lib/function_base.py
if weights is None:
avg = a.mean(axis)
scl = avg.dtype.type(a.count(axis))
else:
wgt = np.asanyarray(weights)
if issubclass(a.dtype.type, (np.integer, np.bool_)):
result_dtype = np.result_type(a.dtype, wgt.dtype, 'f8')
else:
result_dtype = np.result_type(a.dtype, wgt.dtype)
# Sanity checks
if a.shape != wgt.shape:
if axis is None:
raise TypeError(
"Axis must be specified when shapes of a and weights "
"differ.")
if wgt.ndim != 1:
raise TypeError(
"1D weights expected when shapes of a and weights differ.")
if wgt.shape[0] != a.shape[axis]:
raise ValueError(
"Length of weights not compatible with specified axis.")
# setup wgt to broadcast along axis
wgt = np.broadcast_to(wgt, (a.ndim-1)*(1,) + wgt.shape)
wgt = wgt.swapaxes(-1, axis)
if m is not nomask:
wgt = wgt*(~a.mask)
scl = wgt.sum(axis=axis, dtype=result_dtype)
avg = np.multiply(a, wgt, dtype=result_dtype).sum(axis)/scl
if returned:
if scl.shape != avg.shape:
scl = np.broadcast_to(scl, avg.shape).copy()
return avg, scl
else:
return avg
def median(a, axis=None, out=None, overwrite_input=False, keepdims=False):
"""
Compute the median along the specified axis.
Returns the median of the array elements.
Parameters
----------
a : array_like
Input array or object that can be converted to an array.
axis : int, optional
Axis along which the medians are computed. The default (None) is
to compute the median along a flattened version of the array.
out : ndarray, optional
Alternative output array in which to place the result. It must
have the same shape and buffer length as the expected output
but the type will be cast if necessary.
overwrite_input : bool, optional
If True, then allow use of memory of input array (a) for
calculations. The input array will be modified by the call to
median. This will save memory when you do not need to preserve
the contents of the input array. Treat the input as undefined,
but it will probably be fully or partially sorted. Default is
False. Note that, if `overwrite_input` is True, and the input
is not already an `ndarray`, an error will be raised.
keepdims : bool, optional
If this is set to True, the axes which are reduced are left
in the result as dimensions with size one. With this option,
the result will broadcast correctly against the input array.
.. versionadded:: 1.10.0
Returns
-------
median : ndarray
A new array holding the result is returned unless out is
specified, in which case a reference to out is returned.
Return data-type is `float64` for integers and floats smaller than
`float64`, or the input data-type, otherwise.
See Also
--------
mean
Notes
-----
Given a vector ``V`` with ``N`` non masked values, the median of ``V``
is the middle value of a sorted copy of ``V`` (``Vs``) - i.e.
``Vs[(N-1)/2]``, when ``N`` is odd, or ``{Vs[N/2 - 1] + Vs[N/2]}/2``
when ``N`` is even.
Examples
--------
>>> x = np.ma.array(np.arange(8), mask=[0]*4 + [1]*4)
>>> np.ma.median(x)
1.5
>>> x = np.ma.array(np.arange(10).reshape(2, 5), mask=[0]*6 + [1]*4)
>>> np.ma.median(x)
2.5
>>> np.ma.median(x, axis=-1, overwrite_input=True)
masked_array(data = [ 2. 5.],
mask = False,
fill_value = 1e+20)
"""
if not hasattr(a, 'mask'):
m = np.median(getdata(a, subok=True), axis=axis,
out=out, overwrite_input=overwrite_input,
keepdims=keepdims)
if isinstance(m, np.ndarray) and 1 <= m.ndim:
return masked_array(m, copy=False)
else:
return m
r, k = _ureduce(a, func=_median, axis=axis, out=out,
overwrite_input=overwrite_input)
if keepdims:
return r.reshape(k)
else:
return r
def _median(a, axis=None, out=None, overwrite_input=False):
if overwrite_input:
if axis is None:
asorted = a.ravel()
asorted.sort()
else:
a.sort(axis=axis)
asorted = a
else:
asorted = sort(a, axis=axis)
if axis is None:
axis = 0
elif axis < 0:
axis += a.ndim
if asorted.ndim == 1:
idx, odd = divmod(count(asorted), 2)
return asorted[idx - (not odd) : idx + 1].mean()
counts = asorted.shape[axis] - (asorted.mask).sum(axis=axis)
h = counts // 2
# create indexing mesh grid for all but reduced axis
axes_grid = [np.arange(x) for i, x in enumerate(asorted.shape)
if i != axis]
ind = np.meshgrid(*axes_grid, sparse=True, indexing='ij')
# insert indices of low and high median
ind.insert(axis, h - 1)
low = asorted[tuple(ind)]
low._sharedmask = False
ind[axis] = h
high = asorted[tuple(ind)]
# duplicate high if odd number of elements so mean does nothing
odd = counts % 2 == 1
if asorted.ndim == 1:
if odd:
low = high
else:
low[odd] = high[odd]
if np.issubdtype(asorted.dtype, np.inexact):
# avoid inf / x = masked
s = np.ma.sum([low, high], axis=0, out=out)
np.true_divide(s.data, 2., casting='unsafe', out=s.data)
else:
s = np.ma.mean([low, high], axis=0, out=out)
return s
def compress_nd(x, axis=None):
"""Supress slices from multiple dimensions which contain masked values.
Parameters
----------
x : array_like, MaskedArray
The array to operate on. If not a MaskedArray instance (or if no array
elements are masked), `x` is interpreted as a MaskedArray with `mask`
set to `nomask`.
axis : tuple of ints or int, optional
Which dimensions to suppress slices from can be configured with this
parameter.
- If axis is a tuple of ints, those are the axes to suppress slices from.
- If axis is an int, then that is the only axis to suppress slices from.
- If axis is None, all axes are selected.
Returns
-------
compress_array : ndarray
The compressed array.
"""
x = asarray(x)
m = getmask(x)
# Set axis to tuple of ints
if isinstance(axis, int):
axis = (axis,)
elif axis is None:
axis = tuple(range(x.ndim))
elif not isinstance(axis, tuple):
raise ValueError('Invalid type for axis argument')
# Check axis input
axis = [ax + x.ndim if ax < 0 else ax for ax in axis]
if not all(0 <= ax < x.ndim for ax in axis):
raise ValueError("'axis' entry is out of bounds")
if len(axis) != len(set(axis)):
raise ValueError("duplicate value in 'axis'")
# Nothing is masked: return x
if m is nomask or not m.any():
return x._data
# All is masked: return empty
if m.all():
return nxarray([])
# Filter elements through boolean indexing
data = x._data
for ax in axis:
axes = tuple(list(range(ax)) + list(range(ax + 1, x.ndim)))
data = data[(slice(None),)*ax + (~m.any(axis=axes),)]
return data
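# Illustrative sketch (not part of the original source): every slice along the
# selected axes that contains a masked value is dropped. Assumes
# ``import numpy as np`` and ``import numpy.ma as ma``.
# >>> x = ma.array(np.arange(9).reshape(3, 3),
# ...              mask=[[0, 0, 0], [0, 1, 0], [0, 0, 0]])
# >>> ma.compress_nd(x)          # drop the masked row and the masked column
# array([[0, 2],
#        [6, 8]])
# >>> ma.compress_nd(x, axis=0)  # drop only rows containing masked values
# array([[0, 1, 2],
#        [6, 7, 8]])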
def compress_rowcols(x, axis=None):
"""
Suppress the rows and/or columns of a 2-D array that contain
masked values.
The suppression behavior is selected with the `axis` parameter.
- If axis is None, both rows and columns are suppressed.
- If axis is 0, only rows are suppressed.
- If axis is 1 or -1, only columns are suppressed.
Parameters
----------
x : array_like, MaskedArray
The array to operate on. If not a MaskedArray instance (or if no array
elements are masked), `x` is interpreted as a MaskedArray with
`mask` set to `nomask`. Must be a 2D array.
axis : int, optional
Axis along which to perform the operation. Default is None.
Returns
-------
compressed_array : ndarray
The compressed array.
Examples
--------
>>> x = np.ma.array(np.arange(9).reshape(3, 3), mask=[[1, 0, 0],
... [1, 0, 0],
... [0, 0, 0]])
>>> x
masked_array(data =
[[-- 1 2]
[-- 4 5]
[6 7 8]],
mask =
[[ True False False]
[ True False False]
[False False False]],
fill_value = 999999)
>>> np.ma.compress_rowcols(x)
array([[7, 8]])
>>> np.ma.compress_rowcols(x, 0)
array([[6, 7, 8]])
>>> np.ma.compress_rowcols(x, 1)
array([[1, 2],
[4, 5],
[7, 8]])
"""
if asarray(x).ndim != 2:
raise NotImplementedError("compress_rowcols works for 2D arrays only.")
return compress_nd(x, axis=axis)
def compress_rows(a):
"""
Suppress whole rows of a 2-D array that contain masked values.
This is equivalent to ``np.ma.compress_rowcols(a, 0)``, see
`extras.compress_rowcols` for details.
See Also
--------
extras.compress_rowcols
"""
a = asarray(a)
if a.ndim != 2:
raise NotImplementedError("compress_rows works for 2D arrays only.")
return compress_rowcols(a, 0)
def compress_cols(a):
"""
Suppress whole columns of a 2-D array that contain masked values.
This is equivalent to ``np.ma.compress_rowcols(a, 1)``, see
`extras.compress_rowcols` for details.
See Also
--------
extras.compress_rowcols
"""
a = asarray(a)
if a.ndim != 2:
raise NotImplementedError("compress_cols works for 2D arrays only.")
return compress_rowcols(a, 1)
def mask_rows(a, axis=None):
"""
Mask rows of a 2D array that contain masked values.
This function is a shortcut to ``mask_rowcols`` with `axis` equal to 0.
See Also
--------
mask_rowcols : Mask rows and/or columns of a 2D array.
masked_where : Mask where a condition is met.
Examples
--------
>>> import numpy.ma as ma
>>> a = np.zeros((3, 3), dtype=np.int)
>>> a[1, 1] = 1
>>> a
array([[0, 0, 0],
[0, 1, 0],
[0, 0, 0]])
>>> a = ma.masked_equal(a, 1)
>>> a
masked_array(data =
[[0 0 0]
[0 -- 0]
[0 0 0]],
mask =
[[False False False]
[False True False]
[False False False]],
fill_value=999999)
>>> ma.mask_rows(a)
masked_array(data =
[[0 0 0]
[-- -- --]
[0 0 0]],
mask =
[[False False False]
[ True True True]
[False False False]],
fill_value=999999)
"""
return mask_rowcols(a, 0)
def mask_cols(a, axis=None):
"""
Mask columns of a 2D array that contain masked values.
This function is a shortcut to ``mask_rowcols`` with `axis` equal to 1.
See Also
--------
mask_rowcols : Mask rows and/or columns of a 2D array.
masked_where : Mask where a condition is met.
Examples
--------
>>> import numpy.ma as ma
>>> a = np.zeros((3, 3), dtype=np.int)
>>> a[1, 1] = 1
>>> a
array([[0, 0, 0],
[0, 1, 0],
[0, 0, 0]])
>>> a = ma.masked_equal(a, 1)
>>> a
masked_array(data =
[[0 0 0]
[0 -- 0]
[0 0 0]],
mask =
[[False False False]
[False True False]
[False False False]],
fill_value=999999)
>>> ma.mask_cols(a)
masked_array(data =
[[0 -- 0]
[0 -- 0]
[0 -- 0]],
mask =
[[False True False]
[False True False]
[False True False]],
fill_value=999999)
"""
return mask_rowcols(a, 1)
#####--------------------------------------------------------------------------
#---- --- arraysetops ---
#####--------------------------------------------------------------------------
def ediff1d(arr, to_end=None, to_begin=None):
"""
Compute the differences between consecutive elements of an array.
This function is the equivalent of `numpy.ediff1d` that takes masked
values into account, see `numpy.ediff1d` for details.
See Also
--------
numpy.ediff1d : Equivalent function for ndarrays.
"""
arr = ma.asanyarray(arr).flat
ed = arr[1:] - arr[:-1]
arrays = [ed]
#
if to_begin is not None:
arrays.insert(0, to_begin)
if to_end is not None:
arrays.append(to_end)
#
if len(arrays) != 1:
# We'll save ourselves a copy of a potentially large array in the common
# case where neither to_begin nor to_end was given.
ed = hstack(arrays)
#
return ed
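# Illustrative sketch (not part of the original source): a difference that
# involves a masked entry is itself masked. Assumes ``import numpy.ma as ma``;
# repr abbreviated.
# >>> x = ma.array([1, 2, 4, 7], mask=[0, 1, 0, 0])
# >>> ma.ediff1d(x)
# masked_array(data=[--, --, 3], ...)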
def unique(ar1, return_index=False, return_inverse=False):
"""
Finds the unique elements of an array.
Masked values are considered the same element (masked). The output array
is always a masked array. See `numpy.unique` for more details.
See Also
--------
numpy.unique : Equivalent function for ndarrays.
"""
output = np.unique(ar1,
return_index=return_index,
return_inverse=return_inverse)
if isinstance(output, tuple):
output = list(output)
output[0] = output[0].view(MaskedArray)
output = tuple(output)
else:
output = output.view(MaskedArray)
return output
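# Illustrative sketch (not part of the original source): all masked entries
# collapse into a single masked element at the end of the sorted result.
# Assumes ``import numpy.ma as ma``; repr abbreviated.
# >>> x = ma.array([1, 2, 1000, 2, 3], mask=[0, 0, 1, 0, 0])
# >>> ma.unique(x)
# masked_array(data=[1, 2, 3, --], ...)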
def intersect1d(ar1, ar2, assume_unique=False):
"""
Returns the unique elements common to both arrays.
Masked values are considered equal to one another.
The output is always a masked array.
See `numpy.intersect1d` for more details.
See Also
--------
numpy.intersect1d : Equivalent function for ndarrays.
Examples
--------
>>> x = array([1, 3, 3, 3], mask=[0, 0, 0, 1])
>>> y = array([3, 1, 1, 1], mask=[0, 0, 0, 1])
>>> intersect1d(x, y)
masked_array(data = [1 3 --],
mask = [False False True],
fill_value = 999999)
"""
if assume_unique:
aux = ma.concatenate((ar1, ar2))
else:
# Might be faster than unique( intersect1d( ar1, ar2 ) )?
aux = ma.concatenate((unique(ar1), unique(ar2)))
aux.sort()
return aux[:-1][aux[1:] == aux[:-1]]
def setxor1d(ar1, ar2, assume_unique=False):
"""
Set exclusive-or of 1-D arrays with unique elements.
The output is always a masked array. See `numpy.setxor1d` for more details.
See Also
--------
numpy.setxor1d : Equivalent function for ndarrays.
"""
if not assume_unique:
ar1 = unique(ar1)
ar2 = unique(ar2)
aux = ma.concatenate((ar1, ar2))
if aux.size == 0:
return aux
aux.sort()
auxf = aux.filled()
# flag = ediff1d( aux, to_end = 1, to_begin = 1 ) == 0
flag = ma.concatenate(([True], (auxf[1:] != auxf[:-1]), [True]))
# flag2 = ediff1d( flag ) == 0
flag2 = (flag[1:] == flag[:-1])
return aux[flag2]
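# Illustrative sketch (not part of the original source): elements present in
# exactly one of the two arrays are kept, and the masked element counts as a
# value of its own. Assumes ``import numpy.ma as ma``; repr abbreviated.
# >>> ma.setxor1d(ma.array([1, 2, 3], mask=[0, 0, 1]), [1, 4])
# masked_array(data=[2, 4, --], ...)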
def in1d(ar1, ar2, assume_unique=False, invert=False):
"""
Test whether each element of an array is also present in a second
array.
The output is always a masked array. See `numpy.in1d` for more details.
See Also
--------
numpy.in1d : Equivalent function for ndarrays.
Notes
-----
.. versionadded:: 1.4.0
"""
if not assume_unique:
ar1, rev_idx = unique(ar1, return_inverse=True)
ar2 = unique(ar2)
ar = ma.concatenate((ar1, ar2))
# We need this to be a stable sort, so always use 'mergesort'
# here. The values from the first array should always come before
# the values from the second array.
order = ar.argsort(kind='mergesort')
sar = ar[order]
if invert:
bool_ar = (sar[1:] != sar[:-1])
else:
bool_ar = (sar[1:] == sar[:-1])
flag = ma.concatenate((bool_ar, [invert]))
indx = order.argsort(kind='mergesort')[:len(ar1)]
if assume_unique:
return flag[indx]
else:
return flag[indx][rev_idx]
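# Illustrative sketch (not part of the original source): membership is tested
# element-wise and the result comes back as a boolean masked array. Assumes
# ``import numpy.ma as ma``.
# >>> ma.in1d(ma.array([1, 2, 3]), [3, 5, 1])   # -> [True, False, True]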
def union1d(ar1, ar2):
"""
Union of two arrays.
The output is always a masked array. See `numpy.union1d` for more details.
See also
--------
numpy.union1d : Equivalent function for ndarrays.
"""
return unique(ma.concatenate((ar1, ar2)))
def setdiff1d(ar1, ar2, assume_unique=False):
"""
Set difference of 1D arrays with unique elements.
The output is always a masked array. See `numpy.setdiff1d` for more
details.
See Also
--------
numpy.setdiff1d : Equivalent function for ndarrays.
Examples
--------
>>> x = np.ma.array([1, 2, 3, 4], mask=[0, 1, 0, 1])
>>> np.ma.setdiff1d(x, [1, 2])
masked_array(data = [3 --],
mask = [False True],
fill_value = 999999)
"""
if assume_unique:
ar1 = ma.asarray(ar1).ravel()
else:
ar1 = unique(ar1)
ar2 = unique(ar2)
return ar1[in1d(ar1, ar2, assume_unique=True, invert=True)]
###############################################################################
# Covariance #
###############################################################################
def _covhelper(x, y=None, rowvar=True, allow_masked=True):
"""
Private function for the computation of covariance and correlation
coefficients.
"""
x = ma.array(x, ndmin=2, copy=True, dtype=float)
xmask = ma.getmaskarray(x)
# Quick exit if we can't process masked data
if not allow_masked and xmask.any():
raise ValueError("Cannot process masked data.")
#
if x.shape[0] == 1:
rowvar = True
# Make sure that rowvar is either 0 or 1
rowvar = int(bool(rowvar))
axis = 1 - rowvar
if rowvar:
tup = (slice(None), None)
else:
tup = (None, slice(None))
#
if y is None:
xnotmask = np.logical_not(xmask).astype(int)
else:
y = array(y, copy=False, ndmin=2, dtype=float)
ymask = ma.getmaskarray(y)
if not allow_masked and ymask.any():
raise ValueError("Cannot process masked data.")
if xmask.any() or ymask.any():
if y.shape == x.shape:
# Define some common mask
common_mask = np.logical_or(xmask, ymask)
if common_mask is not nomask:
xmask = x._mask = y._mask = ymask = common_mask
x._sharedmask = False
y._sharedmask = False
x = ma.concatenate((x, y), axis)
xnotmask = np.logical_not(np.concatenate((xmask, ymask), axis)).astype(int)
x -= x.mean(axis=rowvar)[tup]
return (x, xnotmask, rowvar)
def cov(x, y=None, rowvar=True, bias=False, allow_masked=True, ddof=None):
"""
Estimate the covariance matrix.
Except for the handling of missing data this function does the same as
`numpy.cov`. For more details and examples, see `numpy.cov`.
By default, masked values are recognized as such. If `x` and `y` have the
same shape, a common mask is allocated: if ``x[i,j]`` is masked, then
``y[i,j]`` will also be masked.
Setting `allow_masked` to False will raise an exception if values are
missing in either of the input arrays.
Parameters
----------
x : array_like
A 1-D or 2-D array containing multiple variables and observations.
Each row of `x` represents a variable, and each column a single
observation of all those variables. Also see `rowvar` below.
y : array_like, optional
An additional set of variables and observations. `y` has the same
form as `x`.
rowvar : bool, optional
If `rowvar` is True (default), then each row represents a
variable, with observations in the columns. Otherwise, the relationship
is transposed: each column represents a variable, while the rows
contain observations.
bias : bool, optional
Default normalization (False) is by ``(N-1)``, where ``N`` is the
number of observations given (unbiased estimate). If `bias` is True,
then normalization is by ``N``. This keyword can be overridden by
the keyword ``ddof`` in numpy versions >= 1.5.
allow_masked : bool, optional
If True, masked values are propagated pair-wise: if a value is masked
in `x`, the corresponding value is masked in `y`.
If False, raises a `ValueError` exception when some values are missing.
ddof : {None, int}, optional
If not ``None`` normalization is by ``(N - ddof)``, where ``N`` is
the number of observations; this overrides the value implied by
``bias``. The default value is ``None``.
.. versionadded:: 1.5
Raises
------
ValueError
Raised if some values are missing and `allow_masked` is False.
See Also
--------
numpy.cov
"""
# Check inputs
if ddof is not None and ddof != int(ddof):
raise ValueError("ddof must be an integer")
# Set up ddof
if ddof is None:
if bias:
ddof = 0
else:
ddof = 1
(x, xnotmask, rowvar) = _covhelper(x, y, rowvar, allow_masked)
if not rowvar:
fact = np.dot(xnotmask.T, xnotmask) * 1. - ddof
result = (dot(x.T, x.conj(), strict=False) / fact).squeeze()
else:
fact = np.dot(xnotmask, xnotmask.T) * 1. - ddof
result = (dot(x, x.T.conj(), strict=False) / fact).squeeze()
return result
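# Illustrative sketch (not part of the original source): masked entries are
# excluded pair-wise, so each covariance entry is normalised by its own count
# of jointly unmasked observations. Assumes ``import numpy.ma as ma``; the
# numeric result is deliberately not shown.
# >>> x = ma.array([[0., 1., 2., 3.],
# ...               [3., 2., 1., 0.]], mask=[[0, 0, 0, 0], [0, 0, 0, 1]])
# >>> ma.cov(x)   # 2x2 covariance matrix of the two row variables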
def corrcoef(x, y=None, rowvar=True, bias=np._NoValue, allow_masked=True,
ddof=np._NoValue):
"""
Return Pearson product-moment correlation coefficients.
Except for the handling of missing data this function does the same as
`numpy.corrcoef`. For more details and examples, see `numpy.corrcoef`.
Parameters
----------
x : array_like
A 1-D or 2-D array containing multiple variables and observations.
Each row of `x` represents a variable, and each column a single
observation of all those variables. Also see `rowvar` below.
y : array_like, optional
An additional set of variables and observations. `y` has the same
shape as `x`.
rowvar : bool, optional
If `rowvar` is True (default), then each row represents a
variable, with observations in the columns. Otherwise, the relationship
is transposed: each column represents a variable, while the rows
contain observations.
bias : _NoValue, optional
Has no effect, do not use.
.. deprecated:: 1.10.0
allow_masked : bool, optional
If True, masked values are propagated pair-wise: if a value is masked
in `x`, the corresponding value is masked in `y`.
If False, raises an exception. Because `bias` is deprecated, this
argument needs to be treated as keyword only to avoid a warning.
ddof : _NoValue, optional
Has no effect, do not use.
.. deprecated:: 1.10.0
See Also
--------
numpy.corrcoef : Equivalent function in top-level NumPy module.
cov : Estimate the covariance matrix.
Notes
-----
This function accepts but discards arguments `bias` and `ddof`. This is
for backwards compatibility with previous versions of this function. These
arguments had no effect on the return values of the function and can be
safely ignored in this and previous versions of numpy.
"""
msg = 'bias and ddof have no effect and are deprecated'
if bias is not np._NoValue or ddof is not np._NoValue:
# 2015-03-15, 1.10
warnings.warn(msg, DeprecationWarning)
# Get the data
(x, xnotmask, rowvar) = _covhelper(x, y, rowvar, allow_masked)
# Compute the covariance matrix
if not rowvar:
fact = np.dot(xnotmask.T, xnotmask) * 1.
c = (dot(x.T, x.conj(), strict=False) / fact).squeeze()
else:
fact = np.dot(xnotmask, xnotmask.T) * 1.
c = (dot(x, x.T.conj(), strict=False) / fact).squeeze()
# Check whether we have a scalar
try:
diag = ma.diagonal(c)
except ValueError:
return 1
#
if xnotmask.all():
_denom = ma.sqrt(ma.multiply.outer(diag, diag))
else:
_denom = diagflat(diag)
_denom._sharedmask = False # We know return is always a copy
n = x.shape[1 - rowvar]
if rowvar:
for i in range(n - 1):
for j in range(i + 1, n):
_x = mask_cols(vstack((x[i], x[j]))).var(axis=1)
_denom[i, j] = _denom[j, i] = ma.sqrt(ma.multiply.reduce(_x))
else:
for i in range(n - 1):
for j in range(i + 1, n):
_x = mask_cols(
vstack((x[:, i], x[:, j]))).var(axis=1)
_denom[i, j] = _denom[j, i] = ma.sqrt(ma.multiply.reduce(_x))
return c / _denom
#####--------------------------------------------------------------------------
#---- --- Concatenation helpers ---
#####--------------------------------------------------------------------------
class MAxisConcatenator(AxisConcatenator):
"""
Translate slice objects to concatenation along an axis.
For documentation on usage, see `mr_class`.
See Also
--------
mr_class
"""
def __init__(self, axis=0):
AxisConcatenator.__init__(self, axis, matrix=False)
def __getitem__(self, key):
if isinstance(key, str):
raise MAError("Unavailable for masked array.")
if not isinstance(key, tuple):
key = (key,)
objs = []
scalars = []
final_dtypedescr = None
for k in range(len(key)):
scalar = False
if isinstance(key[k], slice):
step = key[k].step
start = key[k].start
stop = key[k].stop
if start is None:
start = 0
if step is None:
step = 1
if isinstance(step, complex):
size = int(abs(step))
newobj = np.linspace(start, stop, num=size)
else:
newobj = np.arange(start, stop, step)
elif isinstance(key[k], str):
if (key[k] in 'rc'):
self.matrix = True
self.col = (key[k] == 'c')
continue
try:
self.axis = int(key[k])
continue
except (ValueError, TypeError):
raise ValueError("Unknown special directive")
elif type(key[k]) in np.ScalarType:
newobj = asarray([key[k]])
scalars.append(k)
scalar = True
else:
newobj = key[k]
objs.append(newobj)
if isinstance(newobj, ndarray) and not scalar:
if final_dtypedescr is None:
final_dtypedescr = newobj.dtype
elif newobj.dtype > final_dtypedescr:
final_dtypedescr = newobj.dtype
if final_dtypedescr is not None:
for k in scalars:
objs[k] = objs[k].astype(final_dtypedescr)
res = concatenate(tuple(objs), axis=self.axis)
return self._retval(res)
class mr_class(MAxisConcatenator):
"""
Translate slice objects to concatenation along the first axis.
This is the masked array version of `lib.index_tricks.RClass`.
See Also
--------
lib.index_tricks.RClass
Examples
--------
>>> np.ma.mr_[np.ma.array([1,2,3]), 0, 0, np.ma.array([4,5,6])]
array([1, 2, 3, 0, 0, 4, 5, 6])
"""
def __init__(self):
MAxisConcatenator.__init__(self, 0)
mr_ = mr_class()
#####--------------------------------------------------------------------------
#---- Find unmasked data ---
#####--------------------------------------------------------------------------
def flatnotmasked_edges(a):
"""
Find the indices of the first and last unmasked values.
Expects a 1-D `MaskedArray`, returns None if all values are masked.
Parameters
----------
a : array_like
Input 1-D `MaskedArray`
Returns
-------
edges : ndarray or None
The indices of first and last non-masked value in the array.
Returns None if all values are masked.
See Also
--------
flatnotmasked_contiguous, notmasked_contiguous, notmasked_edges,
clump_masked, clump_unmasked
Notes
-----
Only accepts 1-D arrays.
Examples
--------
>>> a = np.ma.arange(10)
>>> flatnotmasked_edges(a)
array([0, 9])
>>> mask = (a < 3) | (a > 8) | (a == 5)
>>> a[mask] = np.ma.masked
>>> np.array(a[~a.mask])
array([3, 4, 6, 7, 8])
>>> flatnotmasked_edges(a)
array([3, 8])
>>> a[:] = np.ma.masked
>>> print(flatnotmasked_edges(a))
None
"""
m = getmask(a)
if m is nomask or not np.any(m):
return np.array([0, a.size - 1])
unmasked = np.flatnonzero(~m)
if len(unmasked) > 0:
return unmasked[[0, -1]]
else:
return None
def notmasked_edges(a, axis=None):
"""
Find the indices of the first and last unmasked values along an axis.
If all values are masked, return None. Otherwise, return a list
of two tuples, corresponding to the indices of the first and last
unmasked values respectively.
Parameters
----------
a : array_like
The input array.
axis : int, optional
Axis along which to perform the operation.
If None (default), applies to a flattened version of the array.
Returns
-------
edges : ndarray or list
An array of start and end indexes if there are any masked data in
the array. If there are no masked data in the array, `edges` is a
list of the first and last index.
See Also
--------
flatnotmasked_contiguous, flatnotmasked_edges, notmasked_contiguous,
clump_masked, clump_unmasked
Examples
--------
>>> a = np.arange(9).reshape((3, 3))
>>> m = np.zeros_like(a)
>>> m[1:, 1:] = 1
>>> am = np.ma.array(a, mask=m)
>>> np.array(am[~am.mask])
array([0, 1, 2, 3, 6])
>>> np.ma.notmasked_edges(am)
array([0, 6])
"""
a = asarray(a)
if axis is None or a.ndim == 1:
return flatnotmasked_edges(a)
m = getmaskarray(a)
idx = array(np.indices(a.shape), mask=np.asarray([m] * a.ndim))
return [tuple([idx[i].min(axis).compressed() for i in range(a.ndim)]),
tuple([idx[i].max(axis).compressed() for i in range(a.ndim)]), ]
def flatnotmasked_contiguous(a):
"""
Find contiguous unmasked data in a masked array.
Parameters
----------
a : ndarray
The input array.
Returns
-------
slice_list : list
A sorted sequence of slices (start index, end index).
See Also
--------
flatnotmasked_edges, notmasked_contiguous, notmasked_edges,
clump_masked, clump_unmasked
Notes
-----
Only accepts 2-D arrays at most.
Examples
--------
>>> a = np.ma.arange(10)
>>> np.ma.flatnotmasked_contiguous(a)
slice(0, 10, None)
>>> mask = (a < 3) | (a > 8) | (a == 5)
>>> a[mask] = np.ma.masked
>>> np.array(a[~a.mask])
array([3, 4, 6, 7, 8])
>>> np.ma.flatnotmasked_contiguous(a)
[slice(3, 5, None), slice(6, 9, None)]
>>> a[:] = np.ma.masked
>>> print(np.ma.flatnotmasked_contiguous(a))
None
"""
m = getmask(a)
if m is nomask:
return slice(0, a.size, None)
i = 0
result = []
for (k, g) in itertools.groupby(m.ravel()):
n = len(list(g))
if not k:
result.append(slice(i, i + n))
i += n
return result or None
def notmasked_contiguous(a, axis=None):
"""
Find contiguous unmasked data in a masked array along the given axis.
Parameters
----------
a : array_like
The input array.
axis : int, optional
Axis along which to perform the operation.
If None (default), applies to a flattened version of the array.
Returns
-------
endpoints : list
A list of slices (start and end indexes) of unmasked indexes
in the array.
See Also
--------
flatnotmasked_edges, flatnotmasked_contiguous, notmasked_edges,
clump_masked, clump_unmasked
Notes
-----
Only accepts 2-D arrays at most.
Examples
--------
>>> a = np.arange(9).reshape((3, 3))
>>> mask = np.zeros_like(a)
>>> mask[1:, 1:] = 1
>>> ma = np.ma.array(a, mask=mask)
>>> np.array(ma[~ma.mask])
array([0, 1, 2, 3, 6])
>>> np.ma.notmasked_contiguous(ma)
[slice(0, 4, None), slice(6, 7, None)]
"""
a = asarray(a)
nd = a.ndim
if nd > 2:
raise NotImplementedError("Currently limited to at most 2D arrays.")
if axis is None or nd == 1:
return flatnotmasked_contiguous(a)
#
result = []
#
other = (axis + 1) % 2
idx = [0, 0]
idx[axis] = slice(None, None)
#
for i in range(a.shape[other]):
idx[other] = i
result.append(flatnotmasked_contiguous(a[idx]) or None)
return result
def _ezclump(mask):
"""
Finds the clumps (groups of data with the same values) for a 1D bool array.
Returns a series of slices.
"""
if mask.ndim > 1:
mask = mask.ravel()
idx = (mask[1:] ^ mask[:-1]).nonzero()
idx = idx[0] + 1
if mask[0]:
if len(idx) == 0:
return [slice(0, mask.size)]
r = [slice(0, idx[0])]
r.extend((slice(left, right)
for left, right in zip(idx[1:-1:2], idx[2::2])))
else:
if len(idx) == 0:
return []
r = [slice(left, right) for left, right in zip(idx[:-1:2], idx[1::2])]
if mask[-1]:
r.append(slice(idx[-1], mask.size))
return r
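# Illustrative sketch (not part of the original source): each run of True
# values in the boolean mask becomes one slice.
# >>> _ezclump(np.array([True, True, False, False, True, False, True, True]))
# [slice(0, 2, None), slice(4, 5, None), slice(6, 8, None)]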
def clump_unmasked(a):
"""
Return list of slices corresponding to the unmasked clumps of a 1-D array.
(A "clump" is defined as a contiguous region of the array).
Parameters
----------
a : ndarray
A one-dimensional masked array.
Returns
-------
slices : list of slice
The list of slices, one for each continuous region of unmasked
elements in `a`.
Notes
-----
.. versionadded:: 1.4.0
See Also
--------
flatnotmasked_edges, flatnotmasked_contiguous, notmasked_edges,
notmasked_contiguous, clump_masked
Examples
--------
>>> a = np.ma.masked_array(np.arange(10))
>>> a[[0, 1, 2, 6, 8, 9]] = np.ma.masked
>>> np.ma.clump_unmasked(a)
[slice(3, 6, None), slice(7, 8, None)]
"""
mask = getattr(a, '_mask', nomask)
if mask is nomask:
return [slice(0, a.size)]
return _ezclump(~mask)
def clump_masked(a):
"""
Returns a list of slices corresponding to the masked clumps of a 1-D array.
(A "clump" is defined as a contiguous region of the array).
Parameters
----------
a : ndarray
A one-dimensional masked array.
Returns
-------
slices : list of slice
The list of slices, one for each continuous region of masked elements
in `a`.
Notes
-----
.. versionadded:: 1.4.0
See Also
--------
flatnotmasked_edges, flatnotmasked_contiguous, notmasked_edges,
notmasked_contiguous, clump_unmasked
Examples
--------
>>> a = np.ma.masked_array(np.arange(10))
>>> a[[0, 1, 2, 6, 8, 9]] = np.ma.masked
>>> np.ma.clump_masked(a)
[slice(0, 3, None), slice(6, 7, None), slice(8, 10, None)]
"""
mask = ma.getmask(a)
if mask is nomask:
return []
return _ezclump(mask)
###############################################################################
# Polynomial fit #
###############################################################################
def vander(x, n=None):
"""
Masked values in the input array result in rows of zeros.
"""
_vander = np.vander(x, n)
m = getmask(x)
if m is not nomask:
_vander[m] = 0
return _vander
vander.__doc__ = ma.doc_note(np.vander.__doc__, vander.__doc__)
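# Illustrative sketch (not part of the original source): rows corresponding to
# masked entries are zeroed out rather than dropped. Assumes
# ``import numpy.ma as ma``.
# >>> ma.vander(ma.array([1, 2, 3], mask=[0, 1, 0]), 3)
# array([[1, 1, 1],
#        [0, 0, 0],
#        [9, 3, 1]])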
def polyfit(x, y, deg, rcond=None, full=False, w=None, cov=False):
"""
Any masked values in x are propagated to y, and vice-versa.
"""
x = asarray(x)
y = asarray(y)
m = getmask(x)
if y.ndim == 1:
m = mask_or(m, getmask(y))
elif y.ndim == 2:
my = getmask(mask_rows(y))
if my is not nomask:
m = mask_or(m, my[:, 0])
else:
raise TypeError("Expected a 1D or 2D array for y!")
if w is not None:
w = asarray(w)
if w.ndim != 1:
raise TypeError("expected a 1-d array for weights")
if w.shape[0] != y.shape[0]:
raise TypeError("expected w and y to have the same length")
m = mask_or(m, getmask(w))
if m is not nomask:
not_m = ~m
if w is not None:
w = w[not_m]
return np.polyfit(x[not_m], y[not_m], deg, rcond, full, w, cov)
else:
return np.polyfit(x, y, deg, rcond, full, w, cov)
polyfit.__doc__ = ma.doc_note(np.polyfit.__doc__, polyfit.__doc__)
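# Illustrative sketch (not part of the original source): points where either
# coordinate is masked are excluded from the least-squares fit. Assumes
# ``import numpy.ma as ma``; the result is approximate floating point.
# >>> x = ma.array([0., 1., 2., 3., 4.], mask=[0, 0, 1, 0, 0])
# >>> y = ma.array([0., 2., 99., 6., 8.])
# >>> ma.polyfit(x, y, 1)   # -> approximately array([2., 0.])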
| bsd-3-clause |
huguesv/PTVS | Python/Product/Miniconda/Miniconda3-x64/Lib/unittest/test/test_functiontestcase.py | 108 | 5540 | import unittest
from unittest.test.support import LoggingResult
class Test_FunctionTestCase(unittest.TestCase):
# "Return the number of tests represented by the this test object. For
# TestCase instances, this will always be 1"
def test_countTestCases(self):
test = unittest.FunctionTestCase(lambda: None)
self.assertEqual(test.countTestCases(), 1)
# "When a setUp() method is defined, the test runner will run that method
# prior to each test. Likewise, if a tearDown() method is defined, the
# test runner will invoke that method after each test. In the example,
# setUp() was used to create a fresh sequence for each test."
#
# Make sure the proper call order is maintained, even if setUp() raises
# an exception.
def test_run_call_order__error_in_setUp(self):
events = []
result = LoggingResult(events)
def setUp():
events.append('setUp')
raise RuntimeError('raised by setUp')
def test():
events.append('test')
def tearDown():
events.append('tearDown')
expected = ['startTest', 'setUp', 'addError', 'stopTest']
unittest.FunctionTestCase(test, setUp, tearDown).run(result)
self.assertEqual(events, expected)
# "When a setUp() method is defined, the test runner will run that method
# prior to each test. Likewise, if a tearDown() method is defined, the
# test runner will invoke that method after each test. In the example,
# setUp() was used to create a fresh sequence for each test."
#
# Make sure the proper call order is maintained, even if the test raises
# an error (as opposed to a failure).
def test_run_call_order__error_in_test(self):
events = []
result = LoggingResult(events)
def setUp():
events.append('setUp')
def test():
events.append('test')
raise RuntimeError('raised by test')
def tearDown():
events.append('tearDown')
expected = ['startTest', 'setUp', 'test', 'tearDown',
'addError', 'stopTest']
unittest.FunctionTestCase(test, setUp, tearDown).run(result)
self.assertEqual(events, expected)
# "When a setUp() method is defined, the test runner will run that method
# prior to each test. Likewise, if a tearDown() method is defined, the
# test runner will invoke that method after each test. In the example,
# setUp() was used to create a fresh sequence for each test."
#
# Make sure the proper call order is maintained, even if the test signals
# a failure (as opposed to an error).
def test_run_call_order__failure_in_test(self):
events = []
result = LoggingResult(events)
def setUp():
events.append('setUp')
def test():
events.append('test')
self.fail('raised by test')
def tearDown():
events.append('tearDown')
expected = ['startTest', 'setUp', 'test', 'tearDown',
'addFailure', 'stopTest']
unittest.FunctionTestCase(test, setUp, tearDown).run(result)
self.assertEqual(events, expected)
# "When a setUp() method is defined, the test runner will run that method
# prior to each test. Likewise, if a tearDown() method is defined, the
# test runner will invoke that method after each test. In the example,
# setUp() was used to create a fresh sequence for each test."
#
# Make sure the proper call order is maintained, even if tearDown() raises
# an exception.
def test_run_call_order__error_in_tearDown(self):
events = []
result = LoggingResult(events)
def setUp():
events.append('setUp')
def test():
events.append('test')
def tearDown():
events.append('tearDown')
raise RuntimeError('raised by tearDown')
expected = ['startTest', 'setUp', 'test', 'tearDown', 'addError',
'stopTest']
unittest.FunctionTestCase(test, setUp, tearDown).run(result)
self.assertEqual(events, expected)
# "Return a string identifying the specific test case."
#
# Because of the vague nature of the docs, I'm not going to lock this
# test down too much. Really all that can be asserted is that the id()
# will be a string (either 8-bit or unicode -- again, because the docs
# just say "string")
def test_id(self):
test = unittest.FunctionTestCase(lambda: None)
self.assertIsInstance(test.id(), str)
# "Returns a one-line description of the test, or None if no description
# has been provided. The default implementation of this method returns
# the first line of the test method's docstring, if available, or None."
def test_shortDescription__no_docstring(self):
test = unittest.FunctionTestCase(lambda: None)
self.assertEqual(test.shortDescription(), None)
# "Returns a one-line description of the test, or None if no description
# has been provided. The default implementation of this method returns
# the first line of the test method's docstring, if available, or None."
def test_shortDescription__singleline_docstring(self):
desc = "this tests foo"
test = unittest.FunctionTestCase(lambda: None, description=desc)
self.assertEqual(test.shortDescription(), "this tests foo")
if __name__ == "__main__":
unittest.main()
| apache-2.0 |