repo_name | path | copies | size | content | license
---|---|---|---|---|---
caphrim007/ansible | lib/ansible/modules/cloud/azure/azure_rm_autoscale.py | 7 | 26866 | #!/usr/bin/python
#
# Copyright (c) 2017 Yuwei Zhou, <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: azure_rm_autoscale
version_added: "2.7"
short_description: Manage Azure autoscale setting.
description:
- Create, update or delete an autoscale setting.
options:
target:
description:
- The identifier of the resource to apply autoscale setting.
- It could be the resource id string.
- It can also be a dict that contains the C(name), C(subscription_id), C(namespace), C(types) and C(resource_group) of the resource.
resource_group:
required: true
description: resource group of the resource.
enabled:
type: bool
description: Specifies whether automatic scaling is enabled for the resource.
default: true
profiles:
description:
- The collection of automatic scaling profiles that specify different scaling parameters for different time periods.
- A maximum of 20 profiles can be specified.
suboptions:
name:
required: true
description: the name of the profile.
count:
required: true
description:
- The number of instances that will be set if metrics are not available for evaluation.
- The default is only used if the current instance count is lower than the default.
min_count:
description: the minimum number of instances for the resource.
max_count:
description: the maximum number of instances for the resource.
recurrence_frequency:
default: None
description:
- How often the schedule profile should take effect.
- For example, if this value is C(Week), each week will have the same set of profiles.
- This element is not used if the FixedDate element is used.
choices:
- None
- Second
- Minute
- Hour
- Day
- Week
- Month
- Year
recurrence_timezone:
description:
- The timezone of repeating times at which this profile begins.
- This element is not used if the FixedDate element is used.
recurrence_days:
description:
- The days of repeating times at which this profile begins.
- This element is not used if the FixedDate element is used.
recurrence_hours:
description:
- The hours of repeating times at which this profile begins.
- This element is not used if the FixedDate element is used.
recurrence_mins:
description:
- The minutes of repeating times at which this profile begins.
- This element is not used if the FixedDate element is used.
fixed_date_timezone:
description:
- The specific date-time timezone for the profile.
- This element is not used if the Recurrence element is used.
fixed_date_start:
description:
- The specific date-time start for the profile.
- This element is not used if the Recurrence element is used.
fixed_date_end:
description:
- The specific date-time end for the profile.
- This element is not used if the Recurrence element is used.
rules:
description:
- The collection of rules that provide the triggers and parameters for the scaling action.
- A maximum of 10 rules can be specified.
suboptions:
time_aggregation:
default: Average
description: How the data that is collected should be combined over time.
choices:
- Average
- Minimum
- Maximum
- Total
- Count
time_window:
required: true
description:
- The range of time (minutes) in which instance data is collected.
- This value must be greater than the delay in metric collection, which can vary from resource to resource.
- Must be between 5 and 720.
direction:
description: Whether the scaling action increases or decreases the number of instances.
choices:
- Increase
- Decrease
metric_name:
required: true
description: The name of the metric that defines what the rule monitors.
metric_resource_uri:
description: The resource identifier of the resource the rule monitors.
value:
description:
- The number of instances that are involved in the scaling action.
- This value must be 1 or greater.
operator:
default: GreaterThan
description: The operator that is used to compare the metric data and the threshold.
choices:
- Equals
- NotEquals
- GreaterThan
- GreaterThanOrEqual
- LessThan
- LessThanOrEqual
cooldown:
description:
- The amount of time (minutes) to wait since the last scaling action before this action occurs.
- It must be between 1 and 10080.
time_grain:
required: true
description:
- The granularity (minutes) of metrics the rule monitors.
- Must be one of the predefined values returned from metric definitions for the metric.
- Must be between 1 and 720.
statistic:
default: Average
description: How the metrics from multiple instances are combined.
choices:
- Average
- Min
- Max
- Sum
threshold:
default: 70
description: The threshold of the metric that triggers the scale action.
type:
description: The type of action that should occur when the scale rule fires.
choices:
- PercentChangeCount
- ExactCount
- ChangeCount
notifications:
description: the collection of notifications.
suboptions:
custom_emails:
description: The custom e-mail list. This value can be null or empty, in which case this attribute will be ignored.
send_to_subscription_administrator:
type: bool
description: A value indicating whether to send email to subscription administrator.
webhooks:
description: The list of webhook notification service URIs.
send_to_subscription_co_administrators:
type: bool
description: A value indicating whether to send email to subscription co-administrators.
state:
default: present
description: Assert the state of the autoscale setting. Use 'present' to create or update and 'absent' to delete.
choices:
- present
- absent
location:
description: location of the resource.
name:
required: true
description: name of the resource.
extends_documentation_fragment:
- azure
- azure_tags
author:
- "Yuwei Zhou (@yuwzho)"
'''
EXAMPLES = '''
- name: Create an auto scale
azure_rm_autoscale:
target: "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/foo/providers/Microsoft.Compute/virtualMachineScaleSets/vmss"
enabled: true
profiles:
- count: '1'
recurrence_days:
- Monday
name: Auto created scale condition
recurrence_timezone: China Standard Time
recurrence_mins:
- '0'
min_count: '1'
max_count: '1'
recurrence_frequency: Week
recurrence_hours:
- '18'
name: scale
resource_group: foo
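# The target may also be given as a dict describing the resource; the
# values below are illustrative:
- name: Create an auto scale with a dict target
  azure_rm_autoscale:
    target:
      name: vmss
      namespace: "Microsoft.Compute"
      types: "virtualMachineScaleSets"
      resource_group: foo
    enabled: true
    profiles:
      - count: '1'
        name: Auto created scale condition
    name: scale
    resource_group: foo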
- name: Create an auto scale with complicated profile
azure_rm_autoscale:
target: "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/foo/providers/Microsoft.Compute/virtualMachineScaleSets/vmss"
enabled: true
profiles:
- count: '1'
recurrence_days:
- Monday
name: Auto created scale condition 0
rules:
- time_aggregation: Average
time_window: 10
direction: Increase
metric_name: Percentage CPU
metric_resource_uri: "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/foo/providers/Microsoft.Compute/virtualMachineScaleSets/vmss"
value: '1'
threshold: 70
cooldown: 5
time_grain: 1
statistic: Average
operator: GreaterThan
type: ChangeCount
max_count: '1'
recurrence_mins:
- '0'
min_count: '1'
recurrence_timezone: China Standard Time
recurrence_frequency: Week
recurrence_hours:
- '6'
notifications:
- email_admin: True
email_co_admin: False
custom_emails:
- [email protected]
name: scale
resource_group: foo
- name: Delete an Azure Auto Scale Setting
azure_rm_autoscale:
state: absent
resource_group: foo
name: scale
'''
RETURN = '''
state:
description: Current state of the resource.
returned: always
type: dict
sample: {
"changed": false,
"enabled": true,
"id": "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/foo/providers/microsoft.insights/autoscalesettings/scale",
"location": "eastus",
"name": "scale",
"notifications": [
{
"custom_emails": [
"[email protected]"
],
"send_to_subscription_administrator": true,
"send_to_subscription_co_administrators": false,
"webhooks": []
}
],
"profiles": [
{
"count": "1",
"max_count": "1",
"min_count": "1",
"name": "Auto created scale condition 0",
"recurrence_days": [
"Monday"
],
"recurrence_frequency": "Week",
"recurrence_hours": [
"6"
],
"recurrence_mins": [
"0"
],
"recurrence_timezone": "China Standard Time",
"rules": [
{
"cooldown": 5.0,
"direction": "Increase",
"metric_name": "Percentage CPU",
"metric_resource_uri": "/subscriptions/X/resourceGroups/foo/providers/Microsoft.Compute/virtualMachineScaleSets/vmss",
"operator": "GreaterThan",
"statistic": "Average",
"threshold": 70.0,
"time_aggregation": "Average",
"time_grain": 1.0,
"time_window": 10.0,
"type": "ChangeCount",
"value": "1"
}
]
}
],
"target": "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/foo/providers/Microsoft.Compute/virtualMachineScaleSets/vmss"
}
''' # NOQA
from ansible.module_utils.azure_rm_common import AzureRMModuleBase, format_resource_id
from datetime import timedelta
try:
from msrestazure.tools import parse_resource_id
from msrestazure.azure_exceptions import CloudError
from azure.mgmt.monitor.models import WebhookNotification, EmailNotification, AutoscaleNotification, RecurrentSchedule, MetricTrigger, \
ScaleAction, AutoscaleSettingResource, AutoscaleProfile, ScaleCapacity, TimeWindow, Recurrence, ScaleRule
from ansible.module_utils._text import to_native
except ImportError:
# This is handled in azure_rm_common
pass
def timedelta_to_minutes(time):
if not time:
return 0
return time.days * 1440 + time.seconds / 60.0 + time.microseconds / 60000000.0
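# For example (a quick sanity check, not part of the original module):
#   timedelta_to_minutes(timedelta(minutes=90)) == 90.0
#   timedelta_to_minutes(None) == 0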
def get_enum_value(item):
if 'value' in dir(item):
return to_native(item.value)
return to_native(item)
def auto_scale_to_dict(instance):
if not instance:
return dict()
return dict(
id=to_native(instance.id or ''),
name=to_native(instance.name),
location=to_native(instance.location),
profiles=[profile_to_dict(p) for p in instance.profiles or []],
notifications=[notification_to_dict(n) for n in instance.notifications or []],
enabled=instance.enabled,
target=to_native(instance.target_resource_uri),
tags=instance.tags
)
def rule_to_dict(rule):
if not rule:
return dict()
result = dict(metric_name=to_native(rule.metric_trigger.metric_name),
metric_resource_uri=to_native(rule.metric_trigger.metric_resource_uri),
time_grain=timedelta_to_minutes(rule.metric_trigger.time_grain),
statistic=get_enum_value(rule.metric_trigger.statistic),
time_window=timedelta_to_minutes(rule.metric_trigger.time_window),
time_aggregation=get_enum_value(rule.metric_trigger.time_aggregation),
operator=get_enum_value(rule.metric_trigger.operator),
threshold=float(rule.metric_trigger.threshold))
if rule.scale_action and to_native(rule.scale_action.direction) != 'None':
result['direction'] = get_enum_value(rule.scale_action.direction)
result['type'] = get_enum_value(rule.scale_action.type)
result['value'] = to_native(rule.scale_action.value)
result['cooldown'] = timedelta_to_minutes(rule.scale_action.cooldown)
return result
def profile_to_dict(profile):
if not profile:
return dict()
result = dict(name=to_native(profile.name),
count=to_native(profile.capacity.default),
max_count=to_native(profile.capacity.maximum),
min_count=to_native(profile.capacity.minimum))
if profile.rules:
result['rules'] = [rule_to_dict(r) for r in profile.rules]
if profile.fixed_date:
result['fixed_date_timezone'] = profile.fixed_date.time_zone
result['fixed_date_start'] = profile.fixed_date.start
result['fixed_date_end'] = profile.fixed_date.end
if profile.recurrence:
if get_enum_value(profile.recurrence.frequency) != 'None':
result['recurrence_frequency'] = get_enum_value(profile.recurrence.frequency)
if profile.recurrence.schedule:
result['recurrence_timezone'] = to_native(str(profile.recurrence.schedule.time_zone))
result['recurrence_days'] = [to_native(r) for r in profile.recurrence.schedule.days]
result['recurrence_hours'] = [to_native(r) for r in profile.recurrence.schedule.hours]
result['recurrence_mins'] = [to_native(r) for r in profile.recurrence.schedule.minutes]
return result
def notification_to_dict(notification):
if not notification:
return dict()
return dict(send_to_subscription_administrator=notification.email.send_to_subscription_administrator if notification.email else False,
send_to_subscription_co_administrators=notification.email.send_to_subscription_co_administrators if notification.email else False,
custom_emails=[to_native(e) for e in notification.email.custom_emails or []],
webhooks=[to_native(w.service_url) for w in notification.webhooks or []])
rule_spec = dict(
metric_name=dict(type='str', required=True),
metric_resource_uri=dict(type='str'),
time_grain=dict(type='float', required=True),
statistic=dict(type='str', choices=['Average', 'Min', 'Max', 'Sum'], default='Average'),
time_window=dict(type='float', required=True),
time_aggregation=dict(type='str', choices=['Average', 'Minimum', 'Maximum', 'Total', 'Count'], default='Average'),
operator=dict(type='str',
choices=['Equals', 'NotEquals', 'GreaterThan', 'GreaterThanOrEqual', 'LessThan', 'LessThanOrEqual'],
default='GreaterThan'),
threshold=dict(type='float', default=70),
direction=dict(type='str', choices=['Increase', 'Decrease']),
type=dict(type='str', choices=['PercentChangeCount', 'ExactCount', 'ChangeCount']),
value=dict(type='str'),
cooldown=dict(type='float')
)
profile_spec = dict(
name=dict(type='str', required=True),
count=dict(type='str', required=True),
max_count=dict(type='str'),
min_count=dict(type='str'),
rules=dict(type='list', elements='dict', options=rule_spec),
fixed_date_timezone=dict(type='str'),
fixed_date_start=dict(type='str'),
fixed_date_end=dict(type='str'),
recurrence_frequency=dict(type='str', choices=['None', 'Second', 'Minute', 'Hour', 'Day', 'Week', 'Month', 'Year'], default='None'),
recurrence_timezone=dict(type='str'),
recurrence_days=dict(type='list', elements='str'),
recurrence_hours=dict(type='list', elements='str'),
recurrence_mins=dict(type='list', elements='str')
)
notification_spec = dict(
send_to_subscription_administrator=dict(type='bool', aliases=['email_admin'], default=False),
send_to_subscription_co_administrators=dict(type='bool', aliases=['email_co_admin'], default=False),
custom_emails=dict(type='list', elements='str'),
webhooks=dict(type='list', elements='str')
)
class AzureRMAutoScale(AzureRMModuleBase):
def __init__(self):
self.module_arg_spec = dict(
resource_group=dict(type='str', required=True),
name=dict(type='str', required=True),
state=dict(type='str', default='present', choices=['present', 'absent']),
location=dict(type='str'),
target=dict(type='raw'),
profiles=dict(type='list', elements='dict', options=profile_spec),
enabled=dict(type='bool', default=True),
notifications=dict(type='list', elements='dict', options=notification_spec)
)
self.results = dict(
changed=False
)
required_if = [
('state', 'present', ['target', 'profiles'])
]
self.resource_group = None
self.name = None
self.state = None
self.location = None
self.tags = None
self.target = None
self.profiles = None
self.notifications = None
self.enabled = None
super(AzureRMAutoScale, self).__init__(self.module_arg_spec, supports_check_mode=True, required_if=required_if)
def exec_module(self, **kwargs):
for key in list(self.module_arg_spec.keys()) + ['tags']:
setattr(self, key, kwargs[key])
results = None
changed = False
self.log('Fetching auto scale settings {0}'.format(self.name))
results = self.get_auto_scale()
if results and self.state == 'absent':
# delete
changed = True
if not self.check_mode:
self.delete_auto_scale()
elif self.state == 'present':
if not self.location:
# Set default location
resource_group = self.get_resource_group(self.resource_group)
self.location = resource_group.location
resource_id = self.target
if isinstance(self.target, dict):
resource_id = format_resource_id(val=self.target['name'],
subscription_id=self.target.get('subscription_id') or self.subscription_id,
namespace=self.target['namespace'],
types=self.target['types'],
resource_group=self.target.get('resource_group') or self.resource_group)
self.target = resource_id
resource_name = self.name
def create_rule_instance(params):
rule = params.copy()
rule['metric_resource_uri'] = rule.get('metric_resource_uri', self.target)
rule['time_grain'] = timedelta(minutes=rule.get('time_grain', 0))
rule['time_window'] = timedelta(minutes=rule.get('time_window', 0))
rule['cooldown'] = timedelta(minutes=rule.get('cooldown', 0))
return ScaleRule(metric_trigger=MetricTrigger(**rule), scale_action=ScaleAction(**rule))
profiles = [AutoscaleProfile(name=p.get('name'),
capacity=ScaleCapacity(minimum=p.get('min_count'),
maximum=p.get('max_count'),
default=p.get('count')),
rules=[create_rule_instance(r) for r in p.get('rules') or []],
fixed_date=TimeWindow(time_zone=p.get('fixed_date_timezone'),
start=p.get('fixed_date_start'),
end=p.get('fixed_date_end')) if p.get('fixed_date_timezone') else None,
recurrence=Recurrence(frequency=p.get('recurrence_frequency'),
schedule=(RecurrentSchedule(time_zone=p.get('recurrence_timezone'),
days=p.get('recurrence_days'),
hours=p.get('recurrence_hours'),
minutes=p.get('recurrence_mins')))
if p.get('recurrence_frequency') else None)) for p in self.profiles or []]
notifications = [AutoscaleNotification(email=EmailNotification(**n),
webhooks=[WebhookNotification(service_uri=w) for w in n.get('webhooks') or []])
for n in self.notifications or []]
if not results:
# create new
changed = True
else:
# check changed
resource_name = results.autoscale_setting_resource_name or self.name
update_tags, tags = self.update_tags(results.tags)
if update_tags:
changed = True
self.tags = tags
if self.target != results.target_resource_uri:
changed = True
if self.enabled != results.enabled:
changed = True
profile_result_set = set([str(profile_to_dict(p)) for p in results.profiles or []])
if profile_result_set != set([str(profile_to_dict(p)) for p in profiles]):
changed = True
notification_result_set = set([str(notification_to_dict(n)) for n in results.notifications or []])
if notification_result_set != set([str(notification_to_dict(n)) for n in notifications]):
changed = True
if changed:
# construct the instance that will be sent to the create_or_update API
results = AutoscaleSettingResource(location=self.location,
tags=self.tags,
profiles=profiles,
notifications=notifications,
enabled=self.enabled,
autoscale_setting_resource_name=resource_name,
target_resource_uri=self.target)
if not self.check_mode:
results = self.create_or_update_auto_scale(results)
# results should be the dict of the instance
self.results = auto_scale_to_dict(results)
self.results['changed'] = changed
return self.results
def get_auto_scale(self):
try:
return self.monitor_client.autoscale_settings.get(self.resource_group, self.name)
except Exception as exc:
self.log('Error: failed to get auto scale settings {0} - {1}'.format(self.name, str(exc)))
return None
def create_or_update_auto_scale(self, param):
try:
return self.monitor_client.autoscale_settings.create_or_update(self.resource_group, self.name, param)
except Exception as exc:
self.fail("Error creating auto scale settings {0} - {1}".format(self.name, str(exc)))
def delete_auto_scale(self):
self.log('Deleting auto scale settings {0}'.format(self.name))
try:
return self.monitor_client.autoscale_settings.delete(self.resource_group, self.name)
except Exception as exc:
self.fail("Error deleting auto scale settings {0} - {1}".format(self.name, str(exc)))
def main():
AzureRMAutoScale()
if __name__ == '__main__':
main()
| gpl-3.0 |
raj454raj/eden | modules/webkit_url2png.py | 53 | 2510 | #!/usr/bin/env python
import sys
import signal
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4.QtWebKit import QWebPage
def save_webpage_screenshot(url, width, height, file_name = None):
"""Saves a screenshot of the webpage given in url into filename+".png"
width and height, if given, are in pixels
if not given, the browser's default dimensions will be used.
Needs a call to window.print() from within the webpage.
Example:
save_webpage_screenshot(
"http://www.example.com",
"example",
width=1024,
height=768
)
"""
app = QApplication(sys.argv)
signal.signal(signal.SIGINT, signal.SIG_DFL)
class MyQWebPage(QWebPage):
@pyqtSlot()
def shouldInterruptJavaScript(qwebpage):
print "not interrupting"
return False
webpage = MyQWebPage()
# set page dimensions
webpage.setViewportSize(QSize(int(width), int(height)))
# display errors otherwise debugging is very difficult
def print_error(
message,
lineNumber,
sourceID
):
print "\n%(sourceID)s line %(lineNumber)i: \n %(message)s" % locals()
webpage.javaScriptConsoleMessage = print_error
if file_name is None:
result = []
# register print request handler
def onPrintRequested(virtual_browser_window):
#print "onPrintRequested"
# Paint this frame into an image
image = QImage(
webpage.viewportSize(),
QImage.Format_ARGB32
)
painter = QPainter(image)
virtual_browser_window.render(painter)
painter.end()
if file_name is not None:
image.save(file_name)
else:
byte_array = QByteArray()
buffer = QBuffer(byte_array)
buffer.open(QIODevice.WriteOnly)
image.save(buffer, "PNG")
result.append(str(byte_array))
if __name__ == "__main__":
if file_name is None:
sys.stdout.write(result[0])
sys.exit(0)
else:
app.quit()
webpage.printRequested.connect(onPrintRequested)
# load the page and wait for a print request
webpage.mainFrame().load(QUrl(url))
app.exec_()
if file_name is None:
return result[0]
if __name__ == "__main__":
sys.exit(
save_webpage_screenshot(
*sys.argv[1:]
)
) | mit |
dfdeshom/solrcloudpy | solrcloudpy/collection/stats.py | 1 | 1889 | """
Get different statistics about the underlying index in a collection
"""
from future.utils import iteritems
from solrcloudpy.utils import _Request, SolrResult
class SolrIndexStats(object):
"""
Get different statistics about the underlying index in a collection
"""
def __init__(self, connection, name):
"""
:param connection: the connection to solr
:type connection: SolrConnection
:param name: the name of the index
:type name: str
"""
self.connection = connection
self.name = name
self.client = _Request(connection)
@property
def cache_stats(self):
"""
Get cache statistics about the index.
We retrieve cache stats for the document, filter, fieldValue and fieldCache caches.
:return: The result
:rtype: SolrResult
"""
params = {'stats': 'true', 'cat': 'CACHE'}
result = self.client.get('/solr/%s/admin/mbeans' % self.name, params).result.dict
caches = result['solr-mbeans']['CACHE']
res = {}
for cache, info in iteritems(caches):
if cache == 'fieldCache':
res[cache] = {'entries_count': info['stats'].get('entries_count', 0)}
continue
res[cache] = info['stats']
return SolrResult(res)
@property
def queryhandler_stats(self):
"""
Get query handler statistics for all of the handlers used in this Solr node
:return: The result
:rtype: SolrResult
"""
params = {'stats': 'true', 'cat': 'QUERYHANDLER'}
result = self.client.get('/solr/%s/admin/mbeans' % self.name, params).result.dict
caches = result['solr-mbeans']['QUERYHANDLER']
res = {}
for cache, info in iteritems(caches):
res[cache] = info['stats']
return SolrResult(res)
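# A minimal usage sketch (the connection setup below is illustrative and
# not part of this module):
#
#   from solrcloudpy import SolrConnection
#   conn = SolrConnection(['localhost:9983'])
#   stats = SolrIndexStats(conn, 'collection1')
#   print(stats.cache_stats)
#   print(stats.queryhandler_stats)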
| bsd-3-clause |
asgardproject/asgard-blog | blog/forms.py | 2 | 2259 | from django import forms
from django.utils.translation import ugettext_lazy as _
# Stop Words courtesy of:
# http://www.dcs.gla.ac.uk/idom/ir_resources/linguistic_utils/stop_words
STOP_WORDS = r"""\b(a|about|above|across|after|afterwards|again|
against|all|almost|alone|along|already|also|although|always|am|
among|amongst|amoungst|amount|an|and|another|any|anyhow|anyone|
anything|anyway|anywhere|are|around|as|at|back|be|became|because|
become|becomes|becoming|been|before|beforehand|behind|being|
below|beside|besides|between|beyond|bill|both|bottom|but|by|call|
can|cannot|cant|co|computer|con|could|couldnt|cry|de|describe|
detail|do|done|down|due|during|each|eg|eight|either|eleven|else|
elsewhere|empty|enough|etc|even|ever|every|everyone|everything|
everywhere|except|few|fifteen|fify|fill|find|fire|first|five|for|
former|formerly|forty|found|four|from|front|full|further|get|
give|go|had|has|hasnt|have|he|hence|her|here|hereafter|hereby|
herein|hereupon|hers|herself|him|himself|his|how|however|hundred|
i|ie|if|in|inc|indeed|interest|into|is|it|its|itself|keep|last|
latter|latterly|least|less|ltd|made|many|may|me|meanwhile|might|
mill|mine|more|moreover|most|mostly|move|much|must|my|myself|
name|namely|neither|never|nevertheless|next|nine|no|nobody|none|
noone|nor|not|nothing|now|nowhere|of|off|often|on|once|one|only|
onto|or|other|others|otherwise|our|ours|ourselves|out|over|own|
part|per|perhaps|please|put|rather|re|same|see|seem|seemed|
seeming|seems|serious|several|she|should|show|side|since|sincere|
six|sixty|so|some|somehow|someone|something|sometime|sometimes|
somewhere|still|such|system|take|ten|than|that|the|their|them|
themselves|then|thence|there|thereafter|thereby|therefore|
therein|thereupon|these|they|thick|thin|third|this|those|though|
three|through|throughout|thru|thus|to|together|too|top|toward|
towards|twelve|twenty|two|un|under|until|up|upon|us|very|via|was|
we|well|were|what|whatever|when|whence|whenever|where|whereafter|
whereas|whereby|wherein|whereupon|wherever|whether|which|while|
whither|who|whoever|whole|whom|whose|why|will|with|within|
without|would|yet|you|your|yours|yourself|yourselves)\b"""
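# A minimal sketch of applying the pattern to a raw query string (this
# helper is illustrative and not part of the original module; re.VERBOSE
# makes the regex ignore the literal newlines inside STOP_WORDS):
#
#   import re
#   STOP_WORDS_RE = re.compile(STOP_WORDS, re.VERBOSE)
#   def strip_stop_words(query):
#       return STOP_WORDS_RE.sub('', query)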
class BlogSearchForm(forms.Form):
q = forms.CharField(label=_("Search")) | bsd-3-clause |
vvuk/servo | tests/wpt/web-platform-tests/2dcontext/tools/gentestutils.py | 11 | 33741 | # Copyright (c) 2010 Philip Taylor
# Released under the BSD license and W3C Test Suite License: see LICENSE.txt
# Current code status:
#
# This was originally written for use at
# http://philip.html5.org/tests/canvas/suite/tests/
#
# It has been adapted for use with the web-platform-tests suite at
# https://github.com/w3c/web-platform-tests/
#
# The W3C version excludes a number of features (multiple versions of each test
# case of varying verbosity, Mozilla mochitests, semi-automated test harness)
# to focus on simply providing reviewable test cases. It also expects a different
# directory structure.
# This code attempts to support both versions, but the non-W3C version hasn't
# been tested recently and is probably broken.
# To update or add test cases:
#
# * Modify the tests*.yaml files.
# 'name' is an arbitrary hierarchical name to help categorise tests.
# 'desc' is a rough description of what behaviour the test aims to test.
# 'testing' is a list of references to spec.yaml, to show which spec sentences
# this test case is primarily testing.
# 'code' is JavaScript code to execute, with some special commands starting with '@'
# 'expected' is what the final canvas output should be: a string 'green' or 'clear'
# (100x50 images in both cases), or a string 'size 100 50' (or any other size)
# followed by Python code using Pycairo to generate the image.
#
# * Run "python gentest.py".
# This requires a few Python modules which might not be ubiquitous.
# It has only been tested on Linux.
# It will usually emit some warnings, which ideally should be fixed but can
# generally be safely ignored.
#
# * Test the tests, add new ones to Git, remove deleted ones from Git, etc.
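#
# A minimal illustrative tests*.yaml entry (the name and spec reference are
# hypothetical):
#
#   - name: 2d.example.green
#     desc: Filling the canvas leaves it fully green
#     testing:
#       - 2d.example.spec-point
#     code: |
#       ctx.fillStyle = '#0f0';
#       ctx.fillRect(0, 0, 100, 50);
#       @assert pixel 50,25 == 0,255,0,255;
#     expected: green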
import re
import codecs
import time
import os
import shutil
import sys
import xml.dom.minidom
from xml.dom.minidom import Node
import cairo
try:
import syck as yaml # compatible and lots faster
except ImportError:
import yaml
def genTestUtils(TESTOUTPUTDIR, IMAGEOUTPUTDIR, TEMPLATEFILE, NAME2DIRFILE, ISOFFSCREENCANVAS):
# Default mode is for the W3C test suite; the --standalone option
# generates various extra files that aren't needed there
W3CMODE = True
if '--standalone' in sys.argv:
W3CMODE = False
MISCOUTPUTDIR = './output'
SPECOUTPUTDIR = '../../annotated-spec'
SPECOUTPUTPATH = '../annotated-spec' # relative to TESTOUTPUTDIR
def simpleEscapeJS(str):
return str.replace('\\', '\\\\').replace('"', '\\"')
def escapeJS(str):
str = simpleEscapeJS(str)
str = re.sub(r'\[(\w+)\]', r'[\\""+(\1)+"\\"]', str) # kind of an ugly hack, for nicer failure-message output
return str
def escapeHTML(str):
return str.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;').replace('"', '&quot;')
def expand_nonfinite(method, argstr, tail):
"""
>>> print expand_nonfinite('f', '<0 a>, <0 b>', ';')
f(a, 0);
f(0, b);
f(a, b);
>>> print expand_nonfinite('f', '<0 a>, <0 b c>, <0 d>', ';')
f(a, 0, 0);
f(0, b, 0);
f(0, c, 0);
f(0, 0, d);
f(a, b, 0);
f(a, b, d);
f(a, 0, d);
f(0, b, d);
"""
# argstr is "<valid-1 invalid1-1 invalid2-1 ...>, ..." (where usually
# 'invalid' is Infinity/-Infinity/NaN)
args = []
for arg in argstr.split(', '):
a = re.match('<(.*)>', arg).group(1)
args.append(a.split(' '))
calls = []
# Start with the valid argument list
call = [ args[j][0] for j in range(len(args)) ]
# For each argument alone, try setting it to all its invalid values:
for i in range(len(args)):
for a in args[i][1:]:
c2 = call[:]
c2[i] = a
calls.append(c2)
# For all combinations of >= 2 arguments, try setting them to their
# first invalid values. (Don't do all invalid values, because the
# number of combinations explodes.)
def f(c, start, depth):
for i in range(start, len(args)):
if len(args[i]) > 1:
a = args[i][1]
c2 = c[:]
c2[i] = a
if depth > 0: calls.append(c2)
f(c2, i+1, depth+1)
f(call, 0, 0)
return '\n'.join('%s(%s)%s' % (method, ', '.join(c), tail) for c in calls)
# Run with --test argument to run unit tests
if len(sys.argv) > 1 and sys.argv[1] == '--test':
import doctest
doctest.testmod()
sys.exit()
templates = yaml.load(open(TEMPLATEFILE, "r").read())
name_mapping = yaml.load(open(NAME2DIRFILE, "r").read())
SPECFILE = 'spec.yaml'
if ISOFFSCREENCANVAS:
SPECFILE = '../../2dcontext/tools/spec.yaml'
spec_assertions = []
for s in yaml.load(open(SPECFILE, "r").read())['assertions']:
if 'meta' in s:
eval(compile(s['meta'], '<meta spec assertion>', 'exec'), {}, {'assertions':spec_assertions})
else:
spec_assertions.append(s)
tests = []
TESTSFILES = ['tests.yaml', 'tests2d.yaml', 'tests2dtext.yaml']
if ISOFFSCREENCANVAS:
TESTSFILES = ['tests2d.yaml']
for t in sum([ yaml.load(open(f, "r").read()) for f in TESTSFILES], []):
if 'DISABLED' in t:
continue
if 'meta' in t:
eval(compile(t['meta'], '<meta test>', 'exec'), {}, {'tests':tests})
else:
tests.append(t)
category_names = []
category_contents_direct = {}
category_contents_all = {}
spec_ids = {}
for t in spec_assertions: spec_ids[t['id']] = True
spec_refs = {}
def backref_html(name):
backrefs = []
c = ''
for p in name.split('.')[:-1]:
c += '.'+p
backrefs.append('<a href="index%s.html">%s</a>.' % (c, p))
backrefs.append(name.split('.')[-1])
return ''.join(backrefs)
def make_flat_image(filename, w, h, r,g,b,a):
if os.path.exists('%s/%s' % (IMAGEOUTPUTDIR, filename)):
return filename
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, w, h)
cr = cairo.Context(surface)
cr.set_source_rgba(r, g, b, a)
cr.rectangle(0, 0, w, h)
cr.fill()
surface.write_to_png('%s/%s' % (IMAGEOUTPUTDIR, filename))
return filename
# Ensure the test output directories exist
testdirs = [TESTOUTPUTDIR, IMAGEOUTPUTDIR, MISCOUTPUTDIR]
if not W3CMODE: testdirs.append('%s/mochitests' % MISCOUTPUTDIR)
else:
for map_dir in set(name_mapping.values()):
testdirs.append("%s/%s" % (TESTOUTPUTDIR, map_dir))
for d in testdirs:
try: os.mkdir(d)
except: pass # ignore if it already exists
mochitests = []
used_images = {}
def expand_test_code(code):
code = re.sub(r'@nonfinite ([^(]+)\(([^)]+)\)(.*)', lambda m: expand_nonfinite(m.group(1), m.group(2), m.group(3)), code) # must come before '@assert throws'
if ISOFFSCREENCANVAS:
code = re.sub(r'@assert pixel (\d+,\d+) == (\d+,\d+,\d+,\d+);',
r'_assertPixel(offscreenCanvas, \1, \2, "\1", "\2");',
code)
else:
code = re.sub(r'@assert pixel (\d+,\d+) == (\d+,\d+,\d+,\d+);',
r'_assertPixel(canvas, \1, \2, "\1", "\2");',
code)
if ISOFFSCREENCANVAS:
code = re.sub(r'@assert pixel (\d+,\d+) ==~ (\d+,\d+,\d+,\d+);',
r'_assertPixelApprox(offscreenCanvas, \1, \2, "\1", "\2", 2);',
code)
else:
code = re.sub(r'@assert pixel (\d+,\d+) ==~ (\d+,\d+,\d+,\d+);',
r'_assertPixelApprox(canvas, \1, \2, "\1", "\2", 2);',
code)
if ISOFFSCREENCANVAS:
code = re.sub(r'@assert pixel (\d+,\d+) ==~ (\d+,\d+,\d+,\d+) \+/- (\d+);',
r'_assertPixelApprox(offscreenCanvas, \1, \2, "\1", "\2", \3);',
code)
else:
code = re.sub(r'@assert pixel (\d+,\d+) ==~ (\d+,\d+,\d+,\d+) \+/- (\d+);',
r'_assertPixelApprox(canvas, \1, \2, "\1", "\2", \3);',
code)
code = re.sub(r'@assert throws (\S+_ERR) (.*);',
r'assert_throws("\1", function() { \2; });',
code)
code = re.sub(r'@assert throws (\S+Error) (.*);',
r'assert_throws(new \1(), function() { \2; });',
code)
code = re.sub(r'@assert throws (.*);',
r'assert_throws(null, function() { \1; });',
code)
code = re.sub(r'@assert (.*) === (.*);',
lambda m: '_assertSame(%s, %s, "%s", "%s");'
% (m.group(1), m.group(2), escapeJS(m.group(1)), escapeJS(m.group(2)))
, code)
code = re.sub(r'@assert (.*) !== (.*);',
lambda m: '_assertDifferent(%s, %s, "%s", "%s");'
% (m.group(1), m.group(2), escapeJS(m.group(1)), escapeJS(m.group(2)))
, code)
code = re.sub(r'@assert (.*) =~ (.*);',
lambda m: 'assert_regexp_match(%s, %s);'
% (m.group(1), m.group(2))
, code)
code = re.sub(r'@assert (.*);',
lambda m: '_assert(%s, "%s");'
% (m.group(1), escapeJS(m.group(1)))
, code)
code = re.sub(r' @moz-todo', '', code)
code = re.sub(r'@moz-UniversalBrowserRead;',
""
, code)
assert('@' not in code)
return code
def expand_mochitest_code(code):
code = re.sub(r'@nonfinite ([^(]+)\(([^)]+)\)(.*)', lambda m: expand_nonfinite(m.group(1), m.group(2), m.group(3)), code)
code = re.sub(r'@assert pixel (\d+,\d+) == (\d+,\d+,\d+,\d+);',
r'isPixel(ctx, \1, \2, "\1", "\2", 0);',
code)
code = re.sub(r'@assert pixel (\d+,\d+) ==~ (\d+,\d+,\d+,\d+);',
r'isPixel(ctx, \1, \2, "\1", "\2", 2);',
code)
code = re.sub(r'@assert pixel (\d+,\d+) ==~ (\d+,\d+,\d+,\d+) \+/- (\d+);',
r'isPixel(ctx, \1, \2, "\1", "\2", \3);',
code)
code = re.sub(r'@assert throws (\S+_ERR) (.*);',
lambda m: 'var _thrown = undefined; try {\n %s;\n} catch (e) { _thrown = e }; ok(_thrown && _thrown.code == DOMException.%s, "should throw %s");'
% (m.group(2), m.group(1), m.group(1))
, code)
code = re.sub(r'@assert throws (\S+Error) (.*);',
lambda m: 'var _thrown = undefined; try {\n %s;\n} catch (e) { _thrown = e }; ok(_thrown && (_thrown instanceof %s), "should throw %s");'
% (m.group(2), m.group(1), m.group(1))
, code)
code = re.sub(r'@assert throws (.*);',
lambda m: 'try { var _thrown = false;\n %s;\n} catch (e) { _thrown = true; } finally { ok(_thrown, "should throw exception"); }'
% (m.group(1))
, code)
code = re.sub(r'@assert (.*) =~ (.*);',
lambda m: 'ok(%s.match(%s), "%s.match(%s)");'
% (m.group(1), m.group(2), escapeJS(m.group(1)), escapeJS(m.group(2)))
, code)
code = re.sub(r'@assert (.*);',
lambda m: 'ok(%s, "%s");'
% (m.group(1), escapeJS(m.group(1)))
, code)
code = re.sub(r'((?:^|\n|;)\s*)ok(.*;) @moz-todo',
lambda m: '%stodo%s'
% (m.group(1), m.group(2))
, code)
code = re.sub(r'((?:^|\n|;)\s*)(is.*;) @moz-todo',
lambda m: '%stodo_%s'
% (m.group(1), m.group(2))
, code)
code = re.sub(r'@moz-UniversalBrowserRead;',
"netscape.security.PrivilegeManager.enablePrivilege('UniversalBrowserRead');"
, code)
code = code.replace('../images/', 'image_')
assert '@' not in code, '@ not in code:\n%s' % code
return code
used_tests = {}
for i in range(len(tests)):
test = tests[i]
name = test['name']
print "\r(%s)" % name, " "*32, "\t",
if name in used_tests:
print "Test %s is defined twice" % name
used_tests[name] = 1
mapped_name = None
for mn in sorted(name_mapping.keys(), key=len, reverse=True):
if name.startswith(mn):
mapped_name = "%s/%s" % (name_mapping[mn], name)
break
if not mapped_name:
print "LIKELY ERROR: %s has no defined target directory mapping" % name
if ISOFFSCREENCANVAS:
continue
else:
mapped_name = name
if 'manual' in test:
mapped_name += "-manual"
cat_total = ''
for cat_part in [''] + name.split('.')[:-1]:
cat_total += cat_part+'.'
if not cat_total in category_names: category_names.append(cat_total)
category_contents_all.setdefault(cat_total, []).append(name)
category_contents_direct.setdefault(cat_total, []).append(name)
for ref in test.get('testing', []):
if ref not in spec_ids:
print "Test %s uses nonexistent spec point %s" % (name, ref)
spec_refs.setdefault(ref, []).append(name)
#if not (len(test.get('testing', [])) or 'mozilla' in test):
if not test.get('testing', []):
print "Test %s doesn't refer to any spec points" % name
if test.get('expected', '') == 'green' and re.search(r'@assert pixel .* 0,0,0,0;', test['code']):
print "Probable incorrect pixel test in %s" % name
code = expand_test_code(test['code'])
mochitest = not (W3CMODE or 'manual' in test or 'disabled' in test.get('mozilla', {}))
if mochitest:
mochi_code = expand_mochitest_code(test['code'])
mochi_name = name
if 'mozilla' in test:
if 'throws' in test['mozilla']:
mochi_code = templates['mochitest.exception'] % mochi_code
if 'bug' in test['mozilla']:
mochi_name = "%s - bug %s" % (name, test['mozilla']['bug'])
if 'desc' in test:
mochi_desc = '<!-- Testing: %s -->\n' % test['desc']
else:
mochi_desc = ''
if 'deferTest' in mochi_code:
mochi_setup = ''
mochi_footer = ''
else:
mochi_setup = ''
mochi_footer = 'SimpleTest.finish();\n'
for f in ['isPixel', 'todo_isPixel', 'deferTest', 'wrapFunction']:
if f in mochi_code:
mochi_setup += templates['mochitest.%s' % f]
else:
if not W3CMODE:
print "Skipping mochitest for %s" % name
mochi_name = ''
mochi_desc = ''
mochi_code = ''
mochi_setup = ''
mochi_footer = ''
expectation_html = ''
if 'expected' in test and test['expected'] is not None:
expected = test['expected']
expected_img = None
if expected == 'green':
expected_img = make_flat_image('green-100x50.png', 100, 50, 0,1,0,1)
if W3CMODE: expected_img = "/images/" + expected_img
elif expected == 'clear':
expected_img = make_flat_image('clear-100x50.png', 100, 50, 0,0,0,0)
if W3CMODE: expected_img = "/images/" + expected_img
else:
if ';' in expected: print "Found semicolon in %s" % name
expected = re.sub(r'^size (\d+) (\d+)',
r'surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, \1, \2)\ncr = cairo.Context(surface)',
expected)
if mapped_name.endswith("-manual"):
png_name = mapped_name[:-len("-manual")]
else:
png_name = mapped_name
expected += "\nsurface.write_to_png('%s/%s.png')\n" % (IMAGEOUTPUTDIR, png_name)
eval(compile(expected, '<test %s>' % test['name'], 'exec'), {}, {'cairo':cairo})
expected_img = "%s.png" % name
if expected_img:
expectation_html = ('<p class="output expectedtext">Expected output:' +
'<p><img src="%s" class="output expected" id="expected" alt="">' % (expected_img))
canvas = test.get('canvas', 'width="100" height="50"')
prev = tests[i-1]['name'] if i != 0 else 'index'
next = tests[i+1]['name'] if i != len(tests)-1 else 'index'
name_wrapped = name.replace('.', '.​') # (see https://bugzilla.mozilla.org/show_bug.cgi?id=376188)
refs = ''.join('<li><a href="%s/canvas.html#testrefs.%s">%s</a>\n' % (SPECOUTPUTPATH, n,n) for n in test.get('testing', []))
if not W3CMODE and 'mozilla' in test and 'bug' in test['mozilla']:
refs += '<li><a href="https://bugzilla.mozilla.org/show_bug.cgi?id=%d">Bugzilla</a>' % test['mozilla']['bug']
notes = '<p class="notes">%s' % test['notes'] if 'notes' in test else ''
scripts = ''
for s in test.get('scripts', []):
scripts += '<script src="%s"></script>\n' % (s)
images = ''
for i in test.get('images', []):
id = i.split('/')[-1]
if '/' not in i:
used_images[i] = 1
i = '../images/%s' % i
images += '<img src="%s" id="%s" class="resource">\n' % (i,id)
mochi_images = images.replace('../images/', 'image_')
if W3CMODE: images = images.replace("../images/", "/images/")
fonts = ''
fonthack = ''
for i in test.get('fonts', []):
fonts += '@font-face {\n font-family: %s;\n src: url("/fonts/%s.ttf");\n}\n' % (i, i)
# Browsers require the font to actually be used in the page
if test.get('fonthack', 1):
fonthack += '<span style="font-family: %s; position: absolute; visibility: hidden">A</span>\n' % i
if fonts:
fonts = '<style>\n%s</style>\n' % fonts
fallback = test.get('fallback', '<p class="fallback">FAIL (fallback content)</p>')
desc = test.get('desc', '')
escaped_desc = simpleEscapeJS(desc)
template_params = {
'name':name, 'name_wrapped':name_wrapped, 'backrefs':backref_html(name),
'mapped_name':mapped_name,
'desc':desc, 'escaped_desc':escaped_desc,
'prev':prev, 'next':next, 'refs':refs, 'notes':notes, 'images':images,
'fonts':fonts, 'fonthack':fonthack,
'canvas':canvas, 'expected':expectation_html, 'code':code, 'scripts':scripts,
'mochi_name':mochi_name, 'mochi_desc':mochi_desc, 'mochi_code':mochi_code,
'mochi_setup':mochi_setup, 'mochi_footer':mochi_footer, 'mochi_images':mochi_images,
'fallback':fallback
}
if W3CMODE:
f = codecs.open('%s/%s.html' % (TESTOUTPUTDIR, mapped_name), 'w', 'utf-8')
f.write(templates['w3c'] % template_params)
if ISOFFSCREENCANVAS:
f = codecs.open('%s/%s.worker.js' % (TESTOUTPUTDIR, mapped_name), 'w', 'utf-8')
f.write(templates['w3cworker'] % template_params)
else:
f = codecs.open('%s/%s.html' % (TESTOUTPUTDIR, name), 'w', 'utf-8')
f.write(templates['standalone'] % template_params)
f = codecs.open('%s/framed.%s.html' % (TESTOUTPUTDIR, name), 'w', 'utf-8')
f.write(templates['framed'] % template_params)
f = codecs.open('%s/minimal.%s.html' % (TESTOUTPUTDIR, name), 'w', 'utf-8')
f.write(templates['minimal'] % template_params)
if mochitest:
mochitests.append(name)
f = codecs.open('%s/mochitests/test_%s.html' % (MISCOUTPUTDIR, name), 'w', 'utf-8')
f.write(templates['mochitest'] % template_params)
def write_mochitest_makefile():
f = open('%s/mochitests/Makefile.in' % MISCOUTPUTDIR, 'w')
f.write(templates['mochitest.Makefile'])
files = ['test_%s.html' % n for n in mochitests] + ['image_%s' % n for n in used_images]
chunksize = 100
chunks = []
for i in range(0, len(files), chunksize):
chunk = files[i:i+chunksize]
name = '_TEST_FILES_%d' % (i / chunksize)
chunks.append(name)
f.write('%s = \\\n' % name)
for file in chunk: f.write('\t%s \\\n' % file)
f.write('\t$(NULL)\n\n')
f.write('# split up into groups to work around command-line length limits\n')
for name in chunks:
f.write('libs:: $(%s)\n\t$(INSTALL) $(foreach f,$^,"$f") $(DEPTH)/_tests/testing/mochitest/tests/$(relativesrcdir)\n\n' % name)
if not W3CMODE:
for i in used_images:
shutil.copyfile("../../images/%s" % i, "%s/mochitests/image_%s" % (MISCOUTPUTDIR, i))
write_mochitest_makefile()
print
def write_index():
f = open('%s/index.html' % TESTOUTPUTDIR, 'w')
f.write(templates['index.w3c' if W3CMODE else 'index'] % { 'updated':time.strftime('%Y-%m-%d', time.gmtime()) })
f.write('\n<ul class="testlist">\n')
depth = 1
for category in category_names:
name = category[1:-1] or ''
count = len(category_contents_all[category])
new_depth = category.count('.')
while new_depth < depth: f.write(' '*(depth-1) + '</ul>\n'); depth -= 1
f.write(' '*depth + templates['index.w3c.category.item' if W3CMODE else 'index.category.item'] % (name or 'all', name, count, '' if count==1 else 's'))
while new_depth+1 > depth: f.write(' '*depth + '<ul>\n'); depth += 1
for item in category_contents_direct.get(category, []):
f.write(' '*depth + '<li><a href="%s.html">%s</a>\n' % (item, item) )
while 0 < depth: f.write(' '*(depth-1) + '</ul>\n'); depth -= 1
def write_category_indexes():
for category in category_names:
name = (category[1:-1] or 'all')
f = open('%s/index.%s.html' % (TESTOUTPUTDIR, name), 'w')
f.write(templates['index.w3c.frame' if W3CMODE else 'index.frame'] % { 'backrefs':backref_html(name), 'category':name })
for item in category_contents_all[category]:
f.write(templates['index.w3c.frame.item' if W3CMODE else 'index.frame.item'] % item)
def write_reportgen():
f = open('%s/reportgen.html' % MISCOUTPUTDIR, 'w')
items_text = ',\n'.join(('"%s"' % item) for item in category_contents_all['.'])
f.write(templates['reportgen'] % {'items':items_text })
def write_results():
results = {}
uas = []
uastrings = {}
for item in category_contents_all['.']: results[item] = {}
f = open('%s/results.html' % MISCOUTPUTDIR, 'w')
f.write(templates['results'])
if not os.path.exists('results.yaml'):
print "Can't find results.yaml"
else:
for resultset in yaml.load(open('results.yaml', "r").read()):
#title = "%s (%s)" % (resultset['ua'], resultset['time'])
title = resultset['name']
#assert title not in uas # don't allow repetitions
if title not in uas:
uas.append(title)
uastrings[title] = resultset['ua']
else:
assert uastrings[title] == resultset['ua']
for r in resultset['results']:
if r['id'] not in results:
print 'Skipping results for removed test %s' % r['id']
continue
results[r['id']][title] = (
r['status'].lower(),
re.sub(r'%(..)', lambda m: chr(int(m.group(1), 16)),
re.sub(r'%u(....)', lambda m: unichr(int(m.group(1), 16)),
r['notes'])).encode('utf8')
)
passes = {}
for ua in uas:
f.write('<th title="%s">%s\n' % (uastrings[ua], ua))
passes[ua] = 0
for id in category_contents_all['.']:
f.write('<tr><td><a href="#%s" id="%s">#</a> <a href="%s.html">%s</a>\n' % (id, id, id, id))
for ua in uas:
status, details = results[id].get(ua, ('', ''))
f.write('<td class="r %s"><ul class="d">%s</ul>\n' % (status, details))
if status == 'pass': passes[ua] += 1
f.write('<tr><th>Passes\n')
for ua in uas:
f.write('<td>%.1f%%\n' % ((100.0 * passes[ua]) / len(category_contents_all['.'])))
f.write('<tr><td>\n')
for ua in uas:
f.write('<td>%s\n' % ua)
f.write('</table>\n')
def getNodeText(node):
t, offsets = '', []
# Skip over any previous annotations we added
if node.nodeType == node.ELEMENT_NODE and 'testrefs' in node.getAttribute('class').split(' '):
return t, offsets
if node.nodeType == node.TEXT_NODE:
val = node.nodeValue
val = val.replace(unichr(0xa0), ' ') # replace non-breaking spaces
t += val
offsets += [ (node, len(node.nodeValue)) ]
for n in node.childNodes:
child_t, child_offsets = getNodeText(n)
t += child_t
offsets += child_offsets
return t, offsets
def htmlSerializer(element):
element.normalize()
rv = []
specialtext = ['style', 'script', 'xmp', 'iframe', 'noembed', 'noframes', 'noscript']
empty = ['area', 'base', 'basefont', 'bgsound', 'br', 'col', 'embed', 'frame',
'hr', 'img', 'input', 'link', 'meta', 'param', 'spacer', 'wbr']
def serializeElement(element):
if element.nodeType == Node.DOCUMENT_TYPE_NODE:
rv.append("<!DOCTYPE %s>" % element.name)
elif element.nodeType == Node.DOCUMENT_NODE:
for child in element.childNodes:
serializeElement(child)
elif element.nodeType == Node.COMMENT_NODE:
rv.append("<!--%s-->" % element.nodeValue)
elif element.nodeType == Node.TEXT_NODE:
unescaped = False
n = element.parentNode
while n is not None:
if n.nodeName in specialtext:
unescaped = True
break
n = n.parentNode
if unescaped:
rv.append(element.nodeValue)
else:
rv.append(escapeHTML(element.nodeValue))
else:
rv.append("<%s" % element.nodeName)
if element.hasAttributes():
for name, value in element.attributes.items():
rv.append(' %s="%s"' % (name, escapeHTML(value)))
rv.append(">")
if element.nodeName not in empty:
for child in element.childNodes:
serializeElement(child)
rv.append("</%s>" % element.nodeName)
serializeElement(element)
return '<!DOCTYPE html>\n' + ''.join(rv)
def write_annotated_spec():
# Load the stripped-down XHTMLised copy of the spec
doc = xml.dom.minidom.parse(open('current-work-canvas.xhtml', 'r'))
# Insert our new stylesheet
n = doc.getElementsByTagName('head')[0].appendChild(doc.createElement('link'))
n.setAttribute('rel', 'stylesheet')
n.setAttribute('href', '../common/canvas-spec.css' if W3CMODE else '../spectest.css')
n.setAttribute('type', 'text/css')
spec_assertion_patterns = []
for a in spec_assertions:
# Warn about problems
if a['id'] not in spec_refs:
print "Unused spec statement %s" % a['id']
pattern_text = a['text']
if 'keyword' in a:
# Explicit keyword override
keyword = a['keyword']
else:
# Extract the marked keywords, and remove the markers
keyword = 'none'
for kw in ['must', 'should', 'required']:
if ('*%s*' % kw) in pattern_text:
keyword = kw
pattern_text = pattern_text.replace('*%s*' % kw, kw)
break
# Make sure there wasn't >1 keyword
for kw in ['must', 'should', 'required']:
assert('*%s*' % kw not in pattern_text)
# Convert the special pattern format into regexp syntax
pattern_text = (pattern_text.
# Escape relevant characters
replace('*', r'\*').
replace('+', r'\+').
replace('.', r'\.').
replace('(', r'\(').
replace(')', r'\)').
replace('[', r'\[').
replace(']', r'\]').
# Convert special sequences back into unescaped regexp code
replace(' ', r'\s+').
replace(r'<\.\.\.>', r'.+').
replace('<^>', r'()').
replace('<eol>', r'\s*?\n')
)
pattern = re.compile(pattern_text, re.S)
spec_assertion_patterns.append( (a['id'], pattern, keyword, a.get('previously', None)) )
matched_assertions = {}
def process_element(e):
if e.nodeType == e.ELEMENT_NODE and (e.getAttribute('class') == 'impl' or e.hasAttribute('data-component')):
for c in e.childNodes:
process_element(c)
return
t, offsets = getNodeText(e)
for id, pattern, keyword, previously in spec_assertion_patterns:
m = pattern.search(t)
if m:
# When the pattern-match isn't enough to uniquely identify a sentence,
# allow explicit back-references to earlier paragraphs
if previously:
if len(previously) >= 3:
n, text, exp = previously
else:
n, text = previously
exp = True
node = e
while n and node.previousSibling:
node = node.previousSibling
n -= 1
if (text not in getNodeText(node)[0]) == exp:
continue # discard this match
if id in matched_assertions:
print "Spec statement %s matches multiple places" % id
matched_assertions[id] = True
if m.lastindex != 1:
print "Spec statement %s has incorrect number of match groups" % id
end = m.end(1)
end_node = None
for end_node, o in offsets:
if end < o:
break
end -= o
assert(end_node)
n1 = doc.createElement('span')
n1.setAttribute('class', 'testrefs kw-%s' % keyword)
n1.setAttribute('id', 'testrefs.%s' % id)
n1.appendChild(doc.createTextNode(' '))
n = n1.appendChild(doc.createElement('a'))
n.setAttribute('href', '#testrefs.%s' % id)
n.setAttribute('title', id)
n.appendChild(doc.createTextNode('#'))
n1.appendChild(doc.createTextNode(' '))
for test_id in spec_refs.get(id, []):
n = n1.appendChild(doc.createElement('a'))
n.setAttribute('href', '../canvas/%s.html' % test_id)
n.appendChild(doc.createTextNode(test_id))
n1.appendChild(doc.createTextNode(' '))
n0 = doc.createTextNode(end_node.nodeValue[:end])
n2 = doc.createTextNode(end_node.nodeValue[end:])
p = end_node.parentNode
p.replaceChild(n2, end_node)
p.insertBefore(n1, n2)
p.insertBefore(n0, n1)
t, offsets = getNodeText(e)
for e in doc.getElementsByTagName('body')[0].childNodes:
process_element(e)
for s in spec_assertions:
if s['id'] not in matched_assertions:
print "Annotation incomplete: Unmatched spec statement %s" % s['id']
# Convert from XHTML back to HTML
doc.documentElement.removeAttribute('xmlns')
doc.documentElement.setAttribute('lang', doc.documentElement.getAttribute('xml:lang'))
head = doc.documentElement.getElementsByTagName('head')[0]
head.insertBefore(doc.createElement('meta'), head.firstChild).setAttribute('charset', 'UTF-8')
f = codecs.open('%s/canvas.html' % SPECOUTPUTDIR, 'w', 'utf-8')
f.write(htmlSerializer(doc))
if not W3CMODE:
write_index()
write_category_indexes()
write_reportgen()
write_results()
write_annotated_spec()
| mpl-2.0 |
Pexego/alimentacion | product_format/__openerp__.py | 2 | 1402 | # -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (C) 2004-2012 Pexego Sistemas Informáticos All Rights Reserved
# $Marta Vázquez Rodríguez$ <[email protected]>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
{
"name" : "Format product",
"description" : """Add format field to product""",
"version" : "1.0",
"author" : "Pexego",
"depends" : ["base", "product", "stock"],
"category" : "Product",
"init_xml" : [],
"update_xml" : ["product_format_view.xml", "product_view.xml", "security/ir.model.access.csv"],
'demo_xml': [],
'installable': True,
'active': False,
}
| agpl-3.0 |
AICP/external_chromium_org | build/mac/tweak_info_plist.py | 42 | 10163 | #!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
#
# Xcode supports build variable substitutions and CPP; sadly, that doesn't work
# because:
#
# 1. Xcode wants to do the Info.plist work before it runs any build phases,
# this means if we were to generate a .h file for INFOPLIST_PREFIX_HEADER
# we'd have to put it in another target so it runs in time.
# 2. Xcode also doesn't check to see if the header being used as a prefix for
# the Info.plist has changed. So even if we updated it, it's only looking
# at the modtime of the info.plist to see if that's changed.
#
# So, we work around all of this by making a script build phase that will run
# during the app build, and simply update the info.plist in place. This way
# by the time the app target is done, the info.plist is correct.
#
import optparse
import os
from os import environ as env
import plistlib
import re
import subprocess
import sys
import tempfile
TOP = os.path.join(env['SRCROOT'], '..')
def _GetOutput(args):
"""Runs a subprocess and waits for termination. Returns (stdout, returncode)
of the process. stderr is attached to the parent."""
proc = subprocess.Popen(args, stdout=subprocess.PIPE)
(stdout, stderr) = proc.communicate()
return (stdout, proc.returncode)
def _GetOutputNoError(args):
"""Similar to _GetOutput() but ignores stderr. If there's an error launching
the child (like file not found), the exception will be caught and (None, 1)
will be returned to mimic quiet failure."""
try:
proc = subprocess.Popen(args, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
except OSError:
return (None, 1)
(stdout, stderr) = proc.communicate()
return (stdout, proc.returncode)
def _RemoveKeys(plist, *keys):
"""Removes a varargs of keys from the plist."""
for key in keys:
try:
del plist[key]
except KeyError:
pass
def _AddVersionKeys(plist, version=None):
"""Adds the product version number into the plist. Returns True on success and
False on error. The error will be printed to stderr."""
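# For example, a version of '12.0.345.6' (illustrative) yields
# CFBundleShortVersionString '12.0.345.6' and CFBundleVersion '345.6'.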
if version:
match = re.match('\d+\.\d+\.(\d+\.\d+)$', version)
if not match:
print >>sys.stderr, 'Invalid version string specified: "%s"' % version
return False
full_version = match.group(0)
bundle_version = match.group(1)
else:
# Pull in the Chrome version number.
VERSION_TOOL = os.path.join(TOP, 'build/util/version.py')
VERSION_FILE = os.path.join(TOP, 'chrome/VERSION')
(stdout, retval1) = _GetOutput([VERSION_TOOL, '-f', VERSION_FILE, '-t',
'@MAJOR@.@MINOR@.@BUILD@.@PATCH@'])
full_version = stdout.rstrip()
(stdout, retval2) = _GetOutput([VERSION_TOOL, '-f', VERSION_FILE, '-t',
'@BUILD@.@PATCH@'])
bundle_version = stdout.rstrip()
# If either of the two version commands finished with non-zero returncode,
# report the error up.
if retval1 or retval2:
return False
# Add public version info so "Get Info" works.
plist['CFBundleShortVersionString'] = full_version
# Honor the 429496.72.95 limit. The maximum comes from splitting 2^32 - 1
# into 6, 2, 2 digits. The limitation was present in Tiger, but it could
# have been fixed in a later OS release, though that hasn't been tested (it's
# easy enough to find out with "lsregister -dump").
# http://lists.apple.com/archives/carbon-dev/2006/Jun/msg00139.html
# BUILD will always be an increasing value, so BUILD_PATH gives us something
# unique that meets what LS wants.
plist['CFBundleVersion'] = bundle_version
# Return with no error.
return True
def _DoSCMKeys(plist, add_keys):
"""Adds the SCM information, visible in about:version, to property list. If
|add_keys| is True, it will insert the keys, otherwise it will remove them."""
scm_revision = None
if add_keys:
# Pull in the Chrome revision number.
VERSION_TOOL = os.path.join(TOP, 'build/util/version.py')
LASTCHANGE_FILE = os.path.join(TOP, 'build/util/LASTCHANGE')
(stdout, retval) = _GetOutput([VERSION_TOOL, '-f', LASTCHANGE_FILE, '-t',
'@LASTCHANGE@'])
if retval:
return False
scm_revision = stdout.rstrip()
# See if the operation failed.
_RemoveKeys(plist, 'SCMRevision')
if scm_revision != None:
plist['SCMRevision'] = scm_revision
elif add_keys:
print >>sys.stderr, 'Could not determine SCM revision. This may be OK.'
return True
def _AddBreakpadKeys(plist, branding):
"""Adds the Breakpad keys. This must be called AFTER _AddVersionKeys() and
also requires the |branding| argument."""
plist['BreakpadReportInterval'] = '3600' # Deliberately a string.
plist['BreakpadProduct'] = '%s_Mac' % branding
plist['BreakpadProductDisplay'] = branding
plist['BreakpadVersion'] = plist['CFBundleShortVersionString']
# These are both deliberately strings and not boolean.
plist['BreakpadSendAndExit'] = 'YES'
plist['BreakpadSkipConfirm'] = 'YES'
def _RemoveBreakpadKeys(plist):
"""Removes any set Breakpad keys."""
_RemoveKeys(plist,
'BreakpadURL',
'BreakpadReportInterval',
'BreakpadProduct',
'BreakpadProductDisplay',
'BreakpadVersion',
'BreakpadSendAndExit',
'BreakpadSkipConfirm')
def _TagSuffixes():
# Keep this list sorted in the order that tag suffix components are to
# appear in a tag value. That is to say, it should be sorted per ASCII.
components = ('32bit', 'full')
assert tuple(sorted(components)) == components
components_len = len(components)
combinations = 1 << components_len
tag_suffixes = []
for combination in xrange(0, combinations):
tag_suffix = ''
for component_index in xrange(0, components_len):
if combination & (1 << component_index):
tag_suffix += '-' + components[component_index]
tag_suffixes.append(tag_suffix)
return tag_suffixes
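# Illustrative only (not part of the original script): _TagSuffixes()
# enumerates every subset of the components tuple via the bit mask, e.g.
#   >>> _TagSuffixes()
#   ['', '-32bit', '-full', '-32bit-full']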
def _AddKeystoneKeys(plist, bundle_identifier):
"""Adds the Keystone keys. This must be called AFTER _AddVersionKeys() and
also requires the |bundle_identifier| argument (com.example.product)."""
plist['KSVersion'] = plist['CFBundleShortVersionString']
plist['KSProductID'] = bundle_identifier
plist['KSUpdateURL'] = 'https://tools.google.com/service/update2'
_RemoveKeys(plist, 'KSChannelID')
for tag_suffix in _TagSuffixes():
if tag_suffix:
plist['KSChannelID' + tag_suffix] = tag_suffix
def _RemoveKeystoneKeys(plist):
"""Removes any set Keystone keys."""
_RemoveKeys(plist,
'KSVersion',
'KSProductID',
'KSUpdateURL')
tag_keys = []
for tag_suffix in _TagSuffixes():
tag_keys.append('KSChannelID' + tag_suffix)
_RemoveKeys(plist, *tag_keys)
def Main(argv):
parser = optparse.OptionParser('%prog [options]')
parser.add_option('--breakpad', dest='use_breakpad', action='store',
type='int', default=False, help='Enable Breakpad [1 or 0]')
parser.add_option('--breakpad_uploads', dest='breakpad_uploads',
action='store', type='int', default=False,
help='Enable Breakpad\'s uploading of crash dumps [1 or 0]')
parser.add_option('--keystone', dest='use_keystone', action='store',
type='int', default=False, help='Enable Keystone [1 or 0]')
parser.add_option('--scm', dest='add_scm_info', action='store', type='int',
default=True, help='Add SCM metadata [1 or 0]')
parser.add_option('--branding', dest='branding', action='store',
type='string', default=None, help='The branding of the binary')
parser.add_option('--bundle_id', dest='bundle_identifier',
action='store', type='string', default=None,
help='The bundle id of the binary')
parser.add_option('--version', dest='version', action='store', type='string',
default=None, help='The version string [major.minor.build.patch]')
(options, args) = parser.parse_args(argv)
if len(args) > 0:
print >>sys.stderr, parser.get_usage()
return 1
# Read the plist into its parsed format.
DEST_INFO_PLIST = os.path.join(env['TARGET_BUILD_DIR'], env['INFOPLIST_PATH'])
plist = plistlib.readPlist(DEST_INFO_PLIST)
# Insert the product version.
if not _AddVersionKeys(plist, version=options.version):
return 2
# Add Breakpad if configured to do so.
if options.use_breakpad:
if options.branding is None:
print >>sys.stderr, 'Use of Breakpad requires branding.'
return 1
_AddBreakpadKeys(plist, options.branding)
if options.breakpad_uploads:
plist['BreakpadURL'] = 'https://clients2.google.com/cr/report'
else:
# This allows crash dumping to a file without uploading the
# dump, for testing purposes. Breakpad does not recognise
# "none" as a special value, but this does stop crash dump
# uploading from happening. We need to specify something
# because if "BreakpadURL" is not present, Breakpad will not
# register its crash handler and no crash dumping will occur.
plist['BreakpadURL'] = 'none'
else:
_RemoveBreakpadKeys(plist)
# Only add Keystone in Release builds.
if options.use_keystone and env['CONFIGURATION'] == 'Release':
if options.bundle_identifier is None:
print >>sys.stderr, 'Use of Keystone requires the bundle id.'
return 1
_AddKeystoneKeys(plist, options.bundle_identifier)
else:
_RemoveKeystoneKeys(plist)
# Adds or removes any SCM keys.
if not _DoSCMKeys(plist, options.add_scm_info):
return 3
# Now that all keys have been mutated, rewrite the file.
temp_info_plist = tempfile.NamedTemporaryFile()
plistlib.writePlist(plist, temp_info_plist.name)
# Info.plist will work perfectly well in any plist format, but traditionally
# applications use xml1 for this, so convert it to ensure that it's valid.
proc = subprocess.Popen(['plutil', '-convert', 'xml1', '-o', DEST_INFO_PLIST,
temp_info_plist.name])
proc.wait()
return proc.returncode
if __name__ == '__main__':
sys.exit(Main(sys.argv[1:]))
| bsd-3-clause |
akesandgren/easybuild-easyblocks | easybuild/easyblocks/n/nwchem.py | 3 | 26070 | ##
# Copyright 2009-2021 Ghent University
#
# This file is part of EasyBuild,
# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
# with support of Ghent University (http://ugent.be/hpc),
# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
# Flemish Research Foundation (FWO) (http://www.fwo.be/en)
# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
#
# https://github.com/easybuilders/easybuild
#
# EasyBuild is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation v2.
#
# EasyBuild is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
##
"""
EasyBuild support for building and installing NWChem, implemented as an easyblock
@author: Kenneth Hoste (Ghent University)
"""
import os
import re
import shutil
import stat
import tempfile
import easybuild.tools.config as config
import easybuild.tools.environment as env
import easybuild.tools.toolchain as toolchain
from distutils.version import LooseVersion
from easybuild.easyblocks.generic.configuremake import ConfigureMake
from easybuild.framework.easyconfig import CUSTOM
from easybuild.tools.build_log import EasyBuildError
from easybuild.tools.filetools import adjust_permissions, change_dir, mkdir, remove_file, symlink, write_file
from easybuild.tools.modules import get_software_libdir, get_software_root, get_software_version
from easybuild.tools.run import run_cmd
class EB_NWChem(ConfigureMake):
"""Support for building/installing NWChem."""
def __init__(self, *args, **kwargs):
"""Initialisation of custom class variables for NWChem."""
super(EB_NWChem, self).__init__(*args, **kwargs)
self.test_cases_dir = None
# path for symlink to local copy of default .nwchemrc, required by NWChem at runtime
# this path is hardcoded by NWChem, and there's no way to make it use a config file at another path...
self.home_nwchemrc = os.path.join(os.getenv('HOME'), '.nwchemrc')
# temporary directory that is common across multiple nodes in a cluster;
# we can't rely on tempfile.gettempdir() since that follows $TMPDIR,
# which is typically set to a unique directory in jobs;
# use /tmp as default, allow customisation via $EB_NWCHEM_TMPDIR environment variable
common_tmp_dir = os.getenv('EB_NWCHEM_TMPDIR', '/tmp')
# local NWChem .nwchemrc config file, to which symlink will point
# using this approach, multiple parallel builds (on different nodes) can use the same symlink
self.local_nwchemrc = os.path.join(common_tmp_dir, os.getenv('USER'), 'easybuild_nwchem', '.nwchemrc')
@staticmethod
def extra_options():
"""Custom easyconfig parameters for NWChem."""
extra_vars = {
'target': ["LINUX64", "Target platform", CUSTOM],
# possible options for ARMCI_NETWORK on LINUX64 with Infiniband:
# OPENIB, MPI-MT, MPI-SPAWN, MELLANOX
'armci_network': ["OPENIB", "Network protocol to use", CUSTOM],
'msg_comms': ["MPI", "Type of message communication", CUSTOM],
'modules': ["all", "NWChem modules to build", CUSTOM],
'lib_defines': ["", "Additional defines for C preprocessor", CUSTOM],
'tests': [True, "Run example test cases", CUSTOM],
# lots of tests fail, so allow a certain fail ratio
'max_fail_ratio': [0.5, "Maximum test case fail ratio", CUSTOM],
}
return ConfigureMake.extra_options(extra_vars)
def setvar_env_makeopt(self, name, value):
"""Set a variable both in the environment and a an option to make."""
env.setvar(name, value)
self.cfg.update('buildopts', "%s='%s'" % (name, value))
def configure_step(self):
"""Custom configuration procedure for NWChem."""
# check whether a (valid) symlink to a .nwchemrc config file exists (via a dummy file if necessary)
        # fail early if the link is not what we expect, since running the test cases would likely fail in that case
try:
if os.path.exists(self.home_nwchemrc) or os.path.islink(self.home_nwchemrc):
# create a dummy file to check symlink
if not os.path.exists(self.local_nwchemrc):
write_file(self.local_nwchemrc, 'dummy')
self.log.debug("Contents of %s: %s", os.path.dirname(self.local_nwchemrc),
os.listdir(os.path.dirname(self.local_nwchemrc)))
if os.path.islink(self.home_nwchemrc):
home_nwchemrc_target = os.readlink(self.home_nwchemrc)
if home_nwchemrc_target != self.local_nwchemrc:
raise EasyBuildError("Found %s, but it's not a symlink to %s. "
"Please (re)move %s while installing NWChem; it can be restored later",
self.home_nwchemrc, self.local_nwchemrc, self.home_nwchemrc)
# ok to remove, we'll recreate it anyway
remove_file(self.local_nwchemrc)
except (IOError, OSError) as err:
raise EasyBuildError("Failed to validate %s symlink: %s", self.home_nwchemrc, err)
# building NWChem in a long path name is an issue, so let's try to make sure we have a short one
try:
# NWChem insists that version is in name of build dir
tmpdir = tempfile.mkdtemp(suffix='-%s-%s' % (self.name, self.version))
# remove created directory, since we're not going to use it as is
os.rmdir(tmpdir)
# avoid having '['/']' characters in build dir name, NWChem doesn't like that
start_dir = tmpdir.replace('[', '_').replace(']', '_')
mkdir(os.path.dirname(start_dir), parents=True)
symlink(self.cfg['start_dir'], start_dir)
change_dir(start_dir)
self.cfg['start_dir'] = start_dir
except OSError as err:
raise EasyBuildError("Failed to symlink build dir to a shorter path name: %s", err)
# change to actual build dir
change_dir('src')
nwchem_modules = self.cfg['modules']
# set required NWChem environment variables
env.setvar('NWCHEM_TOP', self.cfg['start_dir'])
if len(self.cfg['start_dir']) > 64:
# workaround for:
# "The directory name chosen for NWCHEM_TOP is longer than the maximum allowed value of 64 characters"
# see also https://svn.pnl.gov/svn/nwchem/trunk/src/util/util_nwchem_srcdir.F
self.setvar_env_makeopt('NWCHEM_LONG_PATHS', 'Y')
env.setvar('NWCHEM_TARGET', self.cfg['target'])
garoot = get_software_root('GlobalArrays')
if garoot:
self.setvar_env_makeopt('EXTERNAL_GA_PATH', garoot)
else:
env.setvar('MSG_COMMS', self.cfg['msg_comms'])
env.setvar('ARMCI_NETWORK', self.cfg['armci_network'])
if self.cfg['armci_network'] in ["OPENIB"]:
env.setvar('IB_INCLUDE', "/usr/include")
env.setvar('IB_LIB', "/usr/lib64")
env.setvar('IB_LIB_NAME', "-libumad -libverbs -lpthread")
if 'python' in self.cfg['modules']:
python_root = get_software_root('Python')
if not python_root:
raise EasyBuildError("Python module not loaded, you should add Python as a dependency.")
env.setvar('PYTHONHOME', python_root)
pyver = '.'.join(get_software_version('Python').split('.')[0:2])
env.setvar('PYTHONVERSION', pyver)
# if libreadline is loaded, assume it was a dependency for Python
# pass -lreadline to avoid linking issues (libpython2.7.a doesn't include readline symbols)
libreadline = get_software_root('libreadline')
if libreadline:
libreadline_libdir = os.path.join(libreadline, get_software_libdir('libreadline'))
ncurses = get_software_root('ncurses')
if not ncurses:
raise EasyBuildError("ncurses is not loaded, but required to link with libreadline")
ncurses_libdir = os.path.join(ncurses, get_software_libdir('ncurses'))
readline_libs = ' '.join([
os.path.join(libreadline_libdir, 'libreadline.a'),
os.path.join(ncurses_libdir, 'libcurses.a'),
])
extra_libs = os.environ.get('EXTRA_LIBS', '')
env.setvar('EXTRA_LIBS', ' '.join([extra_libs, readline_libs]))
env.setvar('LARGE_FILES', 'TRUE')
env.setvar('USE_NOFSCHECK', 'TRUE')
env.setvar('CCSDTLR', 'y') # enable CCSDTLR
env.setvar('CCSDTQ', 'y') # enable CCSDTQ (compilation is long, executable is big)
if LooseVersion(self.version) >= LooseVersion("6.2"):
env.setvar('MRCC_METHODS', 'y') # enable multireference coupled cluster capability
if LooseVersion(self.version) >= LooseVersion("6.5"):
            env.setvar('EACCSD', 'y')  # enable EOM electron-attachment coupled cluster capability
env.setvar('IPCCSD', 'y') # enable EOM ionization-potential coupled cluster capability
env.setvar('USE_NOIO', 'TRUE') # avoid doing I/O for the ddscf, mp2 and ccsd modules
for var in ['USE_MPI', 'USE_MPIF', 'USE_MPIF4']:
env.setvar(var, 'y')
for var in ['CC', 'CXX', 'F90']:
env.setvar('MPI_%s' % var, os.getenv('MPI%s' % var))
libmpi = ""
# for NWChem 6.6 and newer, $LIBMPI & co should no longer be
# set, the correct values are determined by the NWChem build
# procedure automatically, see
# http://www.nwchem-sw.org/index.php/Compiling_NWChem#MPI_variables
if LooseVersion(self.version) < LooseVersion("6.6"):
env.setvar('MPI_LOC', os.path.dirname(os.getenv('MPI_INC_DIR')))
env.setvar('MPI_LIB', os.getenv('MPI_LIB_DIR'))
env.setvar('MPI_INCLUDE', os.getenv('MPI_INC_DIR'))
mpi_family = self.toolchain.mpi_family()
if mpi_family in toolchain.OPENMPI:
ompi_ver = get_software_version('OpenMPI')
if LooseVersion(ompi_ver) < LooseVersion("1.10"):
if LooseVersion(ompi_ver) < LooseVersion("1.8"):
libmpi = "-lmpi_f90 -lmpi_f77 -lmpi -ldl -Wl,--export-dynamic -lnsl -lutil"
else:
libmpi = "-lmpi_usempi -lmpi_mpifh -lmpi"
else:
libmpi = "-lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi"
elif mpi_family in [toolchain.INTELMPI]:
if self.cfg['armci_network'] in ["MPI-MT"]:
libmpi = "-lmpigf -lmpigi -lmpi_ilp64 -lmpi_mt"
else:
libmpi = "-lmpigf -lmpigi -lmpi_ilp64 -lmpi"
elif mpi_family in [toolchain.MPICH, toolchain.MPICH2]:
libmpi = "-lmpichf90 -lmpich -lopa -lmpl -lrt -lpthread"
else:
raise EasyBuildError("Don't know how to set LIBMPI for %s", mpi_family)
env.setvar('LIBMPI', libmpi)
if not garoot:
if self.cfg['armci_network'] in ["OPENIB"]:
libmpi += " -libumad -libverbs -lpthread"
# compiler optimization flags: set environment variables _and_ add them to list of make options
self.setvar_env_makeopt('COPTIMIZE', os.getenv('CFLAGS'))
self.setvar_env_makeopt('FOPTIMIZE', os.getenv('FFLAGS'))
# BLAS and ScaLAPACK
mpi_lib_dirs = ' '.join('-L' + d for d in os.getenv('MPI_LIB_DIR').split())
self.setvar_env_makeopt('BLASOPT', ' '.join([os.getenv('LDFLAGS'), mpi_lib_dirs,
os.getenv('LIBSCALAPACK_MT'), libmpi]))
# Setting LAPACK_LIB is required from 7.0.0 onwards.
self.setvar_env_makeopt('LAPACK_LIB', os.getenv('LIBLAPACK'))
self.setvar_env_makeopt('SCALAPACK', '%s %s' % (os.getenv('LDFLAGS'), os.getenv('LIBSCALAPACK_MT')))
if self.toolchain.options['i8']:
size = 8
self.setvar_env_makeopt('USE_SCALAPACK_I8', 'y')
self.cfg.update('lib_defines', '-DSCALAPACK_I8')
else:
self.setvar_env_makeopt('HAS_BLAS', 'yes')
self.setvar_env_makeopt('USE_SCALAPACK', 'y')
size = 4
# set sizes
for lib in ['BLAS', 'LAPACK', 'SCALAPACK']:
self.setvar_env_makeopt('%s_SIZE' % lib, str(size))
env.setvar('NWCHEM_MODULES', nwchem_modules)
env.setvar('LIB_DEFINES', self.cfg['lib_defines'])
# clean first (why not)
run_cmd("make clean", simple=True, log_all=True, log_ok=True)
# configure build
cmd = "make %s nwchem_config" % self.cfg['buildopts']
run_cmd(cmd, simple=True, log_all=True, log_ok=True, log_output=True)
def build_step(self):
"""Custom build procedure for NWChem."""
# set FC
self.setvar_env_makeopt('FC', os.getenv('F77'))
# check whether 64-bit integers should be used, and act on it
if not self.toolchain.options['i8']:
if self.cfg['parallel']:
self.cfg.update('buildopts', '-j %s' % self.cfg['parallel'])
run_cmd("make %s 64_to_32" % self.cfg['buildopts'], simple=True, log_all=True, log_ok=True, log_output=True)
self.setvar_env_makeopt('USE_64TO32', "y")
# unset env vars that cause trouble during NWChem build or cause build to generate incorrect stuff
for var in ['CFLAGS', 'FFLAGS', 'LIBS']:
val = os.getenv(var)
if val:
self.log.info("%s was defined as '%s', need to unset it to avoid problems..." % (var, val))
os.unsetenv(var)
os.environ.pop(var)
super(EB_NWChem, self).build_step(verbose=True)
# build version info
try:
self.log.info("Building version info...")
cwd = os.getcwd()
change_dir(os.path.join(self.cfg['start_dir'], 'src', 'util'))
run_cmd("make version", simple=True, log_all=True, log_ok=True, log_output=True)
run_cmd("make", simple=True, log_all=True, log_ok=True, log_output=True)
change_dir(os.path.join(self.cfg['start_dir'], 'src'))
run_cmd("make link", simple=True, log_all=True, log_ok=True, log_output=True)
change_dir(cwd)
except OSError as err:
raise EasyBuildError("Failed to build version info: %s", err)
# run getmem.nwchem script to assess memory availability and make an educated guess
# this is an alternative to specifying -DDFLT_TOT_MEM via LIB_DEFINES
# this recompiles the appropriate files and relinks
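        # (illustrative only: an easyconfig could instead pin the default by
        # setting lib_defines to e.g. '-DDFLT_TOT_MEM=16777216'; the right
        # value is site-specific)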
if 'DDFLT_TOT_MEM' not in self.cfg['lib_defines']:
change_dir(os.path.join(self.cfg['start_dir'], 'contrib'))
run_cmd("./getmem.nwchem", simple=True, log_all=True, log_ok=True, log_output=True)
change_dir(self.cfg['start_dir'])
def install_step(self):
"""Custom install procedure for NWChem."""
try:
# binary
bindir = os.path.join(self.installdir, 'bin')
mkdir(bindir)
shutil.copy(os.path.join(self.cfg['start_dir'], 'bin', self.cfg['target'], 'nwchem'),
bindir)
# data
shutil.copytree(os.path.join(self.cfg['start_dir'], 'src', 'data'),
os.path.join(self.installdir, 'data'))
shutil.copytree(os.path.join(self.cfg['start_dir'], 'src', 'basis', 'libraries'),
os.path.join(self.installdir, 'data', 'libraries'))
shutil.copytree(os.path.join(self.cfg['start_dir'], 'src', 'nwpw', 'libraryps'),
os.path.join(self.installdir, 'data', 'libraryps'))
except OSError as err:
raise EasyBuildError("Failed to install NWChem: %s", err)
# create NWChem settings file
default_nwchemrc = os.path.join(self.installdir, 'data', 'default.nwchemrc')
txt = '\n'.join([
"nwchem_basis_library %(path)s/data/libraries/",
"nwchem_nwpw_library %(path)s/data/libraryps/",
"ffield amber",
"amber_1 %(path)s/data/amber_s/",
"amber_2 %(path)s/data/amber_q/",
"amber_3 %(path)s/data/amber_x/",
"amber_4 %(path)s/data/amber_u/",
"spce %(path)s/data/solvents/spce.rst",
"charmm_s %(path)s/data/charmm_s/",
"charmm_x %(path)s/data/charmm_x/",
]) % {'path': self.installdir}
write_file(default_nwchemrc, txt)
# fix permissions in data directory
datadir = os.path.join(self.installdir, 'data')
adjust_permissions(datadir, stat.S_IROTH, add=True, recursive=True)
adjust_permissions(datadir, stat.S_IXOTH, add=True, recursive=True, onlydirs=True)
def sanity_check_step(self):
"""Custom sanity check for NWChem."""
custom_paths = {
'files': ['bin/nwchem'],
'dirs': [os.path.join('data', x) for x in ['amber_q', 'amber_s', 'amber_t', 'amber_u', 'amber_x',
'charmm_s', 'charmm_x', 'solvents', 'libraries', 'libraryps']],
}
super(EB_NWChem, self).sanity_check_step(custom_paths=custom_paths)
def make_module_extra(self):
"""Custom extra module file entries for NWChem."""
txt = super(EB_NWChem, self).make_module_extra()
# check whether Python module is loaded for compatibility with --module-only
python = get_software_root('Python')
if python:
txt += self.module_generator.set_environment('PYTHONHOME', python)
# '/' at the end is critical for NWCHEM_BASIS_LIBRARY!
datadir = os.path.join(self.installdir, 'data')
txt += self.module_generator.set_environment('NWCHEM_BASIS_LIBRARY', os.path.join(datadir, 'libraries/'))
if LooseVersion(self.version) >= LooseVersion("6.3"):
txt += self.module_generator.set_environment('NWCHEM_NWPW_LIBRARY', os.path.join(datadir, 'libraryps/'))
return txt
def cleanup_step(self):
"""Copy stuff from build directory we still need, if any."""
try:
exs_dir = os.path.join(self.cfg['start_dir'], 'examples')
self.examples_dir = os.path.join(tempfile.mkdtemp(), 'examples')
shutil.copytree(exs_dir, self.examples_dir)
self.log.info("Copied %s to %s." % (exs_dir, self.examples_dir))
except OSError as err:
raise EasyBuildError("Failed to copy examples: %s", err)
super(EB_NWChem, self).cleanup_step()
def test_cases_step(self):
"""Run provided list of test cases, or provided examples is no test cases were specified."""
# run all examples if no test cases were specified
# order and grouping is important for some of these tests (e.g., [o]h3tr*
# Some of the examples are deleted
# missing md parameter files: dna.nw, mache.nw, 18c6NaK.nw, membrane.nw, sdm.nw
# method not implemented (unknown thory) or keyword not found: triplet.nw, C2H6.nw, pspw_MgO.nw
# ccsdt_polar_small.nw, CG.nw
# no convergence: diamond.nw
# Too much memory required: ccsd_polar_big.nw
if isinstance(self.cfg['tests'], bool):
examples = [
('qmd', ['3carbo_dft.nw', '3carbo.nw', 'h2o_scf.nw']),
('pspw', ['C2.nw', 'C6.nw', 'Carbene.nw', 'Na16.nw', 'NaCl.nw']),
('tcepolar', ['ccsd_polar_small.nw']),
('dirdyvtst/h3', ['h3tr1.nw', 'h3tr2.nw']),
('dirdyvtst/h3', ['h3tr3.nw']),
('dirdyvtst/h3', ['h3tr4.nw']),
('dirdyvtst/h3', ['h3tr5.nw']),
('dirdyvtst/oh3', ['oh3tr1.nw', 'oh3tr2.nw']),
('dirdyvtst/oh3', ['oh3tr3.nw']),
('dirdyvtst/oh3', ['oh3tr4.nw']),
('dirdyvtst/oh3', ['oh3tr5.nw']),
('pspw/session1', ['band.nw', 'si4.linear.nw', 'si4.rhombus.nw', 'S2-drift.nw',
'silicon.nw', 'S2.nw', 'si4.rectangle.nw']),
('md/myo', ['myo.nw']),
('md/nak', ['NaK.nw']),
('md/crown', ['crown.nw']),
('md/hrc', ['hrc.nw']),
('md/benzene', ['benzene.nw'])
]
self.cfg['tests'] = [(os.path.join(self.examples_dir, d), l) for (d, l) in examples]
self.log.info("List of examples to be run as test cases: %s" % self.cfg['tests'])
try:
# symlink $HOME/.nwchemrc to local copy of default nwchemrc
default_nwchemrc = os.path.join(self.installdir, 'data', 'default.nwchemrc')
# make a local copy of the default .nwchemrc file at a fixed path, so we can symlink to it
# this makes sure that multiple parallel builds can reuse the same symlink, even for different builds
            # there is apparently no way to point NWChem to a particular config file other than $HOME/.nwchemrc
try:
local_nwchemrc_dir = os.path.dirname(self.local_nwchemrc)
if not os.path.exists(local_nwchemrc_dir):
os.makedirs(local_nwchemrc_dir)
shutil.copy2(default_nwchemrc, self.local_nwchemrc)
# only try to create symlink if it's not there yet
# we've verified earlier that the symlink is what we expect it to be if it's there
if not os.path.islink(self.home_nwchemrc):
symlink(self.local_nwchemrc, self.home_nwchemrc)
except OSError as err:
raise EasyBuildError("Failed to symlink %s to %s: %s", self.home_nwchemrc, self.local_nwchemrc, err)
# run tests, keep track of fail ratio
cwd = os.getcwd()
fail = 0.0
tot = 0.0
success_regexp = re.compile(r"Total times\s*cpu:.*wall:.*")
test_cases_logfn = os.path.join(self.installdir, config.log_path(), 'test_cases.log')
test_cases_log = open(test_cases_logfn, "w")
for (testdir, tests) in self.cfg['tests']:
# run test in a temporary dir
tmpdir = tempfile.mkdtemp(prefix='nwchem_test_')
change_dir(tmpdir)
# copy all files in test case dir
for item in os.listdir(testdir):
test_file = os.path.join(testdir, item)
if os.path.isfile(test_file):
self.log.debug("Copying %s to %s" % (test_file, tmpdir))
shutil.copy2(test_file, tmpdir)
# run tests
for testx in tests:
cmd = "nwchem %s" % testx
msg = "Running test '%s' (from %s) in %s..." % (cmd, testdir, tmpdir)
self.log.info(msg)
test_cases_log.write("\n%s\n" % msg)
(out, ec) = run_cmd(cmd, simple=False, log_all=False, log_ok=False, log_output=True)
# check exit code and output
if ec:
msg = "Test %s failed (exit code: %s)!" % (testx, ec)
self.log.warning(msg)
test_cases_log.write('FAIL: %s' % msg)
fail += 1
else:
if success_regexp.search(out):
msg = "Test %s successful!" % testx
self.log.info(msg)
test_cases_log.write('SUCCESS: %s' % msg)
else:
msg = "No 'Total times' found for test %s (but exit code is %s)!" % (testx, ec)
self.log.warning(msg)
test_cases_log.write('FAIL: %s' % msg)
fail += 1
test_cases_log.write("\nOUTPUT:\n\n%s\n\n" % out)
tot += 1
# go back
change_dir(cwd)
shutil.rmtree(tmpdir)
fail_ratio = fail / tot
fail_pcnt = fail_ratio * 100
msg = "%d of %d tests failed (%s%%)!" % (fail, tot, fail_pcnt)
self.log.info(msg)
test_cases_log.write('\n\nSUMMARY: %s' % msg)
test_cases_log.close()
self.log.info("Log for test cases saved at %s" % test_cases_logfn)
if fail_ratio > self.cfg['max_fail_ratio']:
max_fail_pcnt = self.cfg['max_fail_ratio'] * 100
raise EasyBuildError("Over %s%% of test cases failed, assuming broken build.", max_fail_pcnt)
# cleanup
try:
shutil.rmtree(self.examples_dir)
shutil.rmtree(local_nwchemrc_dir)
except OSError as err:
raise EasyBuildError("Cleanup failed: %s", err)
# set post msg w.r.t. cleaning up $HOME/.nwchemrc symlink
self.postmsg += "\nRemember to clean up %s after all NWChem builds are finished." % self.home_nwchemrc
except OSError as err:
raise EasyBuildError("Failed to run test cases: %s", err)
| gpl-2.0 |
jgcaaprom/android_external_chromium_org | tools/telemetry/telemetry/core/backends/chrome/ios_browser_backend.py | 28 | 4773 | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import contextlib
import json
import logging
import re
import urllib2
from telemetry.core import util
from telemetry.core.backends.chrome import chrome_browser_backend
from telemetry.core.backends.chrome import system_info_backend
class IosBrowserBackend(chrome_browser_backend.ChromeBrowserBackend):
_DEBUGGER_URL_BUILDER = 'ws://localhost:%i/devtools/page/%i'
  _DEBUGGER_URL_REGEX = r'ws://localhost:(\d+)/devtools/page/(\d+)'
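  # A matching URL looks like e.g. 'ws://localhost:9222/devtools/page/1'
  # (port and page number here are illustrative); the regex captures the
  # (port, page) pair.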
_DEVICE_LIST_URL = 'http://localhost:9221/json'
def __init__(self, browser_options):
super(IosBrowserBackend, self).__init__(
supports_tab_control=False,
supports_extensions=False,
browser_options=browser_options,
output_profile_path=".",
extensions_to_load=None)
self._webviews = []
self._port = None
self._page = None
self.UpdateRunningBrowsersInfo()
def UpdateRunningBrowsersInfo(self):
""" Refresh to match current state of the running browser.
"""
device_urls = self.GetDeviceUrls()
urls = self.GetWebSocketDebuggerUrls(device_urls)
for url in urls:
m = re.match(self._DEBUGGER_URL_REGEX, url)
if m:
self._webviews.append([int(m.group(1)), int(m.group(2))])
else:
logging.error('Unexpected url format: %s' % url)
# TODO(baxley): For now, grab first item from |_webviews|. Ideally, we'd
# prefer to have the currently displayed tab, or something similar.
if self._webviews:
self._port = self._webviews[0][0]
self._page = self._webviews[0][1]
def GetDeviceUrls(self):
device_urls = []
try:
with contextlib.closing(
urllib2.urlopen(self._DEVICE_LIST_URL)) as device_list:
json_urls = device_list.read()
device_urls = json.loads(json_urls)
if not device_urls:
logging.debug('No iOS devices found. Will not try searching for iOS '
'browsers.')
return []
except urllib2.URLError as e:
logging.debug('Error communicating with iOS device.')
logging.debug(str(e))
return []
return device_urls
def GetWebSocketDebuggerUrls(self, device_urls):
""" Get a list of the websocket debugger URLs to communicate with
all running UIWebViews.
"""
data = []
# Loop through all devices.
for d in device_urls:
def GetData():
try:
with contextlib.closing(
urllib2.urlopen('http://%s/json' % d['url'])) as f:
json_result = f.read()
data = json.loads(json_result)
return data
except urllib2.URLError as e:
logging.debug('Error communicating with iOS device.')
logging.debug(e)
return False
try:
        # Retry a few times since it can take a few seconds for this API to
        # become ready if ios_webkit_debug_proxy has just been launched.
data = util.WaitFor(GetData, 5)
except util.TimeoutException as e:
logging.debug('Timeout retrieving data from iOS device')
logging.debug(e)
return []
# Find all running UIWebViews.
debug_urls = []
for j in data:
debug_urls.append(j['webSocketDebuggerUrl'])
return debug_urls
def GetSystemInfo(self):
if self._system_info_backend is None:
self._system_info_backend = system_info_backend.SystemInfoBackend(
self._port, self._page)
return self._system_info_backend.GetSystemInfo()
def ListInspectableContexts(self):
response = json.loads(self.Request(''))
if len(response) != len(self._webviews):
self.UpdateRunningBrowsersInfo()
for i in range(len(response)):
response[i]['id'] = 1
return response
def IsBrowserRunning(self):
return bool(self._webviews)
#TODO(baxley): The following were stubbed out to get the sunspider benchmark
# running. These should be implemented.
@property
def browser_directory(self):
logging.warn('Not implemented')
return None
@property
def profile_directory(self):
logging.warn('Not implemented')
return None
def Start(self):
logging.warn('Not implemented')
def AddReplayServerOptions(self, extra_wpr_args):
logging.warn('Not implemented')
return None
def extension_backend(self):
logging.warn('Not implemented')
return None
def GetBrowserStartupArgs(self):
logging.warn('Not implemented')
return None
def HasBrowserFinishedLaunching(self):
logging.warn('Not implemented')
return False
def GetStandardOutput(self):
raise NotImplementedError()
def GetStackTrace(self):
raise NotImplementedError()
| bsd-3-clause |
kidig/rrtop100 | setup.py | 1 | 1563 | """
RadioRecord Top Hits Downloader
"""
from setuptools import find_packages, setup
dependencies = ['click==6.6', 'aiohttp==0.22.5', 'lxml==3.6.1']
setup(
name='rrtop100',
version='0.1.0',
url='https://github.com/kidig/rrtop100',
license='BSD',
author='Dmitrii Gerasimenko',
author_email='[email protected]',
description='RadioRecord Top Hits Downloader',
long_description=__doc__,
packages=find_packages(exclude=['tests']),
include_package_data=True,
zip_safe=False,
platforms='any',
install_requires=dependencies,
entry_points={
'console_scripts': [
'rrtop100 = rrtop.cli:main',
],
},
classifiers=[
# As from http://pypi.python.org/pypi?%3Aaction=list_classifiers
# 'Development Status :: 1 - Planning',
# 'Development Status :: 2 - Pre-Alpha',
# 'Development Status :: 3 - Alpha',
'Development Status :: 4 - Beta',
# 'Development Status :: 5 - Production/Stable',
# 'Development Status :: 6 - Mature',
# 'Development Status :: 7 - Inactive',
'Environment :: Console',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Operating System :: POSIX',
'Operating System :: MacOS',
'Operating System :: Unix',
'Operating System :: Microsoft :: Windows',
'Programming Language :: Python',
'Programming Language :: Python :: 3.5',
'Topic :: Software Development :: Libraries :: Python Modules',
]
)
| apache-2.0 |
morenopc/edx-platform | common/lib/capa/capa/tests/test_correctmap.py | 61 | 5833 | """
Tests to verify that CorrectMap behaves correctly
"""
import unittest
from capa.correctmap import CorrectMap
import datetime
class CorrectMapTest(unittest.TestCase):
"""
Tests to verify that CorrectMap behaves correctly
"""
def setUp(self):
self.cmap = CorrectMap()
def test_set_input_properties(self):
# Set the correctmap properties for two inputs
self.cmap.set(
answer_id='1_2_1',
correctness='correct',
npoints=5,
msg='Test message',
hint='Test hint',
hintmode='always',
queuestate={
'key': 'secretstring',
'time': '20130228100026'
}
)
self.cmap.set(
answer_id='2_2_1',
correctness='incorrect',
npoints=None,
msg=None,
hint=None,
hintmode=None,
queuestate=None
)
# Assert that each input has the expected properties
self.assertTrue(self.cmap.is_correct('1_2_1'))
self.assertFalse(self.cmap.is_correct('2_2_1'))
self.assertEqual(self.cmap.get_correctness('1_2_1'), 'correct')
self.assertEqual(self.cmap.get_correctness('2_2_1'), 'incorrect')
self.assertEqual(self.cmap.get_npoints('1_2_1'), 5)
self.assertEqual(self.cmap.get_npoints('2_2_1'), 0)
self.assertEqual(self.cmap.get_msg('1_2_1'), 'Test message')
self.assertEqual(self.cmap.get_msg('2_2_1'), None)
self.assertEqual(self.cmap.get_hint('1_2_1'), 'Test hint')
self.assertEqual(self.cmap.get_hint('2_2_1'), None)
self.assertEqual(self.cmap.get_hintmode('1_2_1'), 'always')
self.assertEqual(self.cmap.get_hintmode('2_2_1'), None)
self.assertTrue(self.cmap.is_queued('1_2_1'))
self.assertFalse(self.cmap.is_queued('2_2_1'))
self.assertEqual(self.cmap.get_queuetime_str('1_2_1'), '20130228100026')
self.assertEqual(self.cmap.get_queuetime_str('2_2_1'), None)
self.assertTrue(self.cmap.is_right_queuekey('1_2_1', 'secretstring'))
self.assertFalse(self.cmap.is_right_queuekey('1_2_1', 'invalidstr'))
self.assertFalse(self.cmap.is_right_queuekey('1_2_1', ''))
self.assertFalse(self.cmap.is_right_queuekey('1_2_1', None))
self.assertFalse(self.cmap.is_right_queuekey('2_2_1', 'secretstring'))
self.assertFalse(self.cmap.is_right_queuekey('2_2_1', 'invalidstr'))
self.assertFalse(self.cmap.is_right_queuekey('2_2_1', ''))
self.assertFalse(self.cmap.is_right_queuekey('2_2_1', None))
def test_get_npoints(self):
# Set the correctmap properties for 4 inputs
# 1) correct, 5 points
# 2) correct, None points
# 3) incorrect, 5 points
# 4) incorrect, None points
# 5) correct, 0 points
self.cmap.set(
answer_id='1_2_1',
correctness='correct',
npoints=5
)
self.cmap.set(
answer_id='2_2_1',
correctness='correct',
npoints=None
)
self.cmap.set(
answer_id='3_2_1',
correctness='incorrect',
npoints=5
)
self.cmap.set(
answer_id='4_2_1',
correctness='incorrect',
npoints=None
)
self.cmap.set(
answer_id='5_2_1',
correctness='correct',
npoints=0
)
# Assert that we get the expected points
# If points assigned --> npoints
# If no points assigned and correct --> 1 point
# If no points assigned and incorrect --> 0 points
self.assertEqual(self.cmap.get_npoints('1_2_1'), 5)
self.assertEqual(self.cmap.get_npoints('2_2_1'), 1)
self.assertEqual(self.cmap.get_npoints('3_2_1'), 5)
self.assertEqual(self.cmap.get_npoints('4_2_1'), 0)
self.assertEqual(self.cmap.get_npoints('5_2_1'), 0)
def test_set_overall_message(self):
# Default is an empty string string
self.assertEqual(self.cmap.get_overall_message(), "")
# Set a message that applies to the whole question
self.cmap.set_overall_message("Test message")
# Retrieve the message
self.assertEqual(self.cmap.get_overall_message(), "Test message")
# Setting the message to None --> empty string
self.cmap.set_overall_message(None)
self.assertEqual(self.cmap.get_overall_message(), "")
def test_update_from_correctmap(self):
# Initialize a CorrectMap with some properties
self.cmap.set(
answer_id='1_2_1',
correctness='correct',
npoints=5,
msg='Test message',
hint='Test hint',
hintmode='always',
queuestate={
'key': 'secretstring',
'time': '20130228100026'
}
)
self.cmap.set_overall_message("Test message")
# Create a second cmap, then update it to have the same properties
# as the first cmap
other_cmap = CorrectMap()
other_cmap.update(self.cmap)
# Assert that it has all the same properties
self.assertEqual(
other_cmap.get_overall_message(),
self.cmap.get_overall_message()
)
self.assertEqual(
other_cmap.get_dict(),
self.cmap.get_dict()
)
def test_update_from_invalid(self):
# Should get an exception if we try to update() a CorrectMap
# with a non-CorrectMap value
invalid_list = [None, "string", 5, datetime.datetime.today()]
for invalid in invalid_list:
with self.assertRaises(Exception):
self.cmap.update(invalid)
| agpl-3.0 |
Subito/ansible-modules-extras | system/pam_limits.py | 57 | 7494 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2014, Sebastien Rohaut <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import os
import os.path
import re
import shutil
import tempfile
DOCUMENTATION = '''
---
module: pam_limits
version_added: "2.0"
short_description: Modify Linux PAM limits
description:
    - The M(pam_limits) module modifies PAM limits, by default in /etc/security/limits.conf.
For the full documentation, see man limits.conf(5).
options:
domain:
description:
- A username, @groupname, wildcard, uid/gid range.
required: true
limit_type:
description:
- Limit type, see C(man limits) for an explanation
required: true
choices: [ "hard", "soft" ]
limit_item:
description:
- The limit to be set
required: true
choices: [ "core", "data", "fsize", "memlock", "nofile", "rss", "stack", "cpu", "nproc", "as", "maxlogins", "maxsyslogins", "priority", "locks", "sigpending", "msgqueue", "nice", "rtprio", "chroot" ]
value:
description:
- The value of the limit.
required: true
backup:
description:
- Create a backup file including the timestamp information so you can get
the original file back if you somehow clobbered it incorrectly.
required: false
choices: [ "yes", "no" ]
default: "no"
use_min:
description:
- If set to C(yes), the minimal value will be used or conserved.
        If the specified value is lower than the value in the file, the file content is replaced with the new value;
        otherwise the content is not modified.
required: false
choices: [ "yes", "no" ]
default: "no"
use_max:
description:
- If set to C(yes), the maximal value will be used or conserved.
        If the specified value is higher than the value in the file, the file content is replaced with the new value;
        otherwise the content is not modified.
required: false
choices: [ "yes", "no" ]
default: "no"
dest:
description:
- Modify the limits.conf path.
required: false
default: "/etc/security/limits.conf"
'''
EXAMPLES = '''
# Add or modify limits for the user joe
- pam_limits: domain=joe limit_type=soft limit_item=nofile value=64000
# Add or modify limits for the user joe. Keep or set the maximal value
- pam_limits: domain=joe limit_type=soft limit_item=nofile value=1000000
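# Add or modify limits for the group wheel, keeping a backup of the original
# file and attaching an inline comment (values here are illustrative)
- pam_limits: domain=@wheel limit_type=hard limit_item=nproc value=100 backup=yes comment="max processes for wheel"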
'''
def main():
pam_items = [ 'core', 'data', 'fsize', 'memlock', 'nofile', 'rss', 'stack', 'cpu', 'nproc', 'as', 'maxlogins', 'maxsyslogins', 'priority', 'locks', 'sigpending', 'msgqueue', 'nice', 'rtprio', 'chroot' ]
pam_types = [ 'soft', 'hard', '-' ]
limits_conf = '/etc/security/limits.conf'
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec = dict(
domain = dict(required=True, type='str'),
limit_type = dict(required=True, type='str', choices=pam_types),
limit_item = dict(required=True, type='str', choices=pam_items),
value = dict(required=True, type='int'),
use_max = dict(default=False, type='bool'),
use_min = dict(default=False, type='bool'),
backup = dict(default=False, type='bool'),
dest = dict(default=limits_conf, type='str'),
comment = dict(required=False, default='', type='str')
)
)
domain = module.params['domain']
limit_type = module.params['limit_type']
limit_item = module.params['limit_item']
value = module.params['value']
use_max = module.params['use_max']
use_min = module.params['use_min']
backup = module.params['backup']
limits_conf = module.params['dest']
new_comment = module.params['comment']
changed = False
if os.path.isfile(limits_conf):
if not os.access(limits_conf, os.W_OK):
module.fail_json(msg="%s is not writable. Use sudo" % (limits_conf) )
else:
module.fail_json(msg="%s is not visible (check presence, access rights, use sudo)" % (limits_conf) )
if use_max and use_min:
module.fail_json(msg="Cannot use use_min and use_max at the same time." )
# Backup
if backup:
backup_file = module.backup_local(limits_conf)
space_pattern = re.compile(r'\s+')
message = ''
    f = open(limits_conf, 'r')
# Tempfile
nf = tempfile.NamedTemporaryFile(delete = False)
found = False
new_value = value
for line in f:
if line.startswith('#'):
nf.write(line)
continue
newline = re.sub(space_pattern, ' ', line).strip()
if not newline:
nf.write(line)
continue
# Remove comment in line
newline = newline.split('#',1)[0]
try:
old_comment = line.split('#',1)[1]
        except IndexError:
old_comment = ''
newline = newline.rstrip()
if not new_comment:
new_comment = old_comment
if new_comment:
new_comment = "\t#"+new_comment
line_fields = newline.split(' ')
if len(line_fields) != 4:
nf.write(line)
continue
line_domain = line_fields[0]
line_type = line_fields[1]
line_item = line_fields[2]
actual_value = int(line_fields[3])
# Found the line
if line_domain == domain and line_type == limit_type and line_item == limit_item:
found = True
if value == actual_value:
message = line
nf.write(line)
continue
if use_max:
new_value = max(value, actual_value)
if use_min:
                new_value = min(value, actual_value)
# Change line only if value has changed
if new_value != actual_value:
changed = True
new_limit = domain + "\t" + limit_type + "\t" + limit_item + "\t" + str(new_value) + new_comment + "\n"
message = new_limit
nf.write(new_limit)
else:
message = line
nf.write(line)
else:
nf.write(line)
if not found:
changed = True
new_limit = domain + "\t" + limit_type + "\t" + limit_item + "\t" + str(new_value) + new_comment + "\n"
message = new_limit
nf.write(new_limit)
f.close()
nf.close()
# Copy tempfile to newfile
module.atomic_move(nf.name, f.name)
res_args = dict(
changed = changed, msg = message
)
if backup:
res_args['backup_file'] = backup_file
module.exit_json(**res_args)
# import module snippets
from ansible.module_utils.basic import *
main()
| gpl-3.0 |
racker/kafka | tests/kafkatest/services/monitor/jmx.py | 14 | 4472 | # Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from kafkatest.services.kafka.directory import kafka_dir
class JmxMixin(object):
"""This mixin helps existing service subclasses start JmxTool on their worker nodes and collect jmx stats.
Note that this is not a service in its own right.
"""
    def __init__(self, num_nodes, jmx_object_names=None, jmx_attributes=None):
        self.jmx_object_names = jmx_object_names
        self.jmx_attributes = jmx_attributes or []
self.jmx_port = 9192
self.started = [False] * num_nodes
self.jmx_stats = [{} for x in range(num_nodes)]
self.maximum_jmx_value = {} # map from object_attribute_name to maximum value observed over time
self.average_jmx_value = {} # map from object_attribute_name to average value observed over time
def clean_node(self, node):
node.account.kill_process("jmx", clean_shutdown=False, allow_fail=True)
node.account.ssh("rm -rf /mnt/jmx_tool.log", allow_fail=False)
def start_jmx_tool(self, idx, node):
if self.started[idx-1] or self.jmx_object_names is None:
return
cmd = "/opt/%s/bin/kafka-run-class.sh kafka.tools.JmxTool " \
"--reporting-interval 1000 --jmx-url service:jmx:rmi:///jndi/rmi://127.0.0.1:%d/jmxrmi" % (kafka_dir(node), self.jmx_port)
for jmx_object_name in self.jmx_object_names:
cmd += " --object-name %s" % jmx_object_name
for jmx_attribute in self.jmx_attributes:
cmd += " --attributes %s" % jmx_attribute
cmd += " | tee -a /mnt/jmx_tool.log"
self.logger.debug("Start JmxTool %d command: %s", idx, cmd)
jmx_output = node.account.ssh_capture(cmd, allow_fail=False)
jmx_output.next()
self.started[idx-1] = True
def read_jmx_output(self, idx, node):
        if not self.started[idx-1]:
return
object_attribute_names = []
cmd = "cat /mnt/jmx_tool.log"
self.logger.debug("Read jmx output %d command: %s", idx, cmd)
for line in node.account.ssh_capture(cmd, allow_fail=False):
if "time" in line:
object_attribute_names = line.strip()[1:-1].split("\",\"")[1:]
continue
stats = [float(field) for field in line.split(',')]
time_sec = int(stats[0]/1000)
self.jmx_stats[idx-1][time_sec] = {name : stats[i+1] for i, name in enumerate(object_attribute_names)}
# do not calculate average and maximum of jmx stats until we have read output from all nodes
if any(len(time_to_stats) == 0 for time_to_stats in self.jmx_stats):
return
start_time_sec = min([min(time_to_stats.keys()) for time_to_stats in self.jmx_stats])
end_time_sec = max([max(time_to_stats.keys()) for time_to_stats in self.jmx_stats])
for name in object_attribute_names:
aggregates_per_time = []
for time_sec in xrange(start_time_sec, end_time_sec + 1):
# assume that value is 0 if it is not read by jmx tool at the given time. This is appropriate for metrics such as bandwidth
values_per_node = [time_to_stats.get(time_sec, {}).get(name, 0) for time_to_stats in self.jmx_stats]
# assume that value is aggregated across nodes by sum. This is appropriate for metrics such as bandwidth
aggregates_per_time.append(sum(values_per_node))
self.average_jmx_value[name] = sum(aggregates_per_time) / len(aggregates_per_time)
self.maximum_jmx_value[name] = max(aggregates_per_time)
def read_jmx_output_all_nodes(self):
for node in self.nodes:
self.read_jmx_output(self.idx(node), node) | apache-2.0 |
deathsec/instagram-py | InstagramPy/InstagramPySession.py | 1 | 10708 | # The MIT License.
# Copyright (C) 2017 The Future Shell , DeathSec.
#
# @filename : InstagramPySession.py
# @description : creates a new session, checks the configuration, gets critical
#                data, and loads/saves session data.
import json
import os
import uuid
import hashlib
import requests
from stem import Signal
from stem.control import Controller
DEFAULT_PATH = "{}/".format(os.path.expanduser('~'))
class InstagramPySession:
'''
__init__:
- loads configuration from specified file.
- gets the perfect place for the save file.
- sets class variables for later use.
'''
magic_cookie = None
api_url = None
user_agent = None
ig_sig_key = None
ig_sig_version = None
tor_proxy = None
tor_controller = None
save_data = None
dump_data = None
current_save = None
username = ''
password = ''
password_list = None
password_list_md5_sum = None
password_list_buffer = None
password_list_length = 0
eopl = False
current_line = 1
ip = None
cli = None
bot = requests.Session()
def __init__(self, username, password_list, configuration, save_location, cli):
self.username = username
self.cli = cli
if not os.path.isfile(password_list):
self.cli.ReportError(
"password list not found at {}.".format(password_list))
self.password_list = password_list
'''
        Note: always open the password list with errors ignored, because many
        password lists use an encoding that the user's system cannot decode!
'''
self.password_list_buffer = open(
password_list, encoding='utf-8', errors='ignore')
self.password_list_md5_sum = str(
self.md5sum(open(password_list, "rb")).hexdigest())
with open(password_list, encoding='utf-8', errors='ignore') as f:
for line in f:
self.password_list_length += 1
if configuration == DEFAULT_PATH:
configuration = "{}instapy-config.json".format(DEFAULT_PATH)
if save_location == DEFAULT_PATH:
save_location = "{}.instagram-py/".format(DEFAULT_PATH)
dump_location = "{}dump.json".format(save_location)
if not os.path.isfile(configuration):
self.cli.ReportError(
"configuration file not found at {}".format(configuration))
else:
try:
with open(configuration, "r") as fp:
configuration = json.load(fp)
except Exception as err:
self.cli.ReportError(
"invalid configuration file at {}".format(configuraion))
self.api_url = configuration['api-url']
self.user_agent = configuration['user-agent']
self.ig_sig_key = configuration['ig-sig-key']
self.ig_sig_version = configuration['ig-sig-version']
self.tor_proxy = "{}://{}:{}".format(
configuration['tor']['protocol'], configuration['tor']['server'], configuration['tor']['port'])
if not configuration['tor']['control']['password'] == "":
self.OpenTorController(
configuration['tor']['control']['port'], configuration['tor']['control']['password'])
else:
self.OpenTorController(
configuration['tor']['control']['port'], None)
self.bot.proxies = {
# tor socks proxy!
"https": self.tor_proxy,
"http": self.tor_proxy
}
# build headers
self.bot.headers.update(
{
'Connection': 'close', # make sure requests closes the sockets instead of keep-alive!
'Accept': '*/*',
'Content-type': 'application/x-www-form-urlencoded; charset=UTF-8',
'Cookie2': '$Version=1',
'Accept-Language': 'en-US',
'User-Agent': self.user_agent
}
)
'''
        Note: https://icanhazip.com is a free service that simply returns your
        current (Tor) IP address; it is safe to query. Thank you @majorhayden
'''
try:
self.ip = self.bot.get(
'https://icanhazip.com').content.rstrip().decode()
except KeyboardInterrupt:
self.cli.ReportError("process aborted by the user")
except (BaseException, Exception) as err:
self.cli.ReportError(
"Connection to host failed , check your connection and tor configuration.")
if not os.path.exists(save_location):
try:
os.mkdir(save_location)
except (BaseException, Exception) as err:
self.cli.ReportError(err)
self.save_data = save_location
else:
self.save_data = save_location
self.dump_data = dump_location
try:
self.bot.get(
"{}si/fetch_headers/?challenge_type=signup&guid={}".format(
self.api_url, str(uuid.uuid4()).replace('-', ''))
)
self.magic_cookie = self.bot.cookies['csrftoken']
except KeyboardInterrupt:
self.cli.ReportError(
"cannot get the magic cookie , aborted by the user")
except (BaseException, Exception) as err:
self.cli.ReportError(err)
'''
ReadSaveFile()
- Checks if we have located the save file
- if not creates one
        - opens the save file and loads it as json data
        - checks if the user uses the same password list file for the same user
        - sets the current password pointer to the given data
'''
def ReadSaveFile(self, isResume):
if self.current_save == None:
self.CreateSaveFile(isResume)
SaveFile = json.load(open(self.current_save, 'r'))
self.current_line = SaveFile['line-count']
if self.password_list_md5_sum == SaveFile['password-file-md5'] and self.username == SaveFile['username']:
c_line = 1
for line in self.password_list_buffer:
self.password = str(line).rstrip()
if c_line == self.current_line:
break
c_line += 1
return True
'''
UpdateSaveFile()
- check if we have created a save file
- if yes , rewrite the the save file with the current session!
'''
def UpdateSaveFile(self):
if not self.current_save == None:
updatefile = open(self.current_save, 'w')
json.dump(
{
"username": str(self.username),
"password-file-md5": str(self.password_list_md5_sum),
"line-count": self.current_line
}, updatefile)
updatefile.close()
'''
CreateSaveFile()
        - checks if we have not opened any save file but know the save location.
        - if yes, creates one with default settings at that location.
'''
def CreateSaveFile(self, isResume):
if self.current_save == None and not self.save_data == None:
save = '{}{}.dat'.format(self.save_data, hashlib.sha224(
self.username.encode('utf-8')).hexdigest())
self.current_save = save
if not os.path.isfile(save):
self.UpdateSaveFile()
else:
if not isResume:
self.UpdateSaveFile()
def ReadDumpFile(self, username):
if not self.dump_data == None:
if not os.path.isfile(self.dump_data):
return None
json_dump = json.load(open(self.dump_data, 'r'))
required_info = None
try:
required_info = json_dump[username]
except KeyError:
pass
return required_info
def WriteDumpFile(self, info):
if not self.dump_data == None:
json_dump = {}
if os.path.isfile(self.dump_data):
json_dump = json.load(open(self.dump_data, 'r'))
json_dump[info['id']] = info
json.dump(json_dump, open(self.dump_data, 'w'))
return True
'''
CurrentPassword()
- returns the current password pointed to the password list
'''
def CurrentPassword(self):
return self.password
'''
NextPassword()
        - increments and sets the next password as our current password
'''
def NextPassword(self):
if not self.current_line > self.password_list_length:
for line in self.password_list_buffer:
self.password = str(line.rstrip())
break
self.current_line += 1
else:
self.eopl = True
'''
GetUsername()
- returns current session username
'''
def GetUsername(self):
return self.username
'''
md5sum( FILE POINTER , BLOCK SIZE)
- opens large files from FILE POINTER
        - updates the md5 digest one BLOCK SIZE chunk at a time
- finalizes and returns a hashlib object!
'''
def md5sum(self, fp, block_size=2**20):
md5 = hashlib.md5()
while True:
data = fp.read(block_size)
if not data:
break
md5.update(data)
return md5
'''
ChangeIPAddress()
- stem <-> Signal
- Changes Tor Identity with the controller!
'''
def ChangeIPAddress(self):
if not self.tor_controller == None:
# signal tor to change ip
self.tor_controller.signal(Signal.NEWNYM)
self.ip = self.bot.get(
'https://icanhazip.com').content.rstrip().decode()
return True
return False
'''
OpenTorController(PORT , PASSWORD)
- Creates a fresh tor controller instance to the session
'''
def OpenTorController(self, port, password):
try:
self.tor_controller = Controller.from_port(port=int(port))
if password == None:
self.tor_controller.authenticate()
else:
self.tor_controller.authenticate(password=password)
except Exception as err:
self.cli.ReportError(
"Tor configuration invalid or server down :: {}".format(err))
| mit |
nguyentu1602/statsmodels | statsmodels/stats/inter_rater.py | 34 | 17035 | # -*- coding: utf-8 -*-
"""Inter Rater Agreement
contains
--------
fleiss_kappa
cohens_kappa
aggregate_raters:
helper function to get data into fleiss_kappa format
to_table:
helper function to create contingency table, can be used for cohens_kappa
Created on Thu Dec 06 22:57:56 2012
Author: Josef Perktold
License: BSD-3
References
----------
Wikipedia: the kappas are initially based on these two pages
http://en.wikipedia.org/wiki/Fleiss%27_kappa
http://en.wikipedia.org/wiki/Cohen's_kappa
SAS-Manual : formulas for cohens_kappa, especially variances
see also R package irr
TODO
----
standard errors and hypothesis tests for fleiss_kappa
other statistics and tests,
in R package irr, SAS has more
inconsistent internal naming, changed variable names as I added more
functionality
convenience functions to create required data format from raw data
DONE
"""
import numpy as np
from scipy import stats #get rid of this? need only norm.sf
class ResultsBunch(dict):
template = '%r'
def __init__(self, **kwds):
dict.__init__(self, kwds)
self.__dict__ = self
self._initialize()
def _initialize(self):
pass
def __str__(self):
return self.template % self
def _int_ifclose(x, dec=1, width=4):
'''helper function for creating result string for int or float
only dec=1 and width=4 is implemented
Parameters
----------
x : int or float
value to format
dec : 1
number of decimals to print if x is not an integer
width : 4
width of string
Returns
-------
xint : int or float
x is converted to int if it is within 1e-14 of an integer
x_string : str
x formatted as string, either '%4d' or '%4.1f'
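    Examples
    --------
    A small illustration (hypothetical inputs):
    >>> _int_ifclose(3.0)
    (3, '   3')
    >>> _int_ifclose(2.49)
    (2.49, ' 2.5')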
'''
xint = int(round(x))
if np.max(np.abs(xint - x)) < 1e-14:
return xint, '%4d' % xint
else:
return x, '%4.1f' % x
def aggregate_raters(data, n_cat=None):
'''convert raw data with shape (subject, rater) to (subject, cat_counts)
brings data into correct format for fleiss_kappa
    bincount will raise an exception if the data cannot be converted to integer.
Parameters
----------
data : array_like, 2-Dim
data containing category assignment with subjects in rows and raters
in columns.
n_cat : None or int
If None, then the data is converted to integer categories,
0,1,2,...,n_cat-1. Because of the relabeling only category levels
with non-zero counts are included.
If this is an integer, then the category levels in the data are already
assumed to be in integers, 0,1,2,...,n_cat-1. In this case, the
returned array may contain columns with zero count, if no subject
has been categorized with this level.
Returns
-------
arr : nd_array, (n_rows, n_cat)
Contains counts of raters that assigned a category level to individuals.
Subjects are in rows, category levels in columns.
'''
data = np.asarray(data)
n_rows = data.shape[0]
if n_cat is None:
#I could add int conversion (reverse_index) to np.unique
cat_uni, cat_int = np.unique(data.ravel(), return_inverse=True)
n_cat = len(cat_uni)
data_ = cat_int.reshape(data.shape)
else:
cat_uni = np.arange(n_cat) #for return only, assumed cat levels
data_ = data
tt = np.zeros((n_rows, n_cat), int)
for idx, row in enumerate(data_):
ro = np.bincount(row)
tt[idx, :len(ro)] = ro
return tt, cat_uni
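# Illustrative example (editor's sketch, not part of the original module):
# two raters assign categories to four subjects; aggregate_raters() turns
# the raw (subject, rater) array into per-subject category counts suitable
# for fleiss_kappa().
#
#   raw = np.array([[1, 1], [1, 2], [2, 2], [3, 3]])
#   counts, cats = aggregate_raters(raw)
#   # counts == [[2, 0, 0],    both raters chose category 1
#   #            [1, 1, 0],    one chose 1, one chose 2
#   #            [0, 2, 0],    both chose 2
#   #            [0, 0, 2]]    both chose 3
#   # cats == [1, 2, 3]        the observed category levels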
def to_table(data, bins=None):
'''convert raw data with shape (subject, rater) to (rater1, rater2)
brings data into correct format for cohens_kappa
Parameters
----------
data : array_like, 2-Dim
data containing category assignment with subjects in rows and raters
in columns.
bins : None, int or tuple of array_like
If None, then the data is converted to integer categories,
0,1,2,...,n_cat-1. Because of the relabeling only category levels
with non-zero counts are included.
If this is an integer, then the category levels in the data are already
assumed to be in integers, 0,1,2,...,n_cat-1. In this case, the
returned array may contain columns with zero count, if no subject
has been categorized with this level.
If bins are a tuple of two array_like, then the bins are directly used
by ``numpy.histogramdd``. This is useful if we want to merge categories.
Returns
-------
arr : nd_array, (n_cat, n_cat)
Contingency table that contains counts of category level with rater1
in rows and rater2 in columns.
Notes
-----
no NaN handling, delete rows with missing values
This works also for more than two raters. In that case the dimension of
the resulting contingency table is the same as the number of raters
instead of 2-dimensional.
'''
data = np.asarray(data)
n_rows, n_cols = data.shape
if bins is None:
#I could add int conversion (reverse_index) to np.unique
cat_uni, cat_int = np.unique(data.ravel(), return_inverse=True)
n_cat = len(cat_uni)
data_ = cat_int.reshape(data.shape)
bins_ = np.arange(n_cat+1) - 0.5
#alternative implementation with double loop
#tt = np.asarray([[(x == [i,j]).all(1).sum() for j in cat_uni]
# for i in cat_uni] )
        #other alternative: unique rows and bincount
elif np.isscalar(bins):
bins_ = np.arange(bins+1) - 0.5
data_ = data
else:
bins_ = bins
data_ = data
tt = np.histogramdd(data_, (bins_,)*n_cols)
return tt[0], bins_
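# Illustrative example (editor's sketch): the same kind of raw data arranged
# as a two-rater contingency table for cohens_kappa(); rows index rater 1's
# category, columns rater 2's.
#
#   data = np.array([[0, 0], [0, 1], [1, 1], [1, 1]])
#   table, _ = to_table(data)
#   # table == [[1., 1.],    one (0, 0) and one (0, 1) pair
#   #           [0., 2.]]    two (1, 1) pairs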
def fleiss_kappa(table):
'''Fleiss' kappa multi-rater agreement measure
Parameters
----------
table : array_like, 2-D
assumes subjects in rows, and categories in columns
Returns
-------
kappa : float
Fleiss's kappa statistic for inter rater agreement
Notes
-----
coded from Wikipedia page
http://en.wikipedia.org/wiki/Fleiss%27_kappa
no variance or tests yet
'''
table = 1.0 * np.asarray(table) #avoid integer division
n_sub, n_cat = table.shape
n_total = table.sum()
n_rater = table.sum(1)
n_rat = n_rater.max()
#assume fully ranked
assert n_total == n_sub * n_rat
#marginal frequency of categories
p_cat = table.sum(0) / n_total
table2 = table * table
p_rat = (table2.sum(1) - n_rat) / (n_rat * (n_rat - 1.))
p_mean = p_rat.mean()
p_mean_exp = (p_cat*p_cat).sum()
    kappa = (p_mean - p_mean_exp) / (1 - p_mean_exp)
return kappa
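# Illustrative example (editor's sketch): three raters rate three subjects
# into two categories with perfect agreement on every subject, so Fleiss'
# kappa is exactly 1.
#
#   table = [[3, 0], [0, 3], [3, 0]]
#   fleiss_kappa(table)  # -> 1.0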
def cohens_kappa(table, weights=None, return_results=True, wt=None):
'''Compute Cohen's kappa with variance and equal-zero test
Parameters
----------
table : array_like, 2-Dim
square array with results of two raters, one rater in rows, second
rater in columns
weights : array_like
The interpretation of weights depends on the wt argument.
If both are None, then the simple kappa is computed.
see wt for the case when wt is not None
If weights is two dimensional, then it is directly used as a weight
matrix. For computing the variance of kappa, the maximum of the
weights is assumed to be smaller or equal to one.
TODO: fix conflicting definitions in the 2-Dim case for
wt : None or string
If wt and weights are None, then the simple kappa is computed.
If wt is given, but weights is None, then the weights are set to
be [0, 1, 2, ..., k].
If weights is a one-dimensional array, then it is used to construct
the weight matrix given the following options.
wt in ['linear', 'ca' or None] : use linear weights, Cicchetti-Allison
actual weights are linear in the score "weights" difference
        wt in ['quadratic', 'fc'] : use quadratic weights, Fleiss-Cohen
actual weights are squared in the score "weights" difference
wt = 'toeplitz' : weight matrix is constructed as a toeplitz matrix
from the one dimensional weights.
return_results : bool
If True (default), then an instance of KappaResults is returned.
If False, then only kappa is computed and returned.
Returns
-------
results or kappa
If return_results is True (default), then a results instance with all
statistics is returned
If return_results is False, then only kappa is calculated and returned.
Notes
-----
There are two conflicting definitions of the weight matrix, Wikipedia
versus SAS manual. However, the computation are invariant to rescaling
of the weights matrix, so there is no difference in the results.
Weights for 'linear' and 'quadratic' are interpreted as scores for the
categories, the weights in the computation are based on the pairwise
difference between the scores.
    Weights for 'toeplitz' are interpreted as a weighted distance. The distance
only depends on how many levels apart two entries in the table are but
not on the levels themselves.
example:
weights = '0, 1, 2, 3' and wt is either linear or toeplitz means that the
weighting only depends on the simple distance of levels.
weights = '0, 0, 1, 1' and wt = 'linear' means that the first two levels
are zero distance apart and the same for the last two levels. This is
    the same as forming two aggregated levels by merging the first two and
the last two levels, respectively.
weights = [0, 1, 2, 3] and wt = 'quadratic' is the same as squaring these
weights and using wt = 'toeplitz'.
References
----------
Wikipedia
SAS Manual
'''
table = np.asarray(table, float) #avoid integer division
agree = np.diag(table).sum()
nobs = table.sum()
probs = table / nobs
freqs = probs #TODO: rename to use freqs instead of probs for observed
probs_diag = np.diag(probs)
freq_row = table.sum(1) / nobs
freq_col = table.sum(0) / nobs
prob_exp = freq_col * freq_row[:, None]
assert np.allclose(prob_exp.sum(), 1)
#print prob_exp.sum()
agree_exp = np.diag(prob_exp).sum() #need for kappa_max
if weights is None and wt is None:
kind = 'Simple'
kappa = (agree / nobs - agree_exp) / (1 - agree_exp)
if return_results:
#variance
term_a = probs_diag * (1 - (freq_row + freq_col) * (1 - kappa))**2
term_a = term_a.sum()
term_b = probs * (freq_col[:, None] + freq_row)**2
d_idx = np.arange(table.shape[0])
term_b[d_idx, d_idx] = 0 #set diagonal to zero
term_b = (1 - kappa)**2 * term_b.sum()
term_c = (kappa - agree_exp * (1-kappa))**2
var_kappa = (term_a + term_b - term_c) / (1 - agree_exp)**2 / nobs
#term_c = freq_col * freq_row[:, None] * (freq_col + freq_row[:,None])
term_c = freq_col * freq_row * (freq_col + freq_row)
var_kappa0 = (agree_exp + agree_exp**2 - term_c.sum())
var_kappa0 /= (1 - agree_exp)**2 * nobs
else:
if weights is None:
weights = np.arange(table.shape[0])
#weights follows the Wikipedia definition, not the SAS, which is 1 -
kind = 'Weighted'
weights = np.asarray(weights, float)
if weights.ndim == 1:
if wt in ['ca', 'linear', None]:
weights = np.abs(weights[:, None] - weights) / \
(weights[-1] - weights[0])
elif wt in ['fc', 'quadratic']:
weights = (weights[:, None] - weights)**2 / \
(weights[-1] - weights[0])**2
elif wt == 'toeplitz':
#assume toeplitz structure
from scipy.linalg import toeplitz
#weights = toeplitz(np.arange(table.shape[0]))
weights = toeplitz(weights)
else:
raise ValueError('wt option is not known')
else:
rows, cols = table.shape
if (table.shape != weights.shape):
raise ValueError('weights are not square')
#this is formula from Wikipedia
kappa = 1 - (weights * table).sum() / nobs / (weights * prob_exp).sum()
#TODO: add var_kappa for weighted version
if return_results:
var_kappa = np.nan
var_kappa0 = np.nan
#switch to SAS manual weights, problem if user specifies weights
#w is negative in some examples,
#but weights is scale invariant in examples and rough check of source
w = 1. - weights
w_row = (freq_col * w).sum(1)
w_col = (freq_row[:, None] * w).sum(0)
agree_wexp = (w * freq_col * freq_row[:, None]).sum()
term_a = freqs * (w - (w_col + w_row[:, None]) * (1 - kappa))**2
fac = 1. / ((1 - agree_wexp)**2 * nobs)
var_kappa = term_a.sum() - (kappa - agree_wexp * (1 - kappa))**2
var_kappa *= fac
freqse = freq_col * freq_row[:, None]
var_kappa0 = (freqse * (w - (w_col + w_row[:, None]))**2).sum()
var_kappa0 -= agree_wexp**2
var_kappa0 *= fac
kappa_max = (np.minimum(freq_row, freq_col).sum() - agree_exp) / \
(1 - agree_exp)
if return_results:
res = KappaResults( kind=kind,
kappa=kappa,
kappa_max=kappa_max,
weights=weights,
var_kappa=var_kappa,
var_kappa0=var_kappa0
)
return res
else:
return kappa
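# Illustrative example (editor's sketch): simple (unweighted) kappa for a
# 2x2 two-rater table. Observed agreement is 18/24 = 0.75 and chance
# agreement is 0.5, so kappa = (0.75 - 0.5) / (1 - 0.5) = 0.5.
#
#   table = [[10, 2], [4, 8]]
#   cohens_kappa(table, return_results=False)  # -> 0.5 (up to float rounding)
#   print(cohens_kappa(table))  # SAS-style summary via KappaResults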
_kappa_template = '''\
%(kind)s Kappa Coefficient
--------------------------------
Kappa %(kappa)6.4f
ASE %(std_kappa)6.4f
%(alpha_ci)s%% Lower Conf Limit %(kappa_low)6.4f
%(alpha_ci)s%% Upper Conf Limit %(kappa_upp)6.4f
Test of H0: %(kind)s Kappa = 0
ASE under H0 %(std_kappa0)6.4f
Z %(z_value)6.4f
One-sided Pr > Z %(pvalue_one_sided)6.4f
Two-sided Pr > |Z| %(pvalue_two_sided)6.4f
'''
'''
Weighted Kappa Coefficient
--------------------------------
Weighted Kappa 0.4701
ASE 0.1457
95% Lower Conf Limit 0.1845
95% Upper Conf Limit 0.7558
Test of H0: Weighted Kappa = 0
ASE under H0 0.1426
Z 3.2971
One-sided Pr > Z 0.0005
Two-sided Pr > |Z| 0.0010
'''
class KappaResults(ResultsBunch):
'''Results for Cohen's kappa
Attributes
----------
kappa : cohen's kappa
var_kappa : variance of kappa
std_kappa : standard deviation of kappa
alpha : one-sided probability for confidence interval
kappa_low : lower (1-alpha) confidence limit
kappa_upp : upper (1-alpha) confidence limit
var_kappa0 : variance of kappa under H0: kappa=0
std_kappa0 : standard deviation of kappa under H0: kappa=0
z_value : test statistic for H0: kappa=0, is standard normal distributed
pvalue_one_sided : one sided p-value for H0: kappa=0 and H1: kappa>0
pvalue_two_sided : two sided p-value for H0: kappa=0 and H1: kappa!=0
distribution_kappa : asymptotic normal distribution of kappa
distribution_zero_null : asymptotic normal distribution of kappa under
H0: kappa=0
The confidence interval for kappa and the statistics for the test of
H0: kappa=0 are based on the asymptotic normal distribution of kappa.
'''
template = _kappa_template
def _initialize(self):
        if 'alpha' not in self:
            self['alpha'] = 0.025
            self['alpha_ci'] = _int_ifclose(100 - self['alpha'] * 200)[1]
self['std_kappa'] = np.sqrt(self['var_kappa'])
self['std_kappa0'] = np.sqrt(self['var_kappa0'])
self['z_value'] = self['kappa'] / self['std_kappa0']
self['pvalue_one_sided'] = stats.norm.sf(self['z_value'])
self['pvalue_two_sided'] = stats.norm.sf(np.abs(self['z_value'])) * 2
delta = stats.norm.isf(self['alpha']) * self['std_kappa']
self['kappa_low'] = self['kappa'] - delta
self['kappa_upp'] = self['kappa'] + delta
self['distribution_kappa'] = stats.norm(loc=self['kappa'],
scale=self['std_kappa'])
self['distribution_zero_null'] = stats.norm(loc=0,
scale=self['std_kappa0'])
def __str__(self):
return self.template % self
| bsd-3-clause |
dfalt974/SickRage | lib/pgi/clib/gir/giregisteredtypeinfo.py | 20 | 1096 | # Copyright 2012 Christoph Reiter
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
from ..glib import gchar_p
from ..gobject import GType
from .gibaseinfo import GIBaseInfo
from .._utils import find_library, wrap_class
_gir = find_library("girepository-1.0")
class GIRegisteredTypeInfo(GIBaseInfo):
def _get_repr(self):
values = super(GIRegisteredTypeInfo, self)._get_repr()
values["type_name"] = repr(self.type_name)
values["type_init"] = repr(self.type_init)
values["g_type"] = repr(self.g_type)
return values
_methods = [
("get_type_name", gchar_p, [GIRegisteredTypeInfo]),
("get_type_init", gchar_p, [GIRegisteredTypeInfo]),
("get_g_type", GType, [GIRegisteredTypeInfo]),
]
wrap_class(_gir, GIRegisteredTypeInfo, GIRegisteredTypeInfo,
"g_registered_type_info_", _methods)
__all__ = ["GIRegisteredTypeInfo"]
| gpl-3.0 |
cisco-open-source/selenium | py/selenium/webdriver/common/desired_capabilities.py | 35 | 3196 | # Copyright 2008-2009 WebDriver committers
# Copyright 2008-2009 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
The Desired Capabilities implementation.
"""
class DesiredCapabilities(object):
"""
Set of default supported desired capabilities.
Use this as a starting point for creating a desired capabilities object for
requesting remote webdrivers for connecting to selenium server or selenium grid.
Usage Example:
        from selenium import webdriver
        from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
        selenium_grid_url = "http://198.0.0.1:4444/wd/hub"
# Create a desired capabilities object as a starting point.
capabilities = DesiredCapabilities.FIREFOX.copy()
capabilities['platform'] = "WINDOWS"
capabilities['version'] = "10"
# Instantiate an instance of Remote WebDriver with the desired capabilities.
driver = webdriver.Remote(desired_capabilities=capabilities,
command_executor=selenium_grid_url)
Note: Always use '.copy()' on the DesiredCapabilities object to avoid the side
effects of altering the Global class instance.
"""
FIREFOX = {
"browserName": "firefox",
"version": "",
"platform": "ANY",
"javascriptEnabled": True,
}
INTERNETEXPLORER = {
"browserName": "internet explorer",
"version": "",
"platform": "WINDOWS",
"javascriptEnabled": True,
}
CHROME = {
"browserName": "chrome",
"version": "",
"platform": "ANY",
"javascriptEnabled": True,
}
OPERA = {
"browserName": "opera",
"version": "",
"platform": "ANY",
"javascriptEnabled": True,
}
SAFARI = {
"browserName": "safari",
"version": "",
"platform": "ANY",
"javascriptEnabled": True,
}
HTMLUNIT = {
"browserName": "htmlunit",
"version": "",
"platform": "ANY",
}
HTMLUNITWITHJS = {
"browserName": "htmlunit",
"version": "firefox",
"platform": "ANY",
"javascriptEnabled": True,
}
IPHONE = {
"browserName": "iPhone",
"version": "",
"platform": "MAC",
"javascriptEnabled": True,
}
IPAD = {
"browserName": "iPad",
"version": "",
"platform": "MAC",
"javascriptEnabled": True,
}
ANDROID = {
"browserName": "android",
"version": "",
"platform": "ANDROID",
"javascriptEnabled": True,
}
PHANTOMJS = {
"browserName":"phantomjs",
"version": "",
"platform": "ANY",
"javascriptEnabled": True,
}
| apache-2.0 |
dobermanapp/django-doberman | setup.py | 2 | 1557 | #!/usr/bin/env python
from setuptools import setup, find_packages
requirements = ['Django>=1.7.0', ]
try:
from unittest import mock
except ImportError:
requirements.append('mock')
setup(
name="django-doberman",
version="0.5.9",
author="Nicolas Mendoza",
author_email="[email protected]",
maintainer='Nicolas Mendoza',
maintainer_email='[email protected]',
description="Django app that locks out users after too many failed login attempts.",
long_description=open('README.rst').read(),
license="MIT License",
keywords="django locks users account login attempts banned ip doberman authentication",
url="https://github.com/nicchub/django-doberman",
    packages=[
        # setuptools expects dotted package names rather than directory paths
        'doberman', 'doberman.contrib', 'doberman.migrations', 'doberman.templates', 'doberman.contrib.captcha',
    ],
include_package_data=True,
tests_require=['python-coveralls'],
install_requires=requirements,
classifiers=[
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 2.6",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.2",
"Framework :: Django",
"Framework :: Django :: 1.7",
"Framework :: Django :: 1.8",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries"
]
) | mit |
neumerance/deploy | .venv/lib/python2.7/site-packages/docutils/parsers/rst/languages/gl.py | 130 | 3711 | # -*- coding: utf-8 -*-
# Author: David Goodger
# Contact: [email protected]
# Revision: $Revision: 4229 $
# Date: $Date: 2005-12-23 00:46:16 +0100 (Fri, 23 Dec 2005) $
# Copyright: This module has been placed in the public domain.
# New language mappings are welcome. Before doing a new translation, please
# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
# translated for each language: one in docutils/languages, the other in
# docutils/parsers/rst/languages.
"""
Galician-language mappings for language-dependent features of
reStructuredText.
"""
__docformat__ = 'reStructuredText'
directives = {
# language-dependent: fixed
u'atenci\u00f3n': 'attention',
u'advertencia': 'caution',
u'code (translation required)': 'code',
u'perigo': 'danger',
u'erro': 'error',
u'pista': 'hint',
u'importante': 'important',
u'nota': 'note',
u'consello': 'tip',
u'aviso': 'warning',
u'admonici\u00f3n': 'admonition',
u'barra lateral': 'sidebar',
u't\u00f3pico': 'topic',
u'bloque-li\u00f1a': 'line-block',
u'literal-analizado': 'parsed-literal',
u'r\u00fabrica': 'rubric',
u'ep\u00edgrafe': 'epigraph',
u'realzados': 'highlights',
u'coller-citaci\u00f3n': 'pull-quote',
u'compor': 'compound',
u'recipiente': 'container',
#'questions': 'questions',
u't\u00e1boa': 'table',
u't\u00e1boa-csv': 'csv-table',
u't\u00e1boa-listaxe': 'list-table',
#'qa': 'questions',
#'faq': 'questions',
u'meta': 'meta',
'math (translation required)': 'math',
#'imagemap': 'imagemap',
u'imaxe': 'image',
u'figura': 'figure',
u'inclu\u00edr': 'include',
u'cru': 'raw',
u'substitu\u00edr': 'replace',
u'unicode': 'unicode',
u'data': 'date',
u'clase': 'class',
u'regra': 'role',
u'regra-predeterminada': 'default-role',
u't\u00edtulo': 'title',
u'contido': 'contents',
u'seccnum': 'sectnum',
u'secci\u00f3n-numerar': 'sectnum',
u'cabeceira': 'header',
u'p\u00e9 de p\u00e1xina': 'footer',
#'footnotes': 'footnotes',
#'citations': 'citations',
u'notas-destino': 'target-notes',
u'texto restruturado-proba-directiva': 'restructuredtext-test-directive'}
"""Galician name to registered (in directives/__init__.py) directive name
mapping."""
roles = {
# language-dependent: fixed
u'abreviatura': 'abbreviation',
u'ab': 'abbreviation',
u'acr\u00f3nimo': 'acronym',
u'ac': 'acronym',
u'code (translation required)': 'code',
u'\u00edndice': 'index',
u'i': 'index',
u'sub\u00edndice': 'subscript',
u'sub': 'subscript',
u'super\u00edndice': 'superscript',
u'sup': 'superscript',
u'referencia t\u00edtulo': 'title-reference',
u't\u00edtulo': 'title-reference',
u't': 'title-reference',
u'referencia-pep': 'pep-reference',
u'pep': 'pep-reference',
u'referencia-rfc': 'rfc-reference',
u'rfc': 'rfc-reference',
u'\u00e9nfase': 'emphasis',
u'forte': 'strong',
u'literal': 'literal',
'math (translation required)': 'math',
u'referencia-nome': 'named-reference',
u'referencia-an\u00f3nimo': 'anonymous-reference',
u'referencia-nota ao p\u00e9': 'footnote-reference',
u'referencia-citaci\u00f3n': 'citation-reference',
u'referencia-substituci\u00f3n': 'substitution-reference',
u'destino': 'target',
u'referencia-uri': 'uri-reference',
u'uri': 'uri-reference',
u'url': 'uri-reference',
u'cru': 'raw',}
"""Mapping of Galician role names to canonical role names for interpreted text.
"""
| apache-2.0 |
yanheven/glance | glance/tests/unit/test_context.py | 18 | 6208 | # Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range
from glance import context
from glance.tests.unit import utils as unit_utils
from glance.tests import utils
def _fake_image(owner, is_public):
return {
'id': None,
'owner': owner,
'is_public': is_public,
}
def _fake_membership(can_share=False):
return {'can_share': can_share}
class TestContext(utils.BaseTestCase):
def setUp(self):
super(TestContext, self).setUp()
self.db_api = unit_utils.FakeDB()
def do_visible(self, exp_res, img_owner, img_public, **kwargs):
"""
Perform a context visibility test. Creates a (fake) image
with the specified owner and is_public attributes, then
creates a context with the given keyword arguments and expects
exp_res as the result of an is_image_visible() call on the
context.
"""
img = _fake_image(img_owner, img_public)
ctx = context.RequestContext(**kwargs)
self.assertEqual(exp_res, self.db_api.is_image_visible(ctx, img))
def test_empty_public(self):
"""
Tests that an empty context (with is_admin set to True) can
access an image with is_public set to True.
"""
self.do_visible(True, None, True, is_admin=True)
def test_empty_public_owned(self):
"""
Tests that an empty context (with is_admin set to True) can
access an owned image with is_public set to True.
"""
self.do_visible(True, 'pattieblack', True, is_admin=True)
def test_empty_private(self):
"""
Tests that an empty context (with is_admin set to True) can
access an image with is_public set to False.
"""
self.do_visible(True, None, False, is_admin=True)
def test_empty_private_owned(self):
"""
Tests that an empty context (with is_admin set to True) can
access an owned image with is_public set to False.
"""
self.do_visible(True, 'pattieblack', False, is_admin=True)
def test_anon_public(self):
"""
Tests that an anonymous context (with is_admin set to False)
can access an image with is_public set to True.
"""
self.do_visible(True, None, True)
def test_anon_public_owned(self):
"""
Tests that an anonymous context (with is_admin set to False)
can access an owned image with is_public set to True.
"""
self.do_visible(True, 'pattieblack', True)
def test_anon_private(self):
"""
Tests that an anonymous context (with is_admin set to False)
can access an unowned image with is_public set to False.
"""
self.do_visible(True, None, False)
def test_anon_private_owned(self):
"""
Tests that an anonymous context (with is_admin set to False)
cannot access an owned image with is_public set to False.
"""
self.do_visible(False, 'pattieblack', False)
def test_auth_public(self):
"""
Tests that an authenticated context (with is_admin set to
False) can access an image with is_public set to True.
"""
self.do_visible(True, None, True, tenant='froggy')
def test_auth_public_unowned(self):
"""
Tests that an authenticated context (with is_admin set to
False) can access an image (which it does not own) with
is_public set to True.
"""
self.do_visible(True, 'pattieblack', True, tenant='froggy')
def test_auth_public_owned(self):
"""
Tests that an authenticated context (with is_admin set to
False) can access an image (which it does own) with is_public
set to True.
"""
self.do_visible(True, 'pattieblack', True, tenant='pattieblack')
def test_auth_private(self):
"""
Tests that an authenticated context (with is_admin set to
False) can access an image with is_public set to False.
"""
self.do_visible(True, None, False, tenant='froggy')
def test_auth_private_unowned(self):
"""
Tests that an authenticated context (with is_admin set to
False) cannot access an image (which it does not own) with
is_public set to False.
"""
self.do_visible(False, 'pattieblack', False, tenant='froggy')
def test_auth_private_owned(self):
"""
Tests that an authenticated context (with is_admin set to
False) can access an image (which it does own) with is_public
set to False.
"""
self.do_visible(True, 'pattieblack', False, tenant='pattieblack')
def test_request_id(self):
contexts = [context.RequestContext().request_id for _ in range(5)]
# Check for uniqueness -- set() will normalize its argument
self.assertEqual(5, len(set(contexts)))
def test_service_catalog(self):
ctx = context.RequestContext(service_catalog=['foo'])
self.assertEqual(['foo'], ctx.service_catalog)
def test_user_identity(self):
ctx = context.RequestContext(user="user",
tenant="tenant",
domain="domain",
user_domain="user-domain",
project_domain="project-domain")
self.assertEqual('user tenant domain user-domain project-domain',
ctx.to_dict()["user_identity"])
| apache-2.0 |
cainmatt/django | django/db/models/fields/related.py | 5 | 118182 | from __future__ import unicode_literals
import warnings
from functools import partial
from operator import attrgetter
from django import forms
from django.apps import apps
from django.core import checks, exceptions
from django.core.exceptions import FieldDoesNotExist
from django.db import connection, connections, router, transaction
from django.db.backends import utils
from django.db.models import Q, signals
from django.db.models.deletion import CASCADE, SET_DEFAULT, SET_NULL
from django.db.models.fields import (
BLANK_CHOICE_DASH, AutoField, Field, IntegerField, PositiveIntegerField,
PositiveSmallIntegerField,
)
from django.db.models.fields.related_lookups import (
RelatedExact, RelatedGreaterThan, RelatedGreaterThanOrEqual, RelatedIn,
RelatedLessThan, RelatedLessThanOrEqual,
)
from django.db.models.query import QuerySet
from django.db.models.query_utils import PathInfo
from django.db.models.utils import make_model_tuple
from django.utils import six
from django.utils.deprecation import (
RemovedInDjango20Warning, RemovedInDjango110Warning,
)
from django.utils.encoding import force_text, smart_text
from django.utils.functional import cached_property, curry
from django.utils.translation import ugettext_lazy as _
from django.utils.version import get_docs_version
RECURSIVE_RELATIONSHIP_CONSTANT = 'self'
def resolve_relation(scope_model, relation):
"""
Transform relation into a model or fully-qualified model string of the form
"app_label.ModelName", relative to scope_model.
The relation argument can be:
* RECURSIVE_RELATIONSHIP_CONSTANT, i.e. the string "self", in which case
the model argument will be returned.
* A bare model name without an app_label, in which case scope_model's
app_label will be prepended.
* An "app_label.ModelName" string.
* A model class, which will be returned unchanged.
"""
# Check for recursive relations
if relation == RECURSIVE_RELATIONSHIP_CONSTANT:
relation = scope_model
# Look for an "app.Model" relation
if isinstance(relation, six.string_types):
if "." not in relation:
relation = "%s.%s" % (scope_model._meta.app_label, relation)
return relation
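# Illustrative sketch (editor's note; the Book/Author/Publisher models are
# hypothetical): for a model class Book in the app "library", the supported
# relation forms resolve as follows.
#
#   resolve_relation(Book, RECURSIVE_RELATIONSHIP_CONSTANT)  # -> Book
#   resolve_relation(Book, 'Author')           # -> 'library.Author'
#   resolve_relation(Book, 'shop.Publisher')   # -> 'shop.Publisher'
#   resolve_relation(Book, Publisher)          # -> Publisher, unchanged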
def lazy_related_operation(function, model, *related_models, **kwargs):
"""
Schedule `function` to be called once `model` and all `related_models`
have been imported and registered with the app registry. `function` will
be called with the newly-loaded model classes as its positional arguments,
plus any optional keyword arguments.
The `model` argument must be a model class. Each subsequent positional
argument is another model, or a reference to another model - see
`resolve_relation()` for the various forms these may take. Any relative
references will be resolved relative to `model`.
This is a convenience wrapper for `Apps.lazy_model_operation` - the app
registry model used is the one found in `model._meta.apps`.
"""
models = [model] + [resolve_relation(model, rel) for rel in related_models]
model_keys = (make_model_tuple(m) for m in models)
apps = model._meta.apps
return apps.lazy_model_operation(partial(function, **kwargs), *model_keys)
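# Illustrative sketch (editor's note; the model and function names are
# hypothetical): defer work until every referenced model class is loaded.
# The loaded classes are passed positionally, followed by any kwargs.
#
#   def connect_models(book_model, author_model, hint=None):
#       pass  # both classes are fully loaded and registered here
#
#   lazy_related_operation(connect_models, Book, 'library.Author',
#                          hint='example')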
def add_lazy_relation(cls, field, relation, operation):
warnings.warn(
"add_lazy_relation() has been superseded by lazy_related_operation() "
"and related methods on the Apps class.",
RemovedInDjango20Warning, stacklevel=2)
# Rearrange args for new Apps.lazy_model_operation
function = lambda local, related, field: operation(field, related, local)
lazy_related_operation(function, cls, relation, field=field)
class RelatedField(Field):
"""
Base class that all relational fields inherit from.
"""
# Field flags
one_to_many = False
one_to_one = False
many_to_many = False
many_to_one = False
@cached_property
def related_model(self):
# Can't cache this property until all the models are loaded.
apps.check_models_ready()
return self.remote_field.model
def check(self, **kwargs):
errors = super(RelatedField, self).check(**kwargs)
errors.extend(self._check_related_name_is_valid())
errors.extend(self._check_relation_model_exists())
errors.extend(self._check_referencing_to_swapped_model())
errors.extend(self._check_clashes())
return errors
def _check_related_name_is_valid(self):
import re
import keyword
related_name = self.remote_field.related_name
if not related_name:
return []
is_valid_id = True
if keyword.iskeyword(related_name):
is_valid_id = False
if six.PY3:
if not related_name.isidentifier():
is_valid_id = False
else:
if not re.match(r'^[a-zA-Z_][a-zA-Z0-9_]*\Z', related_name):
is_valid_id = False
if not (is_valid_id or related_name.endswith('+')):
return [
checks.Error(
"The name '%s' is invalid related_name for field %s.%s" %
(self.remote_field.related_name, self.model._meta.object_name,
self.name),
hint="Related name must be a valid Python identifier or end with a '+'",
obj=self,
id='fields.E306',
)
]
return []
def _check_relation_model_exists(self):
rel_is_missing = self.remote_field.model not in apps.get_models()
rel_is_string = isinstance(self.remote_field.model, six.string_types)
model_name = self.remote_field.model if rel_is_string else self.remote_field.model._meta.object_name
if rel_is_missing and (rel_is_string or not self.remote_field.model._meta.swapped):
return [
checks.Error(
("Field defines a relation with model '%s', which "
"is either not installed, or is abstract.") % model_name,
hint=None,
obj=self,
id='fields.E300',
)
]
return []
def _check_referencing_to_swapped_model(self):
if (self.remote_field.model not in apps.get_models() and
not isinstance(self.remote_field.model, six.string_types) and
self.remote_field.model._meta.swapped):
model = "%s.%s" % (
self.remote_field.model._meta.app_label,
self.remote_field.model._meta.object_name
)
return [
checks.Error(
("Field defines a relation with the model '%s', "
"which has been swapped out.") % model,
hint="Update the relation to point at 'settings.%s'." % self.remote_field.model._meta.swappable,
obj=self,
id='fields.E301',
)
]
return []
def _check_clashes(self):
"""
Check accessor and reverse query name clashes.
"""
from django.db.models.base import ModelBase
errors = []
opts = self.model._meta
# `f.remote_field.model` may be a string instead of a model. Skip if model name is
# not resolved.
if not isinstance(self.remote_field.model, ModelBase):
return []
# If the field doesn't install backward relation on the target model (so
# `is_hidden` returns True), then there are no clashes to check and we
# can skip these fields.
if self.remote_field.is_hidden():
return []
# Consider that we are checking field `Model.foreign` and the models
# are:
#
# class Target(models.Model):
# model = models.IntegerField()
# model_set = models.IntegerField()
#
# class Model(models.Model):
# foreign = models.ForeignKey(Target)
# m2m = models.ManyToManyField(Target)
rel_opts = self.remote_field.model._meta
# rel_opts.object_name == "Target"
rel_name = self.remote_field.get_accessor_name() # i. e. "model_set"
rel_query_name = self.related_query_name() # i. e. "model"
field_name = "%s.%s" % (opts.object_name,
self.name) # i. e. "Model.field"
# Check clashes between accessor or reverse query name of `field`
# and any other field name -- i.e. accessor for Model.foreign is
# model_set and it clashes with Target.model_set.
potential_clashes = rel_opts.fields + rel_opts.many_to_many
for clash_field in potential_clashes:
clash_name = "%s.%s" % (rel_opts.object_name,
clash_field.name) # i. e. "Target.model_set"
if clash_field.name == rel_name:
errors.append(
checks.Error(
"Reverse accessor for '%s' clashes with field name '%s'." % (field_name, clash_name),
hint=("Rename field '%s', or add/change a related_name "
"argument to the definition for field '%s'.") % (clash_name, field_name),
obj=self,
id='fields.E302',
)
)
if clash_field.name == rel_query_name:
errors.append(
checks.Error(
"Reverse query name for '%s' clashes with field name '%s'." % (field_name, clash_name),
hint=("Rename field '%s', or add/change a related_name "
"argument to the definition for field '%s'.") % (clash_name, field_name),
obj=self,
id='fields.E303',
)
)
# Check clashes between accessors/reverse query names of `field` and
# any other field accessor -- i. e. Model.foreign accessor clashes with
# Model.m2m accessor.
potential_clashes = (r for r in rel_opts.related_objects if r.field is not self)
for clash_field in potential_clashes:
clash_name = "%s.%s" % ( # i. e. "Model.m2m"
clash_field.related_model._meta.object_name,
clash_field.field.name)
if clash_field.get_accessor_name() == rel_name:
errors.append(
checks.Error(
"Reverse accessor for '%s' clashes with reverse accessor for '%s'." % (field_name, clash_name),
hint=("Add or change a related_name argument "
"to the definition for '%s' or '%s'.") % (field_name, clash_name),
obj=self,
id='fields.E304',
)
)
if clash_field.get_accessor_name() == rel_query_name:
errors.append(
checks.Error(
"Reverse query name for '%s' clashes with reverse query name for '%s'."
% (field_name, clash_name),
hint=("Add or change a related_name argument "
"to the definition for '%s' or '%s'.") % (field_name, clash_name),
obj=self,
id='fields.E305',
)
)
return errors
def db_type(self, connection):
# By default related field will not have a column as it relates to
# columns from another table.
return None
def contribute_to_class(self, cls, name, virtual_only=False):
super(RelatedField, self).contribute_to_class(cls, name, virtual_only=virtual_only)
self.opts = cls._meta
if not cls._meta.abstract:
if self.remote_field.related_name:
related_name = force_text(self.remote_field.related_name) % {
'class': cls.__name__.lower(),
'app_label': cls._meta.app_label.lower()
}
self.remote_field.related_name = related_name
def resolve_related_class(model, related, field):
field.remote_field.model = related
field.do_related_class(related, model)
lazy_related_operation(resolve_related_class, cls, self.remote_field.model, field=self)
def get_forward_related_filter(self, obj):
"""
Return the keyword arguments that when supplied to
self.model.object.filter(), would select all instances related through
this field to the remote obj. This is used to build the querysets
returned by related descriptors. obj is an instance of
self.related_field.model.
"""
return {
'%s__%s' % (self.name, rh_field.name): getattr(obj, rh_field.attname)
for _, rh_field in self.related_fields
}
def get_reverse_related_filter(self, obj):
"""
Complement to get_forward_related_filter(). Return the keyword
arguments that when passed to self.related_field.model.object.filter()
select all instances of self.related_field.model related through
this field to obj. obj is an instance of self.model.
"""
base_filter = {
rh_field.attname: getattr(obj, lh_field.attname)
for lh_field, rh_field in self.related_fields
}
base_filter.update(self.get_extra_descriptor_filter(obj) or {})
return base_filter
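    # Illustrative sketch (editor's note; the Choice/Poll models are
    # hypothetical): for ``poll = ForeignKey(Poll)`` on Choice, where Poll's
    # primary key is ``id``:
    #
    #   choice_field.get_forward_related_filter(poll)
    #   # -> {'poll__id': poll.id}, for filtering Choice objects
    #   choice_field.get_reverse_related_filter(choice)
    #   # -> {'id': choice.poll_id} plus any extra descriptor filter,
    #   #    for filtering Poll objects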
@property
def swappable_setting(self):
"""
Get the setting that this is powered from for swapping, or None
if it's not swapped in / marked with swappable=False.
"""
if self.swappable:
# Work out string form of "to"
if isinstance(self.remote_field.model, six.string_types):
to_string = self.remote_field.model
else:
to_string = self.remote_field.model._meta.label
return apps.get_swappable_settings_name(to_string)
return None
def set_attributes_from_rel(self):
self.name = (
self.name or
(self.remote_field.model._meta.model_name + '_' + self.remote_field.model._meta.pk.name)
)
if self.verbose_name is None:
self.verbose_name = self.remote_field.model._meta.verbose_name
self.remote_field.set_field_name()
@property
def related(self):
warnings.warn(
"Usage of field.related has been deprecated. Use field.remote_field instead.",
RemovedInDjango110Warning, 2)
return self.remote_field
def do_related_class(self, other, cls):
self.set_attributes_from_rel()
self.contribute_to_related_class(other, self.remote_field)
def get_limit_choices_to(self):
"""
Return ``limit_choices_to`` for this model field.
If it is a callable, it will be invoked and the result will be
returned.
"""
if callable(self.remote_field.limit_choices_to):
return self.remote_field.limit_choices_to()
return self.remote_field.limit_choices_to
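    # Illustrative sketch (editor's note; the field definition is
    # hypothetical): with
    #
    #   poll = ForeignKey(Poll, limit_choices_to={'is_open': True})
    #
    # this method returns {'is_open': True}; had limit_choices_to been the
    # callable ``lambda: {'is_open': True}``, it would be invoked and its
    # return value used instead.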
def formfield(self, **kwargs):
"""
Pass ``limit_choices_to`` to the field being constructed.
Only passes it if there is a type that supports related fields.
This is a similar strategy used to pass the ``queryset`` to the field
being constructed.
"""
defaults = {}
if hasattr(self.remote_field, 'get_related_field'):
# If this is a callable, do not invoke it here. Just pass
# it in the defaults for when the form class will later be
# instantiated.
limit_choices_to = self.remote_field.limit_choices_to
defaults.update({
'limit_choices_to': limit_choices_to,
})
defaults.update(kwargs)
return super(RelatedField, self).formfield(**defaults)
def related_query_name(self):
"""
Define the name that can be used to identify this related object in a
table-spanning query.
"""
return self.remote_field.related_query_name or self.remote_field.related_name or self.opts.model_name
@property
def target_field(self):
"""
When filtering against this relation, returns the field on the remote
model against which the filtering should happen.
"""
target_fields = self.get_path_info()[-1].target_fields
if len(target_fields) > 1:
raise exceptions.FieldError(
"The relation has multiple target fields, but only single target field was asked for")
return target_fields[0]
class SingleRelatedObjectDescriptor(object):
"""
Accessor to the related object on the reverse side of a one-to-one
relation.
In the example::
class Restaurant(Model):
place = OneToOneField(Place, related_name='restaurant')
``place.restaurant`` is a ``SingleRelatedObjectDescriptor`` instance.
"""
def __init__(self, related):
self.related = related
self.cache_name = related.get_cache_name()
@cached_property
def RelatedObjectDoesNotExist(self):
# The exception isn't created at initialization time for the sake of
# consistency with `ReverseSingleRelatedObjectDescriptor`.
return type(
str('RelatedObjectDoesNotExist'),
(self.related.related_model.DoesNotExist, AttributeError),
{}
)
def is_cached(self, instance):
return hasattr(instance, self.cache_name)
def get_queryset(self, **hints):
manager = self.related.related_model._default_manager
# If the related manager indicates that it should be used for
# related fields, respect that.
if not getattr(manager, 'use_for_related_fields', False):
manager = self.related.related_model._base_manager
return manager.db_manager(hints=hints).all()
def get_prefetch_queryset(self, instances, queryset=None):
if queryset is None:
queryset = self.get_queryset()
queryset._add_hints(instance=instances[0])
rel_obj_attr = attrgetter(self.related.field.attname)
instance_attr = lambda obj: obj._get_pk_val()
instances_dict = {instance_attr(inst): inst for inst in instances}
query = {'%s__in' % self.related.field.name: instances}
queryset = queryset.filter(**query)
# Since we're going to assign directly in the cache,
# we must manage the reverse relation cache manually.
rel_obj_cache_name = self.related.field.get_cache_name()
for rel_obj in queryset:
instance = instances_dict[rel_obj_attr(rel_obj)]
setattr(rel_obj, rel_obj_cache_name, instance)
return queryset, rel_obj_attr, instance_attr, True, self.cache_name
def __get__(self, instance, instance_type=None):
if instance is None:
return self
try:
rel_obj = getattr(instance, self.cache_name)
except AttributeError:
related_pk = instance._get_pk_val()
if related_pk is None:
rel_obj = None
else:
filter_args = self.related.field.get_forward_related_filter(instance)
try:
rel_obj = self.get_queryset(instance=instance).get(**filter_args)
except self.related.related_model.DoesNotExist:
rel_obj = None
else:
setattr(rel_obj, self.related.field.get_cache_name(), instance)
setattr(instance, self.cache_name, rel_obj)
if rel_obj is None:
raise self.RelatedObjectDoesNotExist(
"%s has no %s." % (
instance.__class__.__name__,
self.related.get_accessor_name()
)
)
else:
return rel_obj
def __set__(self, instance, value):
# The similarity of the code below to the code in
# ReverseSingleRelatedObjectDescriptor is annoying, but there's a bunch
# of small differences that would make a common base class convoluted.
# If null=True, we can assign null here, but otherwise the value needs
# to be an instance of the related class.
if value is None and self.related.field.null is False:
raise ValueError(
'Cannot assign None: "%s.%s" does not allow null values.' % (
instance._meta.object_name,
self.related.get_accessor_name(),
)
)
elif value is not None and not isinstance(value, self.related.related_model):
raise ValueError(
'Cannot assign "%r": "%s.%s" must be a "%s" instance.' % (
value,
instance._meta.object_name,
self.related.get_accessor_name(),
self.related.related_model._meta.object_name,
)
)
elif value is not None:
if instance._state.db is None:
instance._state.db = router.db_for_write(instance.__class__, instance=value)
elif value._state.db is None:
value._state.db = router.db_for_write(value.__class__, instance=instance)
elif value._state.db is not None and instance._state.db is not None:
if not router.allow_relation(value, instance):
raise ValueError('Cannot assign "%r": the current database router prevents this relation.' % value)
related_pk = tuple(getattr(instance, field.attname) for field in self.related.field.foreign_related_fields)
# Set the value of the related field to the value of the related object's related field
for index, field in enumerate(self.related.field.local_related_fields):
setattr(value, field.attname, related_pk[index])
# Since we already know what the related object is, seed the related
# object caches now, too. This avoids another db hit if you get the
# object you just set.
setattr(instance, self.cache_name, value)
setattr(value, self.related.field.get_cache_name(), instance)
class ReverseSingleRelatedObjectDescriptor(object):
"""
Accessor to the related object on the forward side of a many-to-one or
one-to-one relation.
In the example::
class Choice(Model):
poll = ForeignKey(Place, related_name='choices')
`choice.poll` is a ReverseSingleRelatedObjectDescriptor instance.
"""
def __init__(self, field_with_rel):
self.field = field_with_rel
self.cache_name = self.field.get_cache_name()
@cached_property
def RelatedObjectDoesNotExist(self):
# The exception can't be created at initialization time since the
# related model might not be resolved yet; `rel.model` might still be
# a string model reference.
return type(
str('RelatedObjectDoesNotExist'),
(self.field.remote_field.model.DoesNotExist, AttributeError),
{}
)
def is_cached(self, instance):
return hasattr(instance, self.cache_name)
def get_queryset(self, **hints):
manager = self.field.remote_field.model._default_manager
# If the related manager indicates that it should be used for
# related fields, respect that.
if not getattr(manager, 'use_for_related_fields', False):
manager = self.field.remote_field.model._base_manager
return manager.db_manager(hints=hints).all()
def get_prefetch_queryset(self, instances, queryset=None):
if queryset is None:
queryset = self.get_queryset()
queryset._add_hints(instance=instances[0])
rel_obj_attr = self.field.get_foreign_related_value
instance_attr = self.field.get_local_related_value
instances_dict = {instance_attr(inst): inst for inst in instances}
related_field = self.field.foreign_related_fields[0]
# FIXME: This will need to be revisited when we introduce support for
# composite fields. In the meantime we take this practical approach to
# solve a regression on 1.6 when the reverse manager in hidden
# (related_name ends with a '+'). Refs #21410.
# The check for len(...) == 1 is a special case that allows the query
# to be join-less and smaller. Refs #21760.
if self.field.remote_field.is_hidden() or len(self.field.foreign_related_fields) == 1:
query = {'%s__in' % related_field.name: set(instance_attr(inst)[0] for inst in instances)}
else:
query = {'%s__in' % self.field.related_query_name(): instances}
queryset = queryset.filter(**query)
# Since we're going to assign directly in the cache,
# we must manage the reverse relation cache manually.
if not self.field.remote_field.multiple:
rel_obj_cache_name = self.field.remote_field.get_cache_name()
for rel_obj in queryset:
instance = instances_dict[rel_obj_attr(rel_obj)]
setattr(rel_obj, rel_obj_cache_name, instance)
return queryset, rel_obj_attr, instance_attr, True, self.cache_name
def __get__(self, instance, instance_type=None):
if instance is None:
return self
try:
rel_obj = getattr(instance, self.cache_name)
except AttributeError:
val = self.field.get_local_related_value(instance)
if None in val:
rel_obj = None
else:
qs = self.get_queryset(instance=instance)
qs = qs.filter(**self.field.get_reverse_related_filter(instance))
# Assuming the database enforces foreign keys, this won't fail.
rel_obj = qs.get()
if not self.field.remote_field.multiple:
setattr(rel_obj, self.field.remote_field.get_cache_name(), instance)
setattr(instance, self.cache_name, rel_obj)
if rel_obj is None and not self.field.null:
raise self.RelatedObjectDoesNotExist(
"%s has no %s." % (self.field.model.__name__, self.field.name)
)
else:
return rel_obj
def __set__(self, instance, value):
# If null=True, we can assign null here, but otherwise the value needs
# to be an instance of the related class.
if value is None and self.field.null is False:
raise ValueError(
'Cannot assign None: "%s.%s" does not allow null values.' %
(instance._meta.object_name, self.field.name)
)
elif value is not None and not isinstance(value, self.field.remote_field.model):
raise ValueError(
'Cannot assign "%r": "%s.%s" must be a "%s" instance.' % (
value,
instance._meta.object_name,
self.field.name,
self.field.remote_field.model._meta.object_name,
)
)
elif value is not None:
if instance._state.db is None:
instance._state.db = router.db_for_write(instance.__class__, instance=value)
elif value._state.db is None:
value._state.db = router.db_for_write(value.__class__, instance=instance)
elif value._state.db is not None and instance._state.db is not None:
if not router.allow_relation(value, instance):
raise ValueError('Cannot assign "%r": the current database router prevents this relation.' % value)
# If we're setting the value of a OneToOneField to None, we need to clear
# out the cache on any old related object. Otherwise, deleting the
# previously-related object will also cause this object to be deleted,
# which is wrong.
if value is None:
# Look up the previously-related object, which may still be available
# since we've not yet cleared out the related field.
# Use the cache directly, instead of the accessor; if we haven't
# populated the cache, then we don't care - we're only accessing
# the object to invalidate the accessor cache, so there's no
# need to populate the cache just to expire it again.
related = getattr(instance, self.cache_name, None)
# If we've got an old related object, we need to clear out its
# cache. This cache also might not exist if the related object
# hasn't been accessed yet.
if related is not None:
setattr(related, self.field.remote_field.get_cache_name(), None)
for lh_field, rh_field in self.field.related_fields:
setattr(instance, lh_field.attname, None)
# Set the values of the related field.
else:
for lh_field, rh_field in self.field.related_fields:
setattr(instance, lh_field.attname, getattr(value, rh_field.attname))
# Since we already know what the related object is, seed the related
# object caches now, too. This avoids another db hit if you get the
# object you just set.
setattr(instance, self.cache_name, value)
if value is not None and not self.field.remote_field.multiple:
setattr(value, self.field.remote_field.get_cache_name(), instance)
def create_foreign_related_manager(superclass, rel):
"""
Factory function to create a manager that subclasses another manager
(generally the default manager of a given model) and adds behaviors
specific to many-to-one relations.
"""
class RelatedManager(superclass):
def __init__(self, instance):
super(RelatedManager, self).__init__()
self.instance = instance
self.model = rel.related_model
self.field = rel.field
self.core_filters = {self.field.name: instance}
def __call__(self, **kwargs):
# We use **kwargs rather than a kwarg argument to enforce the
# `manager='manager_name'` syntax.
manager = getattr(self.model, kwargs.pop('manager'))
manager_class = create_foreign_related_manager(manager.__class__, rel)
return manager_class(self.instance)
do_not_call_in_templates = True
def get_queryset(self):
try:
return self.instance._prefetched_objects_cache[self.field.related_query_name()]
except (AttributeError, KeyError):
db = self._db or router.db_for_read(self.model, instance=self.instance)
empty_strings_as_null = connections[db].features.interprets_empty_strings_as_nulls
qs = super(RelatedManager, self).get_queryset()
qs._add_hints(instance=self.instance)
if self._db:
qs = qs.using(self._db)
qs = qs.filter(**self.core_filters)
for field in self.field.foreign_related_fields:
val = getattr(self.instance, field.attname)
if val is None or (val == '' and empty_strings_as_null):
return qs.none()
qs._known_related_objects = {self.field: {self.instance.pk: self.instance}}
return qs
def get_prefetch_queryset(self, instances, queryset=None):
if queryset is None:
queryset = super(RelatedManager, self).get_queryset()
queryset._add_hints(instance=instances[0])
queryset = queryset.using(queryset._db or self._db)
rel_obj_attr = self.field.get_local_related_value
instance_attr = self.field.get_foreign_related_value
instances_dict = {instance_attr(inst): inst for inst in instances}
query = {'%s__in' % self.field.name: instances}
queryset = queryset.filter(**query)
# Since we just bypassed this class' get_queryset(), we must manage
# the reverse relation manually.
for rel_obj in queryset:
instance = instances_dict[rel_obj_attr(rel_obj)]
setattr(rel_obj, self.field.name, instance)
cache_name = self.field.related_query_name()
return queryset, rel_obj_attr, instance_attr, False, cache_name
def add(self, *objs, **kwargs):
bulk = kwargs.pop('bulk', True)
objs = list(objs)
db = router.db_for_write(self.model, instance=self.instance)
def check_and_update_obj(obj):
if not isinstance(obj, self.model):
raise TypeError("'%s' instance expected, got %r" % (
self.model._meta.object_name, obj,
))
setattr(obj, self.field.name, self.instance)
if bulk:
pks = []
for obj in objs:
check_and_update_obj(obj)
if obj._state.adding or obj._state.db != db:
raise ValueError(
"%r instance isn't saved. Use bulk=False or save "
"the object first." % obj
)
pks.append(obj.pk)
self.model._base_manager.using(db).filter(pk__in=pks).update(**{
self.field.name: self.instance,
})
else:
with transaction.atomic(using=db, savepoint=False):
for obj in objs:
check_and_update_obj(obj)
obj.save()
add.alters_data = True
def create(self, **kwargs):
kwargs[self.field.name] = self.instance
db = router.db_for_write(self.model, instance=self.instance)
return super(RelatedManager, self.db_manager(db)).create(**kwargs)
create.alters_data = True
def get_or_create(self, **kwargs):
kwargs[self.field.name] = self.instance
db = router.db_for_write(self.model, instance=self.instance)
return super(RelatedManager, self.db_manager(db)).get_or_create(**kwargs)
get_or_create.alters_data = True
def update_or_create(self, **kwargs):
kwargs[self.field.name] = self.instance
db = router.db_for_write(self.model, instance=self.instance)
return super(RelatedManager, self.db_manager(db)).update_or_create(**kwargs)
update_or_create.alters_data = True
# remove() and clear() are only provided if the ForeignKey can have a value of null.
if rel.field.null:
def remove(self, *objs, **kwargs):
if not objs:
return
bulk = kwargs.pop('bulk', True)
val = self.field.get_foreign_related_value(self.instance)
old_ids = set()
for obj in objs:
# Is obj actually part of this descriptor set?
if self.field.get_local_related_value(obj) == val:
old_ids.add(obj.pk)
else:
raise self.field.remote_field.model.DoesNotExist(
"%r is not related to %r." % (obj, self.instance)
)
self._clear(self.filter(pk__in=old_ids), bulk)
remove.alters_data = True
def clear(self, **kwargs):
bulk = kwargs.pop('bulk', True)
self._clear(self, bulk)
clear.alters_data = True
def _clear(self, queryset, bulk):
db = router.db_for_write(self.model, instance=self.instance)
queryset = queryset.using(db)
if bulk:
# `QuerySet.update()` is intrinsically atomic.
queryset.update(**{self.field.name: None})
else:
with transaction.atomic(using=db, savepoint=False):
for obj in queryset:
setattr(obj, self.field.name, None)
obj.save(update_fields=[self.field.name])
_clear.alters_data = True
def set(self, objs, **kwargs):
# Force evaluation of `objs` in case it's a queryset whose value
# could be affected by `manager.clear()`. Refs #19816.
objs = tuple(objs)
bulk = kwargs.pop('bulk', True)
clear = kwargs.pop('clear', False)
if self.field.null:
db = router.db_for_write(self.model, instance=self.instance)
with transaction.atomic(using=db, savepoint=False):
if clear:
self.clear()
self.add(*objs, bulk=bulk)
else:
old_objs = set(self.using(db).all())
new_objs = []
for obj in objs:
if obj in old_objs:
old_objs.remove(obj)
else:
new_objs.append(obj)
self.remove(*old_objs, bulk=bulk)
self.add(*new_objs, bulk=bulk)
else:
self.add(*objs, bulk=bulk)
set.alters_data = True
return RelatedManager
class ForeignRelatedObjectsDescriptor(object):
"""
Accessor to the related objects manager on the reverse side of a
many-to-one relation.
In the example::
class Choice(Model):
poll = ForeignKey(Place, related_name='choices')
``poll.choices`` is a ``ForeignRelatedObjectsDescriptor`` instance.
"""
def __init__(self, rel):
self.rel = rel
self.field = rel.field
@cached_property
def related_manager_cls(self):
return create_foreign_related_manager(
self.rel.related_model._default_manager.__class__,
self.rel,
)
def __get__(self, instance, instance_type=None):
if instance is None:
return self
return self.related_manager_cls(instance)
def __set__(self, instance, value):
manager = self.__get__(instance)
manager.set(value)
def create_many_related_manager(superclass, rel, reverse):
"""
Factory function to create a manager that subclasses another manager
(generally the default manager of a given model) and adds behaviors
specific to many-to-many relations.
"""
class ManyRelatedManager(superclass):
def __init__(self, instance=None):
super(ManyRelatedManager, self).__init__()
self.instance = instance
if not reverse:
self.model = rel.model
self.query_field_name = rel.field.related_query_name()
self.prefetch_cache_name = rel.field.name
self.source_field_name = rel.field.m2m_field_name()
self.target_field_name = rel.field.m2m_reverse_field_name()
self.symmetrical = rel.symmetrical
else:
self.model = rel.related_model
self.query_field_name = rel.field.name
self.prefetch_cache_name = rel.field.related_query_name()
self.source_field_name = rel.field.m2m_reverse_field_name()
self.target_field_name = rel.field.m2m_field_name()
self.symmetrical = False
self.through = rel.through
self.reverse = reverse
self.source_field = self.through._meta.get_field(self.source_field_name)
self.target_field = self.through._meta.get_field(self.target_field_name)
self.core_filters = {}
for lh_field, rh_field in self.source_field.related_fields:
core_filter_key = '%s__%s' % (self.query_field_name, rh_field.name)
self.core_filters[core_filter_key] = getattr(instance, rh_field.attname)
self.related_val = self.source_field.get_foreign_related_value(instance)
if None in self.related_val:
raise ValueError('"%r" needs to have a value for field "%s" before '
'this many-to-many relationship can be used.' %
(instance, self.source_field_name))
            # Even if this relation is not to pk, we still require a pk value.
            # The hope is that the instance has already been saved to the DB,
# although having a pk value isn't a guarantee of that.
if instance.pk is None:
raise ValueError("%r instance needs to have a primary key value before "
"a many-to-many relationship can be used." %
instance.__class__.__name__)
def __call__(self, **kwargs):
# We use **kwargs rather than a kwarg argument to enforce the
# `manager='manager_name'` syntax.
manager = getattr(self.model, kwargs.pop('manager'))
manager_class = create_many_related_manager(manager.__class__, rel, reverse)
return manager_class(instance=self.instance)
do_not_call_in_templates = True
def _build_remove_filters(self, removed_vals):
filters = Q(**{self.source_field_name: self.related_val})
# No need to add a subquery condition if removed_vals is a QuerySet without
# filters.
removed_vals_filters = (not isinstance(removed_vals, QuerySet) or
removed_vals._has_filters())
if removed_vals_filters:
filters &= Q(**{'%s__in' % self.target_field_name: removed_vals})
if self.symmetrical:
symmetrical_filters = Q(**{self.target_field_name: self.related_val})
if removed_vals_filters:
symmetrical_filters &= Q(
**{'%s__in' % self.source_field_name: removed_vals})
filters |= symmetrical_filters
return filters
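        # Illustrative shape of the filters built above for a symmetrical
        # self-referential m2m (hypothetical ``Person.friends`` stored in a
        # from_person/to_person join table):
        #
        #     Q(from_person=self.related_val, to_person__in=removed_vals)
        #     | Q(to_person=self.related_val, from_person__in=removed_vals)
        #
        # so a single filtered delete removes both directions of each pair.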
def get_queryset(self):
try:
return self.instance._prefetched_objects_cache[self.prefetch_cache_name]
except (AttributeError, KeyError):
qs = super(ManyRelatedManager, self).get_queryset()
qs._add_hints(instance=self.instance)
if self._db:
qs = qs.using(self._db)
return qs._next_is_sticky().filter(**self.core_filters)
def get_prefetch_queryset(self, instances, queryset=None):
if queryset is None:
queryset = super(ManyRelatedManager, self).get_queryset()
queryset._add_hints(instance=instances[0])
queryset = queryset.using(queryset._db or self._db)
query = {'%s__in' % self.query_field_name: instances}
queryset = queryset._next_is_sticky().filter(**query)
# M2M: need to annotate the query in order to get the primary model
# that the secondary model was actually related to. We know that
# there will already be a join on the join table, so we can just add
# the select.
# For non-autocreated 'through' models, can't assume we are
# dealing with PK values.
fk = self.through._meta.get_field(self.source_field_name)
join_table = self.through._meta.db_table
connection = connections[queryset.db]
qn = connection.ops.quote_name
queryset = queryset.extra(select={
'_prefetch_related_val_%s' % f.attname:
'%s.%s' % (qn(join_table), qn(f.column)) for f in fk.local_related_fields})
return (
queryset,
lambda result: tuple(
getattr(result, '_prefetch_related_val_%s' % f.attname)
for f in fk.local_related_fields
),
lambda inst: tuple(
f.get_db_prep_value(getattr(inst, f.attname), connection)
for f in fk.foreign_related_fields
),
False,
self.prefetch_cache_name,
)
def add(self, *objs):
if not rel.through._meta.auto_created:
opts = self.through._meta
raise AttributeError(
"Cannot use add() on a ManyToManyField which specifies an "
"intermediary model. Use %s.%s's Manager instead." %
(opts.app_label, opts.object_name)
)
db = router.db_for_write(self.through, instance=self.instance)
with transaction.atomic(using=db, savepoint=False):
self._add_items(self.source_field_name, self.target_field_name, *objs)
# If this is a symmetrical m2m relation to self, add the mirror entry in the m2m table
if self.symmetrical:
self._add_items(self.target_field_name, self.source_field_name, *objs)
add.alters_data = True
def remove(self, *objs):
if not rel.through._meta.auto_created:
opts = self.through._meta
raise AttributeError(
"Cannot use remove() on a ManyToManyField which specifies "
"an intermediary model. Use %s.%s's Manager instead." %
(opts.app_label, opts.object_name)
)
self._remove_items(self.source_field_name, self.target_field_name, *objs)
remove.alters_data = True
def clear(self):
db = router.db_for_write(self.through, instance=self.instance)
with transaction.atomic(using=db, savepoint=False):
signals.m2m_changed.send(sender=self.through, action="pre_clear",
instance=self.instance, reverse=self.reverse,
model=self.model, pk_set=None, using=db)
filters = self._build_remove_filters(super(ManyRelatedManager, self).get_queryset().using(db))
self.through._default_manager.using(db).filter(filters).delete()
signals.m2m_changed.send(sender=self.through, action="post_clear",
instance=self.instance, reverse=self.reverse,
model=self.model, pk_set=None, using=db)
clear.alters_data = True
def set(self, objs, **kwargs):
if not rel.through._meta.auto_created:
opts = self.through._meta
raise AttributeError(
"Cannot set values on a ManyToManyField which specifies an "
"intermediary model. Use %s.%s's Manager instead." %
(opts.app_label, opts.object_name)
)
# Force evaluation of `objs` in case it's a queryset whose value
# could be affected by `manager.clear()`. Refs #19816.
objs = tuple(objs)
clear = kwargs.pop('clear', False)
db = router.db_for_write(self.through, instance=self.instance)
with transaction.atomic(using=db, savepoint=False):
if clear:
self.clear()
self.add(*objs)
else:
old_ids = set(self.using(db).values_list(self.target_field.target_field.attname, flat=True))
new_objs = []
for obj in objs:
fk_val = (self.target_field.get_foreign_related_value(obj)[0]
if isinstance(obj, self.model) else obj)
if fk_val in old_ids:
old_ids.remove(fk_val)
else:
new_objs.append(obj)
self.remove(*old_ids)
self.add(*new_objs)
set.alters_data = True
def create(self, **kwargs):
# This check needs to be done here, since we can't later remove this
# from the method lookup table, as we do with add and remove.
if not self.through._meta.auto_created:
opts = self.through._meta
raise AttributeError(
"Cannot use create() on a ManyToManyField which specifies "
"an intermediary model. Use %s.%s's Manager instead." %
(opts.app_label, opts.object_name)
)
db = router.db_for_write(self.instance.__class__, instance=self.instance)
new_obj = super(ManyRelatedManager, self.db_manager(db)).create(**kwargs)
self.add(new_obj)
return new_obj
create.alters_data = True
def get_or_create(self, **kwargs):
db = router.db_for_write(self.instance.__class__, instance=self.instance)
obj, created = super(ManyRelatedManager, self.db_manager(db)).get_or_create(**kwargs)
# We only need to add() if created because if we got an object back
# from get() then the relationship already exists.
if created:
self.add(obj)
return obj, created
get_or_create.alters_data = True
def update_or_create(self, **kwargs):
db = router.db_for_write(self.instance.__class__, instance=self.instance)
obj, created = super(ManyRelatedManager, self.db_manager(db)).update_or_create(**kwargs)
# We only need to add() if created because if we got an object back
# from get() then the relationship already exists.
if created:
self.add(obj)
return obj, created
update_or_create.alters_data = True
def _add_items(self, source_field_name, target_field_name, *objs):
# source_field_name: the PK fieldname in join table for the source object
# target_field_name: the PK fieldname in join table for the target object
# *objs - objects to add. Either object instances, or primary keys of object instances.
# If there aren't any objects, there is nothing to do.
from django.db.models import Model
if objs:
new_ids = set()
for obj in objs:
if isinstance(obj, self.model):
if not router.allow_relation(obj, self.instance):
raise ValueError(
'Cannot add "%r": instance is on database "%s", value is on database "%s"' %
(obj, self.instance._state.db, obj._state.db)
)
fk_val = self.through._meta.get_field(
target_field_name).get_foreign_related_value(obj)[0]
if fk_val is None:
raise ValueError(
'Cannot add "%r": the value for field "%s" is None' %
(obj, target_field_name)
)
new_ids.add(fk_val)
elif isinstance(obj, Model):
raise TypeError(
"'%s' instance expected, got %r" %
(self.model._meta.object_name, obj)
)
else:
new_ids.add(obj)
db = router.db_for_write(self.through, instance=self.instance)
vals = (self.through._default_manager.using(db)
.values_list(target_field_name, flat=True)
.filter(**{
source_field_name: self.related_val[0],
'%s__in' % target_field_name: new_ids,
}))
new_ids = new_ids - set(vals)
with transaction.atomic(using=db, savepoint=False):
if self.reverse or source_field_name == self.source_field_name:
# Don't send the signal when we are inserting the
# duplicate data row for symmetrical reverse entries.
signals.m2m_changed.send(sender=self.through, action='pre_add',
instance=self.instance, reverse=self.reverse,
model=self.model, pk_set=new_ids, using=db)
# Add the ones that aren't there already
self.through._default_manager.using(db).bulk_create([
self.through(**{
'%s_id' % source_field_name: self.related_val[0],
'%s_id' % target_field_name: obj_id,
})
for obj_id in new_ids
])
if self.reverse or source_field_name == self.source_field_name:
# Don't send the signal when we are inserting the
# duplicate data row for symmetrical reverse entries.
signals.m2m_changed.send(sender=self.through, action='post_add',
instance=self.instance, reverse=self.reverse,
model=self.model, pk_set=new_ids, using=db)
def _remove_items(self, source_field_name, target_field_name, *objs):
# source_field_name: the PK colname in join table for the source object
# target_field_name: the PK colname in join table for the target object
# *objs - objects to remove
if not objs:
return
# Check that all the objects are of the right type
old_ids = set()
for obj in objs:
if isinstance(obj, self.model):
fk_val = self.target_field.get_foreign_related_value(obj)[0]
old_ids.add(fk_val)
else:
old_ids.add(obj)
db = router.db_for_write(self.through, instance=self.instance)
with transaction.atomic(using=db, savepoint=False):
# Send a signal to the other end if need be.
signals.m2m_changed.send(sender=self.through, action="pre_remove",
instance=self.instance, reverse=self.reverse,
model=self.model, pk_set=old_ids, using=db)
target_model_qs = super(ManyRelatedManager, self).get_queryset()
if target_model_qs._has_filters():
old_vals = target_model_qs.using(db).filter(**{
'%s__in' % self.target_field.target_field.attname: old_ids})
else:
old_vals = old_ids
filters = self._build_remove_filters(old_vals)
self.through._default_manager.using(db).filter(filters).delete()
signals.m2m_changed.send(sender=self.through, action="post_remove",
instance=self.instance, reverse=self.reverse,
model=self.model, pk_set=old_ids, using=db)
return ManyRelatedManager
class ManyRelatedObjectsDescriptor(ForeignRelatedObjectsDescriptor):
"""
Accessor to the related objects manager on the forward and reverse sides of
a many-to-many relation.
In the example::
class Pizza(Model):
toppings = ManyToManyField(Topping, related_name='pizzas')
``pizza.toppings`` and ``topping.pizzas`` are ManyRelatedObjectsDescriptor
instances.
"""
def __init__(self, rel, reverse=False):
super(ManyRelatedObjectsDescriptor, self).__init__(rel)
self.reverse = reverse
@property
def through(self):
# through is provided so that you have easy access to the through
# model (Book.authors.through) for inlines, etc. This is done as
# a property to ensure that the fully resolved value is returned.
return self.rel.through
@cached_property
def related_manager_cls(self):
model = self.rel.related_model if self.reverse else self.rel.model
return create_many_related_manager(
model._default_manager.__class__,
self.rel,
reverse=self.reverse,
)
class ForeignObjectRel(object):
"""
Used by ForeignObject to store information about the relation.
``_meta.get_fields()`` returns this class to provide access to the field
flags for the reverse relation.
"""
# Field flags
auto_created = True
concrete = False
editable = False
is_relation = True
# Reverse relations are always nullable (Django can't enforce that a
# foreign key on the related model points to this model).
null = True
def __init__(self, field, to, related_name=None, related_query_name=None,
limit_choices_to=None, parent_link=False, on_delete=None):
self.field = field
self.model = to
self.related_name = related_name
self.related_query_name = related_query_name
self.limit_choices_to = {} if limit_choices_to is None else limit_choices_to
self.parent_link = parent_link
self.on_delete = on_delete
self.symmetrical = False
self.multiple = True
# Some of the following cached_properties can't be initialized in
# __init__ as the field doesn't have its model yet. Calling these methods
# before field.contribute_to_class() has been called will result in
# AttributeError
@property
def to(self):
warnings.warn(
"Usage of ForeignObjectRel.to attribute has been deprecated. "
"Use the model attribute instead.",
RemovedInDjango20Warning, 2)
return self.model
@cached_property
def hidden(self):
return self.is_hidden()
@cached_property
def name(self):
return self.field.related_query_name()
@property
def remote_field(self):
return self.field
@property
def target_field(self):
"""
When filtering against this relation, returns the field on the remote
model against which the filtering should happen.
"""
target_fields = self.get_path_info()[-1].target_fields
if len(target_fields) > 1:
raise exceptions.FieldError("Can't use target_field for multicolumn relations.")
return target_fields[0]
@cached_property
def related_model(self):
if not self.field.model:
raise AttributeError(
"This property can't be accessed before self.field.contribute_to_class has been called.")
return self.field.model
@cached_property
def many_to_many(self):
return self.field.many_to_many
@cached_property
def many_to_one(self):
return self.field.one_to_many
@cached_property
def one_to_many(self):
return self.field.many_to_one
@cached_property
def one_to_one(self):
return self.field.one_to_one
def get_prep_lookup(self, lookup_name, value):
return self.field.get_prep_lookup(lookup_name, value)
def get_lookup(self, lookup_name):
return self.field.get_lookup(lookup_name)
def get_internal_type(self):
return self.field.get_internal_type()
@property
def db_type(self):
return self.field.db_type
def __repr__(self):
return '<%s: %s.%s>' % (
type(self).__name__,
self.related_model._meta.app_label,
self.related_model._meta.model_name,
)
def get_choices(self, include_blank=True, blank_choice=BLANK_CHOICE_DASH,
limit_to_currently_related=False):
"""
        Return choices with a default blank choice included, for use as
SelectField choices for this field.
Analog of django.db.models.fields.Field.get_choices(), provided
initially for utilization by RelatedFieldListFilter.
"""
first_choice = blank_choice if include_blank else []
queryset = self.related_model._default_manager.all()
if limit_to_currently_related:
queryset = queryset.complex_filter(
{'%s__isnull' % self.related_model._meta.model_name: False}
)
lst = [(x._get_pk_val(), smart_text(x)) for x in queryset]
return first_choice + lst
def get_db_prep_lookup(self, lookup_type, value, connection, prepared=False):
# Defer to the actual field definition for db prep
return self.field.get_db_prep_lookup(lookup_type, value, connection=connection, prepared=prepared)
def is_hidden(self):
"Should the related object be hidden?"
return self.related_name is not None and self.related_name[-1] == '+'
def get_joining_columns(self):
return self.field.get_reverse_joining_columns()
def get_extra_restriction(self, where_class, alias, related_alias):
return self.field.get_extra_restriction(where_class, related_alias, alias)
def set_field_name(self):
"""
        Set the related field's name; this is not available until later stages
        of app loading, so set_field_name is called from
        set_attributes_from_rel().
"""
# By default foreign object doesn't relate to any remote field (for
# example custom multicolumn joins currently have no remote field).
self.field_name = None
def get_accessor_name(self, model=None):
# This method encapsulates the logic that decides what name to give an
# accessor descriptor that retrieves related many-to-one or
# many-to-many objects. It uses the lower-cased object_name + "_set",
# but this can be overridden with the "related_name" option.
# Due to backwards compatibility ModelForms need to be able to provide
# an alternate model. See BaseInlineFormSet.get_default_prefix().
opts = model._meta if model else self.related_model._meta
model = model or self.related_model
if self.multiple:
# If this is a symmetrical m2m relation on self, there is no reverse accessor.
if self.symmetrical and model == self.model:
return None
if self.related_name:
return self.related_name
if opts.default_related_name:
return opts.default_related_name % {
'model_name': opts.model_name.lower(),
'app_label': opts.app_label.lower(),
}
return opts.model_name + ('_set' if self.multiple else '')
def get_cache_name(self):
return "_%s_cache" % self.get_accessor_name()
def get_path_info(self):
return self.field.get_reverse_path_info()
class ManyToOneRel(ForeignObjectRel):
"""
Used by the ForeignKey field to store information about the relation.
``_meta.get_fields()`` returns this class to provide access to the field
flags for the reverse relation.
Note: Because we somewhat abuse the Rel objects by using them as reverse
fields we get the funny situation where
``ManyToOneRel.many_to_one == False`` and
``ManyToOneRel.one_to_many == True``. This is unfortunate but the actual
ManyToOneRel class is a private API and there is work underway to turn
reverse relations into actual fields.
"""
def __init__(self, field, to, field_name, related_name=None, related_query_name=None,
limit_choices_to=None, parent_link=False, on_delete=None):
super(ManyToOneRel, self).__init__(
field, to,
related_name=related_name,
related_query_name=related_query_name,
limit_choices_to=limit_choices_to,
parent_link=parent_link,
on_delete=on_delete,
)
self.field_name = field_name
def __getstate__(self):
state = self.__dict__.copy()
state.pop('related_model', None)
return state
def get_related_field(self):
"""
Return the Field in the 'to' object to which this relationship is tied.
"""
field = self.model._meta.get_field(self.field_name)
if not field.concrete:
raise FieldDoesNotExist("No related field named '%s'" %
self.field_name)
return field
def set_field_name(self):
self.field_name = self.field_name or self.model._meta.pk.name
class OneToOneRel(ManyToOneRel):
"""
Used by OneToOneField to store information about the relation.
``_meta.get_fields()`` returns this class to provide access to the field
flags for the reverse relation.
"""
def __init__(self, field, to, field_name, related_name=None, related_query_name=None,
limit_choices_to=None, parent_link=False, on_delete=None):
super(OneToOneRel, self).__init__(
field, to, field_name,
related_name=related_name,
related_query_name=related_query_name,
limit_choices_to=limit_choices_to,
parent_link=parent_link,
on_delete=on_delete,
)
self.multiple = False
class ManyToManyRel(ForeignObjectRel):
"""
Used by ManyToManyField to store information about the relation.
``_meta.get_fields()`` returns this class to provide access to the field
flags for the reverse relation.
"""
def __init__(self, field, to, related_name=None, related_query_name=None,
limit_choices_to=None, symmetrical=True, through=None, through_fields=None,
db_constraint=True):
super(ManyToManyRel, self).__init__(
field, to,
related_name=related_name,
related_query_name=related_query_name,
limit_choices_to=limit_choices_to,
)
if through and not db_constraint:
raise ValueError("Can't supply a through model and db_constraint=False")
self.through = through
if through_fields and not through:
raise ValueError("Cannot specify through_fields without a through model")
self.through_fields = through_fields
self.symmetrical = symmetrical
self.db_constraint = db_constraint
def get_related_field(self):
"""
Return the field in the 'to' object to which this relationship is tied.
Provided for symmetry with ManyToOneRel.
"""
opts = self.through._meta
if self.through_fields:
field = opts.get_field(self.through_fields[0])
else:
for field in opts.fields:
rel = getattr(field, 'remote_field', None)
if rel and rel.model == self.model:
break
return field.foreign_related_fields[0]
class ForeignObject(RelatedField):
"""
    Abstraction of the ForeignKey relation to support multi-column relations.
"""
# Field flags
many_to_many = False
many_to_one = True
one_to_many = False
one_to_one = False
requires_unique_target = True
related_accessor_class = ForeignRelatedObjectsDescriptor
rel_class = ForeignObjectRel
def __init__(self, to, on_delete, from_fields, to_fields, rel=None, related_name=None,
related_query_name=None, limit_choices_to=None, parent_link=False,
swappable=True, **kwargs):
if rel is None:
rel = self.rel_class(
self, to,
related_name=related_name,
related_query_name=related_query_name,
limit_choices_to=limit_choices_to,
parent_link=parent_link,
on_delete=on_delete,
)
super(ForeignObject, self).__init__(rel=rel, **kwargs)
self.from_fields = from_fields
self.to_fields = to_fields
self.swappable = swappable
def check(self, **kwargs):
errors = super(ForeignObject, self).check(**kwargs)
errors.extend(self._check_unique_target())
return errors
def _check_unique_target(self):
rel_is_string = isinstance(self.remote_field.model, six.string_types)
if rel_is_string or not self.requires_unique_target:
return []
try:
self.foreign_related_fields
except FieldDoesNotExist:
return []
has_unique_field = any(rel_field.unique
for rel_field in self.foreign_related_fields)
if not has_unique_field and len(self.foreign_related_fields) > 1:
field_combination = ', '.join("'%s'" % rel_field.name
for rel_field in self.foreign_related_fields)
model_name = self.remote_field.model.__name__
return [
checks.Error(
"None of the fields %s on model '%s' have a unique=True constraint."
% (field_combination, model_name),
hint=None,
obj=self,
id='fields.E310',
)
]
elif not has_unique_field:
field_name = self.foreign_related_fields[0].name
model_name = self.remote_field.model.__name__
return [
checks.Error(
("'%s.%s' must set unique=True "
"because it is referenced by a foreign key.") % (model_name, field_name),
hint=None,
obj=self,
id='fields.E311',
)
]
else:
return []
def deconstruct(self):
name, path, args, kwargs = super(ForeignObject, self).deconstruct()
kwargs['on_delete'] = self.remote_field.on_delete
kwargs['from_fields'] = self.from_fields
kwargs['to_fields'] = self.to_fields
if self.remote_field.related_name is not None:
kwargs['related_name'] = self.remote_field.related_name
if self.remote_field.related_query_name is not None:
kwargs['related_query_name'] = self.remote_field.related_query_name
if self.remote_field.parent_link:
kwargs['parent_link'] = self.remote_field.parent_link
# Work out string form of "to"
if isinstance(self.remote_field.model, six.string_types):
kwargs['to'] = self.remote_field.model
else:
kwargs['to'] = "%s.%s" % (
self.remote_field.model._meta.app_label,
self.remote_field.model._meta.object_name,
)
# If swappable is True, then see if we're actually pointing to the target
# of a swap.
swappable_setting = self.swappable_setting
if swappable_setting is not None:
# If it's already a settings reference, error
if hasattr(kwargs['to'], "setting_name"):
if kwargs['to'].setting_name != swappable_setting:
raise ValueError(
"Cannot deconstruct a ForeignKey pointing to a model "
"that is swapped in place of more than one model (%s and %s)"
% (kwargs['to'].setting_name, swappable_setting)
)
# Set it
from django.db.migrations.writer import SettingsReference
kwargs['to'] = SettingsReference(
kwargs['to'],
swappable_setting,
)
return name, path, args, kwargs
def resolve_related_fields(self):
if len(self.from_fields) < 1 or len(self.from_fields) != len(self.to_fields):
raise ValueError('Foreign Object from and to fields must be the same non-zero length')
if isinstance(self.remote_field.model, six.string_types):
raise ValueError('Related model %r cannot be resolved' % self.remote_field.model)
related_fields = []
for index in range(len(self.from_fields)):
from_field_name = self.from_fields[index]
to_field_name = self.to_fields[index]
from_field = (self if from_field_name == 'self'
else self.opts.get_field(from_field_name))
to_field = (self.remote_field.model._meta.pk if to_field_name is None
else self.remote_field.model._meta.get_field(to_field_name))
related_fields.append((from_field, to_field))
return related_fields
@property
def related_fields(self):
if not hasattr(self, '_related_fields'):
self._related_fields = self.resolve_related_fields()
return self._related_fields
@property
def reverse_related_fields(self):
return [(rhs_field, lhs_field) for lhs_field, rhs_field in self.related_fields]
@property
def local_related_fields(self):
return tuple(lhs_field for lhs_field, rhs_field in self.related_fields)
@property
def foreign_related_fields(self):
return tuple(rhs_field for lhs_field, rhs_field in self.related_fields)
def get_local_related_value(self, instance):
return self.get_instance_value_for_fields(instance, self.local_related_fields)
def get_foreign_related_value(self, instance):
return self.get_instance_value_for_fields(instance, self.foreign_related_fields)
@staticmethod
def get_instance_value_for_fields(instance, fields):
ret = []
opts = instance._meta
for field in fields:
# Gotcha: in some cases (like fixture loading) a model can have
# different values in parent_ptr_id and parent's id. So, use
# instance.pk (that is, parent_ptr_id) when asked for instance.id.
if field.primary_key:
possible_parent_link = opts.get_ancestor_link(field.model)
if (not possible_parent_link or
possible_parent_link.primary_key or
possible_parent_link.model._meta.abstract):
ret.append(instance.pk)
continue
ret.append(getattr(instance, field.attname))
return tuple(ret)
def get_attname_column(self):
attname, column = super(ForeignObject, self).get_attname_column()
return attname, None
def get_joining_columns(self, reverse_join=False):
source = self.reverse_related_fields if reverse_join else self.related_fields
return tuple((lhs_field.column, rhs_field.column) for lhs_field, rhs_field in source)
def get_reverse_joining_columns(self):
return self.get_joining_columns(reverse_join=True)
def get_extra_descriptor_filter(self, instance):
"""
        Return an extra filter condition for related object fetching when
        the user does 'instance.fieldname'; that is, the extra filter is used
        in the descriptor of the field.
The filter should be either a dict usable in .filter(**kwargs) call or
a Q-object. The condition will be ANDed together with the relation's
joining columns.
A parallel method is get_extra_restriction() which is used in
JOIN and subquery conditions.
"""
return {}
def get_extra_restriction(self, where_class, alias, related_alias):
"""
        Return a pair condition used for joining and subquery pushdown. The
        condition is something that responds to the
        as_sql(compiler, connection) method.
Note that currently referring both the 'alias' and 'related_alias'
will not work in some conditions, like subquery pushdown.
A parallel method is get_extra_descriptor_filter() which is used in
instance.fieldname related object fetching.
"""
return None
def get_path_info(self):
"""
Get path from this field to the related model.
"""
opts = self.remote_field.model._meta
from_opts = self.model._meta
return [PathInfo(from_opts, opts, self.foreign_related_fields, self, False, True)]
def get_reverse_path_info(self):
"""
Get path from the related model to this field's model.
"""
opts = self.model._meta
from_opts = self.remote_field.model._meta
pathinfos = [PathInfo(from_opts, opts, (opts.pk,), self.remote_field, not self.unique, False)]
return pathinfos
def get_lookup(self, lookup_name):
if lookup_name == 'in':
return RelatedIn
elif lookup_name == 'exact':
return RelatedExact
elif lookup_name == 'gt':
return RelatedGreaterThan
elif lookup_name == 'gte':
return RelatedGreaterThanOrEqual
elif lookup_name == 'lt':
return RelatedLessThan
elif lookup_name == 'lte':
return RelatedLessThanOrEqual
elif lookup_name != 'isnull':
raise TypeError('Related Field got invalid lookup: %s' % lookup_name)
return super(ForeignObject, self).get_lookup(lookup_name)
def get_transform(self, *args, **kwargs):
raise NotImplementedError('Relational fields do not support transforms.')
@property
def attnames(self):
return tuple(field.attname for field in self.local_related_fields)
def get_defaults(self):
return tuple(field.get_default() for field in self.local_related_fields)
def contribute_to_class(self, cls, name, virtual_only=False):
super(ForeignObject, self).contribute_to_class(cls, name, virtual_only=virtual_only)
setattr(cls, self.name, ReverseSingleRelatedObjectDescriptor(self))
def contribute_to_related_class(self, cls, related):
# Internal FK's - i.e., those with a related name ending with '+' -
# and swapped models don't get a related descriptor.
if not self.remote_field.is_hidden() and not related.related_model._meta.swapped:
setattr(cls, related.get_accessor_name(), self.related_accessor_class(related))
# While 'limit_choices_to' might be a callable, simply pass
# it along for later - this is too early because it's still
# model load time.
if self.remote_field.limit_choices_to:
cls._meta.related_fkey_lookups.append(self.remote_field.limit_choices_to)
class ForeignKey(ForeignObject):
"""
Provide a many-to-one relation by adding a column to the local model
to hold the remote value.
By default ForeignKey will target the pk of the remote model but this
behavior can be changed by using the ``to_field`` argument.
"""
# Field flags
many_to_many = False
many_to_one = True
one_to_many = False
one_to_one = False
rel_class = ManyToOneRel
empty_strings_allowed = False
default_error_messages = {
'invalid': _('%(model)s instance with %(field)s %(value)r does not exist.')
}
description = _("Foreign Key (type determined by related field)")
def __init__(self, to, on_delete=None, related_name=None, related_query_name=None,
limit_choices_to=None, parent_link=False, to_field=None,
db_constraint=True, **kwargs):
try:
to._meta.model_name
except AttributeError:
assert isinstance(to, six.string_types), (
"%s(%r) is invalid. First parameter to ForeignKey must be "
"either a model, a model name, or the string %r" % (
self.__class__.__name__, to,
RECURSIVE_RELATIONSHIP_CONSTANT,
)
)
else:
# For backwards compatibility purposes, we need to *try* and set
# the to_field during FK construction. It won't be guaranteed to
# be correct until contribute_to_class is called. Refs #12190.
to_field = to_field or (to._meta.pk and to._meta.pk.name)
if on_delete is None:
warnings.warn(
"on_delete will be a required arg for %s in Django 2.0. "
"Set it to models.CASCADE if you want to maintain the current default behavior. "
"See https://docs.djangoproject.com/en/%s/ref/models/fields/"
"#django.db.models.ForeignKey.on_delete" % (
self.__class__.__name__,
get_docs_version(),
),
RemovedInDjango20Warning, 2)
on_delete = CASCADE
elif not callable(on_delete):
warnings.warn(
"The signature for {0} will change in Django 2.0. "
"Pass to_field='{1}' as a kwarg instead of as an arg.".format(
self.__class__.__name__,
on_delete,
),
RemovedInDjango20Warning, 2)
on_delete, to_field = to_field, on_delete
kwargs['rel'] = self.rel_class(
self, to, to_field,
related_name=related_name,
related_query_name=related_query_name,
limit_choices_to=limit_choices_to,
parent_link=parent_link,
on_delete=on_delete,
)
kwargs['db_index'] = kwargs.get('db_index', True)
super(ForeignKey, self).__init__(
to, on_delete, from_fields=['self'], to_fields=[to_field], **kwargs)
self.db_constraint = db_constraint
def check(self, **kwargs):
errors = super(ForeignKey, self).check(**kwargs)
errors.extend(self._check_on_delete())
errors.extend(self._check_unique())
return errors
def _check_on_delete(self):
on_delete = getattr(self.remote_field, 'on_delete', None)
if on_delete == SET_NULL and not self.null:
return [
checks.Error(
'Field specifies on_delete=SET_NULL, but cannot be null.',
hint='Set null=True argument on the field, or change the on_delete rule.',
obj=self,
id='fields.E320',
)
]
elif on_delete == SET_DEFAULT and not self.has_default():
return [
checks.Error(
'Field specifies on_delete=SET_DEFAULT, but has no default value.',
hint='Set a default value, or change the on_delete rule.',
obj=self,
id='fields.E321',
)
]
else:
return []
def _check_unique(self, **kwargs):
return [
checks.Warning(
'Setting unique=True on a ForeignKey has the same effect as using a OneToOneField.',
hint='ForeignKey(unique=True) is usually better served by a OneToOneField.',
obj=self,
id='fields.W342',
)
] if self.unique else []
def deconstruct(self):
name, path, args, kwargs = super(ForeignKey, self).deconstruct()
del kwargs['to_fields']
del kwargs['from_fields']
# Handle the simpler arguments
if self.db_index:
del kwargs['db_index']
else:
kwargs['db_index'] = False
if self.db_constraint is not True:
kwargs['db_constraint'] = self.db_constraint
# Rel needs more work.
to_meta = getattr(self.remote_field.model, "_meta", None)
if self.remote_field.field_name and (
not to_meta or (to_meta.pk and self.remote_field.field_name != to_meta.pk.name)):
kwargs['to_field'] = self.remote_field.field_name
return name, path, args, kwargs
@property
def target_field(self):
return self.foreign_related_fields[0]
def get_reverse_path_info(self):
"""
Get path from the related model to this field's model.
"""
opts = self.model._meta
from_opts = self.remote_field.model._meta
pathinfos = [PathInfo(from_opts, opts, (opts.pk,), self.remote_field, not self.unique, False)]
return pathinfos
def validate(self, value, model_instance):
if self.remote_field.parent_link:
return
super(ForeignKey, self).validate(value, model_instance)
if value is None:
return
using = router.db_for_read(model_instance.__class__, instance=model_instance)
qs = self.remote_field.model._default_manager.using(using).filter(
**{self.remote_field.field_name: value}
)
qs = qs.complex_filter(self.get_limit_choices_to())
if not qs.exists():
raise exceptions.ValidationError(
self.error_messages['invalid'],
code='invalid',
params={
'model': self.remote_field.model._meta.verbose_name, 'pk': value,
'field': self.remote_field.field_name, 'value': value,
}, # 'pk' is included for backwards compatibility
)
def get_attname(self):
return '%s_id' % self.name
def get_attname_column(self):
attname = self.get_attname()
column = self.db_column or attname
return attname, column
def get_default(self):
"Here we check if the default value is an object and return the to_field if so."
field_default = super(ForeignKey, self).get_default()
if isinstance(field_default, self.remote_field.model):
return getattr(field_default, self.target_field.attname)
return field_default
def get_db_prep_save(self, value, connection):
if value is None or (value == '' and
(not self.target_field.empty_strings_allowed or
connection.features.interprets_empty_strings_as_nulls)):
return None
else:
return self.target_field.get_db_prep_save(value, connection=connection)
def get_db_prep_value(self, value, connection, prepared=False):
return self.target_field.get_db_prep_value(value, connection, prepared)
def value_to_string(self, obj):
if not obj:
# In required many-to-one fields with only one available choice,
# select that one available choice. Note: For SelectFields
# we have to check that the length of choices is *2*, not 1,
# because SelectFields always have an initial "blank" value.
if not self.blank and self.choices:
choice_list = self.get_choices_default()
if len(choice_list) == 2:
return smart_text(choice_list[1][0])
return super(ForeignKey, self).value_to_string(obj)
def contribute_to_related_class(self, cls, related):
super(ForeignKey, self).contribute_to_related_class(cls, related)
if self.remote_field.field_name is None:
self.remote_field.field_name = cls._meta.pk.name
def formfield(self, **kwargs):
db = kwargs.pop('using', None)
if isinstance(self.remote_field.model, six.string_types):
raise ValueError("Cannot create form field for %r yet, because "
"its related model %r has not been loaded yet" %
(self.name, self.remote_field.model))
defaults = {
'form_class': forms.ModelChoiceField,
'queryset': self.remote_field.model._default_manager.using(db),
'to_field_name': self.remote_field.field_name,
}
defaults.update(kwargs)
return super(ForeignKey, self).formfield(**defaults)
def db_type(self, connection):
# The database column type of a ForeignKey is the column type
# of the field to which it points. An exception is if the ForeignKey
# points to an AutoField/PositiveIntegerField/PositiveSmallIntegerField,
# in which case the column type is simply that of an IntegerField.
        # If the database needs similar types for key fields however, the only
        # thing we can do is make AutoField an IntegerField.
rel_field = self.target_field
if (isinstance(rel_field, AutoField) or
(not connection.features.related_fields_match_type and
isinstance(rel_field, (PositiveIntegerField,
PositiveSmallIntegerField)))):
return IntegerField().db_type(connection=connection)
return rel_field.db_type(connection=connection)
def db_parameters(self, connection):
return {"type": self.db_type(connection), "check": []}
def convert_empty_strings(self, value, expression, connection, context):
if (not value) and isinstance(value, six.string_types):
return None
return value
def get_db_converters(self, connection):
converters = super(ForeignKey, self).get_db_converters(connection)
if connection.features.interprets_empty_strings_as_nulls:
converters += [self.convert_empty_strings]
return converters
def get_col(self, alias, output_field=None):
return super(ForeignKey, self).get_col(alias, output_field or self.target_field)
class OneToOneField(ForeignKey):
"""
A OneToOneField is essentially the same as a ForeignKey, with the exception
that it always carries a "unique" constraint with it and the reverse
relation always returns the object pointed to (since there will only ever
be one), rather than returning a list.
"""
# Field flags
many_to_many = False
many_to_one = False
one_to_many = False
one_to_one = True
related_accessor_class = SingleRelatedObjectDescriptor
rel_class = OneToOneRel
description = _("One-to-one relationship")
def __init__(self, to, on_delete=None, to_field=None, **kwargs):
kwargs['unique'] = True
if on_delete is None:
warnings.warn(
"on_delete will be a required arg for %s in Django 2.0. "
"Set it to models.CASCADE if you want to maintain the current default behavior. "
"See https://docs.djangoproject.com/en/%s/ref/models/fields/"
"#django.db.models.ForeignKey.on_delete" % (
self.__class__.__name__,
get_docs_version(),
),
RemovedInDjango20Warning, 2)
on_delete = CASCADE
elif not callable(on_delete):
warnings.warn(
"The signature for {0} will change in Django 2.0. "
"Pass to_field='{1}' as a kwarg instead of as an arg.".format(
self.__class__.__name__,
on_delete,
),
RemovedInDjango20Warning, 2)
to_field = on_delete
on_delete = CASCADE # Avoid warning in superclass
super(OneToOneField, self).__init__(to, on_delete, to_field=to_field, **kwargs)
def deconstruct(self):
name, path, args, kwargs = super(OneToOneField, self).deconstruct()
if "unique" in kwargs:
del kwargs['unique']
return name, path, args, kwargs
def formfield(self, **kwargs):
if self.remote_field.parent_link:
return None
return super(OneToOneField, self).formfield(**kwargs)
def save_form_data(self, instance, data):
if isinstance(data, self.remote_field.model):
setattr(instance, self.name, data)
else:
setattr(instance, self.attname, data)
def _check_unique(self, **kwargs):
# Override ForeignKey since check isn't applicable here.
return []
def create_many_to_many_intermediary_model(field, klass):
from django.db import models
def set_managed(model, related, through):
through._meta.managed = model._meta.managed or related._meta.managed
to_model = resolve_relation(klass, field.remote_field.model)
name = '%s_%s' % (klass._meta.object_name, field.name)
lazy_related_operation(set_managed, klass, to_model, name)
to = make_model_tuple(to_model)[1]
from_ = klass._meta.model_name
if to == from_:
to = 'to_%s' % to
from_ = 'from_%s' % from_
meta = type(str('Meta'), (object,), {
'db_table': field._get_m2m_db_table(klass._meta),
'auto_created': klass,
'app_label': klass._meta.app_label,
'db_tablespace': klass._meta.db_tablespace,
'unique_together': (from_, to),
'verbose_name': '%(from)s-%(to)s relationship' % {'from': from_, 'to': to},
'verbose_name_plural': '%(from)s-%(to)s relationships' % {'from': from_, 'to': to},
'apps': field.model._meta.apps,
})
# Construct and return the new class.
return type(str(name), (models.Model,), {
'Meta': meta,
'__module__': klass.__module__,
from_: models.ForeignKey(
klass,
related_name='%s+' % name,
db_tablespace=field.db_tablespace,
db_constraint=field.remote_field.db_constraint,
on_delete=CASCADE,
),
to: models.ForeignKey(
to_model,
related_name='%s+' % name,
db_tablespace=field.db_tablespace,
db_constraint=field.remote_field.db_constraint,
on_delete=CASCADE,
)
})
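# Rough shape of what the factory above generates for a hypothetical
# ``Pizza.toppings = ManyToManyField(Topping)`` (names illustrative only):
#
#     class Pizza_toppings(models.Model):
#         pizza = models.ForeignKey(Pizza, related_name='Pizza_toppings+',
#                                   on_delete=CASCADE)
#         topping = models.ForeignKey(Topping, related_name='Pizza_toppings+',
#                                     on_delete=CASCADE)
#
#         class Meta:
#             auto_created = Pizza
#             db_table = 'app_pizza_toppings'
#             unique_together = ('pizza', 'topping')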
class ManyToManyField(RelatedField):
"""
Provide a many-to-many relation by using an intermediary model that
holds two ForeignKey fields pointed at the two sides of the relation.
Unless a ``through`` model was provided, ManyToManyField will use the
create_many_to_many_intermediary_model factory to automatically generate
the intermediary model.
"""
# Field flags
many_to_many = True
many_to_one = False
one_to_many = False
one_to_one = False
rel_class = ManyToManyRel
description = _("Many-to-many relationship")
def __init__(self, to, related_name=None, related_query_name=None,
limit_choices_to=None, symmetrical=None, through=None,
through_fields=None, db_constraint=True, db_table=None,
swappable=True, **kwargs):
try:
to._meta
except AttributeError:
assert isinstance(to, six.string_types), (
"%s(%r) is invalid. First parameter to ManyToManyField must be "
"either a model, a model name, or the string %r" %
(self.__class__.__name__, to, RECURSIVE_RELATIONSHIP_CONSTANT)
)
# Class names must be ASCII in Python 2.x, so we forcibly coerce it
# here to break early if there's a problem.
to = str(to)
if symmetrical is None:
symmetrical = (to == RECURSIVE_RELATIONSHIP_CONSTANT)
if through is not None:
assert db_table is None, (
"Cannot specify a db_table if an intermediary model is used."
)
kwargs['rel'] = self.rel_class(
self, to,
related_name=related_name,
related_query_name=related_query_name,
limit_choices_to=limit_choices_to,
symmetrical=symmetrical,
through=through,
through_fields=through_fields,
db_constraint=db_constraint,
)
self.has_null_arg = 'null' in kwargs
super(ManyToManyField, self).__init__(**kwargs)
self.db_table = db_table
self.swappable = swappable
def check(self, **kwargs):
errors = super(ManyToManyField, self).check(**kwargs)
errors.extend(self._check_unique(**kwargs))
errors.extend(self._check_relationship_model(**kwargs))
errors.extend(self._check_ignored_options(**kwargs))
return errors
def _check_unique(self, **kwargs):
if self.unique:
return [
checks.Error(
'ManyToManyFields cannot be unique.',
hint=None,
obj=self,
id='fields.E330',
)
]
return []
def _check_ignored_options(self, **kwargs):
warnings = []
if self.has_null_arg:
warnings.append(
checks.Warning(
'null has no effect on ManyToManyField.',
hint=None,
obj=self,
id='fields.W340',
)
)
if len(self._validators) > 0:
warnings.append(
checks.Warning(
'ManyToManyField does not support validators.',
hint=None,
obj=self,
id='fields.W341',
)
)
return warnings
def _check_relationship_model(self, from_model=None, **kwargs):
if hasattr(self.remote_field.through, '_meta'):
qualified_model_name = "%s.%s" % (
self.remote_field.through._meta.app_label, self.remote_field.through.__name__)
else:
qualified_model_name = self.remote_field.through
errors = []
if self.remote_field.through not in apps.get_models(include_auto_created=True):
# The relationship model is not installed.
errors.append(
checks.Error(
("Field specifies a many-to-many relation through model "
"'%s', which has not been installed.") %
qualified_model_name,
hint=None,
obj=self,
id='fields.E331',
)
)
else:
            assert from_model is not None, (
                "ManyToManyField with intermediate "
                "tables cannot be checked if you don't pass the model "
                "the field is attached to."
            )
# Set some useful local variables
to_model = resolve_relation(from_model, self.remote_field.model)
from_model_name = from_model._meta.object_name
if isinstance(to_model, six.string_types):
to_model_name = to_model
else:
to_model_name = to_model._meta.object_name
relationship_model_name = self.remote_field.through._meta.object_name
self_referential = from_model == to_model
# Check symmetrical attribute.
if (self_referential and self.remote_field.symmetrical and
not self.remote_field.through._meta.auto_created):
errors.append(
checks.Error(
'Many-to-many fields with intermediate tables must not be symmetrical.',
hint=None,
obj=self,
id='fields.E332',
)
)
# Count foreign keys in intermediate model
if self_referential:
seen_self = sum(from_model == getattr(field.remote_field, 'model', None)
for field in self.remote_field.through._meta.fields)
if seen_self > 2 and not self.remote_field.through_fields:
errors.append(
checks.Error(
("The model is used as an intermediate model by "
"'%s', but it has more than two foreign keys "
"to '%s', which is ambiguous. You must specify "
"which two foreign keys Django should use via the "
"through_fields keyword argument.") % (self, from_model_name),
hint=("Use through_fields to specify which two "
"foreign keys Django should use."),
obj=self.remote_field.through,
id='fields.E333',
)
)
else:
# Count foreign keys in relationship model
seen_from = sum(from_model == getattr(field.remote_field, 'model', None)
for field in self.remote_field.through._meta.fields)
seen_to = sum(to_model == getattr(field.remote_field, 'model', None)
for field in self.remote_field.through._meta.fields)
if seen_from > 1 and not self.remote_field.through_fields:
errors.append(
checks.Error(
("The model is used as an intermediate model by "
"'%s', but it has more than one foreign key "
"from '%s', which is ambiguous. You must specify "
"which foreign key Django should use via the "
"through_fields keyword argument.") % (self, from_model_name),
                        hint=('If you want to create a recursive relationship, '
                              'use ManyToManyField("self", '
                              'through="%s").') % relationship_model_name,
obj=self,
id='fields.E334',
)
)
if seen_to > 1 and not self.remote_field.through_fields:
errors.append(
checks.Error(
("The model is used as an intermediate model by "
"'%s', but it has more than one foreign key "
"to '%s', which is ambiguous. You must specify "
"which foreign key Django should use via the "
"through_fields keyword argument.") % (self, to_model_name),
                        hint=('If you want to create a recursive '
                              'relationship, use ManyToManyField("self", '
                              'through="%s").') % relationship_model_name,
obj=self,
id='fields.E335',
)
)
if seen_from == 0 or seen_to == 0:
errors.append(
checks.Error(
("The model is used as an intermediate model by "
"'%s', but it does not have a foreign key to '%s' or '%s'.") % (
self, from_model_name, to_model_name
),
hint=None,
obj=self.remote_field.through,
id='fields.E336',
)
)
# Validate `through_fields`.
if self.remote_field.through_fields is not None:
# Validate that we're given an iterable of at least two items
# and that none of them is "falsy".
if not (len(self.remote_field.through_fields) >= 2 and
self.remote_field.through_fields[0] and self.remote_field.through_fields[1]):
errors.append(
checks.Error(
("Field specifies 'through_fields' but does not "
"provide the names of the two link fields that should be "
"used for the relation through model "
"'%s'.") % qualified_model_name,
hint=("Make sure you specify 'through_fields' as "
"through_fields=('field1', 'field2')"),
obj=self,
id='fields.E337',
)
)
# Validate the given through fields -- they should be actual
# fields on the through model, and also be foreign keys to the
# expected models.
else:
                assert from_model is not None, (
                    "ManyToManyField with intermediate "
                    "tables cannot be checked if you don't pass the model "
                    "the field is attached to."
                )
source, through, target = from_model, self.remote_field.through, self.remote_field.model
source_field_name, target_field_name = self.remote_field.through_fields[:2]
for field_name, related_model in ((source_field_name, source),
(target_field_name, target)):
possible_field_names = []
for f in through._meta.fields:
if hasattr(f, 'remote_field') and getattr(f.remote_field, 'model', None) == related_model:
possible_field_names.append(f.name)
if possible_field_names:
hint = ("Did you mean one of the following foreign "
"keys to '%s': %s?") % (related_model._meta.object_name,
', '.join(possible_field_names))
else:
hint = None
try:
field = through._meta.get_field(field_name)
except FieldDoesNotExist:
errors.append(
checks.Error(
("The intermediary model '%s' has no field '%s'.") % (
qualified_model_name, field_name),
hint=hint,
obj=self,
id='fields.E338',
)
)
else:
if not (hasattr(field, 'remote_field') and
getattr(field.remote_field, 'model', None) == related_model):
errors.append(
checks.Error(
"'%s.%s' is not a foreign key to '%s'." % (
through._meta.object_name, field_name,
related_model._meta.object_name),
hint=hint,
obj=self,
id='fields.E339',
)
)
return errors
def deconstruct(self):
name, path, args, kwargs = super(ManyToManyField, self).deconstruct()
# Handle the simpler arguments.
if self.db_table is not None:
kwargs['db_table'] = self.db_table
if self.remote_field.db_constraint is not True:
kwargs['db_constraint'] = self.remote_field.db_constraint
if self.remote_field.related_name is not None:
kwargs['related_name'] = self.remote_field.related_name
if self.remote_field.related_query_name is not None:
kwargs['related_query_name'] = self.remote_field.related_query_name
# Rel needs more work.
if isinstance(self.remote_field.model, six.string_types):
kwargs['to'] = self.remote_field.model
else:
kwargs['to'] = "%s.%s" % (
self.remote_field.model._meta.app_label,
self.remote_field.model._meta.object_name,
)
if getattr(self.remote_field, 'through', None) is not None:
if isinstance(self.remote_field.through, six.string_types):
kwargs['through'] = self.remote_field.through
elif not self.remote_field.through._meta.auto_created:
kwargs['through'] = "%s.%s" % (
self.remote_field.through._meta.app_label,
self.remote_field.through._meta.object_name,
)
# If swappable is True, then see if we're actually pointing to the target
# of a swap.
swappable_setting = self.swappable_setting
if swappable_setting is not None:
# If it's already a settings reference, error.
if hasattr(kwargs['to'], "setting_name"):
if kwargs['to'].setting_name != swappable_setting:
raise ValueError(
"Cannot deconstruct a ManyToManyField pointing to a "
"model that is swapped in place of more than one model "
"(%s and %s)" % (kwargs['to'].setting_name, swappable_setting)
)
from django.db.migrations.writer import SettingsReference
kwargs['to'] = SettingsReference(
kwargs['to'],
swappable_setting,
)
return name, path, args, kwargs
def _get_path_info(self, direct=False):
"""
Called by both direct and indirect m2m traversal.
"""
pathinfos = []
int_model = self.remote_field.through
linkfield1 = int_model._meta.get_field(self.m2m_field_name())
linkfield2 = int_model._meta.get_field(self.m2m_reverse_field_name())
if direct:
join1infos = linkfield1.get_reverse_path_info()
join2infos = linkfield2.get_path_info()
else:
join1infos = linkfield2.get_reverse_path_info()
join2infos = linkfield1.get_path_info()
pathinfos.extend(join1infos)
pathinfos.extend(join2infos)
return pathinfos
def get_path_info(self):
return self._get_path_info(direct=True)
def get_reverse_path_info(self):
return self._get_path_info(direct=False)
def get_choices_default(self):
return Field.get_choices(self, include_blank=False)
def _get_m2m_db_table(self, opts):
"""
Function that can be curried to provide the m2m table name for this
relation.
"""
if self.remote_field.through is not None:
return self.remote_field.through._meta.db_table
elif self.db_table:
return self.db_table
else:
return utils.truncate_name('%s_%s' % (opts.db_table, self.name),
connection.ops.max_name_length())
def _get_m2m_attr(self, related, attr):
"""
Function that can be curried to provide the source accessor or DB
column name for the m2m table.
"""
cache_attr = '_m2m_%s_cache' % attr
if hasattr(self, cache_attr):
return getattr(self, cache_attr)
if self.remote_field.through_fields is not None:
link_field_name = self.remote_field.through_fields[0]
else:
link_field_name = None
for f in self.remote_field.through._meta.fields:
if (f.is_relation and f.remote_field.model == related.related_model and
(link_field_name is None or link_field_name == f.name)):
setattr(self, cache_attr, getattr(f, attr))
return getattr(self, cache_attr)
def _get_m2m_reverse_attr(self, related, attr):
"""
Function that can be curried to provide the related accessor or DB
column name for the m2m table.
"""
cache_attr = '_m2m_reverse_%s_cache' % attr
if hasattr(self, cache_attr):
return getattr(self, cache_attr)
found = False
if self.remote_field.through_fields is not None:
link_field_name = self.remote_field.through_fields[1]
else:
link_field_name = None
for f in self.remote_field.through._meta.fields:
if f.is_relation and f.remote_field.model == related.model:
if link_field_name is None and related.related_model == related.model:
# If this is an m2m-intermediate to self,
# the first foreign key you find will be
# the source column. Keep searching for
# the second foreign key.
if found:
setattr(self, cache_attr, getattr(f, attr))
break
else:
found = True
elif link_field_name is None or link_field_name == f.name:
setattr(self, cache_attr, getattr(f, attr))
break
return getattr(self, cache_attr)
def value_to_string(self, obj):
data = ''
if obj:
qs = getattr(obj, self.name).all()
data = [instance._get_pk_val() for instance in qs]
else:
# In required many-to-many fields with only one available choice,
# select that one available choice.
if not self.blank:
choices_list = self.get_choices_default()
if len(choices_list) == 1:
data = [choices_list[0][0]]
return smart_text(data)
def contribute_to_class(self, cls, name, **kwargs):
# To support multiple relations to self, it's useful to have a non-None
# related name on symmetrical relations for internal reasons. The
# concept doesn't make a lot of sense externally ("you want me to
# specify *what* on my non-reversible relation?!"), so we set it up
# automatically. The funky name reduces the chance of an accidental
# clash.
if self.remote_field.symmetrical and (
self.remote_field.model == "self" or self.remote_field.model == cls._meta.object_name):
self.remote_field.related_name = "%s_rel_+" % name
elif self.remote_field.is_hidden():
# If the backwards relation is disabled, replace the original
# related_name with one generated from the m2m field name. Django
# still uses backwards relations internally and we need to avoid
# clashes between multiple m2m fields with related_name == '+'.
self.remote_field.related_name = "_%s_%s_+" % (cls.__name__.lower(), name)
super(ManyToManyField, self).contribute_to_class(cls, name, **kwargs)
        # The intermediate m2m model is not auto created if:
        #  1) There is a manually specified intermediate, or
        #  2) The class owning the m2m field is abstract, or
        #  3) The class owning the m2m field has been swapped out.
if not cls._meta.abstract:
if self.remote_field.through:
def resolve_through_model(_, model, field):
field.remote_field.through = model
lazy_related_operation(resolve_through_model, cls, self.remote_field.through, field=self)
elif not cls._meta.swapped:
self.remote_field.through = create_many_to_many_intermediary_model(self, cls)
# Add the descriptor for the m2m relation.
setattr(cls, self.name, ManyRelatedObjectsDescriptor(self.remote_field, reverse=False))
# Set up the accessor for the m2m table name for the relation.
self.m2m_db_table = curry(self._get_m2m_db_table, cls._meta)
def contribute_to_related_class(self, cls, related):
# Internal M2Ms (i.e., those with a related name ending with '+')
# and swapped models don't get a related descriptor.
if not self.remote_field.is_hidden() and not related.related_model._meta.swapped:
setattr(cls, related.get_accessor_name(), ManyRelatedObjectsDescriptor(self.remote_field, reverse=True))
# Set up the accessors for the column names on the m2m table.
self.m2m_column_name = curry(self._get_m2m_attr, related, 'column')
self.m2m_reverse_name = curry(self._get_m2m_reverse_attr, related, 'column')
self.m2m_field_name = curry(self._get_m2m_attr, related, 'name')
self.m2m_reverse_field_name = curry(self._get_m2m_reverse_attr, related, 'name')
get_m2m_rel = curry(self._get_m2m_attr, related, 'remote_field')
self.m2m_target_field_name = lambda: get_m2m_rel().field_name
get_m2m_reverse_rel = curry(self._get_m2m_reverse_attr, related, 'remote_field')
self.m2m_reverse_target_field_name = lambda: get_m2m_reverse_rel().field_name
def set_attributes_from_rel(self):
pass
def value_from_object(self, obj):
"""
Return the value of this field in the given model instance.
"""
return getattr(obj, self.attname).all()
def save_form_data(self, instance, data):
setattr(instance, self.attname, data)
def formfield(self, **kwargs):
db = kwargs.pop('using', None)
defaults = {
'form_class': forms.ModelMultipleChoiceField,
'queryset': self.remote_field.model._default_manager.using(db),
}
defaults.update(kwargs)
# If initial is passed in, it's a list of related objects, but the
# MultipleChoiceField takes a list of IDs.
if defaults.get('initial') is not None:
initial = defaults['initial']
if callable(initial):
initial = initial()
defaults['initial'] = [i._get_pk_val() for i in initial]
return super(ManyToManyField, self).formfield(**defaults)
def db_type(self, connection):
# A ManyToManyField is not represented by a single column,
# so return None.
return None
def db_parameters(self, connection):
return {"type": None, "check": None}
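# Illustrative sketch (not part of the original module): with the descriptors
# installed by contribute_to_class/contribute_to_related_class above, a
# ManyToManyField is used roughly like this (model names are hypothetical):
#
#   class Topping(models.Model):
#       name = models.CharField(max_length=50)
#
#   class Pizza(models.Model):
#       toppings = models.ManyToManyField(Topping)
#
#   pizza.toppings.all()     # forward manager from ManyRelatedObjectsDescriptor
#   topping.pizza_set.all()  # reverse manager added by contribute_to_related_class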
| bsd-3-clause |
Nebelhom/WordPuzzleCreator | lib/werkzeug/debug/console.py | 314 | 5557 | # -*- coding: utf-8 -*-
"""
werkzeug.debug.console
~~~~~~~~~~~~~~~~~~~~~~
Interactive console support.
:copyright: (c) 2013 by the Werkzeug Team, see AUTHORS for more details.
:license: BSD.
"""
import sys
import code
from types import CodeType
from werkzeug.utils import escape
from werkzeug.local import Local
from werkzeug.debug.repr import debug_repr, dump, helper
_local = Local()
class HTMLStringO(object):
"""A StringO version that HTML escapes on write."""
def __init__(self):
self._buffer = []
def isatty(self):
return False
def close(self):
pass
def flush(self):
pass
def seek(self, n, mode=0):
pass
def readline(self):
if len(self._buffer) == 0:
return ''
ret = self._buffer[0]
del self._buffer[0]
return ret
def reset(self):
val = ''.join(self._buffer)
del self._buffer[:]
return val
def _write(self, x):
if isinstance(x, bytes):
x = x.decode('utf-8', 'replace')
self._buffer.append(x)
def write(self, x):
self._write(escape(x))
def writelines(self, x):
self._write(escape(''.join(x)))
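# Illustrative sketch (assumption, not in the original module): everything
# written through write()/writelines() comes back HTML-escaped from reset().
#   buf = HTMLStringO()
#   buf.write('<b>hi</b>')   # buffered as '&lt;b&gt;hi&lt;/b&gt;'
#   buf.reset()              # -> u'&lt;b&gt;hi&lt;/b&gt;' and clears the buffer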
class ThreadedStream(object):
"""Thread-local wrapper for sys.stdout for the interactive console."""
def push():
if not isinstance(sys.stdout, ThreadedStream):
sys.stdout = ThreadedStream()
_local.stream = HTMLStringO()
push = staticmethod(push)
def fetch():
try:
stream = _local.stream
except AttributeError:
return ''
return stream.reset()
fetch = staticmethod(fetch)
def displayhook(obj):
try:
stream = _local.stream
except AttributeError:
return _displayhook(obj)
# stream._write bypasses escaping as debug_repr is
# already generating HTML for us.
if obj is not None:
_local._current_ipy.locals['_'] = obj
stream._write(debug_repr(obj))
displayhook = staticmethod(displayhook)
def __setattr__(self, name, value):
raise AttributeError('read only attribute %s' % name)
def __dir__(self):
return dir(sys.__stdout__)
def __getattribute__(self, name):
if name == '__members__':
return dir(sys.__stdout__)
try:
stream = _local.stream
except AttributeError:
stream = sys.__stdout__
return getattr(stream, name)
def __repr__(self):
return repr(sys.__stdout__)
# add the threaded stream as display hook
_displayhook = sys.displayhook
sys.displayhook = ThreadedStream.displayhook
class _ConsoleLoader(object):
def __init__(self):
self._storage = {}
def register(self, code, source):
self._storage[id(code)] = source
# register code objects of wrapped functions too.
for var in code.co_consts:
if isinstance(var, CodeType):
self._storage[id(var)] = source
def get_source_by_code(self, code):
try:
return self._storage[id(code)]
except KeyError:
pass
def _wrap_compiler(console):
compile = console.compile
def func(source, filename, symbol):
code = compile(source, filename, symbol)
console.loader.register(code, source)
return code
console.compile = func
class _InteractiveConsole(code.InteractiveInterpreter):
def __init__(self, globals, locals):
code.InteractiveInterpreter.__init__(self, locals)
self.globals = dict(globals)
self.globals['dump'] = dump
self.globals['help'] = helper
self.globals['__loader__'] = self.loader = _ConsoleLoader()
self.more = False
self.buffer = []
_wrap_compiler(self)
def runsource(self, source):
source = source.rstrip() + '\n'
ThreadedStream.push()
prompt = self.more and '... ' or '>>> '
try:
source_to_eval = ''.join(self.buffer + [source])
if code.InteractiveInterpreter.runsource(self,
source_to_eval, '<debugger>', 'single'):
self.more = True
self.buffer.append(source)
else:
self.more = False
del self.buffer[:]
finally:
output = ThreadedStream.fetch()
return prompt + source + output
def runcode(self, code):
try:
eval(code, self.globals, self.locals)
except Exception:
self.showtraceback()
def showtraceback(self):
from werkzeug.debug.tbtools import get_current_traceback
tb = get_current_traceback(skip=1)
sys.stdout._write(tb.render_summary())
def showsyntaxerror(self, filename=None):
from werkzeug.debug.tbtools import get_current_traceback
tb = get_current_traceback(skip=4)
sys.stdout._write(tb.render_summary())
def write(self, data):
sys.stdout.write(data)
class Console(object):
"""An interactive console."""
def __init__(self, globals=None, locals=None):
if locals is None:
locals = {}
if globals is None:
globals = {}
self._ipy = _InteractiveConsole(globals, locals)
def eval(self, code):
_local._current_ipy = self._ipy
old_sys_stdout = sys.stdout
try:
return self._ipy.runsource(code)
finally:
sys.stdout = old_sys_stdout
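# Illustrative usage sketch (assumption, not in the original module):
#   console = Console()
#   console.eval('1 + 1')
#   # -> '>>> 1 + 1\n' followed by the HTML rendering of 2 (via debug_repr)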
| apache-2.0 |
suncycheng/intellij-community | python/testData/inspections/PyPropertyDefinitionInspection26/test.py | 22 | 6220 | import abc
class A(object):
def __init__(self):
self._x = 1
@property
def foo(self):
return self._x
@foo.setter
def foo(self, x):
self._x = x
@foo.deleter
def foo(self):
pass
@property
def boo(self):
return self._x
<warning descr="Names of function and decorator don't match; property accessor is not created">@boo.setter</warning>
def boo1(self, x):
self._x = x
<warning descr="Names of function and decorator don't match; property accessor is not created">@boo.deleter</warning>
def boo2(self):
pass
@property
def <warning descr="Getter should return or yield something">moo</warning>(self):
pass
@moo.setter
def <warning descr="Setter should not return a value">moo</warning>(self, x):
return 1
@moo.deleter
def <warning descr="Deleter should not return a value">moo</warning>(self):
return self._x
@qoo.setter # unknown qoo is reported in ref inspection
def qoo(self, v):
self._x = v
@property
def futuroo(self):
raise NotImplementedError("Override!") # ok though no return
@property
def futuroo(self):
"""Docstring."""
raise NotImplementedError("Override!") # ok though no return
@property
def xoo(self):
return self._x
@xoo.setter
def xoo(self, x):
self._x = x
return
get_foo2 = lambda self: 'foo2'
foo2 = property(get_foo2)
@property
@abc.abstractproperty
def abstract_property(self):
pass
# PY-19701
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
return self._myprop
def set_myprop(self, val):
def inner_func(n):
return n
self._myprop = inner_func(val)
myprop = property(get_myprop, set_myprop)
# all flows have an exit point
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
if a > b:
<error descr="Python versions < 3.3 do not allow 'return' with argument inside generator.">return self._myprop</error>
elif a < b:
raise self._myprop
else:
yield self._myprop
myprop = property(get_myprop)
# some flows have no exit point
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
if a > b:
return self._myprop
elif a < b:
raise self._myprop
myprop = property(get_myprop)
# some flows have no exit point
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
if a > b:
return self._myprop
myprop = property(get_myprop)
# non-empty for
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
for i in range(5):
yield i
myprop = property(get_myprop)
# empty for
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
for i in []:
yield i
myprop = property(get_myprop) # shouldn't pass with better analysis, pass at the moment
# non-empty while
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
i = 0
while i < 5:
yield i
i += 1
myprop = property(get_myprop)
# empty while
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
while False:
yield i
myprop = property(get_myprop) # shouldn't pass with better analysis, pass at the moment
# non-empty while with two conditions
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
i = 0
j = 0
while i < 5 and j == 0:
yield i
i += 1
myprop = property(get_myprop)
# empty while with two conditions
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
i = 0
j = 0
while i > 5 and j == 0:
yield i
myprop = property(get_myprop) # shouldn't pass with better analysis, pass at the moment
# setter has exit point
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
return self._myprop
def set_myprop(self, val):
self._myprop = val
return 10
myprop = property(get_myprop, <warning descr="Setter should not return a value">set_myprop</warning>)
# setter has exit point
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
return self._myprop
def set_myprop(self, val):
self._myprop = val
yield 10
myprop = property(get_myprop, <warning descr="Setter should not return a value">set_myprop</warning>)
# setter has raise statement
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
return self._myprop
def set_myprop(self, val):
self._myprop = val
raise NotImplementedError()
myprop = property(get_myprop, set_myprop)
# setter has exit point in some flow
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
return self._myprop
def set_myprop(self, val):
self._myprop = val
if a > b:
return 10
myprop = property(get_myprop, <warning descr="Setter should not return a value">set_myprop</warning>)
# setter has exit point in some flow
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
return self._myprop
def set_myprop(self, val):
self._myprop = val
if a > b:
yield 10
myprop = property(get_myprop, <warning descr="Setter should not return a value">set_myprop</warning>)
# setter has raise statement in some flow
class Test(object):
def __init__(self):
self._myprop = None
def get_myprop(self):
return self._myprop
def set_myprop(self, val):
self._myprop = val
if a > b:
raise NotImplementedError()
myprop = property(get_myprop, set_myprop)
| apache-2.0 |
forrestv/bitcoin | contrib/testgen/gen_base58_test_vectors.py | 1000 | 4343 | #!/usr/bin/env python
'''
Generate valid and invalid base58 address and private key test vectors.
Usage:
gen_base58_test_vectors.py valid 50 > ../../src/test/data/base58_keys_valid.json
gen_base58_test_vectors.py invalid 50 > ../../src/test/data/base58_keys_invalid.json
'''
# 2012 Wladimir J. van der Laan
# Released under MIT License
import os
from itertools import islice
from base58 import b58encode, b58decode, b58encode_chk, b58decode_chk, b58chars
import random
from binascii import b2a_hex
# key types
PUBKEY_ADDRESS = 0
SCRIPT_ADDRESS = 5
PUBKEY_ADDRESS_TEST = 111
SCRIPT_ADDRESS_TEST = 196
PRIVKEY = 128
PRIVKEY_TEST = 239
metadata_keys = ['isPrivkey', 'isTestnet', 'addrType', 'isCompressed']
# templates for valid sequences
templates = [
# prefix, payload_size, suffix, metadata
# None = N/A
((PUBKEY_ADDRESS,), 20, (), (False, False, 'pubkey', None)),
((SCRIPT_ADDRESS,), 20, (), (False, False, 'script', None)),
((PUBKEY_ADDRESS_TEST,), 20, (), (False, True, 'pubkey', None)),
((SCRIPT_ADDRESS_TEST,), 20, (), (False, True, 'script', None)),
((PRIVKEY,), 32, (), (True, False, None, False)),
((PRIVKEY,), 32, (1,), (True, False, None, True)),
((PRIVKEY_TEST,), 32, (), (True, True, None, False)),
((PRIVKEY_TEST,), 32, (1,), (True, True, None, True))
]
def is_valid(v):
'''Check vector v for validity'''
result = b58decode_chk(v)
if result is None:
return False
valid = False
for template in templates:
prefix = str(bytearray(template[0]))
suffix = str(bytearray(template[2]))
if result.startswith(prefix) and result.endswith(suffix):
if (len(result) - len(prefix) - len(suffix)) == template[1]:
return True
return False
def gen_valid_vectors():
'''Generate valid test vectors'''
while True:
for template in templates:
prefix = str(bytearray(template[0]))
payload = os.urandom(template[1])
suffix = str(bytearray(template[2]))
rv = b58encode_chk(prefix + payload + suffix)
assert is_valid(rv)
metadata = dict([(x,y) for (x,y) in zip(metadata_keys,template[3]) if y is not None])
yield (rv, b2a_hex(payload), metadata)
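# Illustrative sketch (assumption, not in the original script): each yielded
# valid vector is a tuple (base58_string, hex_payload, metadata), for example
# something like
#   ('1AGN...', '65a160...',
#    {'addrType': 'pubkey', 'isPrivkey': False, 'isTestnet': False})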
def gen_invalid_vector(template, corrupt_prefix, randomize_payload_size, corrupt_suffix):
'''Generate possibly invalid vector'''
if corrupt_prefix:
prefix = os.urandom(1)
else:
prefix = str(bytearray(template[0]))
if randomize_payload_size:
payload = os.urandom(max(int(random.expovariate(0.5)), 50))
else:
payload = os.urandom(template[1])
if corrupt_suffix:
suffix = os.urandom(len(template[2]))
else:
suffix = str(bytearray(template[2]))
return b58encode_chk(prefix + payload + suffix)
def randbool(p = 0.5):
'''Return True with P(p)'''
return random.random() < p
def gen_invalid_vectors():
'''Generate invalid test vectors'''
# start with some manual edge-cases
yield "",
yield "x",
while True:
# kinds of invalid vectors:
# invalid prefix
# invalid payload length
# invalid (randomized) suffix (add random data)
# corrupt checksum
for template in templates:
val = gen_invalid_vector(template, randbool(0.2), randbool(0.2), randbool(0.2))
if random.randint(0,10)<1: # line corruption
if randbool(): # add random character to end
val += random.choice(b58chars)
else: # replace random character in the middle
n = random.randint(0, len(val))
val = val[0:n] + random.choice(b58chars) + val[n+1:]
if not is_valid(val):
yield val,
if __name__ == '__main__':
import sys, json
iters = {'valid':gen_valid_vectors, 'invalid':gen_invalid_vectors}
    try:
        uiter = iters[sys.argv[1]]
    except (IndexError, KeyError):
        # No mode argument, or an unrecognized mode; default to valid vectors.
        uiter = gen_valid_vectors
try:
count = int(sys.argv[2])
except IndexError:
count = 0
data = list(islice(uiter(), count))
json.dump(data, sys.stdout, sort_keys=True, indent=4)
sys.stdout.write('\n')
| mit |
Bismarrck/tensorflow | tensorflow/python/debug/cli/evaluator_test.py | 89 | 11162 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for arbitrary expression evaluator."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.debug.cli import evaluator
from tensorflow.python.debug.lib import debug_data
from tensorflow.python.framework import test_util
from tensorflow.python.platform import test
class ParseDebugTensorNameTest(test_util.TensorFlowTestCase):
def testParseNamesWithoutPrefixOrSuffix(self):
device_name, node_name, output_slot, debug_op, exec_index = (
evaluator._parse_debug_tensor_name("foo:1"))
self.assertIsNone(device_name)
self.assertEqual("foo", node_name)
self.assertEqual(1, output_slot)
self.assertEqual("DebugIdentity", debug_op)
self.assertEqual(0, exec_index)
device_name, node_name, output_slot, debug_op, exec_index = (
evaluator._parse_debug_tensor_name("hidden_0/Weights:0"))
self.assertIsNone(device_name)
self.assertEqual("hidden_0/Weights", node_name)
self.assertEqual(0, output_slot)
self.assertEqual("DebugIdentity", debug_op)
self.assertEqual(0, exec_index)
def testParseNamesWithoutPrefixWithDebugOpSuffix(self):
device_name, node_name, output_slot, debug_op, exec_index = (
evaluator._parse_debug_tensor_name("foo:1:DebugNanCount"))
self.assertIsNone(device_name)
self.assertEqual("foo", node_name)
self.assertEqual(1, output_slot)
self.assertEqual("DebugNanCount", debug_op)
self.assertEqual(0, exec_index)
device_name, node_name, output_slot, debug_op, exec_index = (
evaluator._parse_debug_tensor_name(
"hidden_0/Weights:0:DebugNumericSummary"))
self.assertIsNone(device_name)
self.assertEqual("hidden_0/Weights", node_name)
self.assertEqual(0, output_slot)
self.assertEqual("DebugNumericSummary", debug_op)
self.assertEqual(0, exec_index)
def testParseNamesWithDeviceNamePrefixWithoutDebugOpSuffix(self):
device_name, node_name, output_slot, debug_op, exec_index = (
evaluator._parse_debug_tensor_name(
"/job:ps/replica:0/task:2/cpu:0:foo:1"))
self.assertEqual("/job:ps/replica:0/task:2/cpu:0", device_name)
self.assertEqual("foo", node_name)
self.assertEqual(1, output_slot)
self.assertEqual("DebugIdentity", debug_op)
self.assertEqual(0, exec_index)
device_name, node_name, output_slot, debug_op, exec_index = (
evaluator._parse_debug_tensor_name(
"/job:worker/replica:0/task:3/gpu:0:hidden_0/Weights:0"))
self.assertEqual("/job:worker/replica:0/task:3/gpu:0", device_name)
self.assertEqual("hidden_0/Weights", node_name)
self.assertEqual(0, output_slot)
self.assertEqual("DebugIdentity", debug_op)
self.assertEqual(0, exec_index)
def testParseNamesWithDeviceNamePrefixWithDebugOpSuffix(self):
device_name, node_name, output_slot, debug_op, exec_index = (
evaluator._parse_debug_tensor_name(
"/job:ps/replica:0/task:2/cpu:0:foo:1:DebugNanCount"))
self.assertEqual("/job:ps/replica:0/task:2/cpu:0", device_name)
self.assertEqual("foo", node_name)
self.assertEqual(1, output_slot)
self.assertEqual("DebugNanCount", debug_op)
self.assertEqual(0, exec_index)
device_name, node_name, output_slot, debug_op, exec_index = (
evaluator._parse_debug_tensor_name(
"/job:worker/replica:0/task:3/gpu:0:"
"hidden_0/Weights:0:DebugNumericSummary"))
self.assertEqual("/job:worker/replica:0/task:3/gpu:0", device_name)
self.assertEqual("hidden_0/Weights", node_name)
self.assertEqual(0, output_slot)
self.assertEqual("DebugNumericSummary", debug_op)
self.assertEqual(0, exec_index)
def testParseMalformedDebugTensorName(self):
with self.assertRaisesRegexp(
ValueError,
r"The debug tensor name in the to-be-evaluated expression is "
r"malformed:"):
evaluator._parse_debug_tensor_name(
"/job:ps/replica:0/task:2/cpu:0:foo:1:DebugNanCount:1337")
with self.assertRaisesRegexp(
ValueError,
r"The debug tensor name in the to-be-evaluated expression is "
r"malformed:"):
evaluator._parse_debug_tensor_name(
"/job:ps/replica:0/cpu:0:foo:1:DebugNanCount")
with self.assertRaises(ValueError):
evaluator._parse_debug_tensor_name(
"foo:1:DebugNanCount[]")
with self.assertRaises(ValueError):
evaluator._parse_debug_tensor_name(
"foo:1[DebugNanCount]")
def testParseNamesWithExecIndex(self):
device_name, node_name, output_slot, debug_op, exec_index = (
evaluator._parse_debug_tensor_name("foo:1[20]"))
self.assertIsNone(device_name)
self.assertEqual("foo", node_name)
self.assertEqual(1, output_slot)
self.assertEqual("DebugIdentity", debug_op)
self.assertEqual(20, exec_index)
device_name, node_name, output_slot, debug_op, exec_index = (
evaluator._parse_debug_tensor_name("hidden_0/Weights:0[3]"))
self.assertIsNone(device_name)
self.assertEqual("hidden_0/Weights", node_name)
self.assertEqual(0, output_slot)
self.assertEqual("DebugIdentity", debug_op)
self.assertEqual(3, exec_index)
class EvaluatorTest(test_util.TensorFlowTestCase):
def testEvaluateSingleTensor(self):
dump = test.mock.MagicMock()
def fake_get_tensors(node_name, output_slot, debug_op, device_name=None):
del node_name, output_slot, debug_op, device_name # Unused.
return [np.array([[1.0, 2.0, 3.0]])]
with test.mock.patch.object(
dump, "get_tensors", side_effect=fake_get_tensors, autospec=True):
ev = evaluator.ExpressionEvaluator(dump)
self.assertEqual(3, ev.evaluate("np.size(`a:0`)"))
# Whitespace in backticks should be tolerated.
self.assertEqual(3, ev.evaluate("np.size(` a:0 `)"))
def testEvaluateTwoTensors(self):
dump = test.mock.MagicMock()
def fake_get_tensors(node_name, output_slot, debug_op, device_name=None):
del debug_op, device_name # Unused.
if node_name == "a" and output_slot == 0:
return [np.array([[1.0, -2.0], [0.0, 1.0]])]
elif node_name == "b" and output_slot == 0:
return [np.array([[-1.0], [1.0]])]
with test.mock.patch.object(
dump, "get_tensors", side_effect=fake_get_tensors, autospec=True):
ev = evaluator.ExpressionEvaluator(dump)
self.assertAllClose([[-3.0], [1.0]],
ev.evaluate("np.matmul(`a:0`, `b:0`)"))
self.assertAllClose(
[[-4.0], [2.0]], ev.evaluate("np.matmul(`a:0`, `b:0`) + `b:0`"))
def testEvaluateNoneExistentTensorGeneratesError(self):
dump = test.mock.MagicMock()
def fake_get_tensors(node_name, output_slot, debug_op, device_name=None):
del node_name, output_slot, debug_op, device_name # Unused.
raise debug_data.WatchKeyDoesNotExistInDebugDumpDirError()
with test.mock.patch.object(
dump, "get_tensors", side_effect=fake_get_tensors, autospec=True):
ev = evaluator.ExpressionEvaluator(dump)
with self.assertRaisesRegexp(
ValueError, "Eval failed due to the value of .* being unavailable"):
ev.evaluate("np.matmul(`a:0`, `b:0`)")
def testEvaluateWithMultipleDevicesContainingTheSameTensorName(self):
dump = test.mock.MagicMock()
def fake_get_tensors(node_name, output_slot, debug_op, device_name=None):
del output_slot, debug_op # Unused.
if node_name == "a" and device_name is None:
raise ValueError(
"There are multiple (2) devices with nodes named 'a' but "
"device_name is not specified")
elif (node_name == "a" and
device_name == "/job:worker/replica:0/task:0/cpu:0"):
return [np.array(10.0)]
elif (node_name == "a" and
device_name == "/job:worker/replica:0/task:1/cpu:0"):
return [np.array(20.0)]
with test.mock.patch.object(
dump, "get_tensors", side_effect=fake_get_tensors, autospec=True):
ev = evaluator.ExpressionEvaluator(dump)
with self.assertRaisesRegexp(ValueError, r"multiple \(2\) devices"):
ev.evaluate("`a:0` + `a:0`")
self.assertAllClose(
30.0,
ev.evaluate("`/job:worker/replica:0/task:0/cpu:0:a:0` + "
"`/job:worker/replica:0/task:1/cpu:0:a:0`"))
def testEvaluateWithNonDefaultDebugOp(self):
dump = test.mock.MagicMock()
def fake_get_tensors(node_name, output_slot, debug_op, device_name=None):
del device_name # Unused.
if node_name == "a" and output_slot == 0 and debug_op == "DebugIdentity":
return [np.array([[-1.0], [1.0]])]
elif node_name == "a" and output_slot == 0 and debug_op == "DebugFoo":
return [np.array([[-2.0, 2.0]])]
with test.mock.patch.object(
dump, "get_tensors", side_effect=fake_get_tensors, autospec=True):
ev = evaluator.ExpressionEvaluator(dump)
self.assertAllClose(
[[4.0]],
ev.evaluate("np.matmul(`a:0:DebugFoo`, `a:0:DebugIdentity`)"))
def testEvaluateWithMultipleExecIndexes(self):
dump = test.mock.MagicMock()
def fake_get_tensors(node_name, output_slot, debug_op, device_name=None):
del debug_op, device_name # Unused.
if node_name == "a" and output_slot == 0:
return [np.array([[-1.0], [1.0]]), np.array([[-2.0], [2.0]])]
with test.mock.patch.object(
dump, "get_tensors", side_effect=fake_get_tensors, autospec=True):
ev = evaluator.ExpressionEvaluator(dump)
self.assertAllClose(
[[4.0]], ev.evaluate("np.matmul(`a:0[1]`.T, `a:0[0]`)"))
def testEvaluateExpressionWithUnmatchedBacktick(self):
dump = test.mock.MagicMock()
ev = evaluator.ExpressionEvaluator(dump)
with self.assertRaises(SyntaxError):
ev.evaluate("np.matmul(`a:0`, `b:0`) + `b:0")
def testEvaluateExpressionWithInvalidDebugTensorName(self):
dump = test.mock.MagicMock()
ev = evaluator.ExpressionEvaluator(dump)
with self.assertRaisesRegexp(
ValueError, r".* tensor name .* expression .* malformed"):
ev.evaluate("np.matmul(`a`, `b`)")
with self.assertRaisesRegexp(
ValueError, r".* tensor name .* expression .* malformed"):
ev.evaluate("np.matmul(`a:0:DebugIdentity:0`, `b:1:DebugNanCount:2`)")
with self.assertRaises(ValueError):
ev.evaluate("np.matmul(`a:0[]`, `b:0[]`)")
if __name__ == "__main__":
test.main()
| apache-2.0 |
giacomov/astromodels | astromodels/core/parameter_transformation.py | 2 | 1122 | import numpy as np
class ParameterTransformation(object):
def forward(self, external_value):
raise NotImplementedError("You have to implement this")
def backward(self, internal_value):
raise NotImplementedError("You have to implement this")
class LogarithmicTransformation(ParameterTransformation):
def forward(self, external_value):
# Throw an error if taking the logarithm of a negative number (or nan)
with np.errstate(invalid='raise'):
res = np.log10(external_value)
return res
def backward(self, internal_value):
return 10**internal_value
_known_transformations = {'log10': LogarithmicTransformation}
def get_transformation(transformation_name):
"""
Returns an instance of a transformation by name
:param transformation_name:
:return: instance of transformation with provided name
"""
if not transformation_name in _known_transformations:
raise ValueError("Transformation %s is not known" % transformation_name)
else:
return _known_transformations[transformation_name]()
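# Illustrative usage sketch (not part of the original module):
#   transform = get_transformation('log10')
#   transform.forward(100.0)   # -> 2.0   (external -> internal)
#   transform.backward(2.0)    # -> 100.0 (internal -> external)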
| bsd-3-clause |
avadacatavra/servo | tests/wpt/web-platform-tests/tools/manifest/vcs.py | 11 | 3257 | import os
import subprocess
from .sourcefile import SourceFile
class Git(object):
def __init__(self, repo_root, url_base):
self.root = os.path.abspath(repo_root)
self.git = Git.get_func(repo_root)
self.url_base = url_base
@staticmethod
def get_func(repo_path):
def git(cmd, *args):
full_cmd = ["git", cmd] + list(args)
try:
return subprocess.check_output(full_cmd, cwd=repo_path, stderr=subprocess.STDOUT)
            except OSError:
                # "git" may only be available as git.bat on Windows; the
                # previous catch of WindowsError is undefined on other
                # platforms and would raise NameError there.
                full_cmd[0] = "git.bat"
                return subprocess.check_output(full_cmd, cwd=repo_path, stderr=subprocess.STDOUT)
return git
@classmethod
def for_path(cls, path, url_base):
git = Git.get_func(path)
try:
return cls(git("rev-parse", "--show-toplevel").rstrip(), url_base)
except subprocess.CalledProcessError:
return None
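    # Illustrative sketch (assumption, not in the original module):
    #   git = Git.for_path('/path/to/checkout', '/')
    #   if git is not None:
    #       for source_file in git:  # yields SourceFile objects for tracked files
    #           ...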
def _local_changes(self):
changes = {}
cmd = ["status", "-z", "--ignore-submodules=all"]
data = self.git(*cmd)
if data == "":
return changes
rename_data = None
for entry in data.split("\0")[:-1]:
if rename_data is not None:
status, rel_path = entry.split(" ")
if status[0] == "R":
rename_data = (rel_path, status)
else:
changes[rel_path] = (status, None)
else:
rel_path = entry
changes[rel_path] = rename_data
rename_data = None
return changes
def _show_file(self, path):
path = os.path.relpath(os.path.abspath(path), self.root)
return self.git("show", "HEAD:%s" % path)
def __iter__(self):
cmd = ["ls-tree", "-r", "-z", "--name-only", "HEAD"]
local_changes = self._local_changes()
for rel_path in self.git(*cmd).split("\0")[:-1]:
if not os.path.isdir(os.path.join(self.root, rel_path)):
if rel_path in local_changes:
contents = self._show_file(rel_path)
else:
contents = None
yield SourceFile(self.root,
rel_path,
self.url_base,
contents=contents)
class FileSystem(object):
def __init__(self, root, url_base):
self.root = root
self.url_base = url_base
from gitignore import gitignore
self.path_filter = gitignore.PathFilter(self.root)
def __iter__(self):
is_root = True
for dir_path, dir_names, filenames in os.walk(self.root):
rel_root = os.path.relpath(dir_path, self.root)
if is_root:
dir_names[:] = [item for item in dir_names if item not in
["tools", "resources", ".git"]]
is_root = False
for filename in filenames:
rel_path = os.path.join(rel_root, filename)
if self.path_filter(rel_path):
yield SourceFile(self.root,
rel_path,
self.url_base)
| mpl-2.0 |
lthurlow/Network-Grapher | proj/external/numpy-1.7.0/numpy/distutils/fcompiler/absoft.py | 89 | 5525 |
# http://www.absoft.com/literature/osxuserguide.pdf
# http://www.absoft.com/documentation.html
# Notes:
# - when using -g77 then use -DUNDERSCORE_G77 to compile f2py
# generated extension modules (works for f2py v2.45.241_1936 and up)
import os
from numpy.distutils.cpuinfo import cpu
from numpy.distutils.fcompiler import FCompiler, dummy_fortran_file
from numpy.distutils.misc_util import cyg2win32
compilers = ['AbsoftFCompiler']
class AbsoftFCompiler(FCompiler):
compiler_type = 'absoft'
description = 'Absoft Corp Fortran Compiler'
#version_pattern = r'FORTRAN 77 Compiler (?P<version>[^\s*,]*).*?Absoft Corp'
version_pattern = r'(f90:.*?(Absoft Pro FORTRAN Version|FORTRAN 77 Compiler|Absoft Fortran Compiler Version|Copyright Absoft Corporation.*?Version))'+\
r' (?P<version>[^\s*,]*)(.*?Absoft Corp|)'
# on windows: f90 -V -c dummy.f
# f90: Copyright Absoft Corporation 1994-1998 mV2; Cray Research, Inc. 1994-1996 CF90 (2.x.x.x f36t87) Version 2.3 Wed Apr 19, 2006 13:05:16
# samt5735(8)$ f90 -V -c dummy.f
# f90: Copyright Absoft Corporation 1994-2002; Absoft Pro FORTRAN Version 8.0
# Note that fink installs g77 as f77, so need to use f90 for detection.
executables = {
'version_cmd' : None, # set by update_executables
'compiler_f77' : ["f77"],
'compiler_fix' : ["f90"],
'compiler_f90' : ["f90"],
'linker_so' : ["<F90>"],
'archiver' : ["ar", "-cr"],
'ranlib' : ["ranlib"]
}
if os.name=='nt':
library_switch = '/out:' #No space after /out:!
module_dir_switch = None
module_include_switch = '-p'
def update_executables(self):
f = cyg2win32(dummy_fortran_file())
self.executables['version_cmd'] = ['<F90>', '-V', '-c',
f+'.f', '-o', f+'.o']
def get_flags_linker_so(self):
if os.name=='nt':
opt = ['/dll']
# The "-K shared" switches are being left in for pre-9.0 versions
# of Absoft though I don't think versions earlier than 9 can
# actually be used to build shared libraries. In fact, version
# 8 of Absoft doesn't recognize "-K shared" and will fail.
elif self.get_version() >= '9.0':
opt = ['-shared']
else:
opt = ["-K","shared"]
return opt
def library_dir_option(self, dir):
if os.name=='nt':
return ['-link','/PATH:"%s"' % (dir)]
return "-L" + dir
def library_option(self, lib):
if os.name=='nt':
return '%s.lib' % (lib)
return "-l" + lib
def get_library_dirs(self):
opt = FCompiler.get_library_dirs(self)
d = os.environ.get('ABSOFT')
if d:
if self.get_version() >= '10.0':
# use shared libraries, the static libraries were not compiled -fPIC
prefix = 'sh'
else:
prefix = ''
if cpu.is_64bit():
suffix = '64'
else:
suffix = ''
opt.append(os.path.join(d, '%slib%s' % (prefix, suffix)))
return opt
def get_libraries(self):
opt = FCompiler.get_libraries(self)
if self.get_version() >= '11.0':
opt.extend(['af90math', 'afio', 'af77math', 'amisc'])
elif self.get_version() >= '10.0':
opt.extend(['af90math', 'afio', 'af77math', 'U77'])
elif self.get_version() >= '8.0':
opt.extend(['f90math','fio','f77math','U77'])
else:
opt.extend(['fio','f90math','fmath','U77'])
if os.name =='nt':
opt.append('COMDLG32')
return opt
def get_flags(self):
opt = FCompiler.get_flags(self)
if os.name != 'nt':
opt.extend(['-s'])
if self.get_version():
if self.get_version()>='8.2':
opt.append('-fpic')
return opt
def get_flags_f77(self):
opt = FCompiler.get_flags_f77(self)
opt.extend(['-N22','-N90','-N110'])
v = self.get_version()
if os.name == 'nt':
if v and v>='8.0':
opt.extend(['-f','-N15'])
else:
opt.append('-f')
if v:
if v<='4.6':
opt.append('-B108')
else:
# Though -N15 is undocumented, it works with
# Absoft 8.0 on Linux
opt.append('-N15')
return opt
def get_flags_f90(self):
opt = FCompiler.get_flags_f90(self)
opt.extend(["-YCFRL=1","-YCOM_NAMES=LCS","-YCOM_PFX","-YEXT_PFX",
"-YCOM_SFX=_","-YEXT_SFX=_","-YEXT_NAMES=LCS"])
if self.get_version():
if self.get_version()>'4.6':
opt.extend(["-YDEALLOC=ALL"])
return opt
def get_flags_fix(self):
opt = FCompiler.get_flags_fix(self)
opt.extend(["-YCFRL=1","-YCOM_NAMES=LCS","-YCOM_PFX","-YEXT_PFX",
"-YCOM_SFX=_","-YEXT_SFX=_","-YEXT_NAMES=LCS"])
opt.extend(["-f","fixed"])
return opt
def get_flags_opt(self):
opt = ['-O']
return opt
if __name__ == '__main__':
from distutils import log
log.set_verbosity(2)
from numpy.distutils.fcompiler import new_fcompiler
compiler = new_fcompiler(compiler='absoft')
compiler.customize()
print(compiler.get_version())
| mit |
sugartom/tensorflow-alien | tensorflow/examples/learn/text_classification.py | 39 | 5106 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Example of Estimator for DNN-based text classification with DBpedia data."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import sys
import numpy as np
import pandas
from sklearn import metrics
import tensorflow as tf
from tensorflow.contrib.layers.python.layers import encoders
learn = tf.contrib.learn
FLAGS = None
MAX_DOCUMENT_LENGTH = 10
EMBEDDING_SIZE = 50
n_words = 0
def bag_of_words_model(features, target):
"""A bag-of-words model. Note it disregards the word order in the text."""
target = tf.one_hot(target, 15, 1, 0)
features = encoders.bow_encoder(
features, vocab_size=n_words, embed_dim=EMBEDDING_SIZE)
logits = tf.contrib.layers.fully_connected(features, 15, activation_fn=None)
loss = tf.contrib.losses.softmax_cross_entropy(logits, target)
train_op = tf.contrib.layers.optimize_loss(
loss,
tf.contrib.framework.get_global_step(),
optimizer='Adam',
learning_rate=0.01)
return ({
'class': tf.argmax(logits, 1),
'prob': tf.nn.softmax(logits)
}, loss, train_op)
def rnn_model(features, target):
"""RNN model to predict from sequence of words to a class."""
# Convert indexes of words into embeddings.
# This creates embeddings matrix of [n_words, EMBEDDING_SIZE] and then
# maps word indexes of the sequence into [batch_size, sequence_length,
# EMBEDDING_SIZE].
word_vectors = tf.contrib.layers.embed_sequence(
features, vocab_size=n_words, embed_dim=EMBEDDING_SIZE, scope='words')
# Split into list of embedding per word, while removing doc length dim.
# word_list results to be a list of tensors [batch_size, EMBEDDING_SIZE].
word_list = tf.unstack(word_vectors, axis=1)
# Create a Gated Recurrent Unit cell with hidden size of EMBEDDING_SIZE.
cell = tf.contrib.rnn.GRUCell(EMBEDDING_SIZE)
# Create an unrolled Recurrent Neural Networks to length of
# MAX_DOCUMENT_LENGTH and passes word_list as inputs for each unit.
_, encoding = tf.contrib.rnn.static_rnn(cell, word_list, dtype=tf.float32)
# Given encoding of RNN, take encoding of last step (e.g hidden size of the
# neural network of last step) and pass it as features for logistic
# regression over output classes.
target = tf.one_hot(target, 15, 1, 0)
logits = tf.contrib.layers.fully_connected(encoding, 15, activation_fn=None)
loss = tf.contrib.losses.softmax_cross_entropy(logits, target)
# Create a training op.
train_op = tf.contrib.layers.optimize_loss(
loss,
tf.contrib.framework.get_global_step(),
optimizer='Adam',
learning_rate=0.01)
return ({
'class': tf.argmax(logits, 1),
'prob': tf.nn.softmax(logits)
}, loss, train_op)
def main(unused_argv):
global n_words
# Prepare training and testing data
dbpedia = learn.datasets.load_dataset(
'dbpedia', test_with_fake_data=FLAGS.test_with_fake_data)
x_train = pandas.DataFrame(dbpedia.train.data)[1]
y_train = pandas.Series(dbpedia.train.target)
x_test = pandas.DataFrame(dbpedia.test.data)[1]
y_test = pandas.Series(dbpedia.test.target)
# Process vocabulary
vocab_processor = learn.preprocessing.VocabularyProcessor(MAX_DOCUMENT_LENGTH)
x_transform_train = vocab_processor.fit_transform(x_train)
x_transform_test = vocab_processor.transform(x_test)
x_train = np.array(list(x_transform_train))
x_test = np.array(list(x_transform_test))
n_words = len(vocab_processor.vocabulary_)
print('Total words: %d' % n_words)
# Build model
# Switch between rnn_model and bag_of_words_model to test different models.
model_fn = rnn_model
if FLAGS.bow_model:
model_fn = bag_of_words_model
classifier = learn.Estimator(model_fn=model_fn)
# Train and predict
classifier.fit(x_train, y_train, steps=100)
y_predicted = [
p['class'] for p in classifier.predict(
x_test, as_iterable=True)
]
score = metrics.accuracy_score(y_test, y_predicted)
print('Accuracy: {0:f}'.format(score))
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--test_with_fake_data',
default=False,
help='Test the example code with fake data.',
action='store_true')
parser.add_argument(
'--bow_model',
default=False,
help='Run with BOW model instead of RNN.',
action='store_true')
FLAGS, unparsed = parser.parse_known_args()
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
| apache-2.0 |
Belxjander/Kirito | Python-3.5.0-main/Lib/distutils/tests/test_config.py | 90 | 3148 | """Tests for distutils.pypirc.pypirc."""
import sys
import os
import unittest
import tempfile
from distutils.core import PyPIRCCommand
from distutils.core import Distribution
from distutils.log import set_threshold
from distutils.log import WARN
from distutils.tests import support
from test.support import run_unittest
PYPIRC = """\
[distutils]
index-servers =
server1
server2
[server1]
username:me
password:secret
[server2]
username:meagain
password: secret
realm:acme
repository:http://another.pypi/
"""
PYPIRC_OLD = """\
[server-login]
username:tarek
password:secret
"""
WANTED = """\
[distutils]
index-servers =
pypi
[pypi]
username:tarek
password:xxx
"""
class PyPIRCCommandTestCase(support.TempdirManager,
support.LoggingSilencer,
support.EnvironGuard,
unittest.TestCase):
def setUp(self):
"""Patches the environment."""
super(PyPIRCCommandTestCase, self).setUp()
self.tmp_dir = self.mkdtemp()
os.environ['HOME'] = self.tmp_dir
self.rc = os.path.join(self.tmp_dir, '.pypirc')
self.dist = Distribution()
class command(PyPIRCCommand):
def __init__(self, dist):
PyPIRCCommand.__init__(self, dist)
def initialize_options(self):
pass
finalize_options = initialize_options
self._cmd = command
self.old_threshold = set_threshold(WARN)
def tearDown(self):
"""Removes the patch."""
set_threshold(self.old_threshold)
super(PyPIRCCommandTestCase, self).tearDown()
def test_server_registration(self):
# This test makes sure PyPIRCCommand knows how to:
# 1. handle several sections in .pypirc
# 2. handle the old format
# new format
self.write_file(self.rc, PYPIRC)
cmd = self._cmd(self.dist)
config = cmd._read_pypirc()
config = list(sorted(config.items()))
waited = [('password', 'secret'), ('realm', 'pypi'),
('repository', 'https://pypi.python.org/pypi'),
('server', 'server1'), ('username', 'me')]
self.assertEqual(config, waited)
# old format
self.write_file(self.rc, PYPIRC_OLD)
config = cmd._read_pypirc()
config = list(sorted(config.items()))
waited = [('password', 'secret'), ('realm', 'pypi'),
('repository', 'https://pypi.python.org/pypi'),
('server', 'server-login'), ('username', 'tarek')]
self.assertEqual(config, waited)
def test_server_empty_registration(self):
cmd = self._cmd(self.dist)
rc = cmd._get_rc_file()
self.assertFalse(os.path.exists(rc))
cmd._store_pypirc('tarek', 'xxx')
self.assertTrue(os.path.exists(rc))
f = open(rc)
try:
content = f.read()
self.assertEqual(content, WANTED)
finally:
f.close()
def test_suite():
return unittest.makeSuite(PyPIRCCommandTestCase)
if __name__ == "__main__":
run_unittest(test_suite())
| gpl-3.0 |
biicode/bii-server | test/model/social_account_test.py | 2 | 2004 | import unittest
from biicode.server.model.social_account import SocialAccount, SocialAccountToken
from biicode.server.model.epoch.utc_datetime import UtcDatetime
import datetime
class SocialAccountTest(unittest.TestCase):
def setUp(self):
self.utc_datetime = UtcDatetime.deserialize(datetime.datetime.now())
def test_social_token_serialization(self):
social_token = SocialAccountToken("xxzc", "zxcc", self.utc_datetime)
serialized_social_token = social_token.serialize()
self.assertEquals(SocialAccountToken.deserialize(serialized_social_token), social_token)
def test_social_token_no_secret_serialization(self):
social_token = SocialAccountToken("xxzc", "", self.utc_datetime)
serialized_social_token = social_token.serialize()
self.assertEquals(SocialAccountToken.deserialize(serialized_social_token), social_token)
def test_social_account_serialization(self):
tokens = [SocialAccountToken("xxzc", "zxcc", self.utc_datetime),
SocialAccountToken("xxzc", "zxcc", self.utc_datetime)]
social_account = SocialAccount("zcas",
self.utc_datetime,
self.utc_datetime,
tokens,
"zcc")
serialized_social_account = social_account.serialize()
self.assertEquals(SocialAccount.deserialize(serialized_social_account), social_account)
def test_social_account_without_token_serialization(self):
tokens = []
social_account = SocialAccount("zcas",
self.utc_datetime,
self.utc_datetime,
tokens,
"zcc")
serialized_social_account = social_account.serialize()
self.assertEquals(SocialAccount.deserialize(serialized_social_account), social_account)
| mit |
PythonicNinja/django-ddp | dddp/management/commands/dddp.py | 1 | 5318 | """Django DDP WebSocket service."""
from __future__ import print_function, absolute_import
import collections
import inspect
import optparse
import random
import signal
import socket
from django.core.management.base import BaseCommand
from django.db import connection, close_old_connections
from django.utils.module_loading import import_string
import ejson
import gevent
import gevent.monkey
import gevent.queue
import gevent.select
import geventwebsocket
import psycogreen.gevent
from dddp import autodiscover
from dddp.postgres import PostgresGreenlet
from dddp.websocket import DDPWebSocketApplication
def ddpp_sockjs_xhr(environ, start_response):
"""Dummy method that doesn't handle XHR requests."""
start_response(
'404 Not found',
[
('Content-Type', 'text/plain; charset=UTF-8'),
(
'Access-Control-Allow-Origin',
'/'.join(environ['HTTP_REFERER'].split('/')[:3]),
),
('Access-Control-Allow-Credentials', 'true'),
# ('access-control-allow-credentials', 'true'),
('Cache-Control', 'no-store, no-cache, must-revalidate, max-age=0'),
('Connection', 'keep-alive'),
('Vary', 'Origin'),
],
)
yield 'No.'
def ddpp_sockjs_info(environ, start_response):
"""Inform client that WebSocket service is available."""
start_response(
'200 OK',
[
('Content-Type', 'application/json; charset=UTF-8'),
(
'Access-Control-Allow-Origin',
'/'.join(environ['HTTP_REFERER'].split('/')[:3]),
),
('Access-Control-Allow-Credentials', 'true'),
# ('access-control-allow-credentials', 'true'),
('Cache-Control', 'no-store, no-cache, must-revalidate, max-age=0'),
('Connection', 'keep-alive'),
('Vary', 'Origin'),
],
)
yield ejson.dumps(collections.OrderedDict([
('websocket', True),
('origins', [
'*:*',
]),
('cookie_needed', False),
('entropy', random.getrandbits(32)),
]))
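# Illustrative sketch (assumption): a typical /sockjs/info response body
# serialized above looks like
#   {"websocket": true, "origins": ["*:*"], "cookie_needed": false,
#    "entropy": <random 32-bit int>}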
class Command(BaseCommand):
"""Command to run DDP web service."""
args = 'HOST PORT'
help = 'Run DDP service'
requires_system_checks = False
option_list = BaseCommand.option_list + (
optparse.make_option(
'-H', '--host', dest="host", metavar='HOST',
help='TCP listening host (default: localhost)', default='localhost',
),
optparse.make_option(
'-p', '--port', dest="port", metavar='PORT',
help='TCP listening port (default: 8000)', default='8000',
),
)
def handle(self, *args, **options):
"""Spawn greenlets for handling websockets and PostgreSQL calls."""
# shutdown existing connections, mokey patch stdlib for gevent.
close_old_connections()
gevent.monkey.patch_all()
psycogreen.gevent.patch_psycopg()
debug = int(options['verbosity']) > 1
# setup PostgresGreenlet to multiplex DB calls
postgres = PostgresGreenlet(connection, debug=debug)
DDPWebSocketApplication.pgworker = postgres
# use settings.WSGI_APPLICATION or fallback to default Django WSGI app
from django.conf import settings
if hasattr(settings, 'WSGI_APPLICATION'):
wsgi_name = settings.WSGI_APPLICATION
wsgi_app = import_string(wsgi_name)
else:
from django.core.wsgi import get_wsgi_application
wsgi_app = get_wsgi_application()
wsgi_name = str(wsgi_app.__class__)
resource = geventwebsocket.Resource({
r'/websocket': DDPWebSocketApplication,
r'^/sockjs/\d+/\w+/websocket$': DDPWebSocketApplication,
r'^/sockjs/\d+/\w+/xhr$': ddpp_sockjs_xhr,
r'^/sockjs/info$': ddpp_sockjs_info,
r'^/(?!(websocket|sockjs)/)': wsgi_app,
})
# setup WebSocketServer to dispatch web requests
host = options['host']
port = options['port']
if port.isdigit():
port = int(port)
else:
port = socket.getservbyname(port)
webserver = geventwebsocket.WebSocketServer(
(host, port),
resource,
debug=debug,
)
def killall(*args, **kwargs):
"""Kill all green threads."""
postgres.stop()
webserver.stop()
# die gracefully with SIGINT or SIGQUIT
gevent.signal(signal.SIGINT, killall)
gevent.signal(signal.SIGQUIT, killall)
print('=> Discovering DDP endpoints...')
ddp = autodiscover()
ddp.pgworker = postgres
print(
'\n'.join(
' %s' % api_path
for api_path
in sorted(ddp.api_path_map())
),
)
# start greenlets
postgres.start()
print('=> Started PostgresGreenlet.')
web = gevent.spawn(webserver.serve_forever)
print('=> Started DDPWebSocketApplication.')
print('=> Started your app (%s).' % wsgi_name)
print('')
print('=> App running at: http://%s:%d/' % (host, port))
gevent.joinall([postgres, web])
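# Illustrative usage sketch (assumption, not in the original module): the
# service is normally launched through Django's management entry point, e.g.
#   python manage.py dddp --host 0.0.0.0 --port 8000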
| mit |
mrkm4ntr/incubator-airflow | airflow/migrations/versions/03bc53e68815_add_sm_dag_index.py | 8 | 1274 | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""merge_heads_2
Revision ID: 03bc53e68815
Revises: 0a2a5b66e19d, bf00311e1990
Create Date: 2018-11-24 20:21:46.605414
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = '03bc53e68815'
down_revision = ('0a2a5b66e19d', 'bf00311e1990')
branch_labels = None
depends_on = None
def upgrade(): # noqa: D103
op.create_index('sm_dag', 'sla_miss', ['dag_id'], unique=False)
def downgrade(): # noqa: D103
op.drop_index('sm_dag', table_name='sla_miss')
| apache-2.0 |
Bjay1435/capstone | rootfs/usr/lib/python3.4/email/_parseaddr.py | 125 | 17199 | # Copyright (C) 2002-2007 Python Software Foundation
# Contact: [email protected]
"""Email address parsing code.
Lifted directly from rfc822.py. This should eventually be rewritten.
"""
__all__ = [
'mktime_tz',
'parsedate',
'parsedate_tz',
'quote',
]
import time, calendar
SPACE = ' '
EMPTYSTRING = ''
COMMASPACE = ', '
# Parse a date field
_monthnames = ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul',
'aug', 'sep', 'oct', 'nov', 'dec',
'january', 'february', 'march', 'april', 'may', 'june', 'july',
'august', 'september', 'october', 'november', 'december']
_daynames = ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun']
# The timezone table does not include the military time zones defined
# in RFC822, other than Z. According to RFC1123, the description in
# RFC822 gets the signs wrong, so we can't rely on any such time
# zones. RFC1123 recommends that numeric timezone indicators be used
# instead of timezone names.
_timezones = {'UT':0, 'UTC':0, 'GMT':0, 'Z':0,
'AST': -400, 'ADT': -300, # Atlantic (used in Canada)
'EST': -500, 'EDT': -400, # Eastern
'CST': -600, 'CDT': -500, # Central
'MST': -700, 'MDT': -600, # Mountain
'PST': -800, 'PDT': -700 # Pacific
}
def parsedate_tz(data):
"""Convert a date string to a time tuple.
Accounts for military timezones.
"""
res = _parsedate_tz(data)
if not res:
return
if res[9] is None:
res[9] = 0
return tuple(res)
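# Illustrative sketch (not part of the original module):
#   parsedate_tz('Fri, 09 Nov 2001 01:08:47 -0500')
#   -> (2001, 11, 9, 1, 8, 47, 0, 1, -1, -18000)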
def _parsedate_tz(data):
"""Convert date to extended time tuple.
The last (additional) element is the time zone offset in seconds, except if
the timezone was specified as -0000. In that case the last element is
None. This indicates a UTC timestamp that explicitly declaims knowledge of
the source timezone, as opposed to a +0000 timestamp that indicates the
source timezone really was UTC.
"""
if not data:
return
data = data.split()
# The FWS after the comma after the day-of-week is optional, so search and
# adjust for this.
if data[0].endswith(',') or data[0].lower() in _daynames:
# There's a dayname here. Skip it
del data[0]
else:
i = data[0].rfind(',')
if i >= 0:
data[0] = data[0][i+1:]
if len(data) == 3: # RFC 850 date, deprecated
stuff = data[0].split('-')
if len(stuff) == 3:
data = stuff + data[1:]
if len(data) == 4:
s = data[3]
i = s.find('+')
if i == -1:
i = s.find('-')
if i > 0:
data[3:] = [s[:i], s[i:]]
else:
data.append('') # Dummy tz
if len(data) < 5:
return None
data = data[:5]
[dd, mm, yy, tm, tz] = data
mm = mm.lower()
if mm not in _monthnames:
dd, mm = mm, dd.lower()
if mm not in _monthnames:
return None
mm = _monthnames.index(mm) + 1
if mm > 12:
mm -= 12
if dd[-1] == ',':
dd = dd[:-1]
i = yy.find(':')
if i > 0:
yy, tm = tm, yy
if yy[-1] == ',':
yy = yy[:-1]
if not yy[0].isdigit():
yy, tz = tz, yy
if tm[-1] == ',':
tm = tm[:-1]
tm = tm.split(':')
if len(tm) == 2:
[thh, tmm] = tm
tss = '0'
elif len(tm) == 3:
[thh, tmm, tss] = tm
elif len(tm) == 1 and '.' in tm[0]:
# Some non-compliant MUAs use '.' to separate time elements.
tm = tm[0].split('.')
if len(tm) == 2:
[thh, tmm] = tm
tss = 0
elif len(tm) == 3:
[thh, tmm, tss] = tm
else:
return None
try:
yy = int(yy)
dd = int(dd)
thh = int(thh)
tmm = int(tmm)
tss = int(tss)
except ValueError:
return None
# Check for a yy specified in two-digit format, then convert it to the
# appropriate four-digit format, according to the POSIX standard. RFC 822
# calls for a two-digit yy, but RFC 2822 (which obsoletes RFC 822)
# mandates a 4-digit yy. For more information, see the documentation for
# the time module.
if yy < 100:
# The year is between 1969 and 1999 (inclusive).
if yy > 68:
yy += 1900
# The year is between 2000 and 2068 (inclusive).
else:
yy += 2000
tzoffset = None
tz = tz.upper()
if tz in _timezones:
tzoffset = _timezones[tz]
else:
try:
tzoffset = int(tz)
except ValueError:
pass
if tzoffset==0 and tz.startswith('-'):
tzoffset = None
# Convert a timezone offset into seconds ; -0500 -> -18000
if tzoffset:
if tzoffset < 0:
tzsign = -1
tzoffset = -tzoffset
else:
tzsign = 1
tzoffset = tzsign * ( (tzoffset//100)*3600 + (tzoffset % 100)*60)
# Daylight Saving Time flag is set to -1, since DST is unknown.
return [yy, mm, dd, thh, tmm, tss, 0, 1, -1, tzoffset]
def parsedate(data):
"""Convert a time string to a time tuple."""
t = parsedate_tz(data)
if isinstance(t, tuple):
return t[:9]
else:
return t
def mktime_tz(data):
"""Turn a 10-tuple as returned by parsedate_tz() into a POSIX timestamp."""
if data[9] is None:
# No zone info, so localtime is better assumption than GMT
return time.mktime(data[:8] + (-1,))
else:
t = calendar.timegm(data)
return t - data[9]
def quote(str):
"""Prepare string to be used in a quoted string.
Turns backslash and double quote characters into quoted pairs. These
are the only characters that need to be quoted inside a quoted string.
Does not add the surrounding double quotes.
"""
return str.replace('\\', '\\\\').replace('"', '\\"')
class AddrlistClass:
"""Address parser class by Ben Escoto.
To understand what this class does, it helps to have a copy of RFC 2822 in
front of you.
Note: this class interface is deprecated and may be removed in the future.
Use email.utils.AddressList instead.
"""
def __init__(self, field):
"""Initialize a new instance.
`field' is an unparsed address header field, containing
one or more addresses.
"""
self.specials = '()<>@,:;.\"[]'
self.pos = 0
self.LWS = ' \t'
self.CR = '\r\n'
self.FWS = self.LWS + self.CR
self.atomends = self.specials + self.LWS + self.CR
# Note that RFC 2822 now specifies `.' as obs-phrase, meaning that it
# is obsolete syntax. RFC 2822 requires that we recognize obsolete
# syntax, so allow dots in phrases.
self.phraseends = self.atomends.replace('.', '')
self.field = field
self.commentlist = []
def gotonext(self):
"""Skip white space and extract comments."""
wslist = []
while self.pos < len(self.field):
if self.field[self.pos] in self.LWS + '\n\r':
if self.field[self.pos] not in '\n\r':
wslist.append(self.field[self.pos])
self.pos += 1
elif self.field[self.pos] == '(':
self.commentlist.append(self.getcomment())
else:
break
return EMPTYSTRING.join(wslist)
def getaddrlist(self):
"""Parse all addresses.
Returns a list containing all of the addresses.
"""
result = []
while self.pos < len(self.field):
ad = self.getaddress()
if ad:
result += ad
else:
result.append(('', ''))
return result
def getaddress(self):
"""Parse the next address."""
self.commentlist = []
self.gotonext()
oldpos = self.pos
oldcl = self.commentlist
plist = self.getphraselist()
self.gotonext()
returnlist = []
if self.pos >= len(self.field):
# Bad email address technically, no domain.
if plist:
returnlist = [(SPACE.join(self.commentlist), plist[0])]
elif self.field[self.pos] in '.@':
# email address is just an addrspec
# this isn't very efficient since we start over
self.pos = oldpos
self.commentlist = oldcl
addrspec = self.getaddrspec()
returnlist = [(SPACE.join(self.commentlist), addrspec)]
elif self.field[self.pos] == ':':
# address is a group
returnlist = []
fieldlen = len(self.field)
self.pos += 1
while self.pos < len(self.field):
self.gotonext()
if self.pos < fieldlen and self.field[self.pos] == ';':
self.pos += 1
break
returnlist = returnlist + self.getaddress()
elif self.field[self.pos] == '<':
# Address is a phrase then a route addr
routeaddr = self.getrouteaddr()
if self.commentlist:
returnlist = [(SPACE.join(plist) + ' (' +
' '.join(self.commentlist) + ')', routeaddr)]
else:
returnlist = [(SPACE.join(plist), routeaddr)]
else:
if plist:
returnlist = [(SPACE.join(self.commentlist), plist[0])]
elif self.field[self.pos] in self.specials:
self.pos += 1
self.gotonext()
if self.pos < len(self.field) and self.field[self.pos] == ',':
self.pos += 1
return returnlist
def getrouteaddr(self):
"""Parse a route address (Return-path value).
This method just skips all the route stuff and returns the addrspec.
"""
if self.field[self.pos] != '<':
return
expectroute = False
self.pos += 1
self.gotonext()
adlist = ''
while self.pos < len(self.field):
if expectroute:
self.getdomain()
expectroute = False
elif self.field[self.pos] == '>':
self.pos += 1
break
elif self.field[self.pos] == '@':
self.pos += 1
expectroute = True
elif self.field[self.pos] == ':':
self.pos += 1
else:
adlist = self.getaddrspec()
self.pos += 1
break
self.gotonext()
return adlist
def getaddrspec(self):
"""Parse an RFC 2822 addr-spec."""
aslist = []
self.gotonext()
while self.pos < len(self.field):
preserve_ws = True
if self.field[self.pos] == '.':
if aslist and not aslist[-1].strip():
aslist.pop()
aslist.append('.')
self.pos += 1
preserve_ws = False
elif self.field[self.pos] == '"':
aslist.append('"%s"' % quote(self.getquote()))
elif self.field[self.pos] in self.atomends:
if aslist and not aslist[-1].strip():
aslist.pop()
break
else:
aslist.append(self.getatom())
ws = self.gotonext()
if preserve_ws and ws:
aslist.append(ws)
if self.pos >= len(self.field) or self.field[self.pos] != '@':
return EMPTYSTRING.join(aslist)
aslist.append('@')
self.pos += 1
self.gotonext()
return EMPTYSTRING.join(aslist) + self.getdomain()
def getdomain(self):
"""Get the complete domain name from an address."""
sdlist = []
while self.pos < len(self.field):
if self.field[self.pos] in self.LWS:
self.pos += 1
elif self.field[self.pos] == '(':
self.commentlist.append(self.getcomment())
elif self.field[self.pos] == '[':
sdlist.append(self.getdomainliteral())
elif self.field[self.pos] == '.':
self.pos += 1
sdlist.append('.')
elif self.field[self.pos] in self.atomends:
break
else:
sdlist.append(self.getatom())
return EMPTYSTRING.join(sdlist)
def getdelimited(self, beginchar, endchars, allowcomments=True):
"""Parse a header fragment delimited by special characters.
`beginchar' is the start character for the fragment.
If self is not looking at an instance of `beginchar' then
getdelimited returns the empty string.
`endchars' is a sequence of allowable end-delimiting characters.
Parsing stops when one of these is encountered.
If `allowcomments' is non-zero, embedded RFC 2822 comments are allowed
within the parsed fragment.
"""
if self.field[self.pos] != beginchar:
return ''
slist = ['']
quote = False
self.pos += 1
while self.pos < len(self.field):
if quote:
slist.append(self.field[self.pos])
quote = False
elif self.field[self.pos] in endchars:
self.pos += 1
break
elif allowcomments and self.field[self.pos] == '(':
slist.append(self.getcomment())
continue # have already advanced pos from getcomment
elif self.field[self.pos] == '\\':
quote = True
else:
slist.append(self.field[self.pos])
self.pos += 1
return EMPTYSTRING.join(slist)
def getquote(self):
"""Get a quote-delimited fragment from self's field."""
return self.getdelimited('"', '"\r', False)
def getcomment(self):
"""Get a parenthesis-delimited fragment from self's field."""
return self.getdelimited('(', ')\r', True)
def getdomainliteral(self):
"""Parse an RFC 2822 domain-literal."""
return '[%s]' % self.getdelimited('[', ']\r', False)
def getatom(self, atomends=None):
"""Parse an RFC 2822 atom.
Optional atomends specifies a different set of end token delimiters
(the default is to use self.atomends). This is used e.g. in
getphraselist() since phrase endings must not include the `.' (which
is legal in phrases)."""
atomlist = ['']
if atomends is None:
atomends = self.atomends
while self.pos < len(self.field):
if self.field[self.pos] in atomends:
break
else:
atomlist.append(self.field[self.pos])
self.pos += 1
return EMPTYSTRING.join(atomlist)
def getphraselist(self):
"""Parse a sequence of RFC 2822 phrases.
A phrase is a sequence of words, which are in turn either RFC 2822
atoms or quoted-strings. Phrases are canonicalized by squeezing all
runs of continuous whitespace into one space.
"""
plist = []
while self.pos < len(self.field):
if self.field[self.pos] in self.FWS:
self.pos += 1
elif self.field[self.pos] == '"':
plist.append(self.getquote())
elif self.field[self.pos] == '(':
self.commentlist.append(self.getcomment())
elif self.field[self.pos] in self.phraseends:
break
else:
plist.append(self.getatom(self.phraseends))
return plist
class AddressList(AddrlistClass):
"""An AddressList encapsulates a list of parsed RFC 2822 addresses."""
def __init__(self, field):
AddrlistClass.__init__(self, field)
if field:
self.addresslist = self.getaddrlist()
else:
self.addresslist = []
def __len__(self):
return len(self.addresslist)
def __add__(self, other):
# Set union
newaddr = AddressList(None)
newaddr.addresslist = self.addresslist[:]
for x in other.addresslist:
if x not in self.addresslist:
newaddr.addresslist.append(x)
return newaddr
def __iadd__(self, other):
# Set union, in-place
for x in other.addresslist:
if x not in self.addresslist:
self.addresslist.append(x)
return self
def __sub__(self, other):
# Set difference
newaddr = AddressList(None)
for x in self.addresslist:
if x not in other.addresslist:
newaddr.addresslist.append(x)
return newaddr
def __isub__(self, other):
# Set difference, in-place
for x in other.addresslist:
if x in self.addresslist:
self.addresslist.remove(x)
return self
def __getitem__(self, index):
# Make indexing, slices, and 'in' work
return self.addresslist[index]
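# Illustrative usage sketch (not part of the original module); the expected
# result assumes the stdlib rfc822/email._parseaddr semantics this parser
# mirrors.
if __name__ == '__main__':
    al = AddressList('Jane Doe <jane@example.com>, bob@example.org (Bob)')
    # -> [('Jane Doe', 'jane@example.com'), ('Bob', 'bob@example.org')]
    print(al.addresslist)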
| mit |
glatard/nipype | nipype/interfaces/mne/tests/test_auto_WatershedBEM.py | 9 | 1571 | # AUTO-GENERATED by tools/checkspecs.py - DO NOT EDIT
from nipype.testing import assert_equal
from nipype.interfaces.mne.base import WatershedBEM
def test_WatershedBEM_inputs():
input_map = dict(args=dict(argstr='%s',
),
atlas_mode=dict(argstr='--atlas',
),
environ=dict(nohash=True,
usedefault=True,
),
ignore_exception=dict(nohash=True,
usedefault=True,
),
overwrite=dict(argstr='--overwrite',
usedefault=True,
),
subject_id=dict(argstr='--subject %s',
mandatory=True,
),
subjects_dir=dict(mandatory=True,
usedefault=True,
),
terminal_output=dict(nohash=True,
),
volume=dict(argstr='--volume %s',
usedefault=True,
),
)
inputs = WatershedBEM.input_spec()
for key, metadata in input_map.items():
for metakey, value in metadata.items():
yield assert_equal, getattr(inputs.traits()[key], metakey), value
def test_WatershedBEM_outputs():
output_map = dict(brain_surface=dict(loc='bem/watershed',
),
cor_files=dict(altkey='COR',
loc='bem/watershed/ws',
),
fif_file=dict(altkey='fif',
loc='bem',
),
inner_skull_surface=dict(loc='bem/watershed',
),
mesh_files=dict(),
outer_skin_surface=dict(loc='bem/watershed',
),
outer_skull_surface=dict(loc='bem/watershed',
),
)
outputs = WatershedBEM.output_spec()
for key, metadata in output_map.items():
for metakey, value in metadata.items():
yield assert_equal, getattr(outputs.traits()[key], metakey), value
| bsd-3-clause |
stuarteberg/numpy | numpy/lib/tests/test_stride_tricks.py | 40 | 14732 | from __future__ import division, absolute_import, print_function
import numpy as np
from numpy.testing import (
run_module_suite, assert_equal, assert_array_equal,
assert_raises, assert_
)
from numpy.lib.stride_tricks import (
as_strided, broadcast_arrays, _broadcast_shape, broadcast_to
)
def assert_shapes_correct(input_shapes, expected_shape):
# Broadcast a list of arrays with the given input shapes and check the
# common output shape.
inarrays = [np.zeros(s) for s in input_shapes]
outarrays = broadcast_arrays(*inarrays)
outshapes = [a.shape for a in outarrays]
expected = [expected_shape] * len(inarrays)
assert_equal(outshapes, expected)
def assert_incompatible_shapes_raise(input_shapes):
# Broadcast a list of arrays with the given (incompatible) input shapes
# and check that they raise a ValueError.
inarrays = [np.zeros(s) for s in input_shapes]
assert_raises(ValueError, broadcast_arrays, *inarrays)
def assert_same_as_ufunc(shape0, shape1, transposed=False, flipped=False):
# Broadcast two shapes against each other and check that the data layout
# is the same as if a ufunc did the broadcasting.
x0 = np.zeros(shape0, dtype=int)
# Note that multiply.reduce's identity element is 1.0, so when shape1==(),
# this gives the desired n==1.
n = int(np.multiply.reduce(shape1))
x1 = np.arange(n).reshape(shape1)
if transposed:
x0 = x0.T
x1 = x1.T
if flipped:
x0 = x0[::-1]
x1 = x1[::-1]
# Use the add ufunc to do the broadcasting. Since we're adding 0s to x1, the
# result should be exactly the same as the broadcasted view of x1.
y = x0 + x1
b0, b1 = broadcast_arrays(x0, x1)
assert_array_equal(y, b1)
def test_same():
x = np.arange(10)
y = np.arange(10)
bx, by = broadcast_arrays(x, y)
assert_array_equal(x, bx)
assert_array_equal(y, by)
def test_one_off():
x = np.array([[1, 2, 3]])
y = np.array([[1], [2], [3]])
bx, by = broadcast_arrays(x, y)
bx0 = np.array([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
by0 = bx0.T
assert_array_equal(bx0, bx)
assert_array_equal(by0, by)
def test_same_input_shapes():
# Check that the final shape is just the input shape.
data = [
(),
(1,),
(3,),
(0, 1),
(0, 3),
(1, 0),
(3, 0),
(1, 3),
(3, 1),
(3, 3),
]
for shape in data:
input_shapes = [shape]
# Single input.
assert_shapes_correct(input_shapes, shape)
# Double input.
input_shapes2 = [shape, shape]
assert_shapes_correct(input_shapes2, shape)
# Triple input.
input_shapes3 = [shape, shape, shape]
assert_shapes_correct(input_shapes3, shape)
def test_two_compatible_by_ones_input_shapes():
# Check that two different input shapes of the same length, but some have
# ones, broadcast to the correct shape.
data = [
[[(1,), (3,)], (3,)],
[[(1, 3), (3, 3)], (3, 3)],
[[(3, 1), (3, 3)], (3, 3)],
[[(1, 3), (3, 1)], (3, 3)],
[[(1, 1), (3, 3)], (3, 3)],
[[(1, 1), (1, 3)], (1, 3)],
[[(1, 1), (3, 1)], (3, 1)],
[[(1, 0), (0, 0)], (0, 0)],
[[(0, 1), (0, 0)], (0, 0)],
[[(1, 0), (0, 1)], (0, 0)],
[[(1, 1), (0, 0)], (0, 0)],
[[(1, 1), (1, 0)], (1, 0)],
[[(1, 1), (0, 1)], (0, 1)],
]
for input_shapes, expected_shape in data:
assert_shapes_correct(input_shapes, expected_shape)
# Reverse the input shapes since broadcasting should be symmetric.
assert_shapes_correct(input_shapes[::-1], expected_shape)
def test_two_compatible_by_prepending_ones_input_shapes():
# Check that two different input shapes (of different lengths) broadcast
# to the correct shape.
data = [
[[(), (3,)], (3,)],
[[(3,), (3, 3)], (3, 3)],
[[(3,), (3, 1)], (3, 3)],
[[(1,), (3, 3)], (3, 3)],
[[(), (3, 3)], (3, 3)],
[[(1, 1), (3,)], (1, 3)],
[[(1,), (3, 1)], (3, 1)],
[[(1,), (1, 3)], (1, 3)],
[[(), (1, 3)], (1, 3)],
[[(), (3, 1)], (3, 1)],
[[(), (0,)], (0,)],
[[(0,), (0, 0)], (0, 0)],
[[(0,), (0, 1)], (0, 0)],
[[(1,), (0, 0)], (0, 0)],
[[(), (0, 0)], (0, 0)],
[[(1, 1), (0,)], (1, 0)],
[[(1,), (0, 1)], (0, 1)],
[[(1,), (1, 0)], (1, 0)],
[[(), (1, 0)], (1, 0)],
[[(), (0, 1)], (0, 1)],
]
for input_shapes, expected_shape in data:
assert_shapes_correct(input_shapes, expected_shape)
# Reverse the input shapes since broadcasting should be symmetric.
assert_shapes_correct(input_shapes[::-1], expected_shape)
def test_incompatible_shapes_raise_valueerror():
# Check that a ValueError is raised for incompatible shapes.
data = [
[(3,), (4,)],
[(2, 3), (2,)],
[(3,), (3,), (4,)],
[(1, 3, 4), (2, 3, 3)],
]
for input_shapes in data:
assert_incompatible_shapes_raise(input_shapes)
# Reverse the input shapes since broadcasting should be symmetric.
assert_incompatible_shapes_raise(input_shapes[::-1])
def test_same_as_ufunc():
# Check that the data layout is the same as if a ufunc did the operation.
data = [
[[(1,), (3,)], (3,)],
[[(1, 3), (3, 3)], (3, 3)],
[[(3, 1), (3, 3)], (3, 3)],
[[(1, 3), (3, 1)], (3, 3)],
[[(1, 1), (3, 3)], (3, 3)],
[[(1, 1), (1, 3)], (1, 3)],
[[(1, 1), (3, 1)], (3, 1)],
[[(1, 0), (0, 0)], (0, 0)],
[[(0, 1), (0, 0)], (0, 0)],
[[(1, 0), (0, 1)], (0, 0)],
[[(1, 1), (0, 0)], (0, 0)],
[[(1, 1), (1, 0)], (1, 0)],
[[(1, 1), (0, 1)], (0, 1)],
[[(), (3,)], (3,)],
[[(3,), (3, 3)], (3, 3)],
[[(3,), (3, 1)], (3, 3)],
[[(1,), (3, 3)], (3, 3)],
[[(), (3, 3)], (3, 3)],
[[(1, 1), (3,)], (1, 3)],
[[(1,), (3, 1)], (3, 1)],
[[(1,), (1, 3)], (1, 3)],
[[(), (1, 3)], (1, 3)],
[[(), (3, 1)], (3, 1)],
[[(), (0,)], (0,)],
[[(0,), (0, 0)], (0, 0)],
[[(0,), (0, 1)], (0, 0)],
[[(1,), (0, 0)], (0, 0)],
[[(), (0, 0)], (0, 0)],
[[(1, 1), (0,)], (1, 0)],
[[(1,), (0, 1)], (0, 1)],
[[(1,), (1, 0)], (1, 0)],
[[(), (1, 0)], (1, 0)],
[[(), (0, 1)], (0, 1)],
]
for input_shapes, expected_shape in data:
assert_same_as_ufunc(input_shapes[0], input_shapes[1])
# Reverse the input shapes since broadcasting should be symmetric.
assert_same_as_ufunc(input_shapes[1], input_shapes[0])
# Try them transposed, too.
assert_same_as_ufunc(input_shapes[0], input_shapes[1], True)
# ... and flipped for non-rank-0 inputs in order to test negative
# strides.
if () not in input_shapes:
assert_same_as_ufunc(input_shapes[0], input_shapes[1], False, True)
assert_same_as_ufunc(input_shapes[0], input_shapes[1], True, True)
def test_broadcast_to_succeeds():
data = [
[np.array(0), (0,), np.array(0)],
[np.array(0), (1,), np.zeros(1)],
[np.array(0), (3,), np.zeros(3)],
[np.ones(1), (1,), np.ones(1)],
[np.ones(1), (2,), np.ones(2)],
[np.ones(1), (1, 2, 3), np.ones((1, 2, 3))],
[np.arange(3), (3,), np.arange(3)],
[np.arange(3), (1, 3), np.arange(3).reshape(1, -1)],
[np.arange(3), (2, 3), np.array([[0, 1, 2], [0, 1, 2]])],
# test if shape is not a tuple
[np.ones(0), 0, np.ones(0)],
[np.ones(1), 1, np.ones(1)],
[np.ones(1), 2, np.ones(2)],
# these cases with size 0 are strange, but they reproduce the behavior
# of broadcasting with ufuncs (see test_same_as_ufunc above)
[np.ones(1), (0,), np.ones(0)],
[np.ones((1, 2)), (0, 2), np.ones((0, 2))],
[np.ones((2, 1)), (2, 0), np.ones((2, 0))],
]
for input_array, shape, expected in data:
actual = broadcast_to(input_array, shape)
assert_array_equal(expected, actual)
def test_broadcast_to_raises():
data = [
[(0,), ()],
[(1,), ()],
[(3,), ()],
[(3,), (1,)],
[(3,), (2,)],
[(3,), (4,)],
[(1, 2), (2, 1)],
[(1, 1), (1,)],
[(1,), -1],
[(1,), (-1,)],
[(1, 2), (-1, 2)],
]
for orig_shape, target_shape in data:
arr = np.zeros(orig_shape)
assert_raises(ValueError, lambda: broadcast_to(arr, target_shape))
def test_broadcast_shape():
# broadcast_shape is already exercised indirectly by broadcast_arrays
assert_raises(ValueError, _broadcast_shape)
assert_equal(_broadcast_shape([1, 2]), (2,))
assert_equal(_broadcast_shape(np.ones((1, 1))), (1, 1))
assert_equal(_broadcast_shape(np.ones((1, 1)), np.ones((3, 4))), (3, 4))
assert_equal(_broadcast_shape(*([np.ones((1, 2))] * 32)), (1, 2))
assert_equal(_broadcast_shape(*([np.ones((1, 2))] * 100)), (1, 2))
# regression tests for gh-5862
assert_equal(_broadcast_shape(*([np.ones(2)] * 32 + [1])), (2,))
bad_args = [np.ones(2)] * 32 + [np.ones(3)] * 32
assert_raises(ValueError, lambda: _broadcast_shape(*bad_args))
def test_as_strided():
a = np.array([None])
a_view = as_strided(a)
expected = np.array([None])
assert_array_equal(a_view, expected)
a = np.array([1, 2, 3, 4])
a_view = as_strided(a, shape=(2,), strides=(2 * a.itemsize,))
expected = np.array([1, 3])
assert_array_equal(a_view, expected)
a = np.array([1, 2, 3, 4])
a_view = as_strided(a, shape=(3, 4), strides=(0, 1 * a.itemsize))
expected = np.array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
assert_array_equal(a_view, expected)
# Regression test for gh-5081
dt = np.dtype([('num', 'i4'), ('obj', 'O')])
a = np.empty((4,), dtype=dt)
a['num'] = np.arange(1, 5)
a_view = as_strided(a, shape=(3, 4), strides=(0, a.itemsize))
expected_num = [[1, 2, 3, 4]] * 3
expected_obj = [[None]*4]*3
assert_equal(a_view.dtype, dt)
assert_array_equal(expected_num, a_view['num'])
assert_array_equal(expected_obj, a_view['obj'])
# Make sure that void types without fields are kept unchanged
a = np.empty((4,), dtype='V4')
a_view = as_strided(a, shape=(3, 4), strides=(0, a.itemsize))
assert_equal(a.dtype, a_view.dtype)
# Make sure that the only type that could fail is properly handled
dt = np.dtype({'names': [''], 'formats': ['V4']})
a = np.empty((4,), dtype=dt)
a_view = as_strided(a, shape=(3, 4), strides=(0, a.itemsize))
assert_equal(a.dtype, a_view.dtype)
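# A minimal sketch (not part of the original test suite) of the classic
# sliding-window idiom that as_strided enables; the shape/strides arithmetic
# assumes a 1-D contiguous input array.
def _sliding_window_example():
    a = np.arange(6)
    window = 3
    # each row starts one element further into the same buffer -- no copy
    view = as_strided(a, shape=(a.size - window + 1, window),
                      strides=(a.itemsize, a.itemsize))
    # view == [[0 1 2], [1 2 3], [2 3 4], [3 4 5]]
    return view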
def test_as_strided_writeable():
arr = np.ones(10)
view = as_strided(arr, writeable=False)
assert_(not view.flags.writeable)
# Check that writeable also is fine:
view = as_strided(arr, writeable=True)
assert_(view.flags.writeable)
view[...] = 3
assert_array_equal(arr, np.full_like(arr, 3))
# Test that things do not break down for readonly:
arr.flags.writeable = False
view = as_strided(arr, writeable=False)
view = as_strided(arr, writeable=True)
assert_(not view.flags.writeable)
class VerySimpleSubClass(np.ndarray):
def __new__(cls, *args, **kwargs):
kwargs['subok'] = True
return np.array(*args, **kwargs).view(cls)
class SimpleSubClass(VerySimpleSubClass):
def __new__(cls, *args, **kwargs):
kwargs['subok'] = True
self = np.array(*args, **kwargs).view(cls)
self.info = 'simple'
return self
def __array_finalize__(self, obj):
self.info = getattr(obj, 'info', '') + ' finalized'
def test_subclasses():
# test that subclass is preserved only if subok=True
a = VerySimpleSubClass([1, 2, 3, 4])
assert_(type(a) is VerySimpleSubClass)
a_view = as_strided(a, shape=(2,), strides=(2 * a.itemsize,))
assert_(type(a_view) is np.ndarray)
a_view = as_strided(a, shape=(2,), strides=(2 * a.itemsize,), subok=True)
assert_(type(a_view) is VerySimpleSubClass)
# test that if a subclass has __array_finalize__, it is used
a = SimpleSubClass([1, 2, 3, 4])
a_view = as_strided(a, shape=(2,), strides=(2 * a.itemsize,), subok=True)
assert_(type(a_view) is SimpleSubClass)
assert_(a_view.info == 'simple finalized')
# similar tests for broadcast_arrays
b = np.arange(len(a)).reshape(-1, 1)
a_view, b_view = broadcast_arrays(a, b)
assert_(type(a_view) is np.ndarray)
assert_(type(b_view) is np.ndarray)
assert_(a_view.shape == b_view.shape)
a_view, b_view = broadcast_arrays(a, b, subok=True)
assert_(type(a_view) is SimpleSubClass)
assert_(a_view.info == 'simple finalized')
assert_(type(b_view) is np.ndarray)
assert_(a_view.shape == b_view.shape)
# and for broadcast_to
shape = (2, 4)
a_view = broadcast_to(a, shape)
assert_(type(a_view) is np.ndarray)
assert_(a_view.shape == shape)
a_view = broadcast_to(a, shape, subok=True)
assert_(type(a_view) is SimpleSubClass)
assert_(a_view.info == 'simple finalized')
assert_(a_view.shape == shape)
def test_writeable():
# broadcast_to should return a readonly array
original = np.array([1, 2, 3])
result = broadcast_to(original, (2, 3))
assert_equal(result.flags.writeable, False)
assert_raises(ValueError, result.__setitem__, slice(None), 0)
# but the result of broadcast_arrays needs to be writeable (for now), to
# preserve backwards compatibility
for results in [broadcast_arrays(original),
broadcast_arrays(0, original)]:
for result in results:
assert_equal(result.flags.writeable, True)
# keep readonly input readonly
original.flags.writeable = False
_, result = broadcast_arrays(0, original)
assert_equal(result.flags.writeable, False)
# regression test for GH6491
shape = (2,)
strides = [0]
tricky_array = as_strided(np.array(0), shape, strides)
other = np.zeros((1,))
first, second = broadcast_arrays(tricky_array, other)
assert_(first.shape == second.shape)
def test_reference_types():
input_array = np.array('a', dtype=object)
expected = np.array(['a'] * 3, dtype=object)
actual = broadcast_to(input_array, (3,))
assert_array_equal(expected, actual)
actual, _ = broadcast_arrays(input_array, np.ones(3))
assert_array_equal(expected, actual)
if __name__ == "__main__":
run_module_suite()
| bsd-3-clause |
oseledets/pybtex | pybtex/database/input/bibtexml.py | 1 | 2801 | # Copyright (c) 2006, 2007, 2008, 2009, 2010, 2011, 2012 Andrey Golovizin
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from xml.etree import cElementTree as ET
from pybtex.database import Entry, Person
from pybtex.database.input import BaseParser
bibtexns = '{http://bibtexml.sf.net/}'
def remove_ns(s):
if s.startswith(bibtexns):
return s[len(bibtexns):]
class Parser(BaseParser):
default_suffix = '.xml'
def parse_stream(self, stream):
t = ET.parse(stream)
entries = t.findall(bibtexns + 'entry')
self.data.add_entries(self.process_entry(entry) for entry in entries)
return self.data
def process_entry(self, entry):
def process_person(person_entry, role):
persons = person_entry.findall(bibtexns + 'person')
if persons:
for person in persons:
process_person(person, role)
else:
text = person_entry.text.strip()
if text:
e.add_person(Person(text), role)
else:
names = {}
for name in person_entry.getchildren():
names[remove_ns(name.tag)] = name.text
e.add_person(Person(**names), role)
id_ = entry.get('id')
item = entry.getchildren()[0]
type = remove_ns(item.tag)
e = Entry(type)
for field in item.getchildren():
field_name = remove_ns(field.tag)
if field_name in Person.valid_roles:
process_person(field, field_name)
else:
field_text = field.text if field.text is not None else ''
e.fields[field_name] = field_text
return id_, e
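# Illustrative usage sketch: the BibTeXML payload is invented, and this
# assumes (as in pybtex) that BaseParser() is constructible with defaults and
# exposes a BibliographyData as self.data.
if __name__ == '__main__':
    from StringIO import StringIO
    xml = ('<file xmlns="http://bibtexml.sf.net/">'
           '<entry id="key1"><article>'
           '<title>An Example</title><year>2012</year>'
           '</article></entry></file>')
    print(Parser().parse_stream(StringIO(xml)).entries)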
| mit |
zzzeek/test | mako/ext/beaker_cache.py | 2 | 1947 | """Provide a :class:`.CacheImpl` for the Beaker caching system."""
from mako import exceptions
from mako.cache import CacheImpl
_beaker_cache = None
class BeakerCacheImpl(CacheImpl):
"""A :class:`.CacheImpl` provided for the Beaker caching system.
This plugin is used by default, based on the default
value of ``'beaker'`` for the ``cache_impl`` parameter of the
:class:`.Template` or :class:`.TemplateLookup` classes.
"""
def __init__(self, cache):
global _beaker_cache
if _beaker_cache is None:
try:
from beaker import cache as beaker_cache
except ImportError:
raise exceptions.RuntimeException(
"the Beaker package is required to use cache "
"functionality.")
_beaker_cache = beaker_cache.CacheManager()
super(BeakerCacheImpl, self).__init__(cache)
def _get_cache(self, **kw):
expiretime = kw.pop('timeout', None)
if 'dir' in kw:
kw['data_dir'] = kw.pop('dir')
elif self.cache.template.module_directory:
kw['data_dir'] = self.cache.template.module_directory
if kw.get('type') == 'memcached':
kw['type'] = 'ext:memcached'
return _beaker_cache.get_cache(self.cache.id, **kw), \
{'expiretime':expiretime, 'starttime':self.cache.starttime}
def get_and_replace(self, key, creation_function, **kw):
cache, kw = self._get_cache(**kw)
return cache.get(key, createfunc=creation_function, **kw)
def put(self, key, value, **kw):
cache, kw = self._get_cache(**kw)
cache.put(key, value, **kw)
def get(self, key, **kw):
cache, kw = self._get_cache(**kw)
return cache.get(key, **kw)
def invalidate(self, key, **kw):
cache, kw = self._get_cache(**kw)
cache.remove_value(key, **kw)
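# Usage sketch (illustrative; requires Beaker to be installed): the
# cache_args keys mirror the kwargs consumed by _get_cache above
# ('timeout', 'dir', 'type'), and 'beaker' is already the default
# cache_impl, so naming it here is optional.
if __name__ == '__main__':
    from mako.template import Template
    t = Template(
        "<%page cached='True' cache_timeout='60'/>\nhello",
        cache_impl='beaker',
        cache_args={'type': 'memory'},
    )
    print(t.render())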
| mit |
espadrine/opera | chromium/src/third_party/libvpx/source/libvpx/third_party/googletest/src/xcode/Scripts/versiongenerate.py | 3088 | 4536 | #!/usr/bin/env python
#
# Copyright 2008, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""A script to prepare version informtion for use the gtest Info.plist file.
This script extracts the version information from the configure.ac file and
uses it to generate a header file containing the same information. The
#defines in this header file will be included during the generation of
the Info.plist of the framework, giving the correct value to the version
shown in the Finder.
This script makes the following assumptions (these are faults of the script,
not problems with the Autoconf):
1. The AC_INIT macro will be contained within the first 1024 characters
of configure.ac
2. The version string will be 3 integers separated by periods and will be
surrounded by square brackets, "[" and "]" (e.g. [1.0.1]). The first
segment represents the major version, the second represents the minor
version and the third represents the fix version.
3. No ")" character exists between the opening "(" and closing ")" of
AC_INIT, including in comments and character strings.
"""
import sys
import re
# Read the command line argument (the output directory for Version.h)
if (len(sys.argv) < 3):
print "Usage: versiongenerate.py input_dir output_dir"
sys.exit(1)
else:
input_dir = sys.argv[1]
output_dir = sys.argv[2]
# Read the first 1024 characters of the configure.ac file
config_file = open("%s/configure.ac" % input_dir, 'r')
buffer_size = 1024
opening_string = config_file.read(buffer_size)
config_file.close()
# Extract the version string from the AC_INIT macro
# The following init_expression means:
# Extract three integers separated by periods and surrounded by square
# brackets(e.g. "[1.0.1]") between "AC_INIT(" and ")". Do not be greedy
# (*? is the non-greedy flag) since that would pull in everything between
# the first "(" and the last ")" in the file.
version_expression = re.compile(r"AC_INIT\(.*?\[(\d+)\.(\d+)\.(\d+)\].*?\)",
re.DOTALL)
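# For example, an AC_INIT line such as
#   AC_INIT([Google C++ Testing Framework], [1.6.0], [bug-report@example])
# yields the captured groups ("1", "6", "0") below (the bracketed address
# here is illustrative, not the real configure.ac contents).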
version_values = version_expression.search(opening_string)
major_version = version_values.group(1)
minor_version = version_values.group(2)
fix_version = version_values.group(3)
# Write the version information to a header file to be included in the
# Info.plist file.
file_data = """//
// DO NOT MODIFY THIS FILE (but you can delete it)
//
// This file is autogenerated by the versiongenerate.py script. This script
// is executed in a "Run Script" build phase when creating gtest.framework. This
// header file is not used during compilation of C-source. Rather, it simply
// defines some version strings for substitution in the Info.plist. Because of
// this, we are not restricted to C-syntax nor are we using include guards.
//
#define GTEST_VERSIONINFO_SHORT %s.%s
#define GTEST_VERSIONINFO_LONG %s.%s.%s
""" % (major_version, minor_version, major_version, minor_version, fix_version)
version_file = open("%s/Version.h" % output_dir, 'w')
version_file.write(file_data)
version_file.close()
| bsd-3-clause |
xombiemp/CouchPotatoServer | libs/requests/sessions.py | 43 | 24273 | # -*- coding: utf-8 -*-
"""
requests.session
~~~~~~~~~~~~~~~~
This module provides a Session object to manage and persist settings across
requests (cookies, auth, proxies).
"""
import os
from collections import Mapping
from datetime import datetime
from .auth import _basic_auth_str
from .compat import cookielib, OrderedDict, urljoin, urlparse
from .cookies import (
cookiejar_from_dict, extract_cookies_to_jar, RequestsCookieJar, merge_cookies)
from .models import Request, PreparedRequest, DEFAULT_REDIRECT_LIMIT
from .hooks import default_hooks, dispatch_hook
from .utils import to_key_val_list, default_headers, to_native_string
from .exceptions import (
TooManyRedirects, InvalidSchema, ChunkedEncodingError, ContentDecodingError)
from .packages.urllib3._collections import RecentlyUsedContainer
from .structures import CaseInsensitiveDict
from .adapters import HTTPAdapter
from .utils import (
requote_uri, get_environ_proxies, get_netrc_auth, should_bypass_proxies,
get_auth_from_url
)
from .status_codes import codes
# formerly defined here, reexposed here for backward compatibility
from .models import REDIRECT_STATI
REDIRECT_CACHE_SIZE = 1000
def merge_setting(request_setting, session_setting, dict_class=OrderedDict):
"""
Determines appropriate setting for a given request, taking into account the
explicit setting on that request, and the setting in the session. If a
setting is a dictionary, they will be merged together using `dict_class`
"""
if session_setting is None:
return request_setting
if request_setting is None:
return session_setting
# Bypass if not a dictionary (e.g. verify)
if not (
isinstance(session_setting, Mapping) and
isinstance(request_setting, Mapping)
):
return request_setting
merged_setting = dict_class(to_key_val_list(session_setting))
merged_setting.update(to_key_val_list(request_setting))
# Remove keys that are set to None.
for (k, v) in merged_setting.items():
if v is None:
del merged_setting[k]
return merged_setting
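# For example:
#   merge_setting({'Accept': 'application/json', 'X-Debug': None},
#                 {'Accept': '*/*', 'User-Agent': 'ua/1.0'})
# returns {'User-Agent': 'ua/1.0', 'Accept': 'application/json'}: the
# request-level value wins, and keys explicitly set to None are removed.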
def merge_hooks(request_hooks, session_hooks, dict_class=OrderedDict):
"""
Properly merges both requests and session hooks.
This is necessary because when request_hooks == {'response': []}, the
merge breaks Session hooks entirely.
"""
if session_hooks is None or session_hooks.get('response') == []:
return request_hooks
if request_hooks is None or request_hooks.get('response') == []:
return session_hooks
return merge_setting(request_hooks, session_hooks, dict_class)
class SessionRedirectMixin(object):
def resolve_redirects(self, resp, req, stream=False, timeout=None,
verify=True, cert=None, proxies=None, **adapter_kwargs):
"""Receives a Response. Returns a generator of Responses."""
i = 0
hist = [] # keep track of history
while resp.is_redirect:
prepared_request = req.copy()
if i > 0:
# Update history and keep track of redirects.
hist.append(resp)
new_hist = list(hist)
resp.history = new_hist
try:
resp.content # Consume socket so it can be released
except (ChunkedEncodingError, ContentDecodingError, RuntimeError):
resp.raw.read(decode_content=False)
if i >= self.max_redirects:
raise TooManyRedirects('Exceeded %s redirects.' % self.max_redirects)
# Release the connection back into the pool.
resp.close()
url = resp.headers['location']
method = req.method
# Handle redirection without scheme (see: RFC 1808 Section 4)
if url.startswith('//'):
parsed_rurl = urlparse(resp.url)
url = '%s:%s' % (parsed_rurl.scheme, url)
# The scheme should be lower case...
parsed = urlparse(url)
url = parsed.geturl()
# Facilitate relative 'location' headers, as allowed by RFC 7231.
# (e.g. '/path/to/resource' instead of 'http://domain.tld/path/to/resource')
# Compliant with RFC3986, we percent encode the url.
if not parsed.netloc:
url = urljoin(resp.url, requote_uri(url))
else:
url = requote_uri(url)
prepared_request.url = to_native_string(url)
# Cache the url, unless it redirects to itself.
if resp.is_permanent_redirect and req.url != prepared_request.url:
self.redirect_cache[req.url] = prepared_request.url
# http://tools.ietf.org/html/rfc7231#section-6.4.4
if (resp.status_code == codes.see_other and
method != 'HEAD'):
method = 'GET'
# Do what the browsers do, despite standards...
# First, turn 302s into GETs.
if resp.status_code == codes.found and method != 'HEAD':
method = 'GET'
# Second, if a POST is responded to with a 301, turn it into a GET.
# This bizarre behaviour is explained in Issue 1704.
if resp.status_code == codes.moved and method == 'POST':
method = 'GET'
prepared_request.method = method
# https://github.com/kennethreitz/requests/issues/1084
if resp.status_code not in (codes.temporary_redirect, codes.permanent_redirect):
if 'Content-Length' in prepared_request.headers:
del prepared_request.headers['Content-Length']
prepared_request.body = None
headers = prepared_request.headers
try:
del headers['Cookie']
except KeyError:
pass
# Extract any cookies sent on the response to the cookiejar
# in the new request. Because we've mutated our copied prepared
# request, use the old one that we haven't yet touched.
extract_cookies_to_jar(prepared_request._cookies, req, resp.raw)
prepared_request._cookies.update(self.cookies)
prepared_request.prepare_cookies(prepared_request._cookies)
# Rebuild auth and proxy information.
proxies = self.rebuild_proxies(prepared_request, proxies)
self.rebuild_auth(prepared_request, resp)
# Override the original request.
req = prepared_request
resp = self.send(
req,
stream=stream,
timeout=timeout,
verify=verify,
cert=cert,
proxies=proxies,
allow_redirects=False,
**adapter_kwargs
)
extract_cookies_to_jar(self.cookies, prepared_request, resp.raw)
i += 1
yield resp
def rebuild_auth(self, prepared_request, response):
"""
When being redirected we may want to strip authentication from the
request to avoid leaking credentials. This method intelligently removes
and reapplies authentication where possible to avoid credential loss.
"""
headers = prepared_request.headers
url = prepared_request.url
if 'Authorization' in headers:
# If we get redirected to a new host, we should strip out any
# authentication headers.
original_parsed = urlparse(response.request.url)
redirect_parsed = urlparse(url)
if (original_parsed.hostname != redirect_parsed.hostname):
del headers['Authorization']
# .netrc might have more auth for us on our new host.
new_auth = get_netrc_auth(url) if self.trust_env else None
if new_auth is not None:
prepared_request.prepare_auth(new_auth)
return
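# Example of the rule above: a redirect from https://api.example.com to
# https://cdn.example.com drops the Authorization header because the
# hostname changed, preventing credentials from leaking to a third host.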
def rebuild_proxies(self, prepared_request, proxies):
"""
This method re-evaluates the proxy configuration by considering the
environment variables. If we are redirected to a URL covered by
NO_PROXY, we strip the proxy configuration. Otherwise, we set missing
proxy keys for this URL (in case they were stripped by a previous
redirect).
This method also replaces the Proxy-Authorization header where
necessary.
"""
headers = prepared_request.headers
url = prepared_request.url
scheme = urlparse(url).scheme
new_proxies = proxies.copy() if proxies is not None else {}
if self.trust_env and not should_bypass_proxies(url):
environ_proxies = get_environ_proxies(url)
proxy = environ_proxies.get(scheme)
if proxy:
new_proxies.setdefault(scheme, environ_proxies[scheme])
if 'Proxy-Authorization' in headers:
del headers['Proxy-Authorization']
try:
username, password = get_auth_from_url(new_proxies[scheme])
except KeyError:
username, password = None, None
if username and password:
headers['Proxy-Authorization'] = _basic_auth_str(username, password)
return new_proxies
class Session(SessionRedirectMixin):
"""A Requests session.
Provides cookie persistence, connection-pooling, and configuration.
Basic Usage::
>>> import requests
>>> s = requests.Session()
>>> s.get('http://httpbin.org/get')
<Response [200]>
Or as a context manager::
>>> with requests.Session() as s:
>>> s.get('http://httpbin.org/get')
<Response [200]>
"""
__attrs__ = [
'headers', 'cookies', 'auth', 'proxies', 'hooks', 'params', 'verify',
'cert', 'prefetch', 'adapters', 'stream', 'trust_env',
'max_redirects',
]
def __init__(self):
#: A case-insensitive dictionary of headers to be sent on each
#: :class:`Request <Request>` sent from this
#: :class:`Session <Session>`.
self.headers = default_headers()
#: Default Authentication tuple or object to attach to
#: :class:`Request <Request>`.
self.auth = None
#: Dictionary mapping protocol to the URL of the proxy (e.g.
#: {'http': 'foo.bar:3128'}) to be used on each
#: :class:`Request <Request>`.
self.proxies = {}
#: Event-handling hooks.
self.hooks = default_hooks()
#: Dictionary of querystring data to attach to each
#: :class:`Request <Request>`. The dictionary values may be lists for
#: representing multivalued query parameters.
self.params = {}
#: Stream response content default.
self.stream = False
#: SSL Verification default.
self.verify = True
#: SSL certificate default.
self.cert = None
#: Maximum number of redirects allowed. If the request exceeds this
#: limit, a :class:`TooManyRedirects` exception is raised.
self.max_redirects = DEFAULT_REDIRECT_LIMIT
#: Should we trust the environment?
self.trust_env = True
#: A CookieJar containing all currently outstanding cookies set on this
#: session. By default it is a
#: :class:`RequestsCookieJar <requests.cookies.RequestsCookieJar>`, but
#: may be any other ``cookielib.CookieJar`` compatible object.
self.cookies = cookiejar_from_dict({})
# Default connection adapters.
self.adapters = OrderedDict()
self.mount('https://', HTTPAdapter())
self.mount('http://', HTTPAdapter())
# Only store 1000 redirects to prevent using infinite memory
self.redirect_cache = RecentlyUsedContainer(REDIRECT_CACHE_SIZE)
def __enter__(self):
return self
def __exit__(self, *args):
self.close()
def prepare_request(self, request):
"""Constructs a :class:`PreparedRequest <PreparedRequest>` for
transmission and returns it. The :class:`PreparedRequest` has settings
merged from the :class:`Request <Request>` instance and those of the
:class:`Session`.
:param request: :class:`Request` instance to prepare with this
session's settings.
"""
cookies = request.cookies or {}
# Bootstrap CookieJar.
if not isinstance(cookies, cookielib.CookieJar):
cookies = cookiejar_from_dict(cookies)
# Merge with session cookies
merged_cookies = merge_cookies(
merge_cookies(RequestsCookieJar(), self.cookies), cookies)
# Set environment's basic authentication if not explicitly set.
auth = request.auth
if self.trust_env and not auth and not self.auth:
auth = get_netrc_auth(request.url)
p = PreparedRequest()
p.prepare(
method=request.method.upper(),
url=request.url,
files=request.files,
data=request.data,
json=request.json,
headers=merge_setting(request.headers, self.headers, dict_class=CaseInsensitiveDict),
params=merge_setting(request.params, self.params),
auth=merge_setting(auth, self.auth),
cookies=merged_cookies,
hooks=merge_hooks(request.hooks, self.hooks),
)
return p
def request(self, method, url,
params=None,
data=None,
headers=None,
cookies=None,
files=None,
auth=None,
timeout=None,
allow_redirects=True,
proxies=None,
hooks=None,
stream=None,
verify=None,
cert=None,
json=None):
"""Constructs a :class:`Request <Request>`, prepares it and sends it.
Returns :class:`Response <Response>` object.
:param method: method for the new :class:`Request` object.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary or bytes to be sent in the query
string for the :class:`Request`.
:param data: (optional) Dictionary or bytes to send in the body of the
:class:`Request`.
:param json: (optional) json to send in the body of the
:class:`Request`.
:param headers: (optional) Dictionary of HTTP Headers to send with the
:class:`Request`.
:param cookies: (optional) Dict or CookieJar object to send with the
:class:`Request`.
:param files: (optional) Dictionary of ``'filename': file-like-objects``
for multipart encoding upload.
:param auth: (optional) Auth tuple or callable to enable
Basic/Digest/Custom HTTP Auth.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param allow_redirects: (optional) Set to True by default.
:type allow_redirects: bool
:param proxies: (optional) Dictionary mapping protocol to the URL of
the proxy.
:param stream: (optional) whether to immediately download the response
content. Defaults to ``False``.
:param verify: (optional) if ``True``, the SSL cert will be verified.
A CA_BUNDLE path can also be provided.
:param cert: (optional) if String, path to ssl client cert file (.pem).
If Tuple, ('cert', 'key') pair.
"""
method = to_native_string(method)
# Create the Request.
req = Request(
method = method.upper(),
url = url,
headers = headers,
files = files,
data = data or {},
json = json,
params = params or {},
auth = auth,
cookies = cookies,
hooks = hooks,
)
prep = self.prepare_request(req)
proxies = proxies or {}
settings = self.merge_environment_settings(
prep.url, proxies, stream, verify, cert
)
# Send the request.
send_kwargs = {
'timeout': timeout,
'allow_redirects': allow_redirects,
}
send_kwargs.update(settings)
resp = self.send(prep, **send_kwargs)
return resp
def get(self, url, **kwargs):
"""Sends a GET request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
kwargs.setdefault('allow_redirects', True)
return self.request('GET', url, **kwargs)
def options(self, url, **kwargs):
"""Sends a OPTIONS request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
kwargs.setdefault('allow_redirects', True)
return self.request('OPTIONS', url, **kwargs)
def head(self, url, **kwargs):
"""Sends a HEAD request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
kwargs.setdefault('allow_redirects', False)
return self.request('HEAD', url, **kwargs)
def post(self, url, data=None, json=None, **kwargs):
"""Sends a POST request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
:param json: (optional) json to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
return self.request('POST', url, data=data, json=json, **kwargs)
def put(self, url, data=None, **kwargs):
"""Sends a PUT request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
return self.request('PUT', url, data=data, **kwargs)
def patch(self, url, data=None, **kwargs):
"""Sends a PATCH request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
return self.request('PATCH', url, data=data, **kwargs)
def delete(self, url, **kwargs):
"""Sends a DELETE request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
return self.request('DELETE', url, **kwargs)
def send(self, request, **kwargs):
"""Send a given PreparedRequest."""
# Set defaults that the hooks can utilize to ensure they always have
# the correct parameters to reproduce the previous request.
kwargs.setdefault('stream', self.stream)
kwargs.setdefault('verify', self.verify)
kwargs.setdefault('cert', self.cert)
kwargs.setdefault('proxies', self.proxies)
# It's possible that users might accidentally send a Request object.
# Guard against that specific failure case.
if not isinstance(request, PreparedRequest):
raise ValueError('You can only send PreparedRequests.')
checked_urls = set()
while request.url in self.redirect_cache:
checked_urls.add(request.url)
new_url = self.redirect_cache.get(request.url)
if new_url in checked_urls:
break
request.url = new_url
# Set up variables needed for resolve_redirects and dispatching of hooks
allow_redirects = kwargs.pop('allow_redirects', True)
stream = kwargs.get('stream')
hooks = request.hooks
# Get the appropriate adapter to use
adapter = self.get_adapter(url=request.url)
# Start time (approximately) of the request
start = datetime.utcnow()
# Send the request
r = adapter.send(request, **kwargs)
# Total elapsed time of the request (approximately)
r.elapsed = datetime.utcnow() - start
# Response manipulation hooks
r = dispatch_hook('response', hooks, r, **kwargs)
# Persist cookies
if r.history:
# If the hooks create history then we want those cookies too
for resp in r.history:
extract_cookies_to_jar(self.cookies, resp.request, resp.raw)
extract_cookies_to_jar(self.cookies, request, r.raw)
# Redirect resolving generator.
gen = self.resolve_redirects(r, request, **kwargs)
# Resolve redirects if allowed.
history = [resp for resp in gen] if allow_redirects else []
# Shuffle things around if there's history.
if history:
# Insert the first (original) request at the start
history.insert(0, r)
# Get the last request made
r = history.pop()
r.history = history
if not stream:
r.content
return r
def merge_environment_settings(self, url, proxies, stream, verify, cert):
"""Check the environment and merge it with some settings."""
# Gather clues from the surrounding environment.
if self.trust_env:
# Set environment's proxies.
env_proxies = get_environ_proxies(url) or {}
for (k, v) in env_proxies.items():
proxies.setdefault(k, v)
# Look for requests environment configuration and be compatible
# with cURL.
if verify is True or verify is None:
verify = (os.environ.get('REQUESTS_CA_BUNDLE') or
os.environ.get('CURL_CA_BUNDLE'))
# Merge all the kwargs.
proxies = merge_setting(proxies, self.proxies)
stream = merge_setting(stream, self.stream)
verify = merge_setting(verify, self.verify)
cert = merge_setting(cert, self.cert)
return {'verify': verify, 'proxies': proxies, 'stream': stream,
'cert': cert}
def get_adapter(self, url):
"""Returns the appropriate connnection adapter for the given URL."""
for (prefix, adapter) in self.adapters.items():
if url.lower().startswith(prefix):
return adapter
# Nothing matches :-/
raise InvalidSchema("No connection adapters were found for '%s'" % url)
def close(self):
"""Closes all adapters and as such the session"""
for v in self.adapters.values():
v.close()
def mount(self, prefix, adapter):
"""Registers a connection adapter to a prefix.
Adapters are sorted in descending order by key length."""
self.adapters[prefix] = adapter
keys_to_move = [k for k in self.adapters if len(k) < len(prefix)]
for key in keys_to_move:
self.adapters[key] = self.adapters.pop(key)
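    # For example, s.mount('https://api.example.invalid/', HTTPAdapter())
    # takes precedence over the default 'https://' adapter for matching URLs,
    # because get_adapter() above returns the first (longest) matching prefix.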
def __getstate__(self):
state = dict((attr, getattr(self, attr, None)) for attr in self.__attrs__)
state['redirect_cache'] = dict(self.redirect_cache)
return state
def __setstate__(self, state):
redirect_cache = state.pop('redirect_cache', {})
for attr, value in state.items():
setattr(self, attr, value)
self.redirect_cache = RecentlyUsedContainer(REDIRECT_CACHE_SIZE)
for redirect, to in redirect_cache.items():
self.redirect_cache[redirect] = to
def session():
"""Returns a :class:`Session` for context-management."""
return Session()
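# End-to-end sketch (the httpbin URL is illustrative and needs network
# access): cookies set during the redirect issued by /cookies/set persist on
# the session's jar.
if __name__ == '__main__':
    s = Session()
    s.headers.update({'User-Agent': 'example-client/1.0'})
    s.get('http://httpbin.org/cookies/set?token=abc')
    print(s.cookies.get_dict())  # -> {'token': 'abc'}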
| gpl-3.0 |
MSOpenTech/edx-platform | common/lib/xmodule/xmodule/modulestore/tests/test_contentstore.py | 87 | 8284 | """
Test contentstore.mongo functionality
"""
import logging
from uuid import uuid4
import unittest
import mimetypes
from tempfile import mkdtemp
import path
import shutil
from opaque_keys.edx.locator import CourseLocator, AssetLocator
from opaque_keys.edx.keys import AssetKey
from xmodule.tests import DATA_DIR
from xmodule.contentstore.mongo import MongoContentStore
from xmodule.contentstore.content import StaticContent
from xmodule.exceptions import NotFoundError
import ddt
from __builtin__ import delattr
from xmodule.modulestore.tests.mongo_connection import MONGO_PORT_NUM, MONGO_HOST
log = logging.getLogger(__name__)
HOST = MONGO_HOST
PORT = MONGO_PORT_NUM
DB = 'test_mongo_%s' % uuid4().hex[:5]
@ddt.ddt
class TestContentstore(unittest.TestCase):
"""
Test the methods in contentstore.mongo using deprecated and non-deprecated keys
"""
# don't use these 2 class vars directly; they hold the original values so tearDownClass can restore them
asset_deprecated = None
ssck_deprecated = None
@classmethod
def tearDownClass(cls):
"""
Restores deprecated values
"""
if cls.asset_deprecated is not None:
setattr(AssetLocator, 'deprecated', cls.asset_deprecated)
else:
delattr(AssetLocator, 'deprecated')
if cls.ssck_deprecated is not None:
setattr(CourseLocator, 'deprecated', cls.ssck_deprecated)
else:
delattr(CourseLocator, 'deprecated')
return super(TestContentstore, cls).tearDownClass()
def set_up_assets(self, deprecated):
"""
Setup contentstore w/ proper overriding of deprecated.
"""
# since MongoModuleStore and MongoContentStore are basically assumed to be together, create this class
# as well
self.contentstore = MongoContentStore(HOST, DB, port=PORT)
self.addCleanup(self.contentstore._drop_database) # pylint: disable=protected-access
setattr(AssetLocator, 'deprecated', deprecated)
setattr(CourseLocator, 'deprecated', deprecated)
self.course1_key = CourseLocator('test', 'asset_test', '2014_07')
self.course2_key = CourseLocator('test', 'asset_test2', '2014_07')
self.course1_files = ['contains.sh', 'picture1.jpg', 'picture2.jpg']
self.course2_files = ['picture1.jpg', 'picture3.jpg', 'door_2.ogg']
def load_assets(course_key, files):
locked = False
for filename in files:
asset_key = course_key.make_asset_key('asset', filename)
self.save_asset(filename, asset_key, filename, locked)
locked = not locked
load_assets(self.course1_key, self.course1_files)
load_assets(self.course2_key, self.course2_files)
def save_asset(self, filename, asset_key, displayname, locked):
"""
Load and save the given file.
"""
with open("{}/static/{}".format(DATA_DIR, filename), "rb") as f:
content = StaticContent(
asset_key, displayname, mimetypes.guess_type(filename)[0], f.read(),
locked=locked
)
self.contentstore.save(content)
@ddt.data(True, False)
def test_delete(self, deprecated):
"""
Test that deleting assets works
"""
self.set_up_assets(deprecated)
asset_key = self.course1_key.make_asset_key('asset', self.course1_files[0])
self.contentstore.delete(asset_key)
with self.assertRaises(NotFoundError):
self.contentstore.find(asset_key)
# ensure deleting a non-existent file is a noop
self.contentstore.delete(asset_key)
@ddt.data(True, False)
def test_find(self, deprecated):
"""
Test using find
"""
self.set_up_assets(deprecated)
asset_key = self.course1_key.make_asset_key('asset', self.course1_files[0])
self.assertIsNotNone(self.contentstore.find(asset_key), "Could not find {}".format(asset_key))
self.assertIsNotNone(self.contentstore.find(asset_key, as_stream=True), "Could not find {}".format(asset_key))
unknown_asset = self.course1_key.make_asset_key('asset', 'no_such_file.gif')
with self.assertRaises(NotFoundError):
self.contentstore.find(unknown_asset)
self.assertIsNone(
self.contentstore.find(unknown_asset, throw_on_not_found=False),
"Found unknown asset {}".format(unknown_asset)
)
@ddt.data(True, False)
def test_export_for_course(self, deprecated):
"""
Test export
"""
self.set_up_assets(deprecated)
root_dir = path.path(mkdtemp())
try:
self.contentstore.export_all_for_course(
self.course1_key, root_dir,
path.path(root_dir / "policy.json"),
)
for filename in self.course1_files:
filepath = path.path(root_dir / filename)
self.assertTrue(filepath.isfile(), "{} is not a file".format(filepath))
for filename in self.course2_files:
if filename not in self.course1_files:
filepath = path.path(root_dir / filename)
self.assertFalse(filepath.isfile(), "{} is unexpected exported a file".format(filepath))
finally:
shutil.rmtree(root_dir)
@ddt.data(True, False)
def test_get_all_content(self, deprecated):
"""
Test get_all_content_for_course
"""
self.set_up_assets(deprecated)
course1_assets, count = self.contentstore.get_all_content_for_course(self.course1_key)
self.assertEqual(count, len(self.course1_files), course1_assets)
for asset in course1_assets:
parsed = AssetKey.from_string(asset['filename'])
self.assertIn(parsed.name, self.course1_files)
course1_assets, __ = self.contentstore.get_all_content_for_course(self.course1_key, 1, 1)
self.assertEqual(len(course1_assets), 1, course1_assets)
fake_course = CourseLocator('test', 'fake', 'non')
course_assets, count = self.contentstore.get_all_content_for_course(fake_course)
self.assertEqual(count, 0)
self.assertEqual(course_assets, [])
@ddt.data(True, False)
def test_attrs(self, deprecated):
"""
Test setting and getting attrs
"""
self.set_up_assets(deprecated)
for filename in self.course1_files:
asset_key = self.course1_key.make_asset_key('asset', filename)
prelocked = self.contentstore.get_attr(asset_key, 'locked', False)
self.contentstore.set_attr(asset_key, 'locked', not prelocked)
self.assertEqual(self.contentstore.get_attr(asset_key, 'locked', False), not prelocked)
@ddt.data(True, False)
def test_copy_assets(self, deprecated):
"""
copy_all_course_assets
"""
self.set_up_assets(deprecated)
dest_course = CourseLocator('test', 'destination', 'copy')
self.contentstore.copy_all_course_assets(self.course1_key, dest_course)
for filename in self.course1_files:
asset_key = self.course1_key.make_asset_key('asset', filename)
dest_key = dest_course.make_asset_key('asset', filename)
source = self.contentstore.find(asset_key)
copied = self.contentstore.find(dest_key)
for propname in ['name', 'content_type', 'length', 'locked']:
self.assertEqual(getattr(source, propname), getattr(copied, propname))
__, count = self.contentstore.get_all_content_for_course(dest_course)
self.assertEqual(count, len(self.course1_files))
@ddt.data(True, False)
def test_delete_assets(self, deprecated):
"""
delete_all_course_assets
"""
self.set_up_assets(deprecated)
self.contentstore.delete_all_course_assets(self.course1_key)
__, count = self.contentstore.get_all_content_for_course(self.course1_key)
self.assertEqual(count, 0)
# ensure it didn't remove any from other course
__, count = self.contentstore.get_all_content_for_course(self.course2_key)
self.assertEqual(count, len(self.course2_files))
| agpl-3.0 |
ai-ku/langvis | jython-2.1/Lib/gzip.py | 4 | 12370 | """Functions that read and write gzipped files.
The user of the file doesn't have to worry about the compression,
but random access is not allowed."""
# based on Andrew Kuchling's minigzip.py distributed with the zlib module
import struct, sys, time
import zlib
import __builtin__
__all__ = ["GzipFile","open"]
FTEXT, FHCRC, FEXTRA, FNAME, FCOMMENT = 1, 2, 4, 8, 16
READ, WRITE = 1, 2
def write32(output, value):
output.write(struct.pack("<l", value))
def write32u(output, value):
if value < 0:
value = value + 0x100000000L
output.write(struct.pack("<L", value))
def read32(input):
return struct.unpack("<l", input.read(4))[0]
def open(filename, mode="rb", compresslevel=9):
return GzipFile(filename, mode, compresslevel)
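# Round-trip sketch using the helper above (the path is illustrative):
#   f = open('/tmp/demo.gz', 'wb'); f.write('hello world\n'); f.close()
#   f = open('/tmp/demo.gz', 'rb'); assert f.read() == 'hello world\n'; f.close()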
class GzipFile:
myfileobj = None
def __init__(self, filename=None, mode=None,
compresslevel=9, fileobj=None):
if fileobj is None:
fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb')
if filename is None:
if hasattr(fileobj, 'name'): filename = fileobj.name
else: filename = ''
if mode is None:
if hasattr(fileobj, 'mode'): mode = fileobj.mode
else: mode = 'rb'
if mode[0:1] == 'r':
self.mode = READ
# Set flag indicating start of a new member
self._new_member = 1
self.extrabuf = ""
self.extrasize = 0
self.filename = filename
elif mode[0:1] == 'w' or mode[0:1] == 'a':
self.mode = WRITE
self._init_write(filename)
self.compress = zlib.compressobj(compresslevel,
zlib.DEFLATED,
-zlib.MAX_WBITS,
zlib.DEF_MEM_LEVEL,
0)
else:
raise ValueError, "Mode " + mode + " not supported"
self.fileobj = fileobj
if self.mode == WRITE:
self._write_gzip_header()
def __repr__(self):
s = repr(self.fileobj)
return '<gzip ' + s[1:-1] + ' ' + hex(id(self)) + '>'
def _init_write(self, filename):
if filename[-3:] != '.gz':
filename = filename + '.gz'
self.filename = filename
self.crc = zlib.crc32("")
self.size = 0
self.writebuf = []
self.bufsize = 0
def _write_gzip_header(self):
self.fileobj.write('\037\213') # magic header
self.fileobj.write('\010') # compression method
fname = self.filename[:-3]
flags = 0
if fname:
flags = FNAME
self.fileobj.write(chr(flags))
write32u(self.fileobj, long(time.time()))
self.fileobj.write('\002')
self.fileobj.write('\377')
if fname:
self.fileobj.write(fname + '\000')
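# The header written above follows RFC 1952: the two magic bytes \x1f\x8b,
# compression method 8 (deflate), a flag byte (FNAME when a filename is
# embedded), a 4-byte little-endian mtime, XFL \x02 ("maximum compression")
# and OS \xff ("unknown"), optionally followed by the NUL-terminated name.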
def _init_read(self):
self.crc = zlib.crc32("")
self.size = 0
def _read_gzip_header(self):
magic = self.fileobj.read(2)
if magic != '\037\213':
raise IOError, 'Not a gzipped file'
method = ord( self.fileobj.read(1) )
if method != 8:
raise IOError, 'Unknown compression method'
flag = ord( self.fileobj.read(1) )
# modtime = self.fileobj.read(4)
# extraflag = self.fileobj.read(1)
# os = self.fileobj.read(1)
self.fileobj.read(6)
if flag & FEXTRA:
# Read & discard the extra field, if present
xlen=ord(self.fileobj.read(1))
xlen=xlen+256*ord(self.fileobj.read(1))
self.fileobj.read(xlen)
if flag & FNAME:
# Read and discard a null-terminated string containing the filename
while (1):
s=self.fileobj.read(1)
if not s or s=='\000': break
if flag & FCOMMENT:
# Read and discard a null-terminated string containing a comment
while (1):
s=self.fileobj.read(1)
if not s or s=='\000': break
if flag & FHCRC:
self.fileobj.read(2) # Read & discard the 16-bit header CRC
def write(self,data):
if self.fileobj is None:
raise ValueError, "write() on closed GzipFile object"
if len(data) > 0:
self.size = self.size + len(data)
self.crc = zlib.crc32(data, self.crc)
self.fileobj.write( self.compress.compress(data) )
def writelines(self,lines):
# join with no separator; inserting spaces between lines would corrupt the stream
self.write("".join(lines))
def read(self, size=-1):
if self.extrasize <= 0 and self.fileobj is None:
return ''
readsize = 1024
if size < 0: # get the whole thing
try:
while 1:
self._read(readsize)
readsize = readsize * 2
except EOFError:
size = self.extrasize
else: # just get some more of it
try:
while size > self.extrasize:
self._read(readsize)
readsize = readsize * 2
except EOFError:
if size > self.extrasize:
size = self.extrasize
chunk = self.extrabuf[:size]
self.extrabuf = self.extrabuf[size:]
self.extrasize = self.extrasize - size
return chunk
def _unread(self, buf):
self.extrabuf = buf + self.extrabuf
self.extrasize = len(buf) + self.extrasize
def _read(self, size=1024):
if self.fileobj is None: raise EOFError, "Reached EOF"
if self._new_member:
# If the _new_member flag is set, we have to
# jump to the next member, if there is one.
#
# First, check if we're at the end of the file;
# if so, it's time to stop; no more members to read.
pos = self.fileobj.tell() # Save current position
self.fileobj.seek(0, 2) # Seek to end of file
if pos == self.fileobj.tell():
self.fileobj = None
raise EOFError, "Reached EOF"
else:
self.fileobj.seek( pos ) # Return to original position
self._init_read()
self._read_gzip_header()
self.decompress = zlib.decompressobj(-zlib.MAX_WBITS)
self._new_member = 0
# Read a chunk of data from the file
buf = self.fileobj.read(size)
# If the EOF has been reached, flush the decompression object
# and mark this object as finished.
if buf == "":
uncompress = self.decompress.flush()
self._read_eof()
self.fileobj = None
self._add_read_data( uncompress )
raise EOFError, 'Reached EOF'
uncompress = self.decompress.decompress(buf)
self._add_read_data( uncompress )
if self.decompress.unused_data != "":
# Ending case: we've come to the end of a member in the file,
# so seek back to the start of the unused data, finish up
# this member, and read a new gzip header.
# (The number of bytes to seek back is the length of the unused
# data, minus 8 because _read_eof() will rewind a further 8 bytes)
self.fileobj.seek( -len(self.decompress.unused_data)+8, 1)
# Check the CRC and file size, and set the flag so we read
# a new member on the next call
self._read_eof()
self._new_member = 1
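# Hedged illustration of the multi-member handling above: gzip files may be
# plain concatenations of members, so after the shell equivalent of
# ``cat a.gz b.gz > ab.gz``, read() returns a's payload followed by b's,
# because _read() rewinds past the 8-byte trailer and re-parses a header.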
def _add_read_data(self, data):
self.crc = zlib.crc32(data, self.crc)
self.extrabuf = self.extrabuf + data
self.extrasize = self.extrasize + len(data)
self.size = self.size + len(data)
def _read_eof(self):
# We've read to the end of the file, so we have to rewind in order
# to reread the 8 bytes containing the CRC and the file size.
# We check the that the computed CRC and size of the
# uncompressed data matches the stored values.
self.fileobj.seek(-8, 1)
crc32 = read32(self.fileobj)
isize = read32(self.fileobj)
if crc32%0x100000000L != self.crc%0x100000000L:
raise ValueError, "CRC check failed"
elif isize != self.size:
raise ValueError, "Incorrect length of data produced"
def close(self):
if self.mode == WRITE:
self.fileobj.write(self.compress.flush())
write32(self.fileobj, self.crc)
write32(self.fileobj, self.size)
self.fileobj = None
elif self.mode == READ:
self.fileobj = None
if self.myfileobj:
self.myfileobj.close()
self.myfileobj = None
def __del__(self):
try:
if (self.myfileobj is None and
self.fileobj is None):
return
except AttributeError:
return
self.close()
def flush(self):
self.fileobj.flush()
def isatty(self):
return 0
def readline(self, size=-1):
if size < 0: size = sys.maxint
bufs = []
orig_size = size
readsize = min(100, size) # Read from the file in small chunks
while 1:
if size == 0:
return "".join(bufs) # Return resulting line
c = self.read(readsize)
i = c.find('\n')
if size is not None:
# We set i=size to break out of the loop under two
# conditions: 1) there's no newline, and the chunk is
# larger than size, or 2) there is a newline, but the
# resulting line would be longer than 'size'.
if i==-1 and len(c) > size: i=size-1
elif size <= i: i = size -1
if i >= 0 or c == '':
bufs.append(c[:i+1]) # Add portion of last chunk
self._unread(c[i+1:]) # Push back rest of chunk
return ''.join(bufs) # Return resulting line
# Append chunk to list, decrease 'size',
bufs.append(c)
size = size - len(c)
readsize = min(size, readsize * 2)
def readlines(self, sizehint=0):
# Negative numbers result in reading all the lines
if sizehint <= 0: sizehint = sys.maxint
L = []
while sizehint > 0:
line = self.readline()
if line == "": break
L.append( line )
sizehint = sizehint - len(line)
return L
def writelines(self, L):
for line in L:
self.write(line)
def _test():
# Act like gzip; with -d, act like gunzip.
# The input file is not deleted, however, nor are any other gzip
# options or features supported.
import sys
args = sys.argv[1:]
decompress = args and args[0] == "-d"
if decompress:
args = args[1:]
if not args:
args = ["-"]
for arg in args:
if decompress:
if arg == "-":
f = GzipFile(filename="", mode="rb", fileobj=sys.stdin)
g = sys.stdout
else:
if arg[-3:] != ".gz":
print "filename doesn't end in .gz:", `arg`
continue
f = open(arg, "rb")
g = __builtin__.open(arg[:-3], "wb")
else:
if arg == "-":
f = sys.stdin
g = GzipFile(filename="", mode="wb", fileobj=sys.stdout)
else:
f = __builtin__.open(arg, "rb")
g = open(arg + ".gz", "wb")
while 1:
chunk = f.read(1024)
if not chunk:
break
g.write(chunk)
if g is not sys.stdout:
g.close()
if f is not sys.stdin:
f.close()
if __name__ == '__main__':
_test()
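# A minimal round-trip sketch exercising the write and read paths above
# (hedged: assumes this module is importable as ``gzip``; kept commented so
# nothing runs on import):
#
# import gzip
# out = gzip.GzipFile('demo.gz', 'wb')
# out.write('hello, world\n')
# out.close() # flushes the compressor, writes CRC + ISIZE
# inp = gzip.GzipFile('demo.gz', 'rb')
# assert inp.read() == 'hello, world\n'
# inp.close()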
| mit |
lambder/bigcouch | couchjs/scons/scons-local-2.0.1/SCons/Options/BoolOption.py | 61 | 2003 | #
# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010 The SCons Foundation
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
__revision__ = "src/engine/SCons/Options/BoolOption.py 5134 2010/08/16 23:02:40 bdeegan"
__doc__ = """Place-holder for the old SCons.Options module hierarchy
This is for backwards compatibility. The new equivalent is the Variables/
class hierarchy. These will have deprecation warnings added (some day),
and will then be removed entirely (some day).
"""
import SCons.Variables
import SCons.Warnings
warned = False
def BoolOption(*args, **kw):
global warned
if not warned:
msg = "The BoolOption() function is deprecated; use the BoolVariable() function instead."
SCons.Warnings.warn(SCons.Warnings.DeprecatedOptionsWarning, msg)
warned = True
return SCons.Variables.BoolVariable(*args, **kw)
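# Hedged usage sketch: the shim above makes the two calls below equivalent,
# apart from a one-time DeprecatedOptionsWarning from the first:
#
# opt = BoolOption('debug', 'Build with debug symbols', 0)
# var = SCons.Variables.BoolVariable('debug', 'Build with debug symbols', 0)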
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4:
| apache-2.0 |
CyanogenMod/android_external_chromium_org | tools/android/adb_profile_chrome/profiler.py | 9 | 2949 | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import os
from adb_profile_chrome import trace_packager
from adb_profile_chrome import ui
from pylib import constants
def _StartTracing(controllers, interval):
for controller in controllers:
controller.StartTracing(interval)
def _StopTracing(controllers):
for controller in controllers:
controller.StopTracing()
def _PullTraces(controllers, output, compress, write_json):
ui.PrintMessage('Downloading...', eol='')
trace_files = [controller.PullTrace() for controller in controllers]
trace_files = [trace for trace in trace_files if trace]
if not trace_files:
ui.PrintMessage('No results')
return []
result = trace_packager.PackageTraces(trace_files,
output=output,
compress=compress,
write_json=write_json)
ui.PrintMessage('done')
ui.PrintMessage('Trace written to file://%s' % os.path.abspath(result))
return result
def GetSupportedBrowsers():
"""Returns the package names of all supported browsers."""
# Add aliases for backwards compatibility.
supported_browsers = {
'stable': constants.PACKAGE_INFO['chrome_stable'],
'beta': constants.PACKAGE_INFO['chrome_beta'],
'dev': constants.PACKAGE_INFO['chrome_dev'],
'build': constants.PACKAGE_INFO['chrome'],
}
supported_browsers.update(constants.PACKAGE_INFO)
unsupported_browsers = ['content_browsertests', 'gtest', 'legacy_browser']
for browser in unsupported_browsers:
del supported_browsers[browser]
return supported_browsers
def CaptureProfile(controllers, interval, output=None, compress=False,
write_json=False):
"""Records a profiling trace saves the result to a file.
Args:
controllers: List of tracing controllers.
interval: Time interval to capture in seconds. An interval of None (or 0)
continues tracing until stopped by the user.
output: Output file name or None to use an automatically generated name.
compress: If True, the result will be compressed either with gzip or zip
depending on the number of captured subtraces.
write_json: If True, prefer JSON output over HTML.
Returns:
Path to saved profile.
"""
trace_type = ' + '.join(map(str, controllers))
try:
_StartTracing(controllers, interval)
if interval:
ui.PrintMessage('Capturing %d-second %s. Press Enter to stop early...' % \
(interval, trace_type), eol='')
ui.WaitForEnter(interval)
else:
ui.PrintMessage('Capturing %s. Press Enter to stop...' % \
trace_type, eol='')
raw_input()
finally:
_StopTracing(controllers)
if interval:
ui.PrintMessage('done')
return _PullTraces(controllers, output, compress, write_json)
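# Hedged usage sketch (the controller class and its constructor arguments are
# assumptions, not defined in this file):
#
# from adb_profile_chrome import chrome_controller
# controllers = [chrome_controller.ChromeTracingController(device, package)]
# path = CaptureProfile(controllers, interval=10, compress=True)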
| bsd-3-clause |
Zord13appdesa/python-for-android | python-modules/twisted/twisted/test/test_socks.py | 59 | 17748 | # Copyright (c) 2001-2010 Twisted Matrix Laboratories.
# See LICENSE for details.
"""
Tests for L{twisted.protocols.socks}, an implementation of the SOCKSv4 and
SOCKSv4a protocols.
"""
import struct, socket
from twisted.trial import unittest
from twisted.test import proto_helpers
from twisted.internet import defer, address, reactor
from twisted.internet.error import DNSLookupError
from twisted.protocols import socks
class StringTCPTransport(proto_helpers.StringTransport):
stringTCPTransport_closing = False
peer = None
def getPeer(self):
return self.peer
def getHost(self):
return address.IPv4Address('TCP', '2.3.4.5', 42)
def loseConnection(self):
self.stringTCPTransport_closing = True
class FakeResolverReactor:
"""
Bare-bones reactor with deterministic behavior for the resolve method.
"""
def __init__(self, names):
"""
@type names: C{dict} containing C{str} keys and C{str} values.
@param names: A hostname to IP address mapping. The IP addresses are
stringified dotted quads.
"""
self.names = names
def resolve(self, hostname):
"""
Resolve a hostname by looking it up in the C{names} dictionary.
"""
try:
return defer.succeed(self.names[hostname])
except KeyError:
return defer.fail(
DNSLookupError("FakeResolverReactor couldn't find " + hostname))
class SOCKSv4Driver(socks.SOCKSv4):
# last SOCKSv4Outgoing instantiated
driver_outgoing = None
# last SOCKSv4IncomingFactory instantiated
driver_listen = None
def connectClass(self, host, port, klass, *args):
# fake it
proto = klass(*args)
proto.transport = StringTCPTransport()
proto.transport.peer = address.IPv4Address('TCP', host, port)
proto.connectionMade()
self.driver_outgoing = proto
return defer.succeed(proto)
def listenClass(self, port, klass, *args):
# fake it
factory = klass(*args)
self.driver_listen = factory
if port == 0:
port = 1234
return defer.succeed(('6.7.8.9', port))
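# For reference, the SOCKSv4 request bytes assembled in the tests below follow
# this layout (CONNECT is command 1, BIND is command 2):
#
# +----+----+---------+--------+----------+------+
# | VN | CD | DSTPORT | DSTIP  |  USERID  | NULL |
# | 1  | 1  |    2    |   4    | variable |  1   |
# +----+----+---------+--------+----------+------+
#
# e.g. struct.pack('!BBH', 4, 1, 34) + socket.inet_aton('1.2.3.4') + 'fooBAR\0'
# encodes "SOCKSv4 CONNECT to 1.2.3.4:34 as user fooBAR"; a DSTIP of 0.0.0.x
# followed by a NUL-terminated hostname is the SOCKSv4a form that the server
# resolves itself.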
class Connect(unittest.TestCase):
"""
Tests for SOCKS and SOCKSv4a connect requests using the L{SOCKSv4} protocol.
"""
def setUp(self):
self.sock = SOCKSv4Driver()
self.sock.transport = StringTCPTransport()
self.sock.connectionMade()
self.sock.reactor = FakeResolverReactor({"localhost":"127.0.0.1"})
def tearDown(self):
outgoing = self.sock.driver_outgoing
if outgoing is not None:
self.assert_(outgoing.transport.stringTCPTransport_closing,
"Outgoing SOCKS connections need to be closed.")
def test_simple(self):
self.sock.dataReceived(
struct.pack('!BBH', 4, 1, 34)
+ socket.inet_aton('1.2.3.4')
+ 'fooBAR'
+ '\0')
sent = self.sock.transport.value()
self.sock.transport.clear()
self.assertEqual(sent,
struct.pack('!BBH', 0, 90, 34)
+ socket.inet_aton('1.2.3.4'))
self.assert_(not self.sock.transport.stringTCPTransport_closing)
self.assert_(self.sock.driver_outgoing is not None)
# pass some data through
self.sock.dataReceived('hello, world')
self.assertEqual(self.sock.driver_outgoing.transport.value(),
'hello, world')
# the other way around
self.sock.driver_outgoing.dataReceived('hi there')
self.assertEqual(self.sock.transport.value(), 'hi there')
self.sock.connectionLost('fake reason')
def test_socks4aSuccessfulResolution(self):
"""
If the destination IP address has zeros for the first three octets and
non-zero for the fourth octet, the client is attempting a v4a
connection. A hostname is specified after the user ID string and the
server connects to the address that hostname resolves to.
@see: U{http://en.wikipedia.org/wiki/SOCKS#SOCKS_4a_protocol}
"""
# send the domain name "localhost" to be resolved
clientRequest = (
struct.pack('!BBH', 4, 1, 34)
+ socket.inet_aton('0.0.0.1')
+ 'fooBAZ\0'
+ 'localhost\0')
# Deliver the bytes one by one to exercise the protocol's buffering
# logic. FakeResolverReactor's resolve method is invoked to "resolve"
# the hostname.
for byte in clientRequest:
self.sock.dataReceived(byte)
sent = self.sock.transport.value()
self.sock.transport.clear()
# Verify that the server responded with the address which will be
# connected to.
self.assertEquals(
sent,
struct.pack('!BBH', 0, 90, 34) + socket.inet_aton('127.0.0.1'))
self.assertFalse(self.sock.transport.stringTCPTransport_closing)
self.assertNotIdentical(self.sock.driver_outgoing, None)
# Pass some data through and verify it is forwarded to the outgoing
# connection.
self.sock.dataReceived('hello, world')
self.assertEquals(
self.sock.driver_outgoing.transport.value(), 'hello, world')
# Deliver some data from the output connection and verify it is
# passed along to the incoming side.
self.sock.driver_outgoing.dataReceived('hi there')
self.assertEquals(self.sock.transport.value(), 'hi there')
self.sock.connectionLost('fake reason')
def test_socks4aFailedResolution(self):
"""
Failed hostname resolution on a SOCKSv4a packet results in a 91 error
response and the connection getting closed.
"""
# send the domain name "failinghost" to be resolved
clientRequest = (
struct.pack('!BBH', 4, 1, 34)
+ socket.inet_aton('0.0.0.1')
+ 'fooBAZ\0'
+ 'failinghost\0')
# Deliver the bytes one by one to exercise the protocol's buffering
# logic. FakeResolverReactor's resolve method is invoked to "resolve"
# the hostname.
for byte in clientRequest:
self.sock.dataReceived(byte)
# Verify that the server responds with a 91 error.
sent = self.sock.transport.value()
self.assertEquals(
sent,
struct.pack('!BBH', 0, 91, 0) + socket.inet_aton('0.0.0.0'))
# A failed resolution causes the transport to drop the connection.
self.assertTrue(self.sock.transport.stringTCPTransport_closing)
self.assertIdentical(self.sock.driver_outgoing, None)
def test_accessDenied(self):
self.sock.authorize = lambda code, server, port, user: 0
self.sock.dataReceived(
struct.pack('!BBH', 4, 1, 4242)
+ socket.inet_aton('10.2.3.4')
+ 'fooBAR'
+ '\0')
self.assertEqual(self.sock.transport.value(),
struct.pack('!BBH', 0, 91, 0)
+ socket.inet_aton('0.0.0.0'))
self.assert_(self.sock.transport.stringTCPTransport_closing)
self.assertIdentical(self.sock.driver_outgoing, None)
def test_eofRemote(self):
self.sock.dataReceived(
struct.pack('!BBH', 4, 1, 34)
+ socket.inet_aton('1.2.3.4')
+ 'fooBAR'
+ '\0')
sent = self.sock.transport.value()
self.sock.transport.clear()
# pass some data through
self.sock.dataReceived('hello, world')
self.assertEqual(self.sock.driver_outgoing.transport.value(),
'hello, world')
# now close it from the server side
self.sock.driver_outgoing.transport.loseConnection()
self.sock.driver_outgoing.connectionLost('fake reason')
def test_eofLocal(self):
self.sock.dataReceived(
struct.pack('!BBH', 4, 1, 34)
+ socket.inet_aton('1.2.3.4')
+ 'fooBAR'
+ '\0')
sent = self.sock.transport.value()
self.sock.transport.clear()
# pass some data through
self.sock.dataReceived('hello, world')
self.assertEqual(self.sock.driver_outgoing.transport.value(),
'hello, world')
# now close it from the client side
self.sock.connectionLost('fake reason')
class Bind(unittest.TestCase):
"""
Tests for SOCKS and SOCKSv4a bind requests using the L{SOCKSv4} protocol.
"""
def setUp(self):
self.sock = SOCKSv4Driver()
self.sock.transport = StringTCPTransport()
self.sock.connectionMade()
self.sock.reactor = FakeResolverReactor({"localhost":"127.0.0.1"})
## def tearDown(self):
## # TODO ensure the listen port is closed
## listen = self.sock.driver_listen
## if listen is not None:
## self.assert_(incoming.transport.stringTCPTransport_closing,
## "Incoming SOCKS connections need to be closed.")
def test_simple(self):
self.sock.dataReceived(
struct.pack('!BBH', 4, 2, 34)
+ socket.inet_aton('1.2.3.4')
+ 'fooBAR'
+ '\0')
sent = self.sock.transport.value()
self.sock.transport.clear()
self.assertEqual(sent,
struct.pack('!BBH', 0, 90, 1234)
+ socket.inet_aton('6.7.8.9'))
self.assert_(not self.sock.transport.stringTCPTransport_closing)
self.assert_(self.sock.driver_listen is not None)
# connect
incoming = self.sock.driver_listen.buildProtocol(('1.2.3.4', 5345))
self.assertNotIdentical(incoming, None)
incoming.transport = StringTCPTransport()
incoming.connectionMade()
# now we should have the second reply packet
sent = self.sock.transport.value()
self.sock.transport.clear()
self.assertEqual(sent,
struct.pack('!BBH', 0, 90, 0)
+ socket.inet_aton('0.0.0.0'))
self.assert_(not self.sock.transport.stringTCPTransport_closing)
# pass some data through
self.sock.dataReceived('hello, world')
self.assertEqual(incoming.transport.value(),
'hello, world')
# the other way around
incoming.dataReceived('hi there')
self.assertEqual(self.sock.transport.value(), 'hi there')
self.sock.connectionLost('fake reason')
def test_socks4a(self):
"""
If the destination IP address has zeros for the first three octets and
non-zero for the fourth octet, the client is attempting a v4a
connection. A hostname is specified after the user ID string and the
server connects to the address that hostname resolves to.
@see: U{http://en.wikipedia.org/wiki/SOCKS#SOCKS_4a_protocol}
"""
# send the domain name "localhost" to be resolved
clientRequest = (
struct.pack('!BBH', 4, 2, 34)
+ socket.inet_aton('0.0.0.1')
+ 'fooBAZ\0'
+ 'localhost\0')
# Deliver the bytes one by one to exercise the protocol's buffering
# logic. FakeResolverReactor's resolve method is invoked to "resolve"
# the hostname.
for byte in clientRequest:
self.sock.dataReceived(byte)
sent = self.sock.transport.value()
self.sock.transport.clear()
# Verify that the server responded with the address which will be
# connected to.
self.assertEquals(
sent,
struct.pack('!BBH', 0, 90, 1234) + socket.inet_aton('6.7.8.9'))
self.assertFalse(self.sock.transport.stringTCPTransport_closing)
self.assertNotIdentical(self.sock.driver_listen, None)
# connect
incoming = self.sock.driver_listen.buildProtocol(('127.0.0.1', 5345))
self.assertNotIdentical(incoming, None)
incoming.transport = StringTCPTransport()
incoming.connectionMade()
# now we should have the second reply packet
sent = self.sock.transport.value()
self.sock.transport.clear()
self.assertEqual(sent,
struct.pack('!BBH', 0, 90, 0)
+ socket.inet_aton('0.0.0.0'))
self.assertNotIdentical(
self.sock.transport.stringTCPTransport_closing, None)
# Deliver some data from the output connection and verify it is
# passed along to the incoming side.
self.sock.dataReceived('hi there')
self.assertEquals(incoming.transport.value(), 'hi there')
# the other way around
incoming.dataReceived('hi there')
self.assertEqual(self.sock.transport.value(), 'hi there')
self.sock.connectionLost('fake reason')
def test_socks4aFailedResolution(self):
"""
Failed hostname resolution on a SOCKSv4a packet results in a 91 error
response and the connection getting closed.
"""
# send the domain name "failinghost" to be resolved
clientRequest = (
struct.pack('!BBH', 4, 2, 34)
+ socket.inet_aton('0.0.0.1')
+ 'fooBAZ\0'
+ 'failinghost\0')
# Deliver the bytes one by one to exercise the protocol's buffering
# logic. FakeResolverReactor's resolve method is invoked to "resolve"
# the hostname.
for byte in clientRequest:
self.sock.dataReceived(byte)
# Verify that the server responds with a 91 error.
sent = self.sock.transport.value()
self.assertEquals(
sent,
struct.pack('!BBH', 0, 91, 0) + socket.inet_aton('0.0.0.0'))
# A failed resolution causes the transport to drop the connection.
self.assertTrue(self.sock.transport.stringTCPTransport_closing)
self.assertIdentical(self.sock.driver_outgoing, None)
def test_accessDenied(self):
self.sock.authorize = lambda code, server, port, user: 0
self.sock.dataReceived(
struct.pack('!BBH', 4, 2, 4242)
+ socket.inet_aton('10.2.3.4')
+ 'fooBAR'
+ '\0')
self.assertEqual(self.sock.transport.value(),
struct.pack('!BBH', 0, 91, 0)
+ socket.inet_aton('0.0.0.0'))
self.assert_(self.sock.transport.stringTCPTransport_closing)
self.assertIdentical(self.sock.driver_listen, None)
def test_eofRemote(self):
self.sock.dataReceived(
struct.pack('!BBH', 4, 2, 34)
+ socket.inet_aton('1.2.3.4')
+ 'fooBAR'
+ '\0')
sent = self.sock.transport.value()
self.sock.transport.clear()
# connect
incoming = self.sock.driver_listen.buildProtocol(('1.2.3.4', 5345))
self.assertNotIdentical(incoming, None)
incoming.transport = StringTCPTransport()
incoming.connectionMade()
# now we should have the second reply packet
sent = self.sock.transport.value()
self.sock.transport.clear()
self.assertEqual(sent,
struct.pack('!BBH', 0, 90, 0)
+ socket.inet_aton('0.0.0.0'))
self.assert_(not self.sock.transport.stringTCPTransport_closing)
# pass some data through
self.sock.dataReceived('hello, world')
self.assertEqual(incoming.transport.value(),
'hello, world')
# now close it from the server side
incoming.transport.loseConnection()
incoming.connectionLost('fake reason')
def test_eofLocal(self):
self.sock.dataReceived(
struct.pack('!BBH', 4, 2, 34)
+ socket.inet_aton('1.2.3.4')
+ 'fooBAR'
+ '\0')
sent = self.sock.transport.value()
self.sock.transport.clear()
# connect
incoming = self.sock.driver_listen.buildProtocol(('1.2.3.4', 5345))
self.assertNotIdentical(incoming, None)
incoming.transport = StringTCPTransport()
incoming.connectionMade()
# now we should have the second reply packet
sent = self.sock.transport.value()
self.sock.transport.clear()
self.assertEqual(sent,
struct.pack('!BBH', 0, 90, 0)
+ socket.inet_aton('0.0.0.0'))
self.assert_(not self.sock.transport.stringTCPTransport_closing)
# pass some data through
self.sock.dataReceived('hello, world')
self.assertEqual(incoming.transport.value(),
'hello, world')
# now close it from the client side
self.sock.connectionLost('fake reason')
def test_badSource(self):
self.sock.dataReceived(
struct.pack('!BBH', 4, 2, 34)
+ socket.inet_aton('1.2.3.4')
+ 'fooBAR'
+ '\0')
sent = self.sock.transport.value()
self.sock.transport.clear()
# connect from WRONG address
incoming = self.sock.driver_listen.buildProtocol(('1.6.6.6', 666))
self.assertIdentical(incoming, None)
# Now we should have the second reply packet and it should
# be a failure. The connection should be closing.
sent = self.sock.transport.value()
self.sock.transport.clear()
self.assertEqual(sent,
struct.pack('!BBH', 0, 91, 0)
+ socket.inet_aton('0.0.0.0'))
self.assert_(self.sock.transport.stringTCPTransport_closing)
| apache-2.0 |
abawchen/leetcode | solutions/147_insertion_sort_list.py | 1 | 26591 | # Sort a linked list using insertion sort.
# Definition for singly-linked list.
class ListNode:
def __init__(self, x):
self.val = x
self.next = None
class Solution:
# @param {ListNode} head
# @return {ListNode}
def insertionSortList(self, head):
if not head:
return None
tail = head
cur = head.next
tail.next = None
while cur:
nxt = cur.next
if cur.val >= tail.val:
tail.next = cur
cur.next = None
tail = cur
elif cur.val <= head.val:
cur.next = head
head = cur
else:
node = head
pre = head
while node and cur.val > node.val:
pre = node
node = node.next
pre.next = cur
cur.next = node
cur = nxt
return head
# Time Limit Exceeded (for special case)
# sortedHead = ListNode(head.val)
# n1 = head.next
# while n1:
# node = ListNode(n1.val)
# n2 = sortedHead
# pre = n2
# while n2 and n1.val > n2.val:
# pre = n2
# n2 = n2.next
# if n2 == sortedHead:
# node.next = n2
# sortedHead = node
# else:
# pre.next = node
# node.next = n2
# n1 = n1.next
# return sortedHead
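# Example walk-through for the accepted solution above: with input 1 -> 3 -> 2,
# the sorted sublist starts as [1]; 3 >= tail so it is appended ([1, 3]); 2 is
# neither >= tail nor <= head, so the inner scan splices it between 1 and 3,
# giving 1 -> 2 -> 3. Worst case O(n^2) comparisons, O(1) extra space.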
def printList(head):
print "====== List ======"
while head:
print head.val
head = head.next
# http://emn178.pixnet.net/blog/post/93791164-%E6%8F%92%E5%85%A5%E6%8E%92%E5%BA%8F%E6%B3%95%28insertion-sort%29
s = Solution()
n1 = ListNode(1)
n2 = ListNode(3)
n3 = ListNode(2)
n4 = ListNode(4)
# n1.next = n2
# n2.next = n3
# n3.next = n4
head = s.insertionSortList(n1)
printList(head)
# n1 = ListNode(8)
# n2 = ListNode(2)
# n3 = ListNode(5)
# n4 = ListNode(7)
# n5 = ListNode(4)
# n1.next = n2
# n2.next = n3
# n3.next = n4
# n4.next = n5
# head = s.insertionSortList(n1)
# printList(head)
# n1 = ListNode(6)
# n2 = ListNode(5)
# n3 = ListNode(3)
# n4 = ListNode(1)
# n5 = ListNode(8)
# n6 = ListNode(7)
# n7 = ListNode(2)
# n8 = ListNode(4)
# n1.next = n2
# n2.next = n3
# n3.next = n4
# n4.next = n5
# n5.next = n6
# n6.next = n7
# n7.next = n8
# head = s.insertionSortList(n1)
# printList(head)
# l = range(5000)  # the integers 0, 1, 2, ..., 4999
# l = list(reversed(l))
# n = ListNode(l[0])
# head = n
# for i in xrange(1, len(l)):
# node = ListNode(l[i])
# n.next = node
# n = node
# head = s.insertionSortList(head)
# printList(head)
| mit |
sushramesh/lwc | lib/python2.7/site-packages/pip/utils/outdated.py | 191 | 5555 | from __future__ import absolute_import
import datetime
import json
import logging
import os.path
import sys
from pip._vendor import lockfile
from pip._vendor.packaging import version as packaging_version
from pip.compat import total_seconds, WINDOWS
from pip.index import PyPI
from pip.locations import USER_CACHE_DIR, running_under_virtualenv
from pip.utils import ensure_dir, get_installed_version
from pip.utils.filesystem import check_path_owner
SELFCHECK_DATE_FMT = "%Y-%m-%dT%H:%M:%SZ"
logger = logging.getLogger(__name__)
class VirtualenvSelfCheckState(object):
def __init__(self):
self.statefile_path = os.path.join(sys.prefix, "pip-selfcheck.json")
# Load the existing state
try:
with open(self.statefile_path) as statefile:
self.state = json.load(statefile)
except (IOError, ValueError):
self.state = {}
def save(self, pypi_version, current_time):
# Attempt to write out our version check file
with open(self.statefile_path, "w") as statefile:
json.dump(
{
"last_check": current_time.strftime(SELFCHECK_DATE_FMT),
"pypi_version": pypi_version,
},
statefile,
sort_keys=True,
separators=(",", ":")
)
class GlobalSelfCheckState(object):
def __init__(self):
self.statefile_path = os.path.join(USER_CACHE_DIR, "selfcheck.json")
# Load the existing state
try:
with open(self.statefile_path) as statefile:
self.state = json.load(statefile)[sys.prefix]
except (IOError, ValueError, KeyError):
self.state = {}
def save(self, pypi_version, current_time):
# Check to make sure that we own the directory
if not check_path_owner(os.path.dirname(self.statefile_path)):
return
# Now that we've ensured the directory is owned by this user, we'll go
# ahead and make sure that all our directories are created.
ensure_dir(os.path.dirname(self.statefile_path))
# Attempt to write out our version check file
with lockfile.LockFile(self.statefile_path):
if os.path.exists(self.statefile_path):
with open(self.statefile_path) as statefile:
state = json.load(statefile)
else:
state = {}
state[sys.prefix] = {
"last_check": current_time.strftime(SELFCHECK_DATE_FMT),
"pypi_version": pypi_version,
}
with open(self.statefile_path, "w") as statefile:
json.dump(state, statefile, sort_keys=True,
separators=(",", ":"))
def load_selfcheck_statefile():
if running_under_virtualenv():
return VirtualenvSelfCheckState()
else:
return GlobalSelfCheckState()
def pip_version_check(session):
"""Check for an update for pip.
Limit the frequency of checks to once per week. State is stored either in
the active virtualenv or in the user's USER_CACHE_DIR keyed off the prefix
of the pip script path.
"""
installed_version = get_installed_version("pip")
if installed_version is None:
return
pip_version = packaging_version.parse(installed_version)
pypi_version = None
try:
state = load_selfcheck_statefile()
current_time = datetime.datetime.utcnow()
# Determine if we need to refresh the state
if "last_check" in state.state and "pypi_version" in state.state:
last_check = datetime.datetime.strptime(
state.state["last_check"],
SELFCHECK_DATE_FMT
)
if total_seconds(current_time - last_check) < 7 * 24 * 60 * 60:
pypi_version = state.state["pypi_version"]
# Refresh the version if we need to or just see if we need to warn
if pypi_version is None:
resp = session.get(
PyPI.pip_json_url,
headers={"Accept": "application/json"},
)
resp.raise_for_status()
pypi_version = [
v for v in sorted(
list(resp.json()["releases"]),
key=packaging_version.parse,
)
if not packaging_version.parse(v).is_prerelease
][-1]
# save that we've performed a check
state.save(pypi_version, current_time)
remote_version = packaging_version.parse(pypi_version)
# Determine if our pypi_version is older
if (pip_version < remote_version and
pip_version.base_version != remote_version.base_version):
# Advise "python -m pip" on Windows to avoid issues
# with overwriting pip.exe.
if WINDOWS:
pip_cmd = "python -m pip"
else:
pip_cmd = "pip"
logger.warning(
"You are using pip version %s, however version %s is "
"available.\nYou should consider upgrading via the "
"'%s install --upgrade pip' command." % (pip_version,
pypi_version,
pip_cmd)
)
except Exception:
logger.debug(
"There was an error checking the latest version of pip",
exc_info=True,
)
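# Hedged usage sketch: callers pass an open requests-style session; in this
# vintage of pip that is PipSession from pip.download (an assumption worth
# verifying against the installed version):
#
# from pip.download import PipSession
# pip_version_check(PipSession())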
| mit |
certik/python-theora | examples/chop.py | 1 | 1424 | #! /usr/bin/env python
"""
Analogous to the oggz-chop program.
Example:
examples/chop.py -o s.ogv -s 20 -e 30 video.ogv
See "./chop.py -h" for help.
"""
from optparse import OptionParser
from theora import Theora, TheoraEncoder
def convert(infile, outfile, start, end):
print "converting %s to %s, between the times %d:%d" % \
(infile, outfile, start, end)
a = Theora(infile)
b = TheoraEncoder(outfile, a.width, a.height, quality=63)
a.seek(time=start)
while a.read_frame() and a.time < end:
print "frame: %d, time=%f" % (a.frame, a.time)
A = a.get_frame_array()
b.write_frame_array(A)
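# e.g. convert("video.ogv", "clip.ogv", 20, 30) would re-encode the ten
# seconds between t=20s and t=30s at quality 63 (hedged: requires the theora
# bindings and a seekable input file).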
usage = """\
%prog [options] file_in
Extract the part of a Theora video file between start and/or end times.
"""
def main():
parser = OptionParser(usage=usage)
parser.add_option("-o", "--output", dest="filename",
help="Specify output filename")
parser.add_option("-s", "--start", dest="start_time", type="int",
help="Specify start time")
parser.add_option("-e", "--end", dest="end_time", type="int",
help="Specify end time")
options, args = parser.parse_args()
if options.filename and options.start_time and options.end_time and \
len(args) == 1:
convert(args[0], options.filename, options.start_time, options.end_time)
else:
parser.print_help()
if __name__ == "__main__":
main()
| bsd-3-clause |
tiagoarasilva/django-boilerplate | project_name/lib/audit/middleware.py | 1 | 3075 | # Copyright (c) 2009 James Aylett <http://tartarus.org/james/computers/django/>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
from django.db.models.signals import pre_save
import threading
stash = threading.local()
def get_current_user():
"""Get the user whose session resulted in the current code running. (Only valid during requests.)"""
return getattr(stash, 'current_user', None)
def set_current_user(user):
stash.current_user = user
def onanymodel_presave(sender, **kwargs):
current_user = get_current_user()
if current_user is None or current_user.is_anonymous():
# if there is no current user or we're an anonymous user (ie: guest) then
# don't do anything. The save() will fail if created_by or modified_by are
# null=False, and not otherwise; ie the behaviour is controlled by the
# models, as desired.
current_user = None
obj = kwargs['instance']
if hasattr(obj, 'modified_by_id') and current_user:
obj.modified_by = current_user
if not obj.pk and current_user and hasattr(obj, 'created_by_id'):
try:
if not obj.created_by:
obj.created_by = current_user
except obj.__class__.created_by.field.rel.to.DoesNotExist:
# FRAGILE: reliant on Django internals
            # (django.db.models.fields.related.ReverseSingleRelatedObjectDescriptor and down)
#
# however will work if you don't use the django auth system, and make the created_by
            # field a ForeignKey to whatever you use instead of django.contrib.auth.models.User.
obj.created_by = current_user
pre_save.connect(onanymodel_presave)
class AutoCreatedAndModifiedFields:
def process_request(self, request):
set_current_user(request.user)
def process_response(self, request, response):
set_current_user(None)
return response
def process_exception(self, request, exception):
set_current_user(None)
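# A minimal wiring sketch (hypothetical settings module; the dotted path below
# assumes this file lives at project_name/lib/audit/middleware.py, per the
# path above -- adjust to your layout):
#
#   MIDDLEWARE_CLASSES = (
#       'django.contrib.auth.middleware.AuthenticationMiddleware',
#       'project_name.lib.audit.middleware.AutoCreatedAndModifiedFields',
#   )
#
# Once registered, any model with created_by/modified_by foreign keys is
# stamped with the requesting user on save() via the pre_save signal above.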
| mit |
jpush/jbox | Server/venv/lib/python3.5/site-packages/pip/_vendor/requests/packages/chardet/universaldetector.py | 1776 | 6840 | ######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Universal charset detector code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 2001
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
# Shy Shalom - original C code
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
from . import constants
import sys
import codecs
from .latin1prober import Latin1Prober # windows-1252
from .mbcsgroupprober import MBCSGroupProber # multi-byte character sets
from .sbcsgroupprober import SBCSGroupProber # single-byte character sets
from .escprober import EscCharSetProber # ISO-2022, etc.
import re
MINIMUM_THRESHOLD = 0.20
ePureAscii = 0
eEscAscii = 1
eHighbyte = 2
class UniversalDetector:
def __init__(self):
self._highBitDetector = re.compile(b'[\x80-\xFF]')
self._escDetector = re.compile(b'(\033|~{)')
self._mEscCharSetProber = None
self._mCharSetProbers = []
self.reset()
def reset(self):
self.result = {'encoding': None, 'confidence': 0.0}
self.done = False
self._mStart = True
self._mGotData = False
self._mInputState = ePureAscii
self._mLastChar = b''
if self._mEscCharSetProber:
self._mEscCharSetProber.reset()
for prober in self._mCharSetProbers:
prober.reset()
def feed(self, aBuf):
if self.done:
return
aLen = len(aBuf)
if not aLen:
return
if not self._mGotData:
# If the data starts with BOM, we know it is UTF
if aBuf[:3] == codecs.BOM_UTF8:
# EF BB BF UTF-8 with BOM
self.result = {'encoding': "UTF-8-SIG", 'confidence': 1.0}
elif aBuf[:4] == codecs.BOM_UTF32_LE:
# FF FE 00 00 UTF-32, little-endian BOM
self.result = {'encoding': "UTF-32LE", 'confidence': 1.0}
elif aBuf[:4] == codecs.BOM_UTF32_BE:
# 00 00 FE FF UTF-32, big-endian BOM
self.result = {'encoding': "UTF-32BE", 'confidence': 1.0}
elif aBuf[:4] == b'\xFE\xFF\x00\x00':
# FE FF 00 00 UCS-4, unusual octet order BOM (3412)
self.result = {
'encoding': "X-ISO-10646-UCS-4-3412",
'confidence': 1.0
}
elif aBuf[:4] == b'\x00\x00\xFF\xFE':
# 00 00 FF FE UCS-4, unusual octet order BOM (2143)
self.result = {
'encoding': "X-ISO-10646-UCS-4-2143",
'confidence': 1.0
}
elif aBuf[:2] == codecs.BOM_LE:
# FF FE UTF-16, little endian BOM
self.result = {'encoding': "UTF-16LE", 'confidence': 1.0}
elif aBuf[:2] == codecs.BOM_BE:
# FE FF UTF-16, big endian BOM
self.result = {'encoding': "UTF-16BE", 'confidence': 1.0}
self._mGotData = True
if self.result['encoding'] and (self.result['confidence'] > 0.0):
self.done = True
return
if self._mInputState == ePureAscii:
if self._highBitDetector.search(aBuf):
self._mInputState = eHighbyte
elif ((self._mInputState == ePureAscii) and
self._escDetector.search(self._mLastChar + aBuf)):
self._mInputState = eEscAscii
self._mLastChar = aBuf[-1:]
if self._mInputState == eEscAscii:
if not self._mEscCharSetProber:
self._mEscCharSetProber = EscCharSetProber()
if self._mEscCharSetProber.feed(aBuf) == constants.eFoundIt:
self.result = {'encoding': self._mEscCharSetProber.get_charset_name(),
'confidence': self._mEscCharSetProber.get_confidence()}
self.done = True
elif self._mInputState == eHighbyte:
if not self._mCharSetProbers:
self._mCharSetProbers = [MBCSGroupProber(), SBCSGroupProber(),
Latin1Prober()]
for prober in self._mCharSetProbers:
if prober.feed(aBuf) == constants.eFoundIt:
self.result = {'encoding': prober.get_charset_name(),
'confidence': prober.get_confidence()}
self.done = True
break
def close(self):
if self.done:
return
if not self._mGotData:
if constants._debug:
sys.stderr.write('no data received!\n')
return
self.done = True
if self._mInputState == ePureAscii:
self.result = {'encoding': 'ascii', 'confidence': 1.0}
return self.result
if self._mInputState == eHighbyte:
proberConfidence = None
maxProberConfidence = 0.0
maxProber = None
for prober in self._mCharSetProbers:
if not prober:
continue
proberConfidence = prober.get_confidence()
if proberConfidence > maxProberConfidence:
maxProberConfidence = proberConfidence
maxProber = prober
if maxProber and (maxProberConfidence > MINIMUM_THRESHOLD):
self.result = {'encoding': maxProber.get_charset_name(),
'confidence': maxProber.get_confidence()}
return self.result
if constants._debug:
            sys.stderr.write('no probers hit minimum threshold\n')
for prober in self._mCharSetProbers[0].mProbers:
if not prober:
continue
sys.stderr.write('%s confidence = %s\n' %
(prober.get_charset_name(),
prober.get_confidence()))
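# Minimal usage sketch (the file name comes from the command line; this guard
# only runs when the module is executed directly):
if __name__ == '__main__':
    detector = UniversalDetector()
    fp = open(sys.argv[1], 'rb')
    try:
        for line in fp:
            detector.feed(line)
            if detector.done:
                break
    finally:
        fp.close()
    print(detector.close())  # e.g. {'encoding': 'utf-8', 'confidence': 0.99}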
| mit |
xindus40223115/w16b_test | static/Brython3.1.0-20150301-090019/Lib/unittest/test/test_assertions.py | 738 | 15398 | import datetime
import warnings
import unittest
from itertools import product
class Test_Assertions(unittest.TestCase):
def test_AlmostEqual(self):
self.assertAlmostEqual(1.00000001, 1.0)
self.assertNotAlmostEqual(1.0000001, 1.0)
self.assertRaises(self.failureException,
self.assertAlmostEqual, 1.0000001, 1.0)
self.assertRaises(self.failureException,
self.assertNotAlmostEqual, 1.00000001, 1.0)
self.assertAlmostEqual(1.1, 1.0, places=0)
self.assertRaises(self.failureException,
self.assertAlmostEqual, 1.1, 1.0, places=1)
self.assertAlmostEqual(0, .1+.1j, places=0)
self.assertNotAlmostEqual(0, .1+.1j, places=1)
self.assertRaises(self.failureException,
self.assertAlmostEqual, 0, .1+.1j, places=1)
self.assertRaises(self.failureException,
self.assertNotAlmostEqual, 0, .1+.1j, places=0)
self.assertAlmostEqual(float('inf'), float('inf'))
self.assertRaises(self.failureException, self.assertNotAlmostEqual,
float('inf'), float('inf'))
    def test_AlmostEqualWithDelta(self):
self.assertAlmostEqual(1.1, 1.0, delta=0.5)
self.assertAlmostEqual(1.0, 1.1, delta=0.5)
self.assertNotAlmostEqual(1.1, 1.0, delta=0.05)
self.assertNotAlmostEqual(1.0, 1.1, delta=0.05)
self.assertRaises(self.failureException, self.assertAlmostEqual,
1.1, 1.0, delta=0.05)
self.assertRaises(self.failureException, self.assertNotAlmostEqual,
1.1, 1.0, delta=0.5)
self.assertRaises(TypeError, self.assertAlmostEqual,
1.1, 1.0, places=2, delta=2)
self.assertRaises(TypeError, self.assertNotAlmostEqual,
1.1, 1.0, places=2, delta=2)
first = datetime.datetime.now()
second = first + datetime.timedelta(seconds=10)
self.assertAlmostEqual(first, second,
delta=datetime.timedelta(seconds=20))
self.assertNotAlmostEqual(first, second,
delta=datetime.timedelta(seconds=5))
def test_assertRaises(self):
def _raise(e):
raise e
self.assertRaises(KeyError, _raise, KeyError)
self.assertRaises(KeyError, _raise, KeyError("key"))
try:
self.assertRaises(KeyError, lambda: None)
except self.failureException as e:
self.assertIn("KeyError not raised", str(e))
else:
self.fail("assertRaises() didn't fail")
try:
self.assertRaises(KeyError, _raise, ValueError)
except ValueError:
pass
else:
self.fail("assertRaises() didn't let exception pass through")
with self.assertRaises(KeyError) as cm:
try:
raise KeyError
except Exception as e:
exc = e
raise
self.assertIs(cm.exception, exc)
with self.assertRaises(KeyError):
raise KeyError("key")
try:
with self.assertRaises(KeyError):
pass
except self.failureException as e:
self.assertIn("KeyError not raised", str(e))
else:
self.fail("assertRaises() didn't fail")
try:
with self.assertRaises(KeyError):
raise ValueError
except ValueError:
pass
else:
self.fail("assertRaises() didn't let exception pass through")
def testAssertNotRegex(self):
self.assertNotRegex('Ala ma kota', r'r+')
try:
self.assertNotRegex('Ala ma kota', r'k.t', 'Message')
except self.failureException as e:
self.assertIn("'kot'", e.args[0])
self.assertIn('Message', e.args[0])
else:
self.fail('assertNotRegex should have failed.')
class TestLongMessage(unittest.TestCase):
"""Test that the individual asserts honour longMessage.
This actually tests all the message behaviour for
asserts that use longMessage."""
def setUp(self):
class TestableTestFalse(unittest.TestCase):
longMessage = False
failureException = self.failureException
def testTest(self):
pass
class TestableTestTrue(unittest.TestCase):
longMessage = True
failureException = self.failureException
def testTest(self):
pass
self.testableTrue = TestableTestTrue('testTest')
self.testableFalse = TestableTestFalse('testTest')
def testDefault(self):
self.assertTrue(unittest.TestCase.longMessage)
def test_formatMsg(self):
self.assertEqual(self.testableFalse._formatMessage(None, "foo"), "foo")
self.assertEqual(self.testableFalse._formatMessage("foo", "bar"), "foo")
self.assertEqual(self.testableTrue._formatMessage(None, "foo"), "foo")
self.assertEqual(self.testableTrue._formatMessage("foo", "bar"), "bar : foo")
# This blows up if _formatMessage uses string concatenation
self.testableTrue._formatMessage(object(), 'foo')
def test_formatMessage_unicode_error(self):
one = ''.join(chr(i) for i in range(255))
# this used to cause a UnicodeDecodeError constructing msg
self.testableTrue._formatMessage(one, '\uFFFD')
def assertMessages(self, methodName, args, errors):
"""
Check that methodName(*args) raises the correct error messages.
        errors should be a list of 4 regexes that match the error when:
1) longMessage = False and no msg passed;
2) longMessage = False and msg passed;
3) longMessage = True and no msg passed;
4) longMessage = True and msg passed;
"""
def getMethod(i):
useTestableFalse = i < 2
if useTestableFalse:
test = self.testableFalse
else:
test = self.testableTrue
return getattr(test, methodName)
for i, expected_regex in enumerate(errors):
testMethod = getMethod(i)
kwargs = {}
withMsg = i % 2
if withMsg:
kwargs = {"msg": "oops"}
with self.assertRaisesRegex(self.failureException,
expected_regex=expected_regex):
testMethod(*args, **kwargs)
def testAssertTrue(self):
self.assertMessages('assertTrue', (False,),
["^False is not true$", "^oops$", "^False is not true$",
"^False is not true : oops$"])
def testAssertFalse(self):
self.assertMessages('assertFalse', (True,),
["^True is not false$", "^oops$", "^True is not false$",
"^True is not false : oops$"])
def testNotEqual(self):
self.assertMessages('assertNotEqual', (1, 1),
["^1 == 1$", "^oops$", "^1 == 1$",
"^1 == 1 : oops$"])
def testAlmostEqual(self):
self.assertMessages('assertAlmostEqual', (1, 2),
["^1 != 2 within 7 places$", "^oops$",
"^1 != 2 within 7 places$", "^1 != 2 within 7 places : oops$"])
def testNotAlmostEqual(self):
self.assertMessages('assertNotAlmostEqual', (1, 1),
["^1 == 1 within 7 places$", "^oops$",
"^1 == 1 within 7 places$", "^1 == 1 within 7 places : oops$"])
def test_baseAssertEqual(self):
self.assertMessages('_baseAssertEqual', (1, 2),
["^1 != 2$", "^oops$", "^1 != 2$", "^1 != 2 : oops$"])
def testAssertSequenceEqual(self):
# Error messages are multiline so not testing on full message
# assertTupleEqual and assertListEqual delegate to this method
self.assertMessages('assertSequenceEqual', ([], [None]),
["\+ \[None\]$", "^oops$", r"\+ \[None\]$",
r"\+ \[None\] : oops$"])
def testAssertSetEqual(self):
self.assertMessages('assertSetEqual', (set(), set([None])),
["None$", "^oops$", "None$",
"None : oops$"])
def testAssertIn(self):
self.assertMessages('assertIn', (None, []),
                            [r'^None not found in \[\]$', "^oops$",
                            r'^None not found in \[\]$',
                            r'^None not found in \[\] : oops$'])
def testAssertNotIn(self):
self.assertMessages('assertNotIn', (None, [None]),
                            [r'^None unexpectedly found in \[None\]$', "^oops$",
                            r'^None unexpectedly found in \[None\]$',
                            r'^None unexpectedly found in \[None\] : oops$'])
def testAssertDictEqual(self):
self.assertMessages('assertDictEqual', ({}, {'key': 'value'}),
[r"\+ \{'key': 'value'\}$", "^oops$",
"\+ \{'key': 'value'\}$",
"\+ \{'key': 'value'\} : oops$"])
def testAssertDictContainsSubset(self):
with warnings.catch_warnings():
warnings.simplefilter("ignore", DeprecationWarning)
self.assertMessages('assertDictContainsSubset', ({'key': 'value'}, {}),
["^Missing: 'key'$", "^oops$",
"^Missing: 'key'$",
"^Missing: 'key' : oops$"])
def testAssertMultiLineEqual(self):
self.assertMessages('assertMultiLineEqual', ("", "foo"),
[r"\+ foo$", "^oops$",
r"\+ foo$",
r"\+ foo : oops$"])
def testAssertLess(self):
self.assertMessages('assertLess', (2, 1),
["^2 not less than 1$", "^oops$",
"^2 not less than 1$", "^2 not less than 1 : oops$"])
def testAssertLessEqual(self):
self.assertMessages('assertLessEqual', (2, 1),
["^2 not less than or equal to 1$", "^oops$",
"^2 not less than or equal to 1$",
"^2 not less than or equal to 1 : oops$"])
def testAssertGreater(self):
self.assertMessages('assertGreater', (1, 2),
["^1 not greater than 2$", "^oops$",
"^1 not greater than 2$",
"^1 not greater than 2 : oops$"])
def testAssertGreaterEqual(self):
self.assertMessages('assertGreaterEqual', (1, 2),
["^1 not greater than or equal to 2$", "^oops$",
"^1 not greater than or equal to 2$",
"^1 not greater than or equal to 2 : oops$"])
def testAssertIsNone(self):
self.assertMessages('assertIsNone', ('not None',),
["^'not None' is not None$", "^oops$",
"^'not None' is not None$",
"^'not None' is not None : oops$"])
def testAssertIsNotNone(self):
self.assertMessages('assertIsNotNone', (None,),
["^unexpectedly None$", "^oops$",
"^unexpectedly None$",
"^unexpectedly None : oops$"])
def testAssertIs(self):
self.assertMessages('assertIs', (None, 'foo'),
["^None is not 'foo'$", "^oops$",
"^None is not 'foo'$",
"^None is not 'foo' : oops$"])
def testAssertIsNot(self):
self.assertMessages('assertIsNot', (None, None),
["^unexpectedly identical: None$", "^oops$",
"^unexpectedly identical: None$",
"^unexpectedly identical: None : oops$"])
def assertMessagesCM(self, methodName, args, func, errors):
"""
Check that the correct error messages are raised while executing:
with method(*args):
func()
        *errors* should be a list of 4 regexes that match the error when:
1) longMessage = False and no msg passed;
2) longMessage = False and msg passed;
3) longMessage = True and no msg passed;
4) longMessage = True and msg passed;
"""
p = product((self.testableFalse, self.testableTrue),
({}, {"msg": "oops"}))
for (cls, kwargs), err in zip(p, errors):
method = getattr(cls, methodName)
with self.assertRaisesRegex(cls.failureException, err):
with method(*args, **kwargs) as cm:
func()
def testAssertRaises(self):
self.assertMessagesCM('assertRaises', (TypeError,), lambda: None,
['^TypeError not raised$', '^oops$',
'^TypeError not raised$',
'^TypeError not raised : oops$'])
def testAssertRaisesRegex(self):
# test error not raised
self.assertMessagesCM('assertRaisesRegex', (TypeError, 'unused regex'),
lambda: None,
['^TypeError not raised$', '^oops$',
'^TypeError not raised$',
'^TypeError not raised : oops$'])
# test error raised but with wrong message
def raise_wrong_message():
raise TypeError('foo')
self.assertMessagesCM('assertRaisesRegex', (TypeError, 'regex'),
raise_wrong_message,
['^"regex" does not match "foo"$', '^oops$',
'^"regex" does not match "foo"$',
'^"regex" does not match "foo" : oops$'])
def testAssertWarns(self):
self.assertMessagesCM('assertWarns', (UserWarning,), lambda: None,
['^UserWarning not triggered$', '^oops$',
'^UserWarning not triggered$',
'^UserWarning not triggered : oops$'])
def testAssertWarnsRegex(self):
# test error not raised
self.assertMessagesCM('assertWarnsRegex', (UserWarning, 'unused regex'),
lambda: None,
['^UserWarning not triggered$', '^oops$',
'^UserWarning not triggered$',
'^UserWarning not triggered : oops$'])
# test warning raised but with wrong message
def raise_wrong_message():
warnings.warn('foo')
self.assertMessagesCM('assertWarnsRegex', (UserWarning, 'regex'),
raise_wrong_message,
['^"regex" does not match "foo"$', '^oops$',
'^"regex" does not match "foo"$',
'^"regex" does not match "foo" : oops$'])
| gpl-3.0 |
akosel/incubator-airflow | airflow/www_rbac/decorators.py | 9 | 4418 | # -*- coding: utf-8 -*-
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import gzip
import functools
import pendulum
from io import BytesIO as IO
from flask import after_this_request, redirect, request, url_for, g
from airflow import models, settings
def action_logging(f):
"""
Decorator to log user actions
"""
@functools.wraps(f)
def wrapper(*args, **kwargs):
session = settings.Session()
if g.user.is_anonymous():
user = 'anonymous'
else:
user = g.user.username
log = models.Log(
event=f.__name__,
task_instance=None,
owner=user,
extra=str(list(request.args.items())),
task_id=request.args.get('task_id'),
dag_id=request.args.get('dag_id'))
if 'execution_date' in request.args:
log.execution_date = pendulum.parse(
request.args.get('execution_date'))
session.add(log)
session.commit()
return f(*args, **kwargs)
return wrapper
def gzipped(f):
"""
Decorator to make a view compressed
"""
@functools.wraps(f)
def view_func(*args, **kwargs):
@after_this_request
def zipper(response):
accept_encoding = request.headers.get('Accept-Encoding', '')
if 'gzip' not in accept_encoding.lower():
return response
response.direct_passthrough = False
if (response.status_code < 200 or response.status_code >= 300 or
'Content-Encoding' in response.headers):
return response
gzip_buffer = IO()
gzip_file = gzip.GzipFile(mode='wb',
fileobj=gzip_buffer)
gzip_file.write(response.data)
gzip_file.close()
response.data = gzip_buffer.getvalue()
response.headers['Content-Encoding'] = 'gzip'
response.headers['Vary'] = 'Accept-Encoding'
response.headers['Content-Length'] = len(response.data)
return response
return f(*args, **kwargs)
return view_func
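# Usage sketch (hypothetical Flask view; the route and payload are
# illustrative only):
#
#   @app.route('/dag_stats')
#   @gzipped
#   def dag_stats():
#       return huge_json_payload()
#
# The response is gzip-compressed only when the client advertises support via
# the Accept-Encoding header and the response is a 2xx without a prior
# Content-Encoding.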
def has_dag_access(**dag_kwargs):
"""
Decorator to check whether the user has read / write permission on the dag.
"""
def decorator(f):
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
has_access = self.appbuilder.sm.has_access
dag_id = request.args.get('dag_id')
# if it is false, we need to check whether user has write access on the dag
can_dag_edit = dag_kwargs.get('can_dag_edit', False)
# 1. check whether the user has can_dag_edit permissions on all_dags
# 2. if 1 false, check whether the user
# has can_dag_edit permissions on the dag
# 3. if 2 false, check whether it is can_dag_read view,
# and whether user has the permissions
if (
has_access('can_dag_edit', 'all_dags') or
has_access('can_dag_edit', dag_id) or (not can_dag_edit and
(has_access('can_dag_read',
'all_dags') or
has_access('can_dag_read',
dag_id)))):
return f(self, *args, **kwargs)
else:
return redirect(url_for(self.appbuilder.sm.auth_view.
__class__.__name__ + ".login"))
return wrapper
return decorator
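# Usage sketch (hypothetical view method; Airflow's RBAC views apply the
# decorator in roughly this shape):
#
#   class SomeDagView(AppBuilderBaseView):
#       @has_dag_access(can_dag_edit=True)
#       @action_logging
#       def trigger(self):
#           ...
#
# With can_dag_edit=True the user needs write access on the dag (or all_dags);
# otherwise read access is enough.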
| apache-2.0 |
kuangrewawa/OnosFw | tools/test/topos/obelisk.py | 38 | 2612 | #!/usr/bin/env python
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.cli import CLI
from mininet.log import setLogLevel
class ObeliskTopo( Topo ):
def __init__( self ):
Topo.__init__( self )
topSwitch = self.addSwitch('s1',dpid='1000'.zfill(16))
leftTopSwitch = self.addSwitch('s2',dpid='2000'.zfill(16))
rightTopSwitch = self.addSwitch('s5',dpid='5000'.zfill(16))
leftBotSwitch = self.addSwitch('s3',dpid='3000'.zfill(16))
rightBotSwitch = self.addSwitch('s6',dpid='6000'.zfill(16))
midBotSwitch = self.addSwitch('s28',dpid='2800'.zfill(16))
topHost = self.addHost( 'h1' )
leftTopHost = self.addHost('h2')
rightTopHost = self.addHost('h5')
leftBotHost = self.addHost('h3')
rightBotHost = self.addHost('h6')
midBotHost = self.addHost('h28')
self.addLink(topSwitch,topHost)
self.addLink(leftTopSwitch,leftTopHost)
self.addLink(rightTopSwitch,rightTopHost)
self.addLink(leftBotSwitch,leftBotHost)
self.addLink(rightBotSwitch,rightBotHost)
self.addLink(midBotSwitch,midBotHost)
self.addLink(leftTopSwitch,rightTopSwitch)
self.addLink(topSwitch,leftTopSwitch)
self.addLink(topSwitch,rightTopSwitch)
self.addLink(leftTopSwitch,leftBotSwitch)
self.addLink(rightTopSwitch,rightBotSwitch)
self.addLink(leftBotSwitch,midBotSwitch)
self.addLink(midBotSwitch,rightBotSwitch)
agg1Switch = self.addSwitch('s4',dpid = '3004'.zfill(16))
agg2Switch = self.addSwitch('s7',dpid = '6007'.zfill(16))
agg1Host = self.addHost('h4')
agg2Host = self.addHost('h7')
self.addLink(agg1Switch,agg1Host)
self.addLink(agg2Switch,agg2Host)
self.addLink(agg1Switch, leftBotSwitch)
self.addLink(agg2Switch, rightBotSwitch)
for i in range(10):
num = str(i+8)
switch = self.addSwitch('s'+num,dpid = ('30'+num.zfill(2)).zfill(16))
host = self.addHost('h'+num)
self.addLink(switch, host)
self.addLink(switch, agg1Switch)
for i in range(10):
num = str(i+18)
switch = self.addSwitch('s'+num,dpid = ('60'+num.zfill(2)).zfill(16))
host = self.addHost('h'+num)
self.addLink(switch, host)
self.addLink(switch, agg2Switch)
topos = { 'obelisk': (lambda: ObeliskTopo() ) }
def run():
topo = ObeliskTopo()
net = Mininet( topo=topo, controller=RemoteController, autoSetMacs=True )
net.start()
CLI( net )
net.stop()
if __name__ == '__main__':
setLogLevel( 'info' )
run()
| apache-2.0 |
CeltonMcGrath/TACTIC | src/pyasm/biz/preference.py | 6 | 3572 | ###########################################################
#
# Copyright (c) 2005, Southpaw Technology
# All Rights Reserved
#
# PROPRIETARY INFORMATION. This software is proprietary to
# Southpaw Technology, and is not to be reproduced, transmitted,
# or disclosed in any way without written permission.
#
#
#
__all__ = ['PrefSetting', 'PrefList']
from pyasm.search import SObject, Search, DatabaseException
from pyasm.common import Container, TacticException, Environment
class PrefList(SObject):
'''Defines all of the pref settings in the Admin area'''
SEARCH_TYPE = "sthpw/pref_list"
def get_value_by_key(cls, key, search_type=None):
prod_setting = cls.get_by_key(key, search_type)
value = ""
if prod_setting:
value = prod_setting.get_value("options")
return value
get_value_by_key = classmethod(get_value_by_key)
def get_by_key(cls, key, search_type=None):
dict_key = '%s:%s' %(key, search_type)
cached = cls.get_cached_obj(dict_key)
if cached:
return cached
search = Search(cls.SEARCH_TYPE)
search.add_filter("key", key)
if search_type:
search.add_filter("search_type", search_type)
prod_setting = search.get_sobject()
dict = cls.get_cache_dict()
dict[dict_key] = prod_setting
return prod_setting
get_by_key = classmethod(get_by_key)
class PrefSetting(PrefList):
    '''Defines all of the user settings for a given production'''
SEARCH_TYPE = "sthpw/pref_setting"
def get_value_by_key(cls, key, user=None):
''' get the value of this pref '''
#try:
# from pyasm.biz import SearchTypeCache
# cache = SearchTypeCache.get(cls.SEARCH_TYPE)
#except Exception:
# print "WARNING: Cache not enabled"
# protect against database connection issues (This is called at a very
# low level, so it needs this)
try:
pref_setting = cls.get_by_key(key,user)
value = ''
if pref_setting:
value = pref_setting.get_value("value")
except DatabaseException:
value = ''
return value
get_value_by_key = classmethod(get_value_by_key)
def get_by_key(cls, key, user=None):
if not user:
user = Environment.get_user_name()
# ignore the project_code column for now
dict_key = '%s:%s' %(cls.SEARCH_TYPE, user)
settings_dict = Container.get(dict_key)
# explicit check for None
        if settings_dict is None:
settings_dict = {}
Container.put(dict_key, settings_dict)
search = Search(cls.SEARCH_TYPE)
search.add_filter("login", user)
# don't filter with the key in order to build a dict
pref_settings = search.get_sobjects()
for setting in pref_settings:
settings_dict[setting.get_value('key')] = setting
pref_setting = settings_dict.get(key)
return pref_setting
get_by_key = classmethod(get_by_key)
def create(cls, key, value):
setting = cls.get_by_key(key)
if not setting:
setting = PrefSetting.create_new()
setting.set_value("key", key)
user = Environment.get_user_name()
setting.set_value("login", user)
setting.set_value("value", value)
setting.commit()
return setting
create = classmethod(create)
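# Usage sketch (key and value are illustrative):
#
#   PrefSetting.create('skin', 'dark')            # upsert for the current login
#   value = PrefSetting.get_value_by_key('skin')  # -> 'dark', or '' on DB error
#
# Lookups are cached per request via Container, so repeated reads of the same
# login's settings cost a single query.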
| epl-1.0 |
AnuchitPrasertsang/robotframework-selenium2library | src/Selenium2Library/locators/tableelementfinder.py | 31 | 3986 | from selenium.common.exceptions import NoSuchElementException
from Selenium2Library import utils
from elementfinder import ElementFinder
class TableElementFinder(object):
def __init__(self, element_finder=None):
if not element_finder:
element_finder = ElementFinder()
self._element_finder = element_finder
self._locator_suffixes = {
('css', 'default'): [''],
('css', 'content'): [''],
('css', 'header'): [' th'],
('css', 'footer'): [' tfoot td'],
('css', 'row'): [' tr:nth-child(%s)'],
('css', 'col'): [' tr td:nth-child(%s)', ' tr th:nth-child(%s)'],
('jquery', 'default'): [''],
('jquery', 'content'): [''],
('jquery', 'header'): [' th'],
('jquery', 'footer'): [' tfoot td'],
('jquery', 'row'): [' tr:nth-child(%s)'],
('jquery', 'col'): [' tr td:nth-child(%s)', ' tr th:nth-child(%s)'],
('sizzle', 'default'): [''],
('sizzle', 'content'): [''],
('sizzle', 'header'): [' th'],
('sizzle', 'footer'): [' tfoot td'],
('sizzle', 'row'): [' tr:nth-child(%s)'],
('sizzle', 'col'): [' tr td:nth-child(%s)', ' tr th:nth-child(%s)'],
('xpath', 'default'): [''],
('xpath', 'content'): ['//*'],
('xpath', 'header'): ['//th'],
('xpath', 'footer'): ['//tfoot//td'],
('xpath', 'row'): ['//tr[%s]//*'],
('xpath', 'col'): ['//tr//*[self::td or self::th][%s]']
        }
def find(self, browser, table_locator):
locators = self._parse_table_locator(table_locator, 'default')
return self._search_in_locators(browser, locators, None)
def find_by_content(self, browser, table_locator, content):
locators = self._parse_table_locator(table_locator, 'content')
return self._search_in_locators(browser, locators, content)
def find_by_header(self, browser, table_locator, content):
locators = self._parse_table_locator(table_locator, 'header')
return self._search_in_locators(browser, locators, content)
def find_by_footer(self, browser, table_locator, content):
locators = self._parse_table_locator(table_locator, 'footer')
return self._search_in_locators(browser, locators, content)
def find_by_row(self, browser, table_locator, col, content):
locators = self._parse_table_locator(table_locator, 'row')
locators = [locator % str(col) for locator in locators]
return self._search_in_locators(browser, locators, content)
def find_by_col(self, browser, table_locator, col, content):
locators = self._parse_table_locator(table_locator, 'col')
locators = [locator % str(col) for locator in locators]
return self._search_in_locators(browser, locators, content)
def _parse_table_locator(self, table_locator, location_method):
if table_locator.startswith('xpath='):
table_locator_type = 'xpath'
elif table_locator.startswith('jquery=') or table_locator.startswith('sizzle='):
table_locator_type = 'sizzle'
else:
if not table_locator.startswith('css='):
table_locator = "css=table#%s" % table_locator
table_locator_type = 'css'
locator_suffixes = self._locator_suffixes[(table_locator_type, location_method)]
return map(
lambda locator_suffix: table_locator + locator_suffix,
locator_suffixes)
def _search_in_locators(self, browser, locators, content):
for locator in locators:
elements = self._element_finder.find(browser, locator)
for element in elements:
if content is None: return element
element_text = element.text
if element_text and content in element_text:
return element
return None
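# Illustration of the locator expansion performed above (values are
# illustrative): find_by_col(browser, 'myTable', 2, 'foo') searches, in order,
#
#   css=table#myTable tr td:nth-child(2)
#   css=table#myTable tr th:nth-child(2)
#
# and returns the first matching element whose text contains 'foo', or None.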
| apache-2.0 |
Ldpe2G/mxnet | python/mxnet/rtc.py | 15 | 4121 | """Interface to runtime cuda kernel compile module."""
from __future__ import absolute_import
import ctypes
from .base import _LIB, NDArrayHandle, RtcHandle, mx_uint, c_array, check_call
class Rtc(object):
"""MXRtc object in mxnet.
    This class allows you to write CUDA kernels in Python
and call them with NDArray.
Parameters
----------
name : str
Name of the kernel.
inputs : tuple of (str, mxnet.ndarray)
List of input names and ndarray.
outputs : tuple of (str, mxnet.ndarray)
List of output names and ndarray.
kernel : str
The actual kernel code.
Note that this is only the body of the kernel, i.e.
after { and before }. Rtc will decorate the kernel.
For example, if ``name = "mykernel"`` and
inputs = [('x', mx.nd.zeros((10,)))]
outputs = [('y', mx.nd.zeros((10,)))]
kernel = "y[threadIdx.x] = x[threadIdx.x];",
then the compiled kernel will be:
extern "C" __global__ mykernel(float *x, float *y) {
const int x_ndim = 1;
const int x_dims = { 10 };
const int y_ndim = 1;
const int y_dims = { 10 };
y[threadIdx.x] = x[threadIdx.x];
}
"""
def __init__(self, name, inputs, outputs, kernel):
self.handle = RtcHandle()
input_names = ctypes.cast(c_array(ctypes.c_char_p, [i[0] for i in inputs]),
ctypes.POINTER(ctypes.c_char_p))
output_names = ctypes.cast(c_array(ctypes.c_char_p, [i[0] for i in outputs]),
ctypes.POINTER(ctypes.c_char_p))
input_nds = ctypes.cast(c_array(NDArrayHandle, [i[1].handle for i in inputs]),
ctypes.POINTER(NDArrayHandle))
output_nds = ctypes.cast(c_array(NDArrayHandle, [i[1].handle for i in outputs]),
ctypes.POINTER(NDArrayHandle))
check_call(_LIB.MXRtcCreate(ctypes.c_char_p(name),
mx_uint(len(inputs)),
mx_uint(len(outputs)),
input_names,
output_names,
input_nds,
output_nds,
ctypes.c_char_p(kernel),
ctypes.byref(self.handle)))
def __del__(self):
check_call(_LIB.MXRtcFree(self.handle))
def push(self, inputs, outputs, grid_dims, block_dims):
"""Run the kernel.
Parameters
----------
inputs : list of NDArray
List of inputs. Can contain different NDArrays than those used for the constructor,
but its elements must have the same shapes and appear in the same order.
outputs : list of NDArray
            List of outputs. Can contain different NDArrays than those used for the constructor,
but must have the same shapes and appear in the same order.
grid_dims : tuple of 3 uint
Grid dimension for kernel launch.
block_dims : tuple of 3 uint
Block dimension for kernel launch.
"""
input_nds = ctypes.cast(c_array(NDArrayHandle, [i.handle for i in inputs]),
ctypes.POINTER(NDArrayHandle))
output_nds = ctypes.cast(c_array(NDArrayHandle, [i.handle for i in outputs]),
ctypes.POINTER(NDArrayHandle))
check_call(_LIB.MXRtcPush(self.handle,
mx_uint(len(inputs)),
mx_uint(len(outputs)),
input_nds,
output_nds,
mx_uint(grid_dims[0]),
mx_uint(grid_dims[1]),
mx_uint(grid_dims[2]),
mx_uint(block_dims[0]),
mx_uint(block_dims[1]),
mx_uint(block_dims[2])))
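# Usage sketch, mirroring the class docstring (requires a CUDA-enabled build
# of MXNet; shapes and the kernel body are illustrative):
#
#   import mxnet as mx
#   x = mx.nd.zeros((10,), ctx=mx.gpu(0))
#   y = mx.nd.zeros((10,), ctx=mx.gpu(0))
#   rtc = mx.rtc.Rtc('mykernel', [('x', x)], [('y', y)],
#                    "y[threadIdx.x] = x[threadIdx.x];")
#   rtc.push([x], [y], (1, 1, 1), (10, 1, 1))  # grid dims, block dims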
| apache-2.0 |
idem2lyon/persomov | libs/cache/__init__.py | 99 | 8343 | """
copied from
werkzeug.contrib.cache
~~~~~~~~~~~~~~~~~~~~~~
:copyright: (c) 2011 by the Werkzeug Team, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from cache.posixemulation import rename
from itertools import izip
from time import time
import os
import re
import tempfile
try:
from hashlib import md5
except ImportError:
from md5 import new as md5
try:
import cPickle as pickle
except ImportError:
import pickle
def _items(mappingorseq):
"""Wrapper for efficient iteration over mappings represented by dicts
or sequences::
>>> for k, v in _items((i, i*i) for i in xrange(5)):
... assert k*k == v
>>> for k, v in _items(dict((i, i*i) for i in xrange(5))):
... assert k*k == v
"""
return mappingorseq.iteritems() if hasattr(mappingorseq, 'iteritems') \
else mappingorseq
class BaseCache(object):
"""Baseclass for the cache systems. All the cache systems implement this
API or a superset of it.
:param default_timeout: the default timeout that is used if no timeout is
specified on :meth:`set`.
"""
def __init__(self, default_timeout = 300):
self.default_timeout = default_timeout
def delete(self, key):
"""Deletes `key` from the cache. If it does not exist in the cache
nothing happens.
:param key: the key to delete.
"""
pass
def get_many(self, *keys):
"""Returns a list of values for the given keys.
        For each key an item in the list is created. Example::
foo, bar = cache.get_many("foo", "bar")
If a key can't be looked up `None` is returned for that key
instead.
:param keys: The function accepts multiple keys as positional
arguments.
"""
return map(self.get, keys)
def get_dict(self, *keys):
"""Works like :meth:`get_many` but returns a dict::
d = cache.get_dict("foo", "bar")
foo = d["foo"]
bar = d["bar"]
:param keys: The function accepts multiple keys as positional
arguments.
"""
return dict(izip(keys, self.get_many(*keys)))
def set(self, key, value, timeout = None):
"""Adds a new key/value to the cache (overwrites value, if key already
exists in the cache).
:param key: the key to set
:param value: the value for the key
:param timeout: the cache timeout for the key (if not specified,
it uses the default timeout).
"""
pass
def add(self, key, value, timeout = None):
"""Works like :meth:`set` but does not overwrite the values of already
existing keys.
:param key: the key to set
:param value: the value for the key
:param timeout: the cache timeout for the key or the default
timeout if not specified.
"""
pass
def set_many(self, mapping, timeout = None):
"""Sets multiple keys and values from a mapping.
:param mapping: a mapping with the keys/values to set.
:param timeout: the cache timeout for the key (if not specified,
it uses the default timeout).
"""
for key, value in _items(mapping):
self.set(key, value, timeout)
def delete_many(self, *keys):
"""Deletes multiple keys at once.
:param keys: The function accepts multiple keys as positional
arguments.
"""
for key in keys:
self.delete(key)
def clear(self):
"""Clears the cache. Keep in mind that not all caches support
completely clearing the cache.
"""
pass
def inc(self, key, delta = 1):
"""Increments the value of a key by `delta`. If the key does
not yet exist it is initialized with `delta`.
For supporting caches this is an atomic operation.
:param key: the key to increment.
:param delta: the delta to add.
"""
self.set(key, (self.get(key) or 0) + delta)
def dec(self, key, delta = 1):
"""Decrements the value of a key by `delta`. If the key does
not yet exist it is initialized with `-delta`.
For supporting caches this is an atomic operation.
        :param key: the key to decrement.
:param delta: the delta to subtract.
"""
self.set(key, (self.get(key) or 0) - delta)
class FileSystemCache(BaseCache):
"""A cache that stores the items on the file system. This cache depends
on being the only user of the `cache_dir`. Make absolutely sure that
nobody but this cache stores files there or otherwise the cache will
randomly delete files therein.
:param cache_dir: the directory where cache files are stored.
:param threshold: the maximum number of items the cache stores before
it starts deleting some.
:param default_timeout: the default timeout that is used if no timeout is
specified on :meth:`~BaseCache.set`.
:param mode: the file mode wanted for the cache files, default 0600
"""
#: used for temporary files by the FileSystemCache
_fs_transaction_suffix = '.__wz_cache'
def __init__(self, cache_dir, threshold = 500, default_timeout = 300, mode = 0600):
BaseCache.__init__(self, default_timeout)
self._path = cache_dir
self._threshold = threshold
self._mode = mode
if not os.path.exists(self._path):
os.makedirs(self._path)
def _list_dir(self):
"""return a list of (fully qualified) cache filenames
"""
return [os.path.join(self._path, fn) for fn in os.listdir(self._path)
if not fn.endswith(self._fs_transaction_suffix)]
def _prune(self):
entries = self._list_dir()
if len(entries) > self._threshold:
now = time()
for idx, fname in enumerate(entries):
remove = False
f = None
try:
try:
f = open(fname, 'rb')
expires = pickle.load(f)
remove = expires <= now or idx % 3 == 0
finally:
if f is not None:
f.close()
except Exception:
pass
if remove:
try:
os.remove(fname)
except (IOError, OSError):
pass
def clear(self):
for fname in self._list_dir():
try:
os.remove(fname)
except (IOError, OSError):
pass
def _get_filename(self, key):
hash = md5(key).hexdigest()
return os.path.join(self._path, hash)
def get(self, key):
filename = self._get_filename(key)
try:
f = open(filename, 'rb')
try:
if pickle.load(f) >= time():
return pickle.load(f)
finally:
f.close()
os.remove(filename)
except Exception:
return None
def add(self, key, value, timeout = None):
filename = self._get_filename(key)
if not os.path.exists(filename):
self.set(key, value, timeout)
def set(self, key, value, timeout = None):
if timeout is None:
timeout = self.default_timeout
filename = self._get_filename(key)
self._prune()
try:
fd, tmp = tempfile.mkstemp(suffix = self._fs_transaction_suffix,
dir = self._path)
f = os.fdopen(fd, 'wb')
try:
pickle.dump(int(time() + timeout), f, 1)
pickle.dump(value, f, pickle.HIGHEST_PROTOCOL)
finally:
f.close()
rename(tmp, filename)
os.chmod(filename, self._mode)
except (IOError, OSError):
pass
def delete(self, key):
try:
os.remove(self._get_filename(key))
except (IOError, OSError):
pass
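# Minimal usage sketch (only runs when the module is executed directly; the
# cache directory is a throwaway temp dir):
if __name__ == '__main__':
    cache = FileSystemCache(tempfile.mkdtemp(), threshold = 100, default_timeout = 60)
    cache.set('answer', 42)
    assert cache.get('answer') == 42
    cache.clear()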
| gpl-3.0 |
tkjone/guides-django | series_2/p_03/myproject/server/config/settings/base.py | 4 | 8713 | # -*- coding: utf-8 -*-
"""
Django settings for myproject project.
For more information on this file, see
https://docs.djangoproject.com/en/1.9/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.9/ref/settings/
"""
from __future__ import absolute_import, unicode_literals
import environ
ROOT_DIR = environ.Path(__file__) - 4 # (/a/b/c/myfile.py - 4 = /)
APPS_DIR = ROOT_DIR.path('server')
env = environ.Env()
# ------------------------------------------------------------------------------
# SECRET CONFIGURATION
# ------------------------------------------------------------------------------
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env("DJANGO_SECRET_KEY", default='CHANGEME!!!')
# ------------------------------------------------------------------------------
# APP CONFIGURATION
# ------------------------------------------------------------------------------
DJANGO_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
)
THIRD_PARTY_APPS = (
# wagtail dependencies
'compressor',
'taggit',
'modelcluster',
# wagtail
'wagtail.wagtailcore',
'wagtail.wagtailadmin',
'wagtail.wagtaildocs',
'wagtail.wagtailsnippets',
'wagtail.wagtailusers',
'wagtail.wagtailimages',
'wagtail.wagtailsearch',
'wagtail.wagtailsites',
'wagtail.wagtailredirects',
'wagtail.wagtailforms',
)
LOCAL_APPS = (
'apps.wagtail.pages',
)
# See: https://docs.djangoproject.com/en/dev/ref/settings/#installed-apps
INSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + LOCAL_APPS
# ------------------------------------------------------------------------------
# MIDDLEWARE CONFIGURATION
# ------------------------------------------------------------------------------
MIDDLEWARE_CLASSES = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
# Wagtail Middleware
'wagtail.wagtailcore.middleware.SiteMiddleware',
'wagtail.wagtailredirects.middleware.RedirectMiddleware',
]
# ------------------------------------------------------------------------------
# DEBUG
# ------------------------------------------------------------------------------
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env.bool("DJANGO_DEBUG", default=True)
# ------------------------------------------------------------------------------
# DATABASE CONFIGURATION
# ------------------------------------------------------------------------------
# Database
# https://docs.djangoproject.com/en/1.9/ref/settings/#databases
DATABASES = {
'default': env.db("DATABASE_URL", default="postgres://dev:dev@localhost/myproject")
}
# ------------------------------------------------------------------------------
# GENERAL CONFIGURATION
# ------------------------------------------------------------------------------
# Internationalization
# https://docs.djangoproject.com/en/1.9/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
ALLOWED_HOSTS = []
# ------------------------------------------------------------------------------
# TEMPLATE CONFIGURATION
# ------------------------------------------------------------------------------
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
str(APPS_DIR('templates'))
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
# ------------------------------------------------------------------------------
# STATIC FILE CONFIGURATION
# ------------------------------------------------------------------------------
STATIC_ROOT = str(ROOT_DIR.path('server/staticfiles'))
# https://docs.djangoproject.com/en/1.9/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS = (
str(APPS_DIR.path('static')),
)
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
'compressor.finders.CompressorFinder',
)
# ------------------------------------------------------------------------------
# MEDIA CONFIGURATION
# ------------------------------------------------------------------------------
# See: https://docs.djangoproject.com/en/dev/ref/settings/#media-root
MEDIA_ROOT = str(ROOT_DIR('server/media'))
# See: https://docs.djangoproject.com/en/dev/ref/settings/#media-url
MEDIA_URL = '/media/'
# ------------------------------------------------------------------------------
# URL Configuration
# ------------------------------------------------------------------------------
ROOT_URLCONF = 'config.urls'
# ------------------------------------------------------------------------------
# OTHER Configuration
# ------------------------------------------------------------------------------
ADMINS = (
("""{{cookiecutter.author_name}}""", '{{cookiecutter.email}}'),
)
# See: https://docs.djangoproject.com/en/dev/ref/settings/#managers
MANAGERS = ADMINS
WSGI_APPLICATION = 'config.wsgi.application'
# ------------------------------------------------------------------------------
# LOGGIN INFORMATION
# ------------------------------------------------------------------------------
LOG_DIR = env("LOG_DIR", default=str(ROOT_DIR('logs')))
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
}
},
'formatters': {
'verbose': {
'format': "[%(levelname)s] -- %(asctime)s -- %(module)s:%(lineno)s ___ %(message)s >>> "
"{ process: %(process)d | thread: %(thread)d }",
'datefmt': "%b %e, %I:%M:%S %p"
},
'simple': {
'format': '[%(levelname)s] -- %(message)s'
},
},
'handlers': {
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false', ],
'class': 'django.utils.log.AdminEmailHandler',
'include_html': True
},
'console': {
'level': 'DEBUG',
'class': 'logging.StreamHandler',
'formatter': 'verbose',
},
'file_error': {
'level': 'ERROR',
            'class': 'logging.handlers.RotatingFileHandler',
'filename': LOG_DIR + '/server/django.log',
'maxBytes': 20 * 1024 * 1024,
'formatter': 'verbose'
},
},
'loggers': {
'django.request': {
'handlers': ['file_error', 'mail_admins', ],
'level': 'ERROR',
'propagate': True
},
'django.security.DisallowedHost': {
'level': 'ERROR',
'handlers': ['file_error', 'console', 'mail_admins', ],
'propagate': True
},
'development': {
'handlers': ['console', ],
'level': 'DEBUG',
'propagate': True
},
},
}
# Password validation
# https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# ------------------------------------------------------------------------------
# WAGTAIL SETTINGS
# ------------------------------------------------------------------------------
WAGTAIL_SITE_NAME = 'myproject'
WAGTAILADMIN_NOTIFICATION_FROM_EMAIL = True
TAGGIT_CASE_INSENSITIVE = True
| mit |
gangadhar-kadam/smrterp | erpnext/buying/doctype/purchase_order/purchase_order.py | 3 | 9024 | # Copyright (c) 2013, Web Notes Technologies Pvt. Ltd. and Contributors
# License: GNU General Public License v3. See license.txt
from __future__ import unicode_literals
import frappe
from frappe.utils import cstr, flt
from frappe import msgprint, _, throw
from frappe.model.mapper import get_mapped_doc
from erpnext.controllers.buying_controller import BuyingController
class PurchaseOrder(BuyingController):
tname = 'Purchase Order Item'
fname = 'po_details'
def __init__(self, arg1, arg2=None):
super(PurchaseOrder, self).__init__(arg1, arg2)
self.status_updater = [{
'source_dt': 'Purchase Order Item',
'target_dt': 'Material Request Item',
'join_field': 'prevdoc_detail_docname',
'target_field': 'ordered_qty',
'target_parent_dt': 'Material Request',
'target_parent_field': 'per_ordered',
'target_ref_field': 'qty',
'source_field': 'qty',
'percent_join_field': 'prevdoc_docname',
'overflow_type': 'order'
}]
def validate(self):
super(PurchaseOrder, self).validate()
if not self.status:
self.status = "Draft"
from erpnext.utilities import validate_status
validate_status(self.status, ["Draft", "Submitted", "Stopped",
"Cancelled"])
pc_obj = frappe.get_doc('Purchase Common')
pc_obj.validate_for_items(self)
self.check_for_stopped_status(pc_obj)
self.validate_uom_is_integer("uom", "qty")
self.validate_uom_is_integer("stock_uom", ["qty", "required_qty"])
self.validate_with_previous_doc()
self.validate_for_subcontracting()
self.validate_minimum_order_qty()
self.create_raw_materials_supplied("po_raw_material_details")
def validate_with_previous_doc(self):
super(PurchaseOrder, self).validate_with_previous_doc(self.tname, {
"Supplier Quotation": {
"ref_dn_field": "supplier_quotation",
"compare_fields": [["supplier", "="], ["company", "="], ["currency", "="]],
},
"Supplier Quotation Item": {
"ref_dn_field": "supplier_quotation_item",
"compare_fields": [["rate", "="], ["project_name", "="], ["item_code", "="],
["uom", "="]],
"is_child_table": True
}
})
def validate_minimum_order_qty(self):
itemwise_min_order_qty = frappe._dict(frappe.db.sql("select name, min_order_qty from tabItem"))
for d in self.get("po_details"):
if flt(d.stock_qty) < flt(itemwise_min_order_qty.get(d.item_code)):
frappe.throw(_("Row #{0}: Ordered qty can not less than item's minimum order qty (defined in item master).").format(d.idx))
def get_schedule_dates(self):
for d in self.get('po_details'):
if d.prevdoc_detail_docname and not d.schedule_date:
d.schedule_date = frappe.db.get_value("Material Request Item",
d.prevdoc_detail_docname, "schedule_date")
def get_last_purchase_rate(self):
frappe.get_doc('Purchase Common').get_last_purchase_rate(self)
# Check for Stopped status
def check_for_stopped_status(self, pc_obj):
check_list =[]
for d in self.get('po_details'):
if d.meta.get_field('prevdoc_docname') and d.prevdoc_docname and d.prevdoc_docname not in check_list:
check_list.append(d.prevdoc_docname)
pc_obj.check_for_stopped_status( d.prevdoc_doctype, d.prevdoc_docname)
def update_bin(self, is_submit, is_stopped = 0):
from erpnext.stock.utils import update_bin
pc_obj = frappe.get_doc('Purchase Common')
for d in self.get('po_details'):
#1. Check if is_stock_item == 'Yes'
if frappe.db.get_value("Item", d.item_code, "is_stock_item") == "Yes":
# this happens when item is changed from non-stock to stock item
if not d.warehouse:
continue
ind_qty, po_qty = 0, flt(d.qty) * flt(d.conversion_factor)
if is_stopped:
po_qty = flt(d.qty) > flt(d.received_qty) and \
flt( flt(flt(d.qty) - flt(d.received_qty))*flt(d.conversion_factor)) or 0
# No updates in Material Request on Stop / Unstop
if cstr(d.prevdoc_doctype) == 'Material Request' and not is_stopped:
# get qty and pending_qty of prevdoc
curr_ref_qty = pc_obj.get_qty(d.doctype, 'prevdoc_detail_docname',
d.prevdoc_detail_docname, 'Material Request Item',
'Material Request - Purchase Order', self.name)
max_qty, qty, curr_qty = flt(curr_ref_qty.split('~~~')[1]), \
flt(curr_ref_qty.split('~~~')[0]), 0
if flt(qty) + flt(po_qty) > flt(max_qty):
curr_qty = flt(max_qty) - flt(qty)
# special case as there is no restriction
# for Material Request - Purchase Order
curr_qty = curr_qty > 0 and curr_qty or 0
else:
curr_qty = flt(po_qty)
ind_qty = -flt(curr_qty)
# Update ordered_qty and indented_qty in bin
args = {
"item_code": d.item_code,
"warehouse": d.warehouse,
"ordered_qty": (is_submit and 1 or -1) * flt(po_qty),
"indented_qty": (is_submit and 1 or -1) * flt(ind_qty),
"posting_date": self.transaction_date
}
update_bin(args)
def check_modified_date(self):
mod_db = frappe.db.sql("select modified from `tabPurchase Order` where name = %s",
self.name)
		date_diff = frappe.db.sql("select TIMEDIFF('%s', '%s')" % (mod_db[0][0], cstr(self.modified)))
if date_diff and date_diff[0][0]:
msgprint(_("{0} {1} has been modified. Please refresh.").format(self.doctype, self.name),
raise_exception=True)
def update_status(self, status):
self.check_modified_date()
# step 1:=> Set Status
frappe.db.set(self,'status',cstr(status))
# step 2:=> Update Bin
self.update_bin(is_submit = (status == 'Submitted') and 1 or 0, is_stopped = 1)
# step 3:=> Acknowledge user
msgprint(_("Status of {0} {1} is now {2}").format(self.doctype, self.name, status))
def on_submit(self):
purchase_controller = frappe.get_doc("Purchase Common")
self.update_prevdoc_status()
self.update_bin(is_submit = 1, is_stopped = 0)
frappe.get_doc('Authorization Control').validate_approving_authority(self.doctype,
self.company, self.grand_total)
purchase_controller.update_last_purchase_rate(self, is_submit = 1)
frappe.db.set(self,'status','Submitted')
def on_cancel(self):
pc_obj = frappe.get_doc('Purchase Common')
self.check_for_stopped_status(pc_obj)
# Check if Purchase Receipt has been submitted against current Purchase Order
pc_obj.check_docstatus(check = 'Next', doctype = 'Purchase Receipt', docname = self.name, detail_doctype = 'Purchase Receipt Item')
# Check if Purchase Invoice has been submitted against current Purchase Order
submitted = frappe.db.sql_list("""select t1.name
from `tabPurchase Invoice` t1,`tabPurchase Invoice Item` t2
where t1.name = t2.parent and t2.purchase_order = %s and t1.docstatus = 1""",
self.name)
if submitted:
throw(_("Purchase Invoice {0} is already submitted").format(", ".join(submitted)))
frappe.db.set(self,'status','Cancelled')
self.update_prevdoc_status()
self.update_bin( is_submit = 0, is_stopped = 0)
pc_obj.update_last_purchase_rate(self, is_submit = 0)
def on_update(self):
pass
def set_missing_values(source, target):
target.ignore_pricing_rule = 1
target.run_method("set_missing_values")
target.run_method("calculate_taxes_and_totals")
@frappe.whitelist()
def make_purchase_receipt(source_name, target_doc=None):
def update_item(obj, target, source_parent):
target.qty = flt(obj.qty) - flt(obj.received_qty)
target.stock_qty = (flt(obj.qty) - flt(obj.received_qty)) * flt(obj.conversion_factor)
target.amount = (flt(obj.qty) - flt(obj.received_qty)) * flt(obj.rate)
target.base_amount = (flt(obj.qty) - flt(obj.received_qty)) * flt(obj.base_rate)
doc = get_mapped_doc("Purchase Order", source_name, {
"Purchase Order": {
"doctype": "Purchase Receipt",
"validation": {
"docstatus": ["=", 1],
}
},
"Purchase Order Item": {
"doctype": "Purchase Receipt Item",
"field_map": {
"name": "prevdoc_detail_docname",
"parent": "prevdoc_docname",
"parenttype": "prevdoc_doctype",
},
"postprocess": update_item,
"condition": lambda doc: doc.received_qty < doc.qty
},
"Purchase Taxes and Charges": {
"doctype": "Purchase Taxes and Charges",
"add_if_empty": True
}
}, target_doc, set_missing_values)
return doc
@frappe.whitelist()
def make_purchase_invoice(source_name, target_doc=None):
def update_item(obj, target, source_parent):
target.amount = flt(obj.amount) - flt(obj.billed_amt)
target.base_amount = target.amount * flt(source_parent.conversion_rate)
if flt(obj.base_rate):
target.qty = target.base_amount / flt(obj.base_rate)
doc = get_mapped_doc("Purchase Order", source_name, {
"Purchase Order": {
"doctype": "Purchase Invoice",
"validation": {
"docstatus": ["=", 1],
}
},
"Purchase Order Item": {
"doctype": "Purchase Invoice Item",
"field_map": {
"name": "po_detail",
"parent": "purchase_order",
},
"postprocess": update_item,
"condition": lambda doc: doc.base_amount==0 or doc.billed_amt < doc.amount
},
"Purchase Taxes and Charges": {
"doctype": "Purchase Taxes and Charges",
"add_if_empty": True
}
}, target_doc, set_missing_values)
return doc
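# Usage sketch (the document name is illustrative). Both mappers are
# whitelisted, so the client can call them directly, or server-side code can:
#
#   pr = make_purchase_receipt("PO-00001")
#   pr.insert()   # saves the draft Purchase Receipt mapped from the PO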
| agpl-3.0 |
ericmckean/syzygy | syzygy/build/generate_coverage.py | 4 | 15218 | #!python
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A utility script to perform code coverage analysis."""
import glob
import logging
import optparse
import os
import shutil
import subprocess
import sys
import tempfile
# The list of DLLs we want to instrument in addition to _unittests executables.
_DLLS_TO_INSTRUMENT = [
'basic_block_entry_client.dll',
'call_trace_client.dll',
'coverage_client.dll',
'kasko.dll',
'profile_client.dll',
'syzyasan_rtl.dll',
]
# The list of file patterns to copy to the staging/coverage area.
_FILE_PATTERNS_TO_COPY = [
'*_harness.exe',
'*_tests.exe',
'*_unittests.exe',
'*.dll',
'*.pdb',
'agent_logger.exe',
'call_trace_service.exe',
'test_data',
]
_SYZYGY_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
# This is hardcoded to the Visual Studio default install location.
_PERF_TOOLS_DIR = ('C:/Program Files (x86)/Microsoft Visual Studio 9.0/'
'Team Tools/Performance Tools')
_COVERAGE_ANALYZER_DIR = os.path.normpath(
os.path.join(_SYZYGY_DIR, '../third_party/coverage_analyzer/bin'))
_LOGGER = logging.getLogger(os.path.basename(__file__))
def _Subprocess(command, failure_msg, **kw):
_LOGGER.info('Executing command line %s.', command)
ret = subprocess.call(command, **kw)
if ret != 0:
_LOGGER.error(failure_msg)
raise RuntimeError(failure_msg)
class _ScopedTempDir(object):
"""A utility class for creating a scoped temporary directory."""
def __init__(self):
self._path = None
def Create(self):
self._path = tempfile.mkdtemp()
def path(self):
return self._path
def __del__(self):
if self._path:
shutil.rmtree(self._path)
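# Minimal usage sketch for _ScopedTempDir (illustrative; assumes nothing
# beyond the class above):
#
#   temp_dir = _ScopedTempDir()
#   temp_dir.Create()
#   scratch = temp_dir.path()  # place intermediate files here
#   # the whole tree is removed when temp_dir is garbage collected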
class _CodeCoverageRunnerBase(object):
"""A worker class to take care of running through instrumentation,
profiling and coverage generation. This base class expects derived
classes to implement the following (see class definition for details):
_InstrumentOneFile(self, file_path)
_StartCoverageCapture(self)
_StopCoverageCapture(self)
_ProcessCoverage(self, output_path)
"""
_COVERAGE_FILE = 'unittests'
def __init__(self, build_dir, keep_work_dir):
build_dir = os.path.abspath(build_dir)
self._build_dir = build_dir
self._keep_work_dir = keep_work_dir
self._work_dir = None
self._html_dir = os.path.join(self._build_dir, 'cov')
def __del__(self):
self._CleanupWorkdir()
def Run(self):
"""Performs the code coverage capture for all unittests."""
self._CreateWorkdir()
try:
self._CaptureCoverage()
finally:
self._CleanupWorkdir()
def _InstrumentOneFile(self, file_path):
"""Instruments the provided module for coverage, in place.
Args:
file_path: The path of the module to be instrumented.
"""
raise NotImplementedError()
def _StartCoverageCapture(self):
"""Starts the coverage capture process."""
raise NotImplementedError()
def _StopCoverageCapture(self):
"""Stops the coverage capture process."""
raise NotImplementedError()
def _ProcessCoverage(self, output_path):
"""Processes coverage results and produces an GCOV/LCOV formatted
coverage results file in |output_path|.
Args:
output_path: The path of the output file to produce.
"""
raise NotImplementedError()
def _CreateWorkdir(self):
    assert self._work_dir is None
# The work dir must be a sibling to build_dir, as unittests refer
# to test data through relative paths from their own executable.
work_parent = os.path.abspath(os.path.join(self._build_dir, '..'))
self._work_dir = tempfile.mkdtemp(prefix='instr-', dir=work_parent)
_LOGGER.info('Created working directory "%s".', self._work_dir)
def _CleanupWorkdir(self):
# Clean up our working directory if it still exists.
work_dir = self._work_dir
self._work_dir = None
if not work_dir:
return
if self._keep_work_dir:
_LOGGER.info('Keeping working directory "%s".', work_dir)
else:
_LOGGER.info('Removing working directory "%s".', work_dir)
shutil.rmtree(work_dir, ignore_errors=True)
def _InstrumentExecutables(self):
build_dir = self._build_dir
work_dir = self._work_dir
_LOGGER.info('Build dir "%s".', build_dir)
# Copy all unittest related files to work_dir.
for pattern in _FILE_PATTERNS_TO_COPY:
files = glob.glob(os.path.join(build_dir, pattern))
for path in files:
_LOGGER.info('Copying "%s" to "%s".', path, work_dir)
if os.path.isdir(path):
# If the source file is a directory, do a recursive copy.
dst = os.path.join(work_dir, os.path.basename(path))
shutil.copytree(path, dst)
else:
shutil.copy(path, work_dir)
# Instrument all EXEs in the work dir.
for exe in glob.glob(os.path.join(work_dir, '*.exe')):
self._InstrumentOneFile(exe)
# And the DLLs we've specified.
for dll in _DLLS_TO_INSTRUMENT:
self._InstrumentOneFile(os.path.join(work_dir, dll))
def _RunUnittests(self):
unittests = (glob.glob(os.path.join(self._work_dir, '*_unittests.exe')) +
glob.glob(os.path.join(self._work_dir, '*_tests.exe')))
for unittest in unittests:
_LOGGER.info('Running unittest "%s".', unittest)
      # Run single threaded, and with a 5 minute (300000 ms) timeout. This
      # preserves existing buildbot behaviour with the new sharded tests.
_Subprocess([unittest,
'--single-process-tests',
'--test-launcher-timeout=300000'],
'Unittests "%s" failed.' % os.path.basename(unittest))
def _GenerateHtml(self, input_path):
croc = os.path.abspath(
os.path.join(_SYZYGY_DIR, '../tools/code_coverage/croc.py'))
config = os.path.join(_SYZYGY_DIR, 'build/syzygy.croc')
# The HTML directory is already deleted. Create it now.
os.makedirs(self._html_dir)
cmd = [sys.executable, croc,
'--tree',
'--config', config,
'--input', input_path,
'--html', self._html_dir]
# The coverage html generator wants to run in the directory
# containing our src root.
cwd = os.path.abspath(os.path.join(_SYZYGY_DIR, '../..'))
_LOGGER.info('Generating HTML report')
_Subprocess(cmd, 'Failed to generate HTML coverage report.', cwd=cwd)
def _CaptureCoverage(self):
# Clean up old coverage results. We do this immediately so that previous
# coverage results won't still be around if this script fails.
shutil.rmtree(self._html_dir, ignore_errors=True)
self._InstrumentExecutables()
self._StartCoverageCapture()
try:
self._RunUnittests()
finally:
self._StopCoverageCapture()
output_path = os.path.join(self._work_dir,
'%s.coverage.lcov' % self._COVERAGE_FILE)
self._ProcessCoverage(output_path)
self._GenerateHtml(output_path)
class _CodeCoverageRunnerVS(_CodeCoverageRunnerBase):
"""Code coverage runner that uses the Microsoft Visual Studio Team Tools
instrumenter.
"""
def __init__(self, build_dir, perf_tools_dir, coverage_analyzer_dir,
keep_work_dir):
super(_CodeCoverageRunnerVS, self).__init__(build_dir, keep_work_dir)
self._perf_tools_dir = os.path.abspath(perf_tools_dir)
self._coverage_analyzer_dir = os.path.abspath(coverage_analyzer_dir)
def _InstrumentOneFile(self, file_path):
cmd = [os.path.join(self._perf_tools_dir, 'vsinstr.exe'),
'/coverage',
'/verbose',
file_path]
_LOGGER.info('Instrumenting "%s".', file_path)
_Subprocess(cmd, 'Failed to instrument "%s"' % file_path)
def _StartCoverageCapture(self):
cmd = [os.path.join(self._perf_tools_dir, 'vsperfcmd.exe'),
'/start:coverage',
'/output:"%s"' % os.path.join(self._work_dir, self._COVERAGE_FILE)]
_LOGGER.info('Starting coverage capture.')
_Subprocess(cmd, 'Failed to start coverage capture.')
def _StopCoverageCapture(self):
cmd = [os.path.join(self._perf_tools_dir, 'vsperfcmd.exe'), '/shutdown']
_LOGGER.info('Halting coverage capture.')
_Subprocess(cmd, 'Failed to stop coverage capture.')
def _ProcessCoverage(self, output_path):
# The vsperf tool creates an output with suffix '.coverage'.
input_path = os.path.join(self._work_dir,
'%s.coverage' % self._COVERAGE_FILE)
    # The coverage analyzer places its output in input_file + '.lcov'.
default_output_path = input_path + '.lcov'
cmd = [os.path.join(self._coverage_analyzer_dir, 'coverage_analyzer.exe'),
'-noxml', '-sym_path=%s' % self._work_dir,
input_path]
_LOGGER.info('Generating LCOV file.')
_Subprocess(cmd, 'LCOV generation failed.')
# Move the default output location if necessary.
if default_output_path != output_path:
shutil.move(default_output_path, output_path)
class _CodeCoverageRunnerSyzygy(_CodeCoverageRunnerBase):
"""Code coverage runner that uses the Syzygy code coverage client."""
_SYZYCOVER = 'syzycover'
def __init__(self, build_dir, keep_work_dir):
super(_CodeCoverageRunnerSyzygy, self).__init__(build_dir, keep_work_dir)
self._temp_dir = _ScopedTempDir()
self._temp_dir.Create()
def _InstrumentOneFile(self, file_path):
temp_path = os.path.join(self._temp_dir.path(),
os.path.basename(file_path))
shutil.copy(file_path, temp_path)
cmd = [os.path.join(self._build_dir, 'instrument.exe'),
'--mode=COVERAGE',
'--agent=%s.dll' % self._SYZYCOVER,
'--input-image=%s' % temp_path,
'--output-image=%s' % file_path,
'--no-augment-pdb',
'--overwrite']
_LOGGER.info('Instrumenting "%s".', file_path)
_Subprocess(cmd, 'Failed to instrument "%s"' % file_path)
def _StartCoverageCapture(self):
# Grab a copy of the coverage client and place it in the work directory.
# We give it a different name so that it doesn't conflict with the
# instrumented coverage_client.dll.
syzycover = os.path.abspath(os.path.join(
self._work_dir, '%s.dll' % self._SYZYCOVER))
shutil.copy(os.path.join(self._build_dir, 'coverage_client.dll'),
syzycover)
# Set up the environment so that the coverage client will connect to
    # the appropriate call trace client. Also make it crash if the RPC
    # connection cannot be made.
os.environ['SYZYGY_RPC_INSTANCE_ID'] = '%s,%s' % (syzycover,
self._SYZYCOVER)
os.environ['SYZYGY_RPC_SESSION_MANDATORY'] = '%s,1' % (syzycover)
# Start an instance of the call-trace service in the background.
cmd = [os.path.join(self._build_dir, 'call_trace_service.exe'),
'spawn',
'--instance-id=%s' % self._SYZYCOVER,
'--trace-dir=%s' % self._work_dir]
_LOGGER.info('Starting coverage capture.')
_Subprocess(cmd, 'Failed to start coverage capture.')
def _StopCoverageCapture(self):
cmd = [os.path.join(self._build_dir, 'call_trace_service.exe'),
'stop',
'--instance-id=%s' % self._SYZYCOVER]
_LOGGER.info('Halting coverage capture.')
_Subprocess(cmd, 'Failed to stop coverage capture.')
def _ProcessCoverage(self, output_path):
_LOGGER.info('Generating LCOV file.')
cmd = [os.path.join(self._build_dir, 'grinder.exe'),
'--mode=coverage',
'--output-file=%s' % output_path,
os.path.join(self._work_dir, 'trace-*.bin')]
_Subprocess(cmd, 'LCOV generation failed.')
_USAGE = """\
%prog [options]
Generates a code coverage report for unittests in a given build directory.
On a successful run, the HTML report will be produced in a subdirectory
of the given build directory named "cov".
"""
def _ParseArguments():
parser = optparse.OptionParser()
parser.add_option('-v', '--verbose', dest='verbose',
action='store_true', default=False,
help='Enable verbose logging.')
parser.add_option('--build-dir', dest='build_dir',
help='The directory where build output is placed.')
parser.add_option('--target', dest='target',
help='The build profile for which coverage is being '
'generated. If not specified, default to None. '
'Will be appended to --build-dir to generate the '
'name of the directory containing the binaries '
'to analyze.')
parser.add_option('--perf-tools-dir', dest='perf_tools_dir',
default=_PERF_TOOLS_DIR,
help='The directory where the VS performance tools, '
'"vsinstr.exe" and "vsperfcmd.exe" are found. '
'Ignored if --syzygy is specified.')
parser.add_option('--coverage-analyzer-dir', dest='coverage_analyzer_dir',
default=_COVERAGE_ANALYZER_DIR,
help='The directory where "coverage_analyzer.exe" '
'is found. Ignored if --syzygy is specified.')
parser.add_option('--keep-work-dir', action='store_true', default=False,
help='Keep temporary directory after run.')
parser.add_option('--syzygy', action='store_true', default=False,
help='Use Syzygy coverage tools.')
(opts, args) = parser.parse_args()
if args:
parser.error('This script does not accept any arguments.')
if not opts.build_dir:
parser.error('You must provide a build directory.')
opts.build_dir = os.path.abspath(opts.build_dir)
# If a target name was specified, then refine the build path with that.
if opts.target:
opts.build_dir = os.path.abspath(os.path.join(opts.build_dir, opts.target))
if not os.path.isdir(opts.build_dir):
parser.error('Path does not exist: %s' % opts.build_dir)
if opts.verbose:
logging.basicConfig(level=logging.INFO)
else:
logging.basicConfig(level=logging.ERROR)
return opts
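# Illustrative invocations (the paths are hypothetical examples, not taken
# from the script itself):
#
#   python generate_coverage.py --build-dir=out --target=Debug --verbose
#   python generate_coverage.py --build-dir=out/Debug --syzygy --keep-work-dir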
def main():
opts = _ParseArguments()
if opts.syzygy:
runner = _CodeCoverageRunnerSyzygy(opts.build_dir,
opts.keep_work_dir)
else:
runner = _CodeCoverageRunnerVS(opts.build_dir,
opts.perf_tools_dir,
opts.coverage_analyzer_dir,
opts.keep_work_dir)
runner.Run()
if __name__ == '__main__':
sys.exit(main())
| apache-2.0 |
rushiagr/keystone | keystone/tests/unit/test_contrib_s3_core.py | 10 | 2130 | # Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid
from keystone.contrib import s3
from keystone import exception
from keystone.tests import unit as tests
class S3ContribCore(tests.TestCase):
def setUp(self):
super(S3ContribCore, self).setUp()
self.load_backends()
self.controller = s3.S3Controller()
def test_good_signature(self):
creds_ref = {'secret':
'b121dd41cdcc42fe9f70e572e84295aa'}
credentials = {'token':
'UFVUCjFCMk0yWThBc2dUcGdBbVk3UGhDZmc9PQphcHB'
'saWNhdGlvbi9vY3RldC1zdHJlYW0KVHVlLCAxMSBEZWMgMjAxM'
'iAyMTo0MTo0MSBHTVQKL2NvbnRfczMvdXBsb2FkZWRfZnJ'
'vbV9zMy50eHQ=',
'signature': 'IL4QLcLVaYgylF9iHj6Wb8BGZsw='}
self.assertIsNone(self.controller.check_signature(creds_ref,
credentials))
def test_bad_signature(self):
creds_ref = {'secret':
'b121dd41cdcc42fe9f70e572e84295aa'}
credentials = {'token':
'UFVUCjFCMk0yWThBc2dUcGdBbVk3UGhDZmc9PQphcHB'
'saWNhdGlvbi9vY3RldC1zdHJlYW0KVHVlLCAxMSBEZWMgMjAxM'
'iAyMTo0MTo0MSBHTVQKL2NvbnRfczMvdXBsb2FkZWRfZnJ'
'vbV9zMy50eHQ=',
'signature': uuid.uuid4().hex}
self.assertRaises(exception.Unauthorized,
self.controller.check_signature,
creds_ref, credentials)
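# Illustrative sketch (not keystone API; names below are hypothetical): an
# S3-style (AWS signature v2) signature such as the fixture above is the
# base64-encoded HMAC-SHA1 of the string-to-sign, keyed by the secret:
#
#   import base64
#   import hmac
#   from hashlib import sha1
#
#   def sign(secret, string_to_sign):
#       return base64.b64encode(hmac.new(secret, string_to_sign, sha1).digest())
#
# The 'token' field carries the base64-encoded string-to-sign, and
# check_signature() verifies that re-signing it with the stored secret
# reproduces the supplied 'signature'.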
| apache-2.0 |
molobrakos/home-assistant | homeassistant/components/nest/binary_sensor.py | 7 | 5052 | """Support for Nest Thermostat binary sensors."""
from itertools import chain
import logging
from homeassistant.components.binary_sensor import BinarySensorDevice
from homeassistant.const import CONF_MONITORED_CONDITIONS
from . import (
CONF_BINARY_SENSORS, DATA_NEST, DATA_NEST_CONFIG, NestSensorDevice)
_LOGGER = logging.getLogger(__name__)
BINARY_TYPES = {'online': 'connectivity'}
CLIMATE_BINARY_TYPES = {
'fan': None,
'is_using_emergency_heat': 'heat',
'is_locked': None,
'has_leaf': None,
}
CAMERA_BINARY_TYPES = {
'motion_detected': 'motion',
'sound_detected': 'sound',
'person_detected': 'occupancy',
}
STRUCTURE_BINARY_TYPES = {
'away': None,
}
STRUCTURE_BINARY_STATE_MAP = {
'away': {'away': True, 'home': False},
}
_BINARY_TYPES_DEPRECATED = [
'hvac_ac_state',
'hvac_aux_heater_state',
'hvac_heater_state',
'hvac_heat_x2_state',
'hvac_heat_x3_state',
'hvac_alt_heat_state',
'hvac_alt_heat_x2_state',
'hvac_emer_heat_state',
]
_VALID_BINARY_SENSOR_TYPES = {
**BINARY_TYPES,
**CLIMATE_BINARY_TYPES,
**CAMERA_BINARY_TYPES,
**STRUCTURE_BINARY_TYPES,
}
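# The merge above uses PEP 448 dict unpacking (Python 3.5+); an equivalent
# construction for older interpreters would be:
#
#   _VALID_BINARY_SENSOR_TYPES = {}
#   for _mapping in (BINARY_TYPES, CLIMATE_BINARY_TYPES,
#                    CAMERA_BINARY_TYPES, STRUCTURE_BINARY_TYPES):
#       _VALID_BINARY_SENSOR_TYPES.update(_mapping)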
def setup_platform(hass, config, add_entities, discovery_info=None):
"""Set up the Nest binary sensors.
No longer used.
"""
async def async_setup_entry(hass, entry, async_add_entities):
"""Set up a Nest binary sensor based on a config entry."""
nest = hass.data[DATA_NEST]
discovery_info = \
hass.data.get(DATA_NEST_CONFIG, {}).get(CONF_BINARY_SENSORS, {})
# Add all available binary sensors if no Nest binary sensor config is set
if discovery_info == {}:
conditions = _VALID_BINARY_SENSOR_TYPES
else:
conditions = discovery_info.get(CONF_MONITORED_CONDITIONS, {})
for variable in conditions:
if variable in _BINARY_TYPES_DEPRECATED:
            wstr = (variable + " is no longer a supported "
                    "monitored_conditions value. See "
                    "https://home-assistant.io/components/binary_sensor.nest/ "
                    "for valid options.")
_LOGGER.error(wstr)
def get_binary_sensors():
"""Get the Nest binary sensors."""
sensors = []
for structure in nest.structures():
sensors += [NestBinarySensor(structure, None, variable)
for variable in conditions
if variable in STRUCTURE_BINARY_TYPES]
device_chain = chain(
nest.thermostats(), nest.smoke_co_alarms(), nest.cameras())
for structure, device in device_chain:
sensors += [NestBinarySensor(structure, device, variable)
for variable in conditions
if variable in BINARY_TYPES]
sensors += [NestBinarySensor(structure, device, variable)
for variable in conditions
if variable in CLIMATE_BINARY_TYPES
and device.is_thermostat]
if device.is_camera:
sensors += [NestBinarySensor(structure, device, variable)
for variable in conditions
if variable in CAMERA_BINARY_TYPES]
for activity_zone in device.activity_zones:
sensors += [NestActivityZoneSensor(
structure, device, activity_zone)]
return sensors
async_add_entities(await hass.async_add_job(get_binary_sensors), True)
class NestBinarySensor(NestSensorDevice, BinarySensorDevice):
"""Represents a Nest binary sensor."""
@property
def is_on(self):
"""Return true if the binary sensor is on."""
return self._state
@property
def device_class(self):
"""Return the device class of the binary sensor."""
return _VALID_BINARY_SENSOR_TYPES.get(self.variable)
def update(self):
"""Retrieve latest state."""
value = getattr(self.device, self.variable)
if self.variable in STRUCTURE_BINARY_TYPES:
self._state = bool(STRUCTURE_BINARY_STATE_MAP
[self.variable].get(value))
else:
self._state = bool(value)
class NestActivityZoneSensor(NestBinarySensor):
"""Represents a Nest binary sensor for activity in a zone."""
def __init__(self, structure, device, zone):
"""Initialize the sensor."""
super(NestActivityZoneSensor, self).__init__(structure, device, "")
self.zone = zone
self._name = "{} {} activity".format(self._name, self.zone.name)
@property
def unique_id(self):
"""Return unique id based on camera serial and zone id."""
return "{}-{}".format(self.device.serial, self.zone.zone_id)
@property
def device_class(self):
"""Return the device class of the binary sensor."""
return 'motion'
def update(self):
"""Retrieve latest state."""
self._state = self.device.has_ongoing_motion_in_zone(self.zone.zone_id)
| apache-2.0 |
JohnOrlando/gnuradio-bitshark | gnuradio-examples/python/usrp/usrp_benchmark_usb.py | 11 | 3302 | #!/usr/bin/env python
#
# Copyright 2004,2005 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
"""
Benchmark the USB/USRP throughput. Finds the maximum full-duplex speed
the USRP/USB combination can sustain without errors.
This program does not currently give reliable results. Sorry about that...
"""
from gnuradio import gr
from gnuradio import usrp
from gnuradio import eng_notation
import sys
def run_test (usb_throughput, verbose):
# usb_throughput is in bytes/sec.
#
# Returns True or False
nsec = 1
stream_length = int (usb_throughput/2 * nsec) # length of stream to examine
adc_freq = 64e6
dac_freq = 128e6
sizeof_sample = 2 * gr.sizeof_short
usb_throughput_in_samples = usb_throughput / sizeof_sample
# allocate usb throughput 50/50 between Tx and Rx
tx_interp = int (dac_freq) / int (usb_throughput_in_samples / 2)
rx_decim = int (adc_freq) / int (usb_throughput_in_samples / 2)
# print "tx_interp =", tx_interp, "rx_decim =", rx_decim
assert (tx_interp == 2 * rx_decim)
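    # Worked example (illustrative): at usb_throughput = 32e6 bytes/sec and
    # sizeof_sample = 4 bytes, usb_throughput_in_samples = 8e6. Splitting it
    # 50/50 gives 4e6 samples/sec per direction, so tx_interp = 128e6 / 4e6
    # = 32 and rx_decim = 64e6 / 4e6 = 16, satisfying the assertion above.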
tb = gr.top_block ()
# Build the Tx pipeline
data_src = gr.lfsr_32k_source_s ()
src_head = gr.head (gr.sizeof_short, int (stream_length * 2))
usrp_tx = usrp.sink_s (0, tx_interp)
tb.connect (data_src, src_head, usrp_tx)
# and the Rx pipeline
usrp_rx = usrp.source_s (0, rx_decim, 1, 0x32103210, usrp.FPGA_MODE_LOOPBACK)
head = gr.head (gr.sizeof_short, stream_length)
check = gr.check_lfsr_32k_s ()
tb.connect (usrp_rx, head, check)
tb.run ()
ntotal = check.ntotal ()
nright = check.nright ()
runlength = check.runlength ()
if verbose:
print "usb_throughput =", eng_notation.num_to_str (usb_throughput)
print "ntotal =", ntotal
print "nright =", nright
print "runlength =", runlength
print "delta =", ntotal - runlength
return runlength >= stream_length - 80000
def main ():
verbose = True
best_rate = 0
usb_rate = [ 2e6, 4e6, 8e6, 16e6, 32e6 ]
#usb_rate = [ 32e6, 32e6, 32e6, 32e6, 32e6 ]
# usb_rate.reverse ()
for rate in usb_rate:
sys.stdout.write ("Testing %sB/sec... " % (eng_notation.num_to_str (rate)))
sys.stdout.flush ()
ok = run_test (rate, verbose)
if ok:
best_rate = max (best_rate, rate)
sys.stdout.write ("OK\n")
else:
sys.stdout.write ("FAILED\n")
print "Max USB/USRP throughput = %sB/sec" % (eng_notation.num_to_str (best_rate),)
if __name__ == '__main__':
main ()
| gpl-3.0 |
ioram7/keystone-federado-pgid2013 | build/paste/build/lib.linux-x86_64-2.7/paste/debug/testserver.py | 28 | 3385 | # (c) 2005 Clark C. Evans
# This module is part of the Python Paste Project and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
# This code was written with funding by http://prometheusresearch.com
"""
WSGI Test Server
This builds upon paste.util.baseserver to customize it for regressions
where using raw_interactive won't do.
"""
import time
from paste.httpserver import *
class WSGIRegressionServer(WSGIServer):
"""
A threaded WSGIServer for use in regression testing. To use this
module, call serve(application, regression=True), and then call
server.accept() to let it handle one request. When finished, use
server.stop() to shutdown the server. Note that all pending requests
are processed before the server shuts down.
"""
defaulttimeout = 10
def __init__ (self, *args, **kwargs):
WSGIServer.__init__(self, *args, **kwargs)
self.stopping = []
self.pending = []
self.timeout = self.defaulttimeout
# this is a local connection, be quick
self.socket.settimeout(2)
def serve_forever(self):
from threading import Thread
thread = Thread(target=self.serve_pending)
thread.start()
def reset_expires(self):
if self.timeout:
self.expires = time.time() + self.timeout
def close_request(self, *args, **kwargs):
WSGIServer.close_request(self, *args, **kwargs)
self.pending.pop()
self.reset_expires()
def serve_pending(self):
self.reset_expires()
while not self.stopping or self.pending:
now = time.time()
if now > self.expires and self.timeout:
# note regression test doesn't handle exceptions in
# threads very well; so we just print and exit
print "\nWARNING: WSGIRegressionServer timeout exceeded\n"
break
if self.pending:
self.handle_request()
time.sleep(.1)
def stop(self):
""" stop the server (called from tester's thread) """
self.stopping.append(True)
def accept(self, count = 1):
""" accept another request (called from tester's thread) """
assert not self.stopping
[self.pending.append(True) for x in range(count)]
def serve(application, host=None, port=None, handler=None):
server = WSGIRegressionServer(application, host, port, handler)
print "serving on %s:%s" % server.server_address
server.serve_forever()
return server
if __name__ == '__main__':
import urllib
from paste.wsgilib import dump_environ
server = serve(dump_environ)
baseuri = ("http://%s:%s" % server.server_address)
def fetch(path):
# tell the server to humor exactly one more request
server.accept(1)
# not needed; but this is what you do if the server
        # may not respond in a reasonable time period
import socket
socket.setdefaulttimeout(5)
# build a uri, fetch and return
return urllib.urlopen(baseuri + path).read()
assert "PATH_INFO: /foo" in fetch("/foo")
assert "PATH_INFO: /womble" in fetch("/womble")
# ok, let's make one more final request...
server.accept(1)
# and then schedule a stop()
server.stop()
# and then... fetch it...
urllib.urlopen(baseuri)
| apache-2.0 |
bjori/grpc | src/python/grpcio_test/grpc_test/framework/foundation/_later_test.py | 35 | 5102 | # Copyright 2015, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Tests of the later module."""
import threading
import time
import unittest
from grpc.framework.foundation import later
TICK = 0.1
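# Illustrative sketch of the API under test (assumes only the documented
# behaviour exercised below): later.later(delay, computation) returns a
# future that runs `computation` after `delay` seconds, e.g.
#
#   future = later.later(0.2, lambda: 42)
#   assert future.result() == 42  # blocks until the computation completes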
class LaterTest(unittest.TestCase):
def test_simple_delay(self):
lock = threading.Lock()
cell = [0]
return_value = object()
def computation():
with lock:
cell[0] += 1
return return_value
computation_future = later.later(TICK * 2, computation)
self.assertFalse(computation_future.done())
self.assertFalse(computation_future.cancelled())
time.sleep(TICK)
self.assertFalse(computation_future.done())
self.assertFalse(computation_future.cancelled())
with lock:
self.assertEqual(0, cell[0])
time.sleep(TICK * 2)
self.assertTrue(computation_future.done())
self.assertFalse(computation_future.cancelled())
with lock:
self.assertEqual(1, cell[0])
self.assertEqual(return_value, computation_future.result())
def test_callback(self):
lock = threading.Lock()
cell = [0]
callback_called = [False]
future_passed_to_callback = [None]
def computation():
with lock:
cell[0] += 1
computation_future = later.later(TICK * 2, computation)
def callback(outcome):
with lock:
callback_called[0] = True
future_passed_to_callback[0] = outcome
computation_future.add_done_callback(callback)
time.sleep(TICK)
with lock:
self.assertFalse(callback_called[0])
time.sleep(TICK * 2)
with lock:
self.assertTrue(callback_called[0])
self.assertTrue(future_passed_to_callback[0].done())
callback_called[0] = False
future_passed_to_callback[0] = None
computation_future.add_done_callback(callback)
with lock:
self.assertTrue(callback_called[0])
self.assertTrue(future_passed_to_callback[0].done())
def test_cancel(self):
lock = threading.Lock()
cell = [0]
callback_called = [False]
future_passed_to_callback = [None]
def computation():
with lock:
cell[0] += 1
computation_future = later.later(TICK * 2, computation)
def callback(outcome):
with lock:
callback_called[0] = True
future_passed_to_callback[0] = outcome
computation_future.add_done_callback(callback)
time.sleep(TICK)
with lock:
self.assertFalse(callback_called[0])
computation_future.cancel()
self.assertTrue(computation_future.cancelled())
self.assertFalse(computation_future.running())
self.assertTrue(computation_future.done())
with lock:
self.assertTrue(callback_called[0])
self.assertTrue(future_passed_to_callback[0].cancelled())
def test_result(self):
lock = threading.Lock()
cell = [0]
callback_called = [False]
future_passed_to_callback_cell = [None]
return_value = object()
def computation():
with lock:
cell[0] += 1
return return_value
computation_future = later.later(TICK * 2, computation)
def callback(future_passed_to_callback):
with lock:
callback_called[0] = True
future_passed_to_callback_cell[0] = future_passed_to_callback
computation_future.add_done_callback(callback)
returned_value = computation_future.result()
self.assertEqual(return_value, returned_value)
# The callback may not yet have been called! Sleep a tick.
time.sleep(TICK)
with lock:
self.assertTrue(callback_called[0])
self.assertEqual(return_value, future_passed_to_callback_cell[0].result())
if __name__ == '__main__':
unittest.main(verbosity=2)
| bsd-3-clause |
dfc/beets | test/test_info.py | 25 | 3581 | # This file is part of beets.
# Copyright 2015, Thomas Scholtes.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
from __future__ import (division, absolute_import, print_function,
unicode_literals)
from test._common import unittest
from test.helper import TestHelper
from beets.mediafile import MediaFile
from beets.util import displayable_path
class InfoTest(unittest.TestCase, TestHelper):
def setUp(self):
self.setup_beets()
self.load_plugins('info')
def tearDown(self):
self.unload_plugins()
self.teardown_beets()
def run_command(self, *args):
super(InfoTest, self).run_command('info', *args)
def test_path(self):
path = self.create_mediafile_fixture()
mediafile = MediaFile(path)
mediafile.albumartist = 'AAA'
mediafile.disctitle = 'DDD'
mediafile.genres = ['a', 'b', 'c']
mediafile.composer = None
mediafile.save()
out = self.run_with_output(path)
self.assertIn(path, out)
self.assertIn('albumartist: AAA', out)
self.assertIn('disctitle: DDD', out)
self.assertIn('genres: a; b; c', out)
self.assertNotIn('composer:', out)
def test_item_query(self):
item1, item2 = self.add_item_fixtures(count=2)
item1.album = 'xxxx'
item1.write()
item1.album = 'yyyy'
item1.store()
out = self.run_with_output('album:yyyy')
self.assertIn(displayable_path(item1.path), out)
self.assertIn(u'album: xxxx', out)
self.assertNotIn(displayable_path(item2.path), out)
def test_item_library_query(self):
item, = self.add_item_fixtures()
item.album = 'xxxx'
item.store()
out = self.run_with_output('--library', 'album:xxxx')
self.assertIn(displayable_path(item.path), out)
self.assertIn(u'album: xxxx', out)
def test_collect_item_and_path(self):
path = self.create_mediafile_fixture()
mediafile = MediaFile(path)
item, = self.add_item_fixtures()
item.album = mediafile.album = 'AAA'
item.tracktotal = mediafile.tracktotal = 5
item.title = 'TTT'
mediafile.title = 'SSS'
item.write()
item.store()
mediafile.save()
out = self.run_with_output('--summarize', 'album:AAA', path)
self.assertIn(u'album: AAA', out)
self.assertIn(u'tracktotal: 5', out)
self.assertIn(u'title: [various]', out)
def test_include_pattern(self):
item, = self.add_item_fixtures()
item.album = 'xxxx'
item.store()
out = self.run_with_output('--library', 'album:xxxx',
'--include-keys', '*lbu*')
self.assertIn(displayable_path(item.path), out)
self.assertNotIn(u'title:', out)
self.assertIn(u'album: xxxx', out)
def suite():
return unittest.TestLoader().loadTestsFromName(__name__)
if __name__ == b'__main__':
unittest.main(defaultTest='suite')
| mit |
parmegv/keymanager | setup.py | 1 | 4865 | # -*- coding: utf-8 -*-
# setup.py
# Copyright (C) 2013 LEAP
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
setup file for leap.keymanager
"""
import re
from setuptools import setup
from setuptools import find_packages
import versioneer
versioneer.versionfile_source = 'src/leap/keymanager/_version.py'
versioneer.versionfile_build = 'leap/keymanager/_version.py'
versioneer.tag_prefix = '' # tags are like 1.2.0
versioneer.parentdir_prefix = 'leap.keymanager-'
from pkg import utils
trove_classifiers = [
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU General Public License v3 (GPLv3)',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Topic :: Communications :: Email',
'Topic :: Internet',
'Topic :: Security :: Cryptography',
'Topic :: Software Development :: Libraries',
]
DOWNLOAD_BASE = ('https://github.com/leapcode/keymanager/'
'archive/%s.tar.gz')
_versions = versioneer.get_versions()
VERSION = _versions['version']
VERSION_FULL = _versions['full']
DOWNLOAD_URL = ""
# get the short version for the download url
_version_short = re.findall(r'\d+\.\d+\.\d+', VERSION)
if len(_version_short) > 0:
VERSION_SHORT = _version_short[0]
DOWNLOAD_URL = DOWNLOAD_BASE % VERSION_SHORT
cmdclass = versioneer.get_cmdclass()
from setuptools import Command
class freeze_debianver(Command):
"""
Freezes the version in a debian branch.
To be used after merging the development branch onto the debian one.
"""
user_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
proceed = str(raw_input(
"This will overwrite the file _version.py. Continue? [y/N] "))
if proceed != "y":
print("He. You scared. Aborting.")
return
template = r"""
# This file was generated by the `freeze_debianver` command in setup.py
# Using 'versioneer.py' (0.7+) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.
version_version = '{version}'
version_full = '{version_full}'
"""
templatefun = r"""
def get_versions(default={}, verbose=False):
return {'version': version_version, 'full': version_full}
"""
subst_template = template.format(
version=VERSION_SHORT,
version_full=VERSION_FULL) + templatefun
with open(versioneer.versionfile_source, 'w') as f:
f.write(subst_template)
cmdclass["freeze_debianver"] = freeze_debianver
# XXX add ref to docs
requirements = utils.parse_requirements()
if utils.is_develop_mode():
print
print ("[WARNING] Skipping leap-specific dependencies "
"because development mode is detected.")
print ("[WARNING] You can install "
"the latest published versions with "
"'pip install -r pkg/requirements-leap.pip'")
print ("[WARNING] Or you can instead do 'python setup.py develop' "
"from the parent folder of each one of them.")
print
else:
requirements += utils.parse_requirements(
reqfiles=["pkg/requirements-leap.pip"])
setup(
name='leap.keymanager',
version=VERSION,
cmdclass=cmdclass,
url='https://leap.se/',
download_url=DOWNLOAD_URL,
license='GPLv3+',
description='LEAP\'s Key Manager',
author='The LEAP Encryption Access Project',
author_email='[email protected]',
maintainer='Kali Kaneko',
maintainer_email='[email protected]',
long_description=(
"The Key Manager handles all types of keys to allow for "
"point-to-point encryption between parties communicating through "
"LEAP infrastructure."
),
classifiers=trove_classifiers,
namespace_packages=["leap"],
packages=find_packages('src', exclude=['leap.keymanager.tests']),
package_dir={'': 'src'},
test_suite='leap.keymanager.tests',
install_requires=requirements,
tests_require=utils.parse_requirements(
reqfiles=['pkg/requirements-testing.pip']),
)
| gpl-3.0 |
nikolas/django-extensions | django_extensions/mongodb/models.py | 28 | 2544 | """
Django Extensions abstract base mongoengine Document classes.
"""
import datetime
from django.utils.translation import ugettext_lazy as _
from mongoengine.document import Document
from mongoengine.fields import DateTimeField, IntField, StringField
from mongoengine.queryset import QuerySetManager
from django_extensions.mongodb.fields import (
AutoSlugField, CreationDateTimeField, ModificationDateTimeField,
)
class TimeStampedModel(Document):
""" TimeStampedModel
An abstract base class model that provides self-managed "created" and
"modified" fields.
"""
created = CreationDateTimeField(_('created'))
modified = ModificationDateTimeField(_('modified'))
class Meta:
abstract = True
class TitleSlugDescriptionModel(Document):
""" TitleSlugDescriptionModel
An abstract base class model that provides title and description fields
and a self-managed "slug" field that populates from the title.
"""
title = StringField(_('title'), max_length=255)
slug = AutoSlugField(_('slug'), populate_from='title')
description = StringField(_('description'), blank=True, null=True)
class Meta:
abstract = True
class ActivatorModelManager(QuerySetManager):
""" ActivatorModelManager
Manager to return instances of ActivatorModel: SomeModel.objects.active() / .inactive()
"""
def active(self):
""" Returns active instances of ActivatorModel: SomeModel.objects.active() """
return super(ActivatorModelManager, self).get_query_set().filter(status=1)
def inactive(self):
""" Returns inactive instances of ActivatorModel: SomeModel.objects.inactive() """
return super(ActivatorModelManager, self).get_query_set().filter(status=0)
class ActivatorModel(Document):
""" ActivatorModel
An abstract base class model that provides activate and deactivate fields.
"""
STATUS_CHOICES = (
(0, _('Inactive')),
(1, _('Active')),
)
status = IntField(_('status'), choices=STATUS_CHOICES, default=1)
activate_date = DateTimeField(blank=True, null=True, help_text=_('keep empty for an immediate activation'))
deactivate_date = DateTimeField(blank=True, null=True, help_text=_('keep empty for indefinite activation'))
objects = ActivatorModelManager()
class Meta:
abstract = True
def save(self, *args, **kwargs):
if not self.activate_date:
self.activate_date = datetime.datetime.now()
super(ActivatorModel, self).save(*args, **kwargs)
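# Illustrative usage sketch (the document name is hypothetical): combining
# the abstract base documents above yields self-managed timestamps, a slug
# populated from the title, and an activation status:
#
#   class Article(TimeStampedModel, TitleSlugDescriptionModel, ActivatorModel):
#       pass
#
#   Article.objects.active()    # documents with status == 1
#   Article.objects.inactive()  # documents with status == 0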
| mit |
aflaxman/scikit-learn | sklearn/metrics/regression.py | 47 | 19967 | """Metrics to assess performance on regression task
Functions named as ``*_score`` return a scalar value to maximize: the higher
the better
Function named as ``*_error`` or ``*_loss`` return a scalar value to minimize:
the lower the better
"""
# Authors: Alexandre Gramfort <[email protected]>
# Mathieu Blondel <[email protected]>
# Olivier Grisel <[email protected]>
# Arnaud Joly <[email protected]>
# Jochen Wersdorfer <[email protected]>
# Lars Buitinck
# Joel Nothman <[email protected]>
# Karan Desai <[email protected]>
# Noel Dawe <[email protected]>
# Manoj Kumar <[email protected]>
# Michael Eickenberg <[email protected]>
# Konstantin Shmelkov <[email protected]>
# License: BSD 3 clause
from __future__ import division
import numpy as np
from ..utils.validation import check_array, check_consistent_length
from ..utils.validation import column_or_1d
from ..externals.six import string_types
__ALL__ = [
"mean_absolute_error",
"mean_squared_error",
"mean_squared_log_error",
"median_absolute_error",
"r2_score",
"explained_variance_score"
]
def _check_reg_targets(y_true, y_pred, multioutput):
"""Check that y_true and y_pred belong to the same regression task
Parameters
----------
y_true : array-like,
y_pred : array-like,
multioutput : array-like or string in ['raw_values', uniform_average',
'variance_weighted'] or None
None is accepted due to backward compatibility of r2_score().
Returns
-------
type_true : one of {'continuous', continuous-multioutput'}
The type of the true target data, as output by
'utils.multiclass.type_of_target'
y_true : array-like of shape = (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples, n_outputs)
Estimated target values.
multioutput : array-like of shape = (n_outputs) or string in ['raw_values',
uniform_average', 'variance_weighted'] or None
Custom output weights if ``multioutput`` is array-like or
just the corresponding argument if ``multioutput`` is a
correct keyword.
"""
check_consistent_length(y_true, y_pred)
y_true = check_array(y_true, ensure_2d=False)
y_pred = check_array(y_pred, ensure_2d=False)
if y_true.ndim == 1:
y_true = y_true.reshape((-1, 1))
if y_pred.ndim == 1:
y_pred = y_pred.reshape((-1, 1))
if y_true.shape[1] != y_pred.shape[1]:
raise ValueError("y_true and y_pred have different number of output "
"({0}!={1})".format(y_true.shape[1], y_pred.shape[1]))
n_outputs = y_true.shape[1]
allowed_multioutput_str = ('raw_values', 'uniform_average',
'variance_weighted')
if isinstance(multioutput, string_types):
if multioutput not in allowed_multioutput_str:
raise ValueError("Allowed 'multioutput' string values are {}. "
"You provided multioutput={!r}".format(
allowed_multioutput_str,
multioutput))
elif multioutput is not None:
multioutput = check_array(multioutput, ensure_2d=False)
if n_outputs == 1:
raise ValueError("Custom weights are useful only in "
"multi-output cases.")
elif n_outputs != len(multioutput):
raise ValueError(("There must be equally many custom weights "
"(%d) as outputs (%d).") %
(len(multioutput), n_outputs))
y_type = 'continuous' if n_outputs == 1 else 'continuous-multioutput'
return y_type, y_true, y_pred, multioutput
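# Illustrative example (not a doctest from this module): 1-D inputs are
# reshaped into single-output column vectors, so
#
#   _check_reg_targets([3, -0.5], [2.5, 0.0], 'uniform_average')
#
# returns ('continuous', array([[ 3. ], [-0.5]]), array([[ 2.5], [ 0. ]]),
# 'uniform_average').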
def mean_absolute_error(y_true, y_pred,
sample_weight=None,
multioutput='uniform_average'):
"""Mean absolute error regression loss
Read more in the :ref:`User Guide <mean_absolute_error>`.
Parameters
----------
y_true : array-like of shape = (n_samples) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape = (n_samples), optional
Sample weights.
multioutput : string in ['raw_values', 'uniform_average']
or array-like of shape (n_outputs)
Defines aggregating of multiple output values.
Array-like value defines weights used to average errors.
'raw_values' :
Returns a full set of errors in case of multioutput input.
'uniform_average' :
Errors of all outputs are averaged with uniform weight.
Returns
-------
loss : float or ndarray of floats
If multioutput is 'raw_values', then mean absolute error is returned
for each output separately.
If multioutput is 'uniform_average' or an ndarray of weights, then the
weighted average of all output errors is returned.
MAE output is non-negative floating point. The best value is 0.0.
Examples
--------
>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_error(y_true, y_pred)
0.5
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_error(y_true, y_pred)
0.75
>>> mean_absolute_error(y_true, y_pred, multioutput='raw_values')
array([ 0.5, 1. ])
>>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])
... # doctest: +ELLIPSIS
0.849...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
output_errors = np.average(np.abs(y_pred - y_true),
weights=sample_weight, axis=0)
if isinstance(multioutput, string_types):
if multioutput == 'raw_values':
return output_errors
elif multioutput == 'uniform_average':
# pass None as weights to np.average: uniform mean
multioutput = None
return np.average(output_errors, weights=multioutput)
def mean_squared_error(y_true, y_pred,
sample_weight=None,
multioutput='uniform_average'):
"""Mean squared error regression loss
Read more in the :ref:`User Guide <mean_squared_error>`.
Parameters
----------
y_true : array-like of shape = (n_samples) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape = (n_samples), optional
Sample weights.
multioutput : string in ['raw_values', 'uniform_average']
or array-like of shape (n_outputs)
Defines aggregating of multiple output values.
Array-like value defines weights used to average errors.
'raw_values' :
Returns a full set of errors in case of multioutput input.
'uniform_average' :
Errors of all outputs are averaged with uniform weight.
Returns
-------
loss : float or ndarray of floats
A non-negative floating point value (the best value is 0.0), or an
array of floating point values, one for each individual target.
Examples
--------
>>> from sklearn.metrics import mean_squared_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_squared_error(y_true, y_pred)
0.375
>>> y_true = [[0.5, 1],[-1, 1],[7, -6]]
>>> y_pred = [[0, 2],[-1, 2],[8, -5]]
>>> mean_squared_error(y_true, y_pred) # doctest: +ELLIPSIS
0.708...
>>> mean_squared_error(y_true, y_pred, multioutput='raw_values')
... # doctest: +ELLIPSIS
array([ 0.416..., 1. ])
>>> mean_squared_error(y_true, y_pred, multioutput=[0.3, 0.7])
... # doctest: +ELLIPSIS
0.824...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
output_errors = np.average((y_true - y_pred) ** 2, axis=0,
weights=sample_weight)
if isinstance(multioutput, string_types):
if multioutput == 'raw_values':
return output_errors
elif multioutput == 'uniform_average':
# pass None as weights to np.average: uniform mean
multioutput = None
return np.average(output_errors, weights=multioutput)
def mean_squared_log_error(y_true, y_pred,
sample_weight=None,
multioutput='uniform_average'):
"""Mean squared logarithmic error regression loss
Read more in the :ref:`User Guide <mean_squared_log_error>`.
Parameters
----------
y_true : array-like of shape = (n_samples) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape = (n_samples), optional
Sample weights.
multioutput : string in ['raw_values', 'uniform_average'] \
or array-like of shape = (n_outputs)
Defines aggregating of multiple output values.
Array-like value defines weights used to average errors.
'raw_values' :
Returns a full set of errors when the input is of multioutput
format.
'uniform_average' :
Errors of all outputs are averaged with uniform weight.
Returns
-------
loss : float or ndarray of floats
A non-negative floating point value (the best value is 0.0), or an
array of floating point values, one for each individual target.
Examples
--------
>>> from sklearn.metrics import mean_squared_log_error
>>> y_true = [3, 5, 2.5, 7]
>>> y_pred = [2.5, 5, 4, 8]
>>> mean_squared_log_error(y_true, y_pred) # doctest: +ELLIPSIS
0.039...
>>> y_true = [[0.5, 1], [1, 2], [7, 6]]
>>> y_pred = [[0.5, 2], [1, 2.5], [8, 8]]
>>> mean_squared_log_error(y_true, y_pred) # doctest: +ELLIPSIS
0.044...
>>> mean_squared_log_error(y_true, y_pred, multioutput='raw_values')
... # doctest: +ELLIPSIS
array([ 0.004..., 0.083...])
>>> mean_squared_log_error(y_true, y_pred, multioutput=[0.3, 0.7])
... # doctest: +ELLIPSIS
0.060...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
    # Negative values in either array make the log transform invalid, so
    # raise if y_true or y_pred contains any.
    if not (y_true >= 0).all() or not (y_pred >= 0).all():
        raise ValueError("Mean Squared Logarithmic Error cannot be used when "
                         "targets contain negative values.")
return mean_squared_error(np.log(y_true + 1), np.log(y_pred + 1),
sample_weight, multioutput)
def median_absolute_error(y_true, y_pred):
"""Median absolute error regression loss
Read more in the :ref:`User Guide <median_absolute_error>`.
Parameters
----------
y_true : array-like of shape = (n_samples)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples)
Estimated target values.
Returns
-------
loss : float
A positive floating point value (the best value is 0.0).
Examples
--------
>>> from sklearn.metrics import median_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> median_absolute_error(y_true, y_pred)
0.5
"""
y_type, y_true, y_pred, _ = _check_reg_targets(y_true, y_pred,
'uniform_average')
if y_type == 'continuous-multioutput':
raise ValueError("Multioutput not supported in median_absolute_error")
return np.median(np.abs(y_pred - y_true))
def explained_variance_score(y_true, y_pred,
sample_weight=None,
multioutput='uniform_average'):
"""Explained variance regression score function
Best possible score is 1.0, lower values are worse.
Read more in the :ref:`User Guide <explained_variance_score>`.
Parameters
----------
y_true : array-like of shape = (n_samples) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape = (n_samples), optional
Sample weights.
multioutput : string in ['raw_values', 'uniform_average', \
'variance_weighted'] or array-like of shape (n_outputs)
Defines aggregating of multiple output scores.
Array-like value defines weights used to average scores.
'raw_values' :
Returns a full set of scores in case of multioutput input.
'uniform_average' :
Scores of all outputs are averaged with uniform weight.
'variance_weighted' :
Scores of all outputs are averaged, weighted by the variances
of each individual output.
Returns
-------
score : float or ndarray of floats
The explained variance or ndarray if 'multioutput' is 'raw_values'.
Notes
-----
This is not a symmetric function.
Examples
--------
>>> from sklearn.metrics import explained_variance_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> explained_variance_score(y_true, y_pred) # doctest: +ELLIPSIS
0.957...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> explained_variance_score(y_true, y_pred, multioutput='uniform_average')
... # doctest: +ELLIPSIS
0.983...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
y_diff_avg = np.average(y_true - y_pred, weights=sample_weight, axis=0)
numerator = np.average((y_true - y_pred - y_diff_avg) ** 2,
weights=sample_weight, axis=0)
y_true_avg = np.average(y_true, weights=sample_weight, axis=0)
denominator = np.average((y_true - y_true_avg) ** 2,
weights=sample_weight, axis=0)
nonzero_numerator = numerator != 0
nonzero_denominator = denominator != 0
valid_score = nonzero_numerator & nonzero_denominator
output_scores = np.ones(y_true.shape[1])
output_scores[valid_score] = 1 - (numerator[valid_score] /
denominator[valid_score])
output_scores[nonzero_numerator & ~nonzero_denominator] = 0.
if isinstance(multioutput, string_types):
if multioutput == 'raw_values':
# return scores individually
return output_scores
elif multioutput == 'uniform_average':
            # passing None as weights to np.average() results in a uniform mean
avg_weights = None
elif multioutput == 'variance_weighted':
avg_weights = denominator
else:
avg_weights = multioutput
return np.average(output_scores, weights=avg_weights)
def r2_score(y_true, y_pred, sample_weight=None,
multioutput="uniform_average"):
"""R^2 (coefficient of determination) regression score function.
Best possible score is 1.0 and it can be negative (because the
model can be arbitrarily worse). A constant model that always
predicts the expected value of y, disregarding the input features,
would get a R^2 score of 0.0.
Read more in the :ref:`User Guide <r2_score>`.
Parameters
----------
y_true : array-like of shape = (n_samples) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape = (n_samples), optional
Sample weights.
multioutput : string in ['raw_values', 'uniform_average', \
'variance_weighted'] or None or array-like of shape (n_outputs)
Defines aggregating of multiple output scores.
Array-like value defines weights used to average scores.
Default is "uniform_average".
'raw_values' :
Returns a full set of scores in case of multioutput input.
'uniform_average' :
Scores of all outputs are averaged with uniform weight.
'variance_weighted' :
Scores of all outputs are averaged, weighted by the variances
of each individual output.
.. versionchanged:: 0.19
Default value of multioutput is 'uniform_average'.
Returns
-------
z : float or ndarray of floats
The R^2 score or ndarray of scores if 'multioutput' is
'raw_values'.
Notes
-----
This is not a symmetric function.
Unlike most other scores, R^2 score may be negative (it need not actually
be the square of a quantity R).
References
----------
.. [1] `Wikipedia entry on the Coefficient of determination
<https://en.wikipedia.org/wiki/Coefficient_of_determination>`_
Examples
--------
>>> from sklearn.metrics import r2_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> r2_score(y_true, y_pred) # doctest: +ELLIPSIS
0.948...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred, multioutput='variance_weighted')
... # doctest: +ELLIPSIS
0.938...
>>> y_true = [1,2,3]
>>> y_pred = [1,2,3]
>>> r2_score(y_true, y_pred)
1.0
>>> y_true = [1,2,3]
>>> y_pred = [2,2,2]
>>> r2_score(y_true, y_pred)
0.0
>>> y_true = [1,2,3]
>>> y_pred = [3,2,1]
>>> r2_score(y_true, y_pred)
-3.0
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
if sample_weight is not None:
sample_weight = column_or_1d(sample_weight)
weight = sample_weight[:, np.newaxis]
else:
weight = 1.
numerator = (weight * (y_true - y_pred) ** 2).sum(axis=0,
dtype=np.float64)
denominator = (weight * (y_true - np.average(
y_true, axis=0, weights=sample_weight)) ** 2).sum(axis=0,
dtype=np.float64)
nonzero_denominator = denominator != 0
nonzero_numerator = numerator != 0
valid_score = nonzero_denominator & nonzero_numerator
output_scores = np.ones([y_true.shape[1]])
output_scores[valid_score] = 1 - (numerator[valid_score] /
denominator[valid_score])
    # arbitrarily set to zero to avoid -inf scores; having a constant
    # y_true is not interesting for scoring a regression anyway
output_scores[nonzero_numerator & ~nonzero_denominator] = 0.
if isinstance(multioutput, string_types):
if multioutput == 'raw_values':
# return scores individually
return output_scores
elif multioutput == 'uniform_average':
            # passing None as weights results in a uniform mean
avg_weights = None
elif multioutput == 'variance_weighted':
avg_weights = denominator
            # avoid failing on constant y or one-element arrays
if not np.any(nonzero_denominator):
if not np.any(nonzero_numerator):
return 1.0
else:
return 0.0
else:
avg_weights = multioutput
return np.average(output_scores, weights=avg_weights)
| bsd-3-clause |
tima/ansible | test/units/module_utils/facts/test_ansible_collector.py | 25 | 12747 | # -*- coding: utf-8 -*-
#
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# Make coding more python3-ish
from __future__ import (absolute_import, division)
__metaclass__ = type
# for testing
from ansible.compat.tests import unittest
from ansible.compat.tests.mock import Mock, patch
from ansible.module_utils.facts import collector
from ansible.module_utils.facts import ansible_collector
from ansible.module_utils.facts import namespace
from ansible.module_utils.facts.other.facter import FacterFactCollector
from ansible.module_utils.facts.other.ohai import OhaiFactCollector
from ansible.module_utils.facts.system.apparmor import ApparmorFactCollector
from ansible.module_utils.facts.system.caps import SystemCapabilitiesFactCollector
from ansible.module_utils.facts.system.date_time import DateTimeFactCollector
from ansible.module_utils.facts.system.env import EnvFactCollector
from ansible.module_utils.facts.system.distribution import DistributionFactCollector
from ansible.module_utils.facts.system.dns import DnsFactCollector
from ansible.module_utils.facts.system.fips import FipsFactCollector
from ansible.module_utils.facts.system.local import LocalFactCollector
from ansible.module_utils.facts.system.lsb import LSBFactCollector
from ansible.module_utils.facts.system.pkg_mgr import PkgMgrFactCollector, OpenBSDPkgMgrFactCollector
from ansible.module_utils.facts.system.platform import PlatformFactCollector
from ansible.module_utils.facts.system.python import PythonFactCollector
from ansible.module_utils.facts.system.selinux import SelinuxFactCollector
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils.facts.system.user import UserFactCollector
# from ansible.module_utils.facts.hardware.base import HardwareCollector
from ansible.module_utils.facts.network.base import NetworkCollector
from ansible.module_utils.facts.virtual.base import VirtualCollector
ALL_COLLECTOR_CLASSES = \
[PlatformFactCollector,
DistributionFactCollector,
SelinuxFactCollector,
ApparmorFactCollector,
SystemCapabilitiesFactCollector,
FipsFactCollector,
PkgMgrFactCollector,
OpenBSDPkgMgrFactCollector,
ServiceMgrFactCollector,
LSBFactCollector,
DateTimeFactCollector,
UserFactCollector,
LocalFactCollector,
EnvFactCollector,
DnsFactCollector,
PythonFactCollector,
# FIXME: re-enable when Hardware() doesn't munge self.facts
# HardwareCollector
NetworkCollector,
VirtualCollector,
OhaiFactCollector,
FacterFactCollector]
def mock_module(gather_subset=None):
if gather_subset is None:
gather_subset = ['all', '!facter', '!ohai']
mock_module = Mock()
mock_module.params = {'gather_subset': gather_subset,
'gather_timeout': 5,
'filter': '*'}
mock_module.get_bin_path = Mock(return_value=None)
return mock_module
def _collectors(module,
all_collector_classes=None,
minimal_gather_subset=None):
gather_subset = module.params.get('gather_subset')
if all_collector_classes is None:
all_collector_classes = ALL_COLLECTOR_CLASSES
if minimal_gather_subset is None:
minimal_gather_subset = frozenset([])
collector_classes = \
collector.collector_classes_from_gather_subset(all_collector_classes=all_collector_classes,
minimal_gather_subset=minimal_gather_subset,
gather_subset=gather_subset)
collectors = []
for collector_class in collector_classes:
collector_obj = collector_class()
collectors.append(collector_obj)
# Add a collector that knows what gather_subset we used so it can provide a fact
collector_meta_data_collector = \
ansible_collector.CollectorMetaDataCollector(gather_subset=gather_subset,
module_setup=True)
collectors.append(collector_meta_data_collector)
return collectors
ns = namespace.PrefixFactNamespace('ansible_facts', 'ansible_')
# FIXME: this is brute force, but hopefully enough to get some refactoring to make facts testable
class TestInPlace(unittest.TestCase):
def _mock_module(self, gather_subset=None):
return mock_module(gather_subset=gather_subset)
def _collectors(self, module,
all_collector_classes=None,
minimal_gather_subset=None):
return _collectors(module=module,
all_collector_classes=all_collector_classes,
minimal_gather_subset=minimal_gather_subset)
def test(self):
gather_subset = ['all']
mock_module = self._mock_module(gather_subset=gather_subset)
all_collector_classes = [EnvFactCollector]
collectors = self._collectors(mock_module,
all_collector_classes=all_collector_classes)
fact_collector = \
ansible_collector.AnsibleFactCollector(collectors=collectors,
namespace=ns)
res = fact_collector.collect(module=mock_module)
self.assertIsInstance(res, dict)
self.assertIn('env', res)
self.assertIn('gather_subset', res)
self.assertEqual(res['gather_subset'], ['all'])
def test1(self):
gather_subset = ['all']
mock_module = self._mock_module(gather_subset=gather_subset)
collectors = self._collectors(mock_module)
fact_collector = \
ansible_collector.AnsibleFactCollector(collectors=collectors,
namespace=ns)
res = fact_collector.collect(module=mock_module)
self.assertIsInstance(res, dict)
# just assert it's not almost empty
# with run_command and get_file_content mock, many facts are empty, like network
self.assertGreater(len(res), 20)
def test_empty_all_collector_classes(self):
mock_module = self._mock_module()
all_collector_classes = []
collectors = self._collectors(mock_module,
all_collector_classes=all_collector_classes)
fact_collector = \
ansible_collector.AnsibleFactCollector(collectors=collectors,
namespace=ns)
res = fact_collector.collect()
self.assertIsInstance(res, dict)
# just assert it's almost empty (only the meta facts remain)
self.assertLess(len(res), 3)
# def test_facts_class(self):
# mock_module = self._mock_module()
# Facts(mock_module)
# def test_facts_class_load_on_init_false(self):
# mock_module = self._mock_module()
# Facts(mock_module, load_on_init=False)
# # FIXME: assert something
class TestCollectedFacts(unittest.TestCase):
gather_subset = ['all', '!facter', '!ohai']
min_fact_count = 30
max_fact_count = 1000
# TODO: add ansible_cmdline, ansible_*_pubkey* back when TempFactCollector goes away
expected_facts = ['date_time',
'user_id', 'distribution',
'gather_subset', 'module_setup',
'env']
not_expected_facts = ['facter', 'ohai']
def _mock_module(self, gather_subset=None):
return mock_module(gather_subset=self.gather_subset)
def setUp(self):
mock_module = self._mock_module()
collectors = self._collectors(mock_module)
fact_collector = \
ansible_collector.AnsibleFactCollector(collectors=collectors,
namespace=ns)
self.facts = fact_collector.collect(module=mock_module)
def _collectors(self, module,
all_collector_classes=None,
minimal_gather_subset=None):
return _collectors(module=module,
all_collector_classes=all_collector_classes,
minimal_gather_subset=minimal_gather_subset)
def test_basics(self):
self._assert_basics(self.facts)
def test_expected_facts(self):
self._assert_expected_facts(self.facts)
def test_not_expected_facts(self):
self._assert_not_expected_facts(self.facts)
def _assert_basics(self, facts):
self.assertIsInstance(facts, dict)
# just assert it's not almost empty
self.assertGreaterEqual(len(facts), self.min_fact_count)
# and that it's not a huge number of keys
self.assertLess(len(facts), self.max_fact_count)
# everything starts with ansible_ namespace
def _assert_ansible_namespace(self, facts):
# FIXME: kluge for non-namespace fact
facts.pop('module_setup', None)
facts.pop('gather_subset', None)
for fact_key in facts:
self.assertTrue(fact_key.startswith('ansible_'),
'The fact name "%s" does not start with "ansible_"' % fact_key)
def _assert_expected_facts(self, facts):
facts_keys = sorted(facts.keys())
for expected_fact in self.expected_facts:
self.assertIn(expected_fact, facts_keys)
def _assert_not_expected_facts(self, facts):
facts_keys = sorted(facts.keys())
for not_expected_fact in self.not_expected_facts:
self.assertNotIn(not_expected_fact, facts_keys)
class ExceptionThrowingCollector(collector.BaseFactCollector):
def collect(self, module=None, collected_facts=None):
raise Exception('A collector failed')
class TestExceptionCollectedFacts(TestCollectedFacts):
def _collectors(self, module,
all_collector_classes=None,
minimal_gather_subset=None):
collectors = _collectors(module=module,
all_collector_classes=all_collector_classes,
minimal_gather_subset=minimal_gather_subset)
c = [ExceptionThrowingCollector()] + collectors
return c
class TestOnlyExceptionCollector(TestCollectedFacts):
expected_facts = []
min_fact_count = 0
def _collectors(self, module,
all_collector_classes=None,
minimal_gather_subset=None):
return [ExceptionThrowingCollector()]
class TestMinimalCollectedFacts(TestCollectedFacts):
gather_subset = ['!all']
min_fact_count = 1
max_fact_count = 10
expected_facts = ['gather_subset',
'module_setup']
not_expected_facts = ['lsb']
class TestFacterCollectedFacts(TestCollectedFacts):
gather_subset = ['!all', 'facter']
min_fact_count = 1
max_fact_count = 10
expected_facts = ['gather_subset',
'module_setup']
not_expected_facts = ['lsb']
class TestOhaiCollectedFacts(TestCollectedFacts):
gather_subset = ['!all', 'ohai']
min_fact_count = 1
max_fact_count = 10
expected_facts = ['gather_subset',
'module_setup']
not_expected_facts = ['lsb']
class TestPkgMgrFacts(TestCollectedFacts):
gather_subset = ['pkg_mgr']
min_fact_count = 1
max_fact_count = 10
expected_facts = ['gather_subset',
'module_setup',
'pkg_mgr']
class TestOpenBSDPkgMgrFacts(TestPkgMgrFacts):
def test_is_openbsd_pkg(self):
self.assertIn('pkg_mgr', self.facts)
self.assertEqual(self.facts['pkg_mgr'], 'openbsd_pkg')
def setUp(self):
self.patcher = patch('platform.system')
mock_platform = self.patcher.start()
mock_platform.return_value = 'OpenBSD'
mock_module = self._mock_module()
collectors = self._collectors(mock_module)
fact_collector = \
ansible_collector.AnsibleFactCollector(collectors=collectors,
namespace=ns)
self.facts = fact_collector.collect(module=mock_module)
def tearDown(self):
self.patcher.stop()
| gpl-3.0 |
fujicoin/electrum-fjc | electrum/x509.py | 3 | 11467 | #!/usr/bin/env python
#
# Electrum - lightweight Bitcoin client
# Copyright (C) 2014 Thomas Voegtlin
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import hashlib
import time
from datetime import datetime
import ecdsa
from . import util
from .util import profiler, bh2u
from .logging import get_logger
_logger = get_logger(__name__)
# algo OIDs
ALGO_RSA_SHA1 = '1.2.840.113549.1.1.5'
ALGO_RSA_SHA256 = '1.2.840.113549.1.1.11'
ALGO_RSA_SHA384 = '1.2.840.113549.1.1.12'
ALGO_RSA_SHA512 = '1.2.840.113549.1.1.13'
ALGO_ECDSA_SHA256 = '1.2.840.10045.4.3.2'
# prefixes, see http://stackoverflow.com/questions/3713774/c-sharp-how-to-calculate-asn-1-der-encoding-of-a-particular-hash-algorithm
PREFIX_RSA_SHA256 = bytearray(
[0x30, 0x31, 0x30, 0x0d, 0x06, 0x09, 0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x01, 0x05, 0x00, 0x04, 0x20])
PREFIX_RSA_SHA384 = bytearray(
[0x30, 0x41, 0x30, 0x0d, 0x06, 0x09, 0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x02, 0x05, 0x00, 0x04, 0x30])
PREFIX_RSA_SHA512 = bytearray(
[0x30, 0x51, 0x30, 0x0d, 0x06, 0x09, 0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x03, 0x05, 0x00, 0x04, 0x40])
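# Each prefix above is the DER-encoded DigestInfo header that precedes the raw
# digest in a PKCS#1 v1.5 signature: SEQUENCE { SEQUENCE { hash OID, NULL },
# OCTET STRING of digest length } -- e.g. the trailing 0x04 0x20 in the
# SHA-256 prefix announces a 32-byte octet string.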
# types used in ASN1 structured data
ASN1_TYPES = {
'BOOLEAN' : 0x01,
'INTEGER' : 0x02,
'BIT STRING' : 0x03,
'OCTET STRING' : 0x04,
'NULL' : 0x05,
'OBJECT IDENTIFIER': 0x06,
'SEQUENCE' : 0x30, # constructed-type tags (0x20 bit set)
'SET' : 0x31,
'PrintableString' : 0x13,
'IA5String' : 0x16,
'UTCTime' : 0x17,
'GeneralizedTime' : 0x18,
'ENUMERATED' : 0x0A,
'UTF8String' : 0x0C,
}
class CertificateError(Exception):
pass
# helper functions
def bitstr_to_bytestr(s):
if s[0] != 0x00:
raise TypeError('no padding')
return s[1:]
def bytestr_to_int(s):
i = 0
for char in s:
i <<= 8
i |= char
return i
def decode_OID(s):
r = []
r.append(s[0] // 40)
r.append(s[0] % 40)
k = 0
for i in s[1:]:
if i < 128:
r.append(i + 128 * k)
k = 0
else:
k = (i - 128) + 128 * k
return '.'.join(map(str, r))
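# Worked example (editor's addition, not in the upstream file):
#   decode_OID(bytes([0x2a, 0x86, 0x48, 0x86, 0xf7, 0x0d]))  # -> '1.2.840.113549'
# The first byte packs the first two arcs (0x2a == 1*40 + 2); the remaining
# arcs are base-128 encoded, with the high bit set on every byte but the last.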
def encode_OID(oid):
x = [int(i) for i in oid.split('.')]
s = chr(x[0] * 40 + x[1])
for i in x[2:]:
ss = chr(i % 128)
while i >= 128:  # '>=' so an arc of exactly 128 still gets its continuation byte
i //= 128
ss = chr(128 + i % 128) + ss
s += ss
return s
class ASN1_Node(bytes):
def get_node(self, ix):
# return index of first byte, first content byte and last byte.
first = self[ix + 1]
if (first & 0x80) == 0:
length = first
ixf = ix + 2
ixl = ixf + length - 1
else:
lengthbytes = first & 0x7F
length = bytestr_to_int(self[ix + 2:ix + 2 + lengthbytes])
ixf = ix + 2 + lengthbytes
ixl = ixf + length - 1
return ix, ixf, ixl
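# Worked example (editor's addition): for DER bytes 30 05 ... at ix=0 the
# length byte 0x05 has the high bit clear (short form), so ixf=2 and ixl=6;
# for 30 82 01 0f ..., 0x82 says two length bytes follow, giving
# length 0x010f == 271 and ixf=4.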
def root(self):
return self.get_node(0)
def next_node(self, node):
ixs, ixf, ixl = node
return self.get_node(ixl + 1)
def first_child(self, node):
ixs, ixf, ixl = node
if self[ixs] & 0x20 != 0x20:
raise TypeError('Can only open constructed types.', hex(self[ixs]))
return self.get_node(ixf)
def is_child_of(node1, node2):
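# NOTE: takes two node tuples directly; 'node1' stands in for 'self' here.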
ixs, ixf, ixl = node1
jxs, jxf, jxl = node2
return ((ixf <= jxs) and (jxl <= ixl)) or ((jxf <= ixs) and (ixl <= jxl))
def get_all(self, node):
# return type + length + value
ixs, ixf, ixl = node
return self[ixs:ixl + 1]
def get_value_of_type(self, node, asn1_type):
# verify type byte and return content
ixs, ixf, ixl = node
if ASN1_TYPES[asn1_type] != self[ixs]:
raise TypeError('Wrong type:', hex(self[ixs]), hex(ASN1_TYPES[asn1_type]))
return self[ixf:ixl + 1]
def get_value(self, node):
ixs, ixf, ixl = node
return self[ixf:ixl + 1]
def get_children(self, node):
nodes = []
ii = self.first_child(node)
nodes.append(ii)
while ii[2] < node[2]:
ii = self.next_node(ii)
nodes.append(ii)
return nodes
def get_sequence(self):
return list(map(lambda j: self.get_value(j), self.get_children(self.root())))
def get_dict(self, node):
p = {}
for ii in self.get_children(node):
for iii in self.get_children(ii):
iiii = self.first_child(iii)
oid = decode_OID(self.get_value_of_type(iiii, 'OBJECT IDENTIFIER'))
iiii = self.next_node(iiii)
value = self.get_value(iiii)
p[oid] = value
return p
def decode_time(self, ii):
GENERALIZED_TIMESTAMP_FMT = '%Y%m%d%H%M%SZ'
UTCTIME_TIMESTAMP_FMT = '%y%m%d%H%M%SZ'
try:
return time.strptime(self.get_value_of_type(ii, 'UTCTime').decode('ascii'), UTCTIME_TIMESTAMP_FMT)
except TypeError:
return time.strptime(self.get_value_of_type(ii, 'GeneralizedTime').decode('ascii'), GENERALIZED_TIMESTAMP_FMT)
class X509(object):
def __init__(self, b):
self.bytes = bytearray(b)
der = ASN1_Node(b)
root = der.root()
cert = der.first_child(root)
# data for signature
self.data = der.get_all(cert)
# optional version field
if der.get_value(cert)[0] == 0xa0:
version = der.first_child(cert)
serial_number = der.next_node(version)
else:
serial_number = der.first_child(cert)
self.serial_number = bytestr_to_int(der.get_value_of_type(serial_number, 'INTEGER'))
# signature algorithm
sig_algo = der.next_node(serial_number)
ii = der.first_child(sig_algo)
self.sig_algo = decode_OID(der.get_value_of_type(ii, 'OBJECT IDENTIFIER'))
# issuer
issuer = der.next_node(sig_algo)
self.issuer = der.get_dict(issuer)
# validity
validity = der.next_node(issuer)
ii = der.first_child(validity)
self.notBefore = der.decode_time(ii)
ii = der.next_node(ii)
self.notAfter = der.decode_time(ii)
# subject
subject = der.next_node(validity)
self.subject = der.get_dict(subject)
subject_pki = der.next_node(subject)
public_key_algo = der.first_child(subject_pki)
ii = der.first_child(public_key_algo)
self.public_key_algo = decode_OID(der.get_value_of_type(ii, 'OBJECT IDENTIFIER'))
if self.public_key_algo != '1.2.840.10045.2.1': # for non-EC public keys
# pubkey modulus and exponent
subject_public_key = der.next_node(public_key_algo)
spk = der.get_value_of_type(subject_public_key, 'BIT STRING')
spk = ASN1_Node(bitstr_to_bytestr(spk))
r = spk.root()
modulus = spk.first_child(r)
exponent = spk.next_node(modulus)
rsa_n = spk.get_value_of_type(modulus, 'INTEGER')
rsa_e = spk.get_value_of_type(exponent, 'INTEGER')
self.modulus = ecdsa.util.string_to_number(rsa_n)
self.exponent = ecdsa.util.string_to_number(rsa_e)
else:
subject_public_key = der.next_node(public_key_algo)
spk = der.get_value_of_type(subject_public_key, 'BIT STRING')
self.ec_public_key = spk
# extensions
self.CA = False
self.AKI = None
self.SKI = None
i = subject_pki
while i[2] < cert[2]:
i = der.next_node(i)
d = der.get_dict(i)
for oid, value in d.items():
value = ASN1_Node(value)
if oid == '2.5.29.19':
# Basic Constraints
self.CA = bool(value)
elif oid == '2.5.29.14':
# Subject Key Identifier
r = value.root()
value = value.get_value_of_type(r, 'OCTET STRING')
self.SKI = bh2u(value)
elif oid == '2.5.29.35':
# Authority Key Identifier
self.AKI = bh2u(value.get_sequence()[0])
else:
pass
# cert signature
cert_sig_algo = der.next_node(cert)
ii = der.first_child(cert_sig_algo)
self.cert_sig_algo = decode_OID(der.get_value_of_type(ii, 'OBJECT IDENTIFIER'))
cert_sig = der.next_node(cert_sig_algo)
self.signature = der.get_value(cert_sig)[1:]
def get_keyID(self):
# http://security.stackexchange.com/questions/72077/validating-an-ssl-certificate-chain-according-to-rfc-5280-am-i-understanding-th
return self.SKI if self.SKI else repr(self.subject)
def get_issuer_keyID(self):
return self.AKI if self.AKI else repr(self.issuer)
def get_common_name(self):
return self.subject.get('2.5.4.3', b'unknown').decode()
def get_signature(self):
return self.cert_sig_algo, self.signature, self.data
def check_ca(self):
return self.CA
def check_date(self):
now = time.gmtime()
if self.notBefore > now:
raise CertificateError('Certificate has not entered its valid date range. (%s)' % self.get_common_name())
if self.notAfter <= now:
dt = datetime.utcfromtimestamp(time.mktime(self.notAfter))
raise CertificateError(f'Certificate ({self.get_common_name()}) has expired (at {dt} UTC).')
def getFingerprint(self):
return hashlib.sha1(self.bytes).digest()
@profiler
def load_certificates(ca_path):
from . import pem
ca_list = {}
ca_keyID = {}
# ca_path = '/tmp/tmp.txt'
with open(ca_path, 'r', encoding='utf-8') as f:
s = f.read()
bList = pem.dePemList(s, "CERTIFICATE")
for b in bList:
try:
x = X509(b)
x.check_date()
except BaseException as e:
# with open('/tmp/tmp.txt', 'w') as f:
# f.write(pem.pem(b, 'CERTIFICATE').decode('ascii'))
_logger.info(f"cert error: {e}")
continue
fp = x.getFingerprint()
ca_list[fp] = x
ca_keyID[x.get_keyID()] = fp
return ca_list, ca_keyID
if __name__ == "__main__":
import certifi
ca_path = certifi.where()
ca_list, ca_keyID = load_certificates(ca_path)
| mit |
slyphon/pants | src/python/pants/goal/run_tracker.py | 4 | 13092 | # coding=utf-8
# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import (absolute_import, division, generators, nested_scopes, print_function,
unicode_literals, with_statement)
import json
import os
import sys
import threading
import time
import uuid
from contextlib import contextmanager
import requests
from pants.base.build_environment import get_pants_cachedir
from pants.base.run_info import RunInfo
from pants.base.worker_pool import SubprocPool, WorkerPool
from pants.base.workunit import WorkUnit
from pants.goal.aggregated_timings import AggregatedTimings
from pants.goal.artifact_cache_stats import ArtifactCacheStats
from pants.reporting.report import Report
from pants.stats.statsdb import StatsDBFactory
from pants.subsystem.subsystem import Subsystem
from pants.util.dirutil import relative_symlink, safe_file_dump
class RunTracker(Subsystem):
"""Tracks and times the execution of a pants run.
Also manages background work.
Use like this:
run_tracker.start()
with run_tracker.new_workunit('compile'):
with run_tracker.new_workunit('java'):
...
with run_tracker.new_workunit('scala'):
...
run_tracker.close()
Can track execution against multiple 'roots', e.g., one for the main thread and another for
background threads.
"""
options_scope = 'run-tracker'
# The name of the tracking root for the main thread (and the foreground worker threads).
DEFAULT_ROOT_NAME = 'main'
# The name of the tracking root for the background worker threads.
BACKGROUND_ROOT_NAME = 'background'
@classmethod
def subsystem_dependencies(cls):
return (StatsDBFactory,)
@classmethod
def register_options(cls, register):
register('--stats-upload-url', advanced=True, default=None,
help='Upload stats to this URL on run completion.')
register('--stats-upload-timeout', advanced=True, type=int, default=2,
help='Wait at most this many seconds for the stats upload to complete.')
register('--num-foreground-workers', advanced=True, type=int, default=8,
help='Number of threads for foreground work.')
register('--num-background-workers', advanced=True, type=int, default=8,
help='Number of threads for background work.')
def __init__(self, *args, **kwargs):
super(RunTracker, self).__init__(*args, **kwargs)
run_timestamp = time.time()
cmd_line = ' '.join(['pants'] + sys.argv[1:])
# run_id is safe for use in paths.
millis = int((run_timestamp * 1000) % 1000)
run_id = 'pants_run_{}_{}_{}'.format(
time.strftime('%Y_%m_%d_%H_%M_%S', time.localtime(run_timestamp)), millis,
uuid.uuid4().hex)
info_dir = os.path.join(self.get_options().pants_workdir, self.options_scope)
self.run_info_dir = os.path.join(info_dir, run_id)
self.run_info = RunInfo(os.path.join(self.run_info_dir, 'info'))
self.run_info.add_basic_info(run_id, run_timestamp)
self.run_info.add_info('cmd_line', cmd_line)
# Create a 'latest' symlink, after we add_infos, so we're guaranteed that the file exists.
link_to_latest = os.path.join(os.path.dirname(self.run_info_dir), 'latest')
relative_symlink(self.run_info_dir, link_to_latest)
# Time spent in a workunit, including its children.
self.cumulative_timings = AggregatedTimings(os.path.join(self.run_info_dir,
'cumulative_timings'))
# Time spent in a workunit, not including its children.
self.self_timings = AggregatedTimings(os.path.join(self.run_info_dir, 'self_timings'))
# Hit/miss stats for the artifact cache.
self.artifact_cache_stats = \
ArtifactCacheStats(os.path.join(self.run_info_dir, 'artifact_cache_stats'))
# Number of threads for foreground work.
self._num_foreground_workers = self.get_options().num_foreground_workers
# Number of threads for background work.
self._num_background_workers = self.get_options().num_background_workers
# We report to this Report.
self.report = None
# self._threadlocal.current_workunit contains the current workunit for the calling thread.
# Note that multiple threads may share a name (e.g., all the threads in a pool).
self._threadlocal = threading.local()
# For main thread work. Created on start().
self._main_root_workunit = None
# For background work. Created lazily if needed.
self._background_worker_pool = None
self._background_root_workunit = None
# Trigger subproc pool init while our memory image is still clean (see SubprocPool docstring)
SubprocPool.foreground()
self._aborted = False
def register_thread(self, parent_workunit):
"""Register the parent workunit for all work in the calling thread.
Multiple threads may have the same parent (e.g., all the threads in a pool).
"""
self._threadlocal.current_workunit = parent_workunit
def is_under_main_root(self, workunit):
"""Is the workunit running under the main thread's root."""
return workunit.root() == self._main_root_workunit
def start(self, report):
"""Start tracking this pants run.
report: an instance of pants.reporting.Report."""
self.report = report
self.report.open()
self._main_root_workunit = WorkUnit(run_info_dir=self.run_info_dir, parent=None,
name=RunTracker.DEFAULT_ROOT_NAME, cmd=None)
self.register_thread(self._main_root_workunit)
self._main_root_workunit.start()
self.report.start_workunit(self._main_root_workunit)
def set_root_outcome(self, outcome):
"""Useful for setup code that doesn't have a reference to a workunit."""
self._main_root_workunit.set_outcome(outcome)
@contextmanager
def new_workunit(self, name, labels=None, cmd='', log_config=None):
"""Creates a (hierarchical) subunit of work for the purpose of timing and reporting.
- name: A short name for this work. E.g., 'resolve', 'compile', 'scala', 'zinc'.
- labels: An optional iterable of labels. The reporters can use this to decide how to
display information about this work.
- cmd: An optional longer string representing this work.
E.g., the cmd line of a compiler invocation.
- log_config: An optional tuple WorkUnit.LogConfig of task-level options affecting reporting.
Use like this:
with run_tracker.new_workunit(name='compile', labels=[WorkUnitLabel.TASK]) as workunit:
<do scoped work here>
<set the outcome on workunit if necessary>
Note that the outcome will automatically be set to failure if an exception is raised
in a workunit, and to success otherwise, so usually you only need to set the
outcome explicitly if you want to set it to warning.
"""
parent = self._threadlocal.current_workunit
with self.new_workunit_under_parent(name, parent=parent, labels=labels, cmd=cmd,
log_config=log_config) as workunit:
self._threadlocal.current_workunit = workunit
try:
yield workunit
finally:
self._threadlocal.current_workunit = parent
@contextmanager
def new_workunit_under_parent(self, name, parent, labels=None, cmd='', log_config=None):
"""Creates a (hierarchical) subunit of work for the purpose of timing and reporting.
- name: A short name for this work. E.g., 'resolve', 'compile', 'scala', 'zinc'.
- parent: The new workunit is created under this parent.
- labels: An optional iterable of labels. The reporters can use this to decide how to
display information about this work.
- cmd: An optional longer string representing this work.
E.g., the cmd line of a compiler invocation.
Task code should not typically call this directly.
"""
workunit = WorkUnit(run_info_dir=self.run_info_dir, parent=parent, name=name, labels=labels,
cmd=cmd, log_config=log_config)
workunit.start()
try:
self.report.start_workunit(workunit)
yield workunit
except KeyboardInterrupt:
workunit.set_outcome(WorkUnit.ABORTED)
self._aborted = True
raise
except:
workunit.set_outcome(WorkUnit.FAILURE)
raise
else:
workunit.set_outcome(WorkUnit.SUCCESS)
finally:
self.end_workunit(workunit)
def log(self, level, *msg_elements):
"""Log a message against the current workunit."""
self.report.log(self._threadlocal.current_workunit, level, *msg_elements)
@classmethod
def post_stats(cls, url, stats, timeout=2):
"""POST stats to the given url.
:return: True if upload was successful, False otherwise.
"""
def error(msg):
# Report already closed, so just print error.
print('WARNING: Failed to upload stats to {} due to {}'.format(url, msg),
file=sys.stderr)
return False
# TODO(benjy): The upload protocol currently requires separate top-level params, with JSON
# values. Probably better for there to be one top-level JSON value, namely json.dumps(stats).
# But this will first require changing the upload receiver at every shop that uses this
# (probably only Foursquare at present).
params = {k: json.dumps(v) for (k, v) in stats.items()}
try:
r = requests.post(url, data=params, timeout=timeout)
if r.status_code != requests.codes.ok:
return error("HTTP error code: {}".format(r.status_code))
except Exception as e: # Broad catch - we don't want to fail the build over upload errors.
return error("Error: {}".format(e))
return True
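# Illustrative wire format implied above (editor's addition): each top-level
# stats key becomes its own form field whose value is a JSON string, e.g.
#   stats = {'run_info': {'id': 'pants_run_x'}}
# is POSTed as the form field run_info='{"id": "pants_run_x"}' rather than as
# a single JSON document (see the TODO above).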
def store_stats(self):
"""Store stats about this run in local and optionally remote stats dbs."""
stats = {
'run_info': self.run_info.get_as_dict(),
'cumulative_timings': self.cumulative_timings.get_all(),
'self_timings': self.self_timings.get_all(),
'artifact_cache_stats': self.artifact_cache_stats.get_all()
}
# Dump individual stat file.
# TODO(benjy): Do we really need these, once the statsdb is mature?
stats_file = os.path.join(get_pants_cachedir(), 'stats',
'{}.json'.format(self.run_info.get_info('id')))
safe_file_dump(stats_file, json.dumps(stats))
# Add to local stats db.
StatsDBFactory.global_instance().get_db().insert_stats(stats)
# Upload to remote stats db.
stats_url = self.get_options().stats_upload_url
if stats_url:
self.post_stats(stats_url, stats, timeout=self.get_options().stats_upload_timeout)
_log_levels = [Report.ERROR, Report.ERROR, Report.WARN, Report.INFO, Report.INFO]
def end(self):
"""This pants run is over, so stop tracking it.
Note: If end() has been called once, subsequent calls are no-ops.
"""
if self._background_worker_pool:
if self._aborted:
self.log(Report.INFO, "Aborting background workers.")
self._background_worker_pool.abort()
else:
self.log(Report.INFO, "Waiting for background workers to finish.")
self._background_worker_pool.shutdown()
self.end_workunit(self._background_root_workunit)
SubprocPool.shutdown(self._aborted)
# Run a dummy work unit to write out one last timestamp
with self.new_workunit("complete"):
pass
self.end_workunit(self._main_root_workunit)
outcome = self._main_root_workunit.outcome()
if self._background_root_workunit:
outcome = min(outcome, self._background_root_workunit.outcome())
outcome_str = WorkUnit.outcome_string(outcome)
log_level = RunTracker._log_levels[outcome]
self.log(log_level, outcome_str)
if self.run_info.get_info('outcome') is None:
# If the goal is clean-all then the run info dir no longer exists, so ignore that error.
self.run_info.add_info('outcome', outcome_str, ignore_errors=True)
self.report.close()
self.store_stats()
def end_workunit(self, workunit):
self.report.end_workunit(workunit)
path, duration, self_time, is_tool = workunit.end()
self.cumulative_timings.add_timing(path, duration, is_tool)
self.self_timings.add_timing(path, self_time, is_tool)
def get_background_root_workunit(self):
if self._background_root_workunit is None:
self._background_root_workunit = WorkUnit(run_info_dir=self.run_info_dir, parent=None,
name='background', cmd=None)
self._background_root_workunit.start()
self.report.start_workunit(self._background_root_workunit)
return self._background_root_workunit
def background_worker_pool(self):
if self._background_worker_pool is None: # Initialize lazily.
self._background_worker_pool = WorkerPool(parent_workunit=self.get_background_root_workunit(),
run_tracker=self,
num_workers=self._num_background_workers)
return self._background_worker_pool
| apache-2.0 |
wgwoods/anaconda | dracut/driver_updates.py | 3 | 21945 | #!/usr/bin/python3
#
# Copyright (C) 2015 by Red Hat, Inc. All rights reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Author(s):
# Brian C. Lane <[email protected]>
# Will Woods <[email protected]>
#
"""
Driver Update Disk handler program.
This will be called once for each requested driverdisk (non-interactive), and
once for interactive mode (if requested).
Usage is one of:
driver-updates --disk DISKSTR DEVNODE
DISKSTR is the string passed by the user ('/dev/sda3', 'LABEL=DD', etc.)
DEVNODE is the actual device node or image (/dev/sda3, /dev/sr0, etc.)
DEVNODE must be mountable, but need not actually be a block device
(e.g. /dd.iso is valid if the user has inserted /dd.iso into initrd)
driver-updates --net URL LOCALFILE
URL is the string passed by the user ('http://.../something.iso')
LOCALFILE is the location of the downloaded file
driver-updates --interactive
The user will be presented with a menu where they can choose a disk
and pick individual drivers to install.
/tmp/dd_net contains the list of URLs given by the user.
/tmp/dd_disk contains the list of disk devices given by the user.
/tmp/dd_interactive contains "menu" if interactive mode was requested.
/tmp/dd.done should be created when all the user-requested stuff above has been
handled; the installer won't start up until this file is created.
Packages will be extracted to /updates, which gets overlaid on top
of the installer's filesystem when we leave the initramfs.
Modules and firmware get moved to /lib/modules/`uname -r`/updates and
/lib/firmware/updates (under /updates, as above). They also get copied into the
corresponding paths in the initramfs, so we can load them immediately.
The repositories get copied into /run/install/DD-1, /run/install/DD-2, etc.
Driver package names are saved in /run/install/dd_packages.
During system installation, anaconda will install the packages listed in
/run/install/dd_packages to the target system.
"""
import logging
import sys
import os
import subprocess
import fnmatch
# Import readline so raw_input gets readline features, like history, and
# backspace working right. Do not import readline if not connected to a tty
# because it breaks sometimes.
if os.isatty(0):
import readline # pylint:disable=unused-import
from contextlib import contextmanager
from logging.handlers import SysLogHandler
# py2 compat
try:
from subprocess import DEVNULL
except ImportError:
DEVNULL = open("/dev/null", 'a+')
try:
_input = raw_input # pylint: disable=undefined-variable
except NameError:
_input = input
log = logging.getLogger("DD")
# NOTE: Yes, the version is wrong, but previous versions of this utility also
# hardcoded this value, because changing it will break any driver disk that has
# binary/library packages with "installer-enhancement = 19.0".
# If we *need* to break compatibility, this should definitely get changed, but
# otherwise we probably shouldn't change this unless/until we're sure that
# everyone is using something like "installer-enhancement >= 19.0" instead.
ANACONDAVER = "19.0"
ARCH = os.uname()[4]
KERNELVER = os.uname()[2]
MODULE_UPDATES_DIR = "/lib/modules/%s/updates" % KERNELVER
FIRMWARE_UPDATES_DIR = "/lib/firmware/updates"
def mkdir_seq(stem):
"""
Create sequentially-numbered directories starting with stem.
For example, mkdir_seq("/tmp/DD-") would create "/tmp/DD-1";
if that already exists, try "/tmp/DD-2", "/tmp/DD-3", and so on,
until a directory is created.
Returns the newly-created directory name.
"""
n = 1
while True:
dirname = str(stem) + str(n)
try:
os.makedirs(dirname)
except OSError as e:
if e.errno != 17: raise
n += 1
else:
return dirname
def find_repos(mnt):
"""find any valid driverdisk repos that exist under mnt."""
dd_repos = []
for root, dirs, files in os.walk(mnt, followlinks=True):
repo = root+"/rpms/"+ARCH
if "rhdd3" in files and "rpms" in dirs and os.path.isdir(repo):
log.debug("found repo: %s", repo)
dd_repos.append(repo)
return dd_repos
# NOTE: it's unclear whether or not we're supposed to recurse subdirs looking
# for .iso files, but that seems like a bad idea if you mount some huge disk.
# So I've made a judgement call: we only load .iso files from the toplevel.
def find_isos(mnt):
"""find files named '.iso' at the top level of mnt."""
return [mnt+'/'+f for f in os.listdir(mnt) if f.lower().endswith('.iso')]
class Driver(object):
"""Represents a single driver (rpm), as listed by dd_list"""
def __init__(self, source="", name="", flags="", description="", repo=""):
self.source = source
self.name = name
self.flags = flags
self.description = description
self.repo = repo
def dd_list(dd_path, anaconda_ver=None, kernel_ver=None):
log.debug("dd_list: listing %s", dd_path)
if not anaconda_ver:
anaconda_ver = ANACONDAVER
if not kernel_ver:
kernel_ver = KERNELVER
cmd = ["dd_list", '-d', dd_path, '-k', kernel_ver, '-a', anaconda_ver]
out = subprocess.check_output(cmd, stderr=DEVNULL)
out = out.decode('utf-8')
drivers = [Driver(*d.split('\n',3)) for d in out.split('\n---\n') if d]
log.debug("dd_list: found drivers: %s", ' '.join(d.name for d in drivers))
for d in drivers: d.repo = dd_path
return drivers
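# Record format implied by the parsing above (an assumption, not verified
# against dd_list itself): records separated by '\n---\n', each holding four
# newline-separated fields, e.g.
#   /media/DD-1/rpms/x86_64/foo.rpm\nfoo\nmodules firmwares\nFoo driver
# which Driver(*d.split('\n', 3)) maps to (source, name, flags, description).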
def dd_extract(rpm_path, outdir, kernel_ver=None, flags='-blmf'):
log.debug("dd_extract: extracting %s", rpm_path)
if not kernel_ver:
kernel_ver = KERNELVER
cmd = ["dd_extract", flags, '-r', rpm_path, '-d', outdir, '-k', kernel_ver]
subprocess.check_output(cmd, stderr=DEVNULL) # discard stdout
def list_drivers(repos, anaconda_ver=None, kernel_ver=None):
return [d for r in repos for d in dd_list(r, anaconda_ver, kernel_ver)]
def mount(dev, mnt=None):
"""Mount the given dev at the mountpoint given by mnt."""
# NOTE: dev may be a filesystem image - "-o loop" is not necessary anymore
if not mnt:
mnt = mkdir_seq("/media/DD-")
cmd = ["mount", dev, mnt]
log.debug("mounting %s at %s", dev, mnt)
subprocess.check_call(cmd)
return mnt
def umount(mnt):
log.debug("unmounting %s", mnt)
subprocess.call(["umount", mnt])
@contextmanager
def mounted(dev, mnt=None):
mnt = mount(dev, mnt)
try:
yield mnt
finally:
umount(mnt)
def iter_files(topdir, pattern=None):
"""iterator; yields full paths to files under topdir that match pattern."""
for head, _, files in os.walk(topdir):
for f in files:
if pattern is None or fnmatch.fnmatch(f, pattern):
yield os.path.join(head, f)
def ensure_dir(d):
"""make sure the given directory exists."""
subprocess.check_call(["mkdir", "-p", d])
def move_files(files, destdir):
"""move files into destdir (iff they're not already under destdir)"""
ensure_dir(destdir)
for f in files:
if f.startswith(destdir):
continue
subprocess.call(["mv", "-f", f, destdir])
def copy_files(files, destdir):
"""copy files into destdir (iff they're not already under destdir)"""
ensure_dir(destdir)
for f in files:
if f.startswith(destdir):
continue
subprocess.call(["cp", "-a", f, destdir])
def append_line(filename, line):
"""simple helper to append a line to a file"""
if not line.endswith("\n"):
line += "\n"
with open(filename, 'a') as outf:
outf.write(line)
# NOTE: items returned by read_lines should match items passed to append_line,
# which is why we remove the newlines
def read_lines(filename):
"""return a list containing each line in filename, with newlines removed."""
try:
return [line.rstrip('\n') for line in open(filename)]
except IOError:
return []
def save_repo(repo, target="/run/install"):
"""copy a repo to the place where the installer will look for it later."""
newdir = mkdir_seq(os.path.join(target, "DD-"))
log.debug("save_repo: copying %s to %s", repo, newdir)
subprocess.call(["cp", "-arT", repo, newdir])
return newdir
def extract_drivers(drivers=None, repos=None, outdir="/updates",
pkglist="/run/install/dd_packages"):
"""
Extract drivers - either a user-selected driver list or full repos.
drivers should be a list of Drivers to extract, or None.
repos should be a list of repo paths to extract, or None.
Raises ValueError if you pass both.
If any packages containing modules or firmware are extracted, also:
* call save_repo for that package's repo
* write the package name(s) to pkglist.
Returns True if any package containing modules was extracted.
"""
if not drivers:
drivers = []
if drivers and repos:
raise ValueError("extract_drivers: drivers or repos, not both")
if repos:
drivers = list_drivers(repos)
save_repos = set()
new_drivers = False
ensure_dir(outdir)
for driver in drivers:
log.info("Extracting: %s", driver.name)
dd_extract(driver.source, outdir)
# Make sure we install modules/firmware into the target system
if 'modules' in driver.flags or 'firmwares' in driver.flags:
append_line(pkglist, driver.name)
save_repos.add(driver.repo)
new_drivers = True
# save the repos containing those packages
for repo in save_repos:
save_repo(repo)
return new_drivers
def grab_driver_files(outdir="/updates"):
"""
copy any modules/firmware we just extracted into the running system.
return a list of the names of any modules we just copied.
"""
modules = list(iter_files(outdir+'/lib/modules',"*.ko*"))
firmware = list(iter_files(outdir+'/lib/firmware'))
copy_files(modules, MODULE_UPDATES_DIR)
copy_files(firmware, FIRMWARE_UPDATES_DIR)
move_files(modules, outdir+MODULE_UPDATES_DIR)
move_files(firmware, outdir+FIRMWARE_UPDATES_DIR)
return [os.path.basename(m).split('.ko')[0] for m in modules]
def load_drivers(modnames):
"""run depmod and try to modprobe all the given module names."""
log.debug("load_drivers: %s", modnames)
subprocess.call(["depmod", "-a"])
subprocess.call(["modprobe", "-a"] + modnames)
# We *could* pass in "outdir" if we wanted to extract things somewhere else,
# but right now the only use case is running inside the initramfs, so..
def process_driver_disk(dev, interactive=False):
try:
_process_driver_disk(dev, interactive=interactive)
except (subprocess.CalledProcessError, IOError) as e:
log.error("ERROR: %s", e)
def _process_driver_disk(dev, interactive=False):
"""
Main entry point for processing a single driver disk.
Mount the device/image, find repos, and install drivers from those repos.
If there are no repos, look for .iso files, and (if present) recursively
process those.
If interactive, ask the user which driver(s) to install from the repos,
or ask which iso file to process (if no repos).
"""
log.info("Examining %s", dev)
with mounted(dev) as mnt:
repos = find_repos(mnt)
isos = find_isos(mnt)
if repos:
if interactive:
new_modules = extract_drivers(drivers=repo_menu(repos))
else:
new_modules = extract_drivers(repos=repos)
if new_modules:
modules = grab_driver_files()
load_drivers(modules)
elif isos:
if interactive:
isos = iso_menu(isos)
for iso in isos:
process_driver_disk(iso, interactive=interactive)
else:
print("=== No driver disks found in %s! ===\n" % dev)
def process_driver_rpm(rpm):
try:
_process_driver_rpm(rpm)
except (subprocess.CalledProcessError, IOError) as e:
log.error("ERROR: %s", e)
def _process_driver_rpm(rpm):
"""
Process a single driver rpm. Extract it, install it, and copy the
rpm for Anaconda to install on the target system.
"""
log.info("Examining %s", rpm)
new_modules = extract_drivers(repos=[os.path.dirname(rpm)])
if new_modules:
modules = grab_driver_files()
load_drivers(modules)
def mark_finished(user_request, topdir="/tmp"):
log.debug("marking %s complete in %s", user_request, topdir)
append_line(topdir+"/dd_finished", user_request)
def all_finished(topdir="/tmp"):
finished = read_lines(topdir+"/dd_finished")
todo = read_lines(topdir+"/dd_todo")
return all(r in finished for r in todo)
def finish(user_request, topdir="/tmp"):
# mark that we've finished processing this request
mark_finished(user_request, topdir)
# if we're done now, let dracut know
if all_finished(topdir):
append_line(topdir+"/dd.done", "true")
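# Illustrative flow (editor's addition): if /tmp/dd_todo lists two requests,
# finish() appends each to /tmp/dd_finished as it completes; only the call
# that makes all_finished() true writes "true" to /tmp/dd.done, which is the
# file dracut waits on before letting the installer start.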
# --- DEVICE LISTING HELPERS FOR THE MENU -----------------------------------
class DeviceInfo(object):
def __init__(self, **kwargs):
self.device = kwargs.get("DEVNAME", '')
self.uuid = kwargs.get("UUID", '')
self.fs_type = kwargs.get("TYPE", '')
self.label = kwargs.get("LABEL", '')
def __repr__(self):
return '<DeviceInfo %s>' % self.device
@property
def shortdev(self):
# resolve any symlinks (/dev/disk/by-label/OEMDRV -> /dev/sr0)
dev = os.path.realpath(self.device)
# NOTE: not os.path.basename 'cuz some devices legitimately have
# a '/' in their name: /dev/cciss/c0d0, /dev/i2o/hda, etc.
if dev.startswith('/dev/'):
dev = dev[5:]
return dev
def blkid():
try:
out = subprocess.check_output("blkid -o export -s UUID -s TYPE".split())
out = out.decode('ascii')
return [dict(kv.split('=',1) for kv in block.splitlines())
for block in out.split('\n\n')]
except subprocess.CalledProcessError:
return []
# We use this to get disk labels because blkid's encoding of non-printable and
# non-ascii characters is weird and doesn't match what you'd expect to see.
def get_disk_labels():
return {os.path.realpath(s):os.path.basename(s)
for s in iter_files("/dev/disk/by-label")}
def get_deviceinfo():
disk_labels = get_disk_labels()
deviceinfo = [DeviceInfo(**d) for d in blkid()]
for dev in deviceinfo:
dev.label = disk_labels.get(dev.device, '')
return deviceinfo
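# Sample `blkid -o export` block assumed by the parsing above (editor's
# addition):
#   DEVNAME=/dev/sr0
#   UUID=2015-10-30-11-05-03-00
#   TYPE=iso9660
# Blocks are separated by blank lines; LABEL is deliberately filled in from
# /dev/disk/by-label instead (see get_disk_labels above).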
# --- INTERACTIVE MENU JUNK ------------------------------------------------
class TextMenu(object):
def __init__(self, items, title=None, formatter=None, headeritem=None,
refresher=None, multi=False, page_height=20):
self.items = items
self.title = title
self.formatter = formatter
self.headeritem = headeritem
self.refresher = refresher
self.multi = multi
self.page_height = page_height
self.pagenum = 1
self.selected_items = []
self.is_done = False
if callable(items):
self.refresher = items
self.refresh()
@property
def num_pages(self):
pages, leftover = divmod(len(self.items), self.page_height)
if leftover:
return pages+1
else:
return pages
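# e.g. 45 items with page_height 20: divmod(45, 20) == (2, 5) -> 3 pages.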
def next(self):
if self.pagenum < self.num_pages:
self.pagenum += 1
def prev(self):
if self.pagenum > 1:
self.pagenum -= 1
def refresh(self):
if callable(self.refresher):
self.items = self.refresher()
def done(self):
self.is_done = True
def invalid(self, k):
print("Invalid selection %r" % k)
def toggle_item(self, item):
if item in self.selected_items:
self.selected_items.remove(item)
else:
self.selected_items.append(item)
if not self.multi:
self.done()
def items_on_page(self):
start_idx = (self.pagenum-1) * self.page_height
if start_idx > len(self.items):
return []
else:
items = self.items[start_idx:start_idx+self.page_height]
return enumerate(items, start=start_idx)
def format_item(self, item):
if callable(self.formatter):
return self.formatter(item)
else:
return str(item)
def format_items(self):
for n, i in self.items_on_page():
if self.multi:
x = 'x' if i in self.selected_items else ' '
yield "%2d) [%s] %s" % (n+1, x, self.format_item(i))
else:
yield "%2d) %s" % (n+1, self.format_item(i))
def format_header(self):
if self.multi:
return (8*' ')+self.format_item(self.headeritem)
else:
return (4*' ')+self.format_item(self.headeritem)
def action_dict(self):
actions = {
'r': self.refresh,
'n': self.next,
'p': self.prev,
'c': self.done,
}
for n, i in self.items_on_page():
actions[str(n+1)] = lambda item=i: self.toggle_item(item)
return actions
def format_page(self):
page = '\n(Page {pagenum} of {num_pages}) {title}\n{items}'
items = list(self.format_items())
if self.headeritem:
items.insert(0, self.format_header())
return page.format(pagenum=self.pagenum,
num_pages=self.num_pages,
title=self.title or '',
items='\n'.join(items))
def format_prompt(self):
options = [
'# to toggle selection' if self.multi else '# to select',
"'r'-refresh" if callable(self.refresher) else None,
"'n'-next page" if self.pagenum < self.num_pages else None,
"'p'-previous page" if self.pagenum > 1 else None,
"or 'c'-continue"
]
return ', '.join(o for o in options if o is not None) + ': '
def run(self):
while not self.is_done:
print(self.format_page())
k = _input(self.format_prompt())
action = self.action_dict().get(k)
if action:
action()
else:
self.invalid(k)
return self.selected_items
def repo_menu(repos):
drivers = list_drivers(repos)
if not drivers:
log.info("No suitable drivers found.")
return []
menu = TextMenu(drivers, title="Select drivers to install",
formatter=lambda d: d.source,
multi=True)
result = menu.run()
return result
def iso_menu(isos):
menu = TextMenu(isos, title="Choose driver disk ISO file")
result = menu.run()
return result
def device_menu():
fmt = '{0.shortdev:<8.8} {0.fs_type:<8.8} {0.label:<20.20} {0.uuid:<.36}'
hdr = DeviceInfo(DEVNAME='DEVICE', TYPE='TYPE', LABEL='LABEL', UUID='UUID')
menu = TextMenu(get_deviceinfo, title="Driver disk device selection",
formatter=fmt.format,
headeritem=hdr)
result = menu.run()
return result
# --- COMMANDLINE-TYPE STUFF ------------------------------------------------
def setup_log():
log.setLevel(logging.DEBUG)
handler = SysLogHandler(address="/dev/log")
log.addHandler(handler)
handler = logging.StreamHandler()
handler.setLevel(logging.INFO)
formatter = logging.Formatter("DD: %(message)s")
handler.setFormatter(formatter)
log.addHandler(handler)
def print_usage():
print("usage: driver-updates --interactive")
print(" driver-updates --disk DISK KERNELDEV")
print(" driver-updates --net URL LOCALFILE")
def check_args(args):
if args and args[0] == '--interactive':
return True
elif len(args) == 3 and args[0] in ('--disk', '--net'):
return True
else:
return False
def main(args):
if not check_args(args):
print_usage()
raise SystemExit(2)
mode = args.pop(0)
if mode in ('--disk', '--net'):
request, dev = args
if dev.endswith(".rpm"):
process_driver_rpm(dev)
else:
process_driver_disk(dev)
elif mode == '--interactive':
log.info("starting interactive mode")
request = 'menu'
while True:
dev = device_menu()
if not dev: break
process_driver_disk(dev.pop().device, interactive=True)
finish(request)
# When using inst.dd with a cdrom stage2, the stage2 cdrom isn't mounted before
# driver-updates runs. To get it mounted again, it either needs to be swapped
# back in or we need to re-trigger the block udev rules.
if os.path.exists("/tmp/anaconda-dd-on-cdrom") and not os.path.exists("/dev/root"):
cmd = ["udevadm", "trigger", "--action=change", "--subsystem-match=block"]
subprocess.check_call(cmd)
if __name__ == '__main__':
setup_log()
try:
main(sys.argv[1:])
except KeyboardInterrupt:
log.info("exiting.")
| gpl-2.0 |
anhngduc/google-python-exersice | google-python-exercises/basic/solution/string2.py | 208 | 3094 | #!/usr/bin/python2.4 -tt
# Copyright 2010 Google Inc.
# Licensed under the Apache License, Version 2.0
# http://www.apache.org/licenses/LICENSE-2.0
# Google's Python Class
# http://code.google.com/edu/languages/google-python-class/
# Additional basic string exercises
# D. verbing
# Given a string, if its length is at least 3,
# add 'ing' to its end.
# Unless it already ends in 'ing', in which case
# add 'ly' instead.
# If the string length is less than 3, leave it unchanged.
# Return the resulting string.
def verbing(s):
# +++your code here+++
# LAB(begin solution)
if len(s) >= 3:
if s[-3:] != 'ing': s = s + 'ing'
else: s = s + 'ly'
return s
# LAB(replace solution)
# return
# LAB(end solution)
# E. not_bad
# Given a string, find the first appearance of the
# substring 'not' and 'bad'. If the 'bad' follows
# the 'not', replace the whole 'not'...'bad' substring
# with 'good'.
# Return the resulting string.
# So 'This dinner is not that bad!' yields:
# This dinner is good!
def not_bad(s):
# +++your code here+++
# LAB(begin solution)
n = s.find('not')
b = s.find('bad')
if n != -1 and b != -1 and b > n:
s = s[:n] + 'good' + s[b+3:]
return s
# LAB(replace solution)
# return
# LAB(end solution)
# F. front_back
# Consider dividing a string into two halves.
# If the length is even, the front and back halves are the same length.
# If the length is odd, we'll say that the extra char goes in the front half.
# e.g. 'abcde', the front half is 'abc', the back half 'de'.
# Given 2 strings, a and b, return a string of the form
# a-front + b-front + a-back + b-back
def front_back(a, b):
# +++your code here+++
# LAB(begin solution)
# Figure out the middle position of each string.
a_middle = len(a) / 2
b_middle = len(b) / 2
if len(a) % 2 == 1: # add 1 if length is odd
a_middle = a_middle + 1
if len(b) % 2 == 1:
b_middle = b_middle + 1
return a[:a_middle] + b[:b_middle] + a[a_middle:] + b[b_middle:]
# LAB(replace solution)
# return
# LAB(end solution)
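# Worked example (editor's addition): front_back('abcde', 'xyz') computes
# a_middle = 5/2 + 1 = 3 and b_middle = 3/2 + 1 = 2, returning
# 'abc' + 'xy' + 'de' + 'z' == 'abcxydez'.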
# Simple provided test() function used in main() to print
# what each function returns vs. what it's supposed to return.
def test(got, expected):
if got == expected:
prefix = ' OK '
else:
prefix = ' X '
print '%s got: %s expected: %s' % (prefix, repr(got), repr(expected))
# main() calls the above functions with interesting inputs,
# using the above test() to check if the result is correct or not.
def main():
print 'verbing'
test(verbing('hail'), 'hailing')
test(verbing('swiming'), 'swimingly')
test(verbing('do'), 'do')
print
print 'not_bad'
test(not_bad('This movie is not so bad'), 'This movie is good')
test(not_bad('This dinner is not that bad!'), 'This dinner is good!')
test(not_bad('This tea is not hot'), 'This tea is not hot')
test(not_bad("It's bad yet not"), "It's bad yet not")
print
print 'front_back'
test(front_back('abcd', 'xy'), 'abxcdy')
test(front_back('abcde', 'xyz'), 'abcxydez')
test(front_back('Kitten', 'Donut'), 'KitDontenut')
if __name__ == '__main__':
main()
| apache-2.0 |
mtougeron/python-openstacksdk | openstack/tests/unit/auth/test_service_filter.py | 2 | 7250 | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
import testtools
from openstack.auth import service_filter as filt
from openstack import exceptions
from openstack.identity import identity_service
class TestServiceFilter(testtools.TestCase):
def test_minimum(self):
sot = filt.ServiceFilter()
self.assertEqual("service_type=any,interface=public",
six.text_type(sot))
def test_maximum(self):
sot = filt.ServiceFilter(service_type='compute', interface='admin',
region='b', service_name='c')
exp = "service_type=compute,interface=admin,region=b,service_name=c"
self.assertEqual(exp, six.text_type(sot))
def test_interface(self):
sot = filt.ServiceFilter(service_type='identity', interface='public')
self.assertEqual("service_type=identity,interface=public",
six.text_type(sot))
sot = filt.ServiceFilter(service_type='identity',
interface='internal')
self.assertEqual("service_type=identity,interface=internal",
six.text_type(sot))
sot = filt.ServiceFilter(service_type='identity', interface='admin')
self.assertEqual("service_type=identity,interface=admin",
six.text_type(sot))
sot = filt.ServiceFilter(service_type='identity',
interface='publicURL')
self.assertEqual("service_type=identity,interface=public",
six.text_type(sot))
sot = filt.ServiceFilter(service_type='identity',
interface='internalURL')
self.assertEqual("service_type=identity,interface=internal",
six.text_type(sot))
sot = filt.ServiceFilter(service_type='identity',
interface='adminURL')
self.assertEqual("service_type=identity,interface=admin",
six.text_type(sot))
self.assertRaises(exceptions.SDKException, filt.ServiceFilter,
service_type='identity', interface='b')
sot = filt.ServiceFilter(service_type='identity', interface=None)
self.assertEqual("service_type=identity", six.text_type(sot))
def test_match_service_type(self):
sot = filt.ServiceFilter(service_type='identity')
self.assertTrue(sot.match_service_type('identity'))
self.assertFalse(sot.match_service_type('compute'))
def test_match_service_type_any(self):
sot = filt.ServiceFilter()
self.assertTrue(sot.match_service_type('identity'))
self.assertTrue(sot.match_service_type('compute'))
def test_match_service_name(self):
sot = filt.ServiceFilter(service_type='identity')
self.assertTrue(sot.match_service_name('keystone'))
self.assertTrue(sot.match_service_name('ldap'))
self.assertTrue(sot.match_service_name(None))
sot = filt.ServiceFilter(service_type='identity',
service_name='keystone')
self.assertTrue(sot.match_service_name('keystone'))
self.assertFalse(sot.match_service_name('ldap'))
self.assertFalse(sot.match_service_name(None))
def test_match_region(self):
sot = filt.ServiceFilter(service_type='identity')
self.assertTrue(sot.match_region('East'))
self.assertTrue(sot.match_region('West'))
self.assertTrue(sot.match_region(None))
sot = filt.ServiceFilter(service_type='identity', region='East')
self.assertTrue(sot.match_region('East'))
self.assertFalse(sot.match_region('West'))
self.assertFalse(sot.match_region(None))
def test_match_interface(self):
sot = filt.ServiceFilter(service_type='identity',
interface='internal')
self.assertFalse(sot.match_interface('admin'))
self.assertTrue(sot.match_interface('internal'))
self.assertFalse(sot.match_interface('public'))
def test_join(self):
a = filt.ServiceFilter(region='east')
b = filt.ServiceFilter(service_type='identity')
result = a.join(b)
self.assertEqual("service_type=identity,interface=public,region=east",
six.text_type(result))
self.assertEqual("service_type=any,interface=public,region=east",
six.text_type(a))
self.assertEqual("service_type=identity,interface=public",
six.text_type(b))
def test_join_interface(self):
user_preference = filt.ServiceFilter(interface='public')
service_default = filt.ServiceFilter(interface='admin')
result = user_preference.join(service_default)
self.assertEqual("public", result.interface)
user_preference = filt.ServiceFilter(interface=None)
service_default = filt.ServiceFilter(interface='admin')
result = user_preference.join(service_default)
self.assertEqual("admin", result.interface)
def test_join_version(self):
user_preference = filt.ServiceFilter(version='v2')
service_default = filt.ServiceFilter()
self.assertEqual('v2', user_preference.join(service_default).version)
service_default = filt.ServiceFilter(
version=filt.ServiceFilter.UNVERSIONED
)
self.assertEqual('', user_preference.join(service_default).version)
def test_set_interface(self):
sot = filt.ServiceFilter()
sot.set_interface("PUBLICURL")
self.assertEqual('public', sot.interface)
sot.set_interface("INTERNALURL")
self.assertEqual('internal', sot.interface)
sot.set_interface("ADMINURL")
self.assertEqual('admin', sot.interface)
def test_get_module(self):
sot = identity_service.IdentityService()
self.assertEqual('openstack.identity.v3', sot.get_module())
self.assertEqual('identity', sot.get_service_module())
def test_get_version_path(self):
sot = identity_service.IdentityService()
self.assertEqual('v3', sot.get_version_path('v2'))
sot = identity_service.IdentityService(version='v2')
self.assertEqual('v2', sot.get_version_path('v3'))
sot = identity_service.IdentityService(version='v2.1')
self.assertEqual('v2.1', sot.get_version_path('v3'))
sot = identity_service.IdentityService(version='')
self.assertEqual('', sot.get_version_path('v3'))
class TestValidVersion(testtools.TestCase):
def test_constructor(self):
sot = filt.ValidVersion('v1.0', 'v1')
self.assertEqual('v1.0', sot.module)
self.assertEqual('v1', sot.path)
| apache-2.0 |
trac-hacks/trac-oidc | trac_oidc/tests/test_authenticator.py | 1 | 7340 | # -*- coding: utf-8 -*-
#
# Copyright (C) 2015 Geoffrey T. Dairiki
#
"""
"""
from __future__ import absolute_import
import json
import logging
from urlparse import parse_qsl, urlsplit, urlunsplit
import mock
from oauth2client.client import FlowExchangeError
import pytest
@pytest.fixture
def redirect_url():
return 'http://localhost/trac_oidc/redirect'
@pytest.fixture
def openid_realm():
return 'http://example.net/'
@pytest.fixture
def web_secrets(redirect_url):
return {
'auth_uri': "https://accounts.example.com/auth",
'token_uri': "https://accounts.example.com/token",
'client_id': "ID",
'client_secret': "SEKRET",
'redirect_uris': [redirect_url],
}
@pytest.fixture
def client_secret_file(tmpdir, web_secrets):
secret_file = tmpdir.join('client_secret.json')
secret_file.write(json.dumps({'web': web_secrets}))
return str(secret_file)
class _RequestArgs(dict):
getfirst = dict.get
class DummyRequest(object):
def __init__(self, query=None, oauth_state=None):
self.args = _RequestArgs(query or {})
self.session = {}
if oauth_state is not None:
self.oauth_state = oauth_state
@property
def oauth_state(self): # pragma: NO COVER
return self.session['trac_oidc.oauth_state']
@oauth_state.setter
def oauth_state(self, state): # pragma: NO COVER
self.session['trac_oidc.oauth_state'] = state
@oauth_state.deleter
def oauth_state(self): # pragma: NO COVER
del self.session['trac_oidc.oauth_state']
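# Illustrative usage sketch (not part of the original test module):
# DummyRequest stands in for a trac Request object, keeping oauth_state in
# the session dict, e.g.
#   req = DummyRequest(query={'code': 'CODE', 'state': 'S'}, oauth_state='S')
#   req.args.getfirst('code')              # -> 'CODE'
#   req.session['trac_oidc.oauth_state']   # -> 'S'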
class TestAuthenticator(object):
@pytest.fixture
def log(self):
return logging.getLogger('Trac')
@pytest.fixture
def authenticator(self, client_secret_file, redirect_url, openid_realm,
log):
from ..authenticator import Authenticator
return Authenticator(client_secret_file, redirect_url,
openid_realm, log)
def test_flow(self, authenticator,
redirect_url, openid_realm, web_secrets):
flow = authenticator.flow
assert flow.client_secret == web_secrets['client_secret']
assert flow.redirect_uri == redirect_url
assert flow.params['access_type'] == 'online'
assert flow.params['openid.realm'] == openid_realm
def test_get_auth_url(self, authenticator, web_secrets):
req = DummyRequest()
auth_url = authenticator.get_auth_url(req)
split = urlsplit(auth_url)
assert urlunsplit((split.scheme, split.netloc, split.path, '', '')) \
== web_secrets['auth_uri']
query = dict(parse_qsl(split.query))
state = query['state']
assert state
assert req.session[authenticator.STATE_SKEY] == state
def test_get_identity(self, authenticator):
req = DummyRequest(query={'code': 'CODE', 'state': 'STATE'},
oauth_state='STATE')
authenticator._get_credentials = mock.Mock()
authenticator._get_openid_profile = mock.Mock(return_value={})
credentials = authenticator._get_credentials.return_value
credentials.id_token = {'iss': 'https://example.net', 'sub': '42'}
id_token = authenticator.get_identity(req)
assert dict(id_token) == credentials.id_token
def test_get_identity_resets_state(self, authenticator):
from ..authenticator import AuthenticationError
req = DummyRequest(query={'code': 'CODE', 'state': 'STATE'},
oauth_state='STATE')
authenticator._get_credentials = mock.Mock()
authenticator._get_openid_profile = mock.Mock(return_value={})
credentials = authenticator._get_credentials.return_value
credentials.id_token = {'iss': 'https://example.net', 'sub': '42'}
authenticator.get_identity(req)
with pytest.raises(AuthenticationError):
authenticator.get_identity(req)
def test_get_code(self, authenticator):
state = 'abcdef'
req = DummyRequest(query={'code': 'CODE', 'state': state},
oauth_state=state)
assert authenticator._get_code(req) == 'CODE'
def test_get_code_authentication_failure(self, authenticator):
from ..authenticator import AuthenticationFailed
req = DummyRequest(query={'error': 'error message'})
with pytest.raises(AuthenticationFailed) as exc_info:
authenticator._get_code(req)
assert 'error message' in exc_info.exconly()
@pytest.mark.parametrize('state, oauth_state', [
('wrong', 'somestate'),
(None, 'somestate'),
(None, None),
('unexpected', None),
])
def test_get_code_incorrect_state(self, authenticator,
state, oauth_state):
from ..authenticator import AuthenticationError
req = DummyRequest(query={'state': state} if state else None,
oauth_state=oauth_state)
with pytest.raises(AuthenticationError):
authenticator._get_code(req)
def test_get_code_missing_code(self, authenticator):
from ..authenticator import AuthenticationError
state = 'abcdef'
req = DummyRequest(query={'state': state}, oauth_state=state)
with pytest.raises(AuthenticationError):
authenticator._get_code(req)
def test_get_credentials(self, authenticator):
authenticator.flow = flow = mock.Mock(name='flow')
credentials = authenticator._get_credentials('CODE')
assert flow.mock_calls == [mock.call.step2_exchange('CODE')]
assert credentials == flow.step2_exchange.return_value
def test_get_credentials_failure(self, authenticator):
from ..authenticator import AuthenticationError
authenticator.flow = flow = mock.Mock(name='flow')
flow.step2_exchange.side_effect = FlowExchangeError('testing')
with pytest.raises(AuthenticationError):
authenticator._get_credentials('CODE')
def test_get_openid_profile(self, authenticator):
credentials = mock.Mock(name='credentials')
http = credentials.authorize.return_value
resp = mock.Mock(name='Response', status=200)
content = b'{"foo": "bar"}'
http.request.return_value = resp, content
profile = authenticator._get_openid_profile(credentials)
assert profile == {'foo': 'bar'}
def test_get_openid_profile_failure(self, authenticator, caplog):
credentials = mock.Mock(name='credentials')
http = credentials.authorize.return_value
resp = mock.Mock(name='Response', status=500)
content = b'{"foo": "bar"}'
http.request.return_value = resp, content
assert authenticator._get_openid_profile(credentials) == {}
assert 'Failed to retrieve profile' in caplog.text()
def test_get_openid_profile_bad_json(self, authenticator, caplog):
credentials = mock.Mock(name='credentials')
http = credentials.authorize.return_value
resp = mock.Mock(name='Response', status=200)
content = b'}'
http.request.return_value = resp, content
assert authenticator._get_openid_profile(credentials) == {}
assert 'Response is not valid JSON' in caplog.text()
| bsd-3-clause |
diagramsoftware/odoo | addons/portal/tests/__init__.py | 261 | 1078 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Business Applications
# Copyright (c) 2012-TODAY OpenERP S.A. <http://openerp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from . import test_portal
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
pratikmallya/hue | desktop/core/ext-py/python-ldap-2.3.13/Lib/ldif.py | 44 | 13729 | """
ldif - generate and parse LDIF data (see RFC 2849)
See http://www.python-ldap.org/ for details.
$Id: ldif.py,v 1.56 2010/07/19 08:23:22 stroeder Exp $
Python compatibility note:
Tested with Python 2.0+, but should work with Python 1.5.2+.
"""
__version__ = '2.3.12'
__all__ = [
# constants
'ldif_pattern',
# functions
'AttrTypeandValueLDIF','CreateLDIF','ParseLDIF',
# classes
'LDIFWriter',
'LDIFParser',
'LDIFRecordList',
'LDIFCopy',
]
import urlparse,urllib,base64,re,types
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
attrtype_pattern = r'[\w;.-]+(;[\w_-]+)*'
attrvalue_pattern = r'(([^,]|\\,)+|".*?")'
attrtypeandvalue_pattern = attrtype_pattern + r'[ ]*=[ ]*' + attrvalue_pattern
rdn_pattern = attrtypeandvalue_pattern + r'([ ]*\+[ ]*' + attrtypeandvalue_pattern + r')*[ ]*'
dn_pattern = rdn_pattern + r'([ ]*,[ ]*' + rdn_pattern + r')*[ ]*'
dn_regex = re.compile('^%s$' % dn_pattern)
ldif_pattern = '^((dn(:|::) %(dn_pattern)s)|(%(attrtype_pattern)s(:|::) .*)$)+' % vars()
MOD_OP_INTEGER = {
'add':0,'delete':1,'replace':2
}
MOD_OP_STR = {
0:'add',1:'delete',2:'replace'
}
CHANGE_TYPES = ['add','delete','modify','modrdn']
valid_changetype_dict = {}
for c in CHANGE_TYPES:
valid_changetype_dict[c]=None
def is_dn(s):
"""
returns 1 if s is a LDAP DN
"""
if s=='':
return 1
rm = dn_regex.match(s)
return rm!=None and rm.group(0)==s
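# Illustrative checks (not part of the upstream module; behaviour follows
# dn_regex above):
#   is_dn('cn=Babs,dc=example,dc=com')   # -> True
#   is_dn('not a distinguished name')    # -> False
#   is_dn('')                            # -> 1 (the empty DN counts as valid)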
SAFE_STRING_PATTERN = '(^(\000|\n|\r| |:|<)|[\000\n\r\200-\377]+|[ ]+$)'
safe_string_re = re.compile(SAFE_STRING_PATTERN)
def list_dict(l):
"""
return a dictionary with all items of l being the keys of the dictionary
"""
return dict([(i,None) for i in l])
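# Illustrative only: list_dict() keys a dict by the list items so later
# membership tests can use has_key(), e.g.
#   list_dict(['cn', 'mail'])   # -> {'cn': None, 'mail': None}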
class LDIFWriter:
"""
Write LDIF entry or change records to a file object.
"""
def __init__(self,output_file,base64_attrs=None,cols=76,line_sep='\n'):
"""
output_file
file object for output
base64_attrs
list of attribute types to be base64-encoded in any case
cols
Specifies how many columns a line may have before it's
folded into many lines.
line_sep
String used as line separator
"""
self._output_file = output_file
self._base64_attrs = list_dict([a.lower() for a in (base64_attrs or [])])
self._cols = cols
self._line_sep = line_sep
self.records_written = 0
def _unfoldLDIFLine(self,line):
"""
Write string line as one or more folded lines
"""
# Check maximum line length
line_len = len(line)
if line_len<=self._cols:
self._output_file.write(line)
self._output_file.write(self._line_sep)
else:
# Fold line
pos = self._cols
self._output_file.write(line[0:min(line_len,self._cols)])
self._output_file.write(self._line_sep)
while pos<line_len:
self._output_file.write(' ')
self._output_file.write(line[pos:min(line_len,pos+self._cols-1)])
self._output_file.write(self._line_sep)
pos = pos+self._cols-1
return # _unfoldLDIFLine()
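# Illustrative only (hypothetical buffer; StringIO is imported above): with
# cols=76, a 100-character line is emitted as a 76-character first line plus
# one continuation line of a single leading space and the remaining 24
# characters, which is the RFC 2849 folding convention:
#   buf = StringIO()
#   LDIFWriter(buf, cols=76)._unfoldLDIFLine('description: ' + 'x' * 87)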
def _needs_base64_encoding(self,attr_type,attr_value):
"""
returns 1 if attr_value has to be base-64 encoded because
of special chars or because attr_type is in self._base64_attrs
"""
return self._base64_attrs.has_key(attr_type.lower()) or \
not safe_string_re.search(attr_value) is None
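# Illustrative only: values matching SAFE_STRING_PATTERN above get base64
# treatment, e.g. ' leading-space', ':colon-start', 'trailing-space ' or any
# value containing NUL/CR/LF or 8-bit bytes; a plain 'Babs' is written as-is.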
def _unparseAttrTypeandValue(self,attr_type,attr_value):
"""
Write a single attribute type/value pair
attr_type
attribute type
attr_value
attribute value
"""
if self._needs_base64_encoding(attr_type,attr_value):
# Encode with base64
self._unfoldLDIFLine(':: '.join([attr_type,base64.encodestring(attr_value).replace('\n','')]))
else:
self._unfoldLDIFLine(': '.join([attr_type,attr_value]))
return # _unparseAttrTypeandValue()
def _unparseEntryRecord(self,entry):
"""
entry
dictionary holding an entry
"""
attr_types = entry.keys()[:]
attr_types.sort()
for attr_type in attr_types:
for attr_value in entry[attr_type]:
self._unparseAttrTypeandValue(attr_type,attr_value)
def _unparseChangeRecord(self,modlist):
"""
modlist
list of additions (2-tuple) or modifications (3-tuple)
"""
mod_len = len(modlist[0])
if mod_len==2:
changetype = 'add'
elif mod_len==3:
changetype = 'modify'
else:
raise ValueError,"modlist item of wrong length"
self._unparseAttrTypeandValue('changetype',changetype)
for mod in modlist:
if mod_len==2:
mod_type,mod_vals = mod
elif mod_len==3:
mod_op,mod_type,mod_vals = mod
self._unparseAttrTypeandValue(MOD_OP_STR[mod_op],mod_type)
else:
raise ValueError,"Subsequent modlist item of wrong length"
if mod_vals:
for mod_val in mod_vals:
self._unparseAttrTypeandValue(mod_type,mod_val)
if mod_len==3:
self._output_file.write('-'+self._line_sep)
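# Illustrative modlists (hypothetical values): a list of 2-tuples is written
# as a changetype: add record, a list of 3-tuples as changetype: modify, e.g.
#   [('objectClass', ['person']), ('cn', ['Babs'])]
#   [(MOD_OP_INTEGER['replace'], 'mail', ['[email protected]'])]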
def unparse(self,dn,record):
"""
dn
string-representation of distinguished name
record
Either a dictionary holding the LDAP entry {attrtype:record}
or a list with a modify list like for LDAPObject.modify().
"""
if not record:
# Simply ignore empty records
return
# Start with line containing the distinguished name
self._unparseAttrTypeandValue('dn',dn)
# Dispatch to record type specific writers
if isinstance(record,types.DictType):
self._unparseEntryRecord(record)
elif isinstance(record,types.ListType):
self._unparseChangeRecord(record)
else:
raise ValueError, "Argument record must be dictionary or list"
# Write empty line separating the records
self._output_file.write(self._line_sep)
# Count records written
self.records_written = self.records_written+1
return # unparse()
def CreateLDIF(dn,record,base64_attrs=None,cols=76):
"""
Create a single formatted LDIF record, including the trailing empty line.
This is a compatibility function. Its use is deprecated!
dn
string-representation of distinguished name
record
Either a dictionary holding the LDAP entry {attrtype:record}
or a list with a modify list like for LDAPObject.modify().
base64_attrs
list of attribute types to be base64-encoded in any case
cols
Specifies how many columns a line may have before it's
folded into many lines.
"""
f = StringIO()
ldif_writer = LDIFWriter(f,base64_attrs,cols,'\n')
ldif_writer.unparse(dn,record)
s = f.getvalue()
f.close()
return s
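# A minimal usage sketch of the deprecated helper above (hypothetical entry):
#   print CreateLDIF('cn=Babs,dc=example,dc=com',
#                    {'cn': ['Babs'], 'objectClass': ['person']})
# which yields a "dn:" line, one "attr: value" line per value in sorted
# attribute order, and a trailing empty line.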
class LDIFParser:
"""
Base class for an LDIF parser. Applications should subclass this
class and override the handle() method to implement something meaningful.
Public class attributes:
records_read
Counter for records processed so far
"""
def _stripLineSep(self,s):
"""
Strip trailing line separators from s, but no other whitespace
"""
if s[-2:]=='\r\n':
return s[:-2]
elif s[-1:]=='\n':
return s[:-1]
else:
return s
def __init__(
self,
input_file,
ignored_attr_types=None,
max_entries=0,
process_url_schemes=None,
line_sep='\n'
):
"""
Parameters:
input_file
File-object to read the LDIF input from
ignored_attr_types
Attributes with these attribute type names will be ignored.
max_entries
If non-zero specifies the maximum number of entries to be
read from f.
process_url_schemes
List containing strings with URLs schemes to process with urllib.
An empty list turns off all URL processing and the attribute
is ignored completely.
line_sep
String used as line separator
"""
self._input_file = input_file
self._max_entries = max_entries
self._process_url_schemes = list_dict([s.lower() for s in (process_url_schemes or [])])
self._ignored_attr_types = list_dict([a.lower() for a in (ignored_attr_types or [])])
self._line_sep = line_sep
self.records_read = 0
def handle(self,dn,entry):
"""
Process a single content LDIF record. This method should be
implemented by applications using LDIFParser.
"""
def _unfoldLDIFLine(self):
"""
Unfold several folded physical lines (continuations starting with a space) into one logical line
"""
unfolded_lines = [ self._stripLineSep(self._line) ]
self._line = self._input_file.readline()
while self._line and self._line[0]==' ':
unfolded_lines.append(self._stripLineSep(self._line[1:]))
self._line = self._input_file.readline()
return ''.join(unfolded_lines)
def _parseAttrTypeandValue(self):
"""
Parse a single attribute type and value pair from one or
more lines of LDIF data
"""
# Reading new attribute line
unfolded_line = self._unfoldLDIFLine()
# Ignore comments which can also be folded
while unfolded_line and unfolded_line[0]=='#':
unfolded_line = self._unfoldLDIFLine()
if not unfolded_line or unfolded_line=='\n' or unfolded_line=='\r\n':
return None,None
try:
colon_pos = unfolded_line.index(':')
except ValueError:
# Treat malformed lines without colon as non-existent
return None,None
attr_type = unfolded_line[0:colon_pos]
# if needed the attribute value is base64-decoded
value_spec = unfolded_line[colon_pos:colon_pos+2]
if value_spec=='::':
# attribute value needs base64-decoding
attr_value = base64.decodestring(unfolded_line[colon_pos+2:])
elif value_spec==':<':
# fetch attribute value from URL
url = unfolded_line[colon_pos+2:].strip()
attr_value = None
if self._process_url_schemes:
u = urlparse.urlparse(url)
if self._process_url_schemes.has_key(u[0]):
attr_value = urllib.urlopen(url).read()
elif value_spec==':\r\n' or value_spec=='\n':
attr_value = ''
else:
attr_value = unfolded_line[colon_pos+2:].lstrip()
return attr_type,attr_value
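# Illustrative only: the value specs handled above map LDIF lines to
# (attr_type, attr_value) pairs, e.g.
#   'cn: Babs'       -> ('cn', 'Babs')
#   'cn:: QmFicw=='  -> ('cn', 'Babs')   # base64-decoded
#   'photo:< file://x.jpg' fetches the value via urllib if the 'file' scheme
#   is listed in process_url_schemes, and yields None otherwise.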
def parse(self):
"""
Continuously read and parse LDIF records
"""
self._line = self._input_file.readline()
while self._line and \
(not self._max_entries or self.records_read<self._max_entries):
# Reset record
version = None; dn = None; changetype = None; modop = None; entry = {}
attr_type,attr_value = self._parseAttrTypeandValue()
while attr_type!=None and attr_value!=None:
if attr_type=='dn':
# attr type and value pair was DN of LDIF record
if dn!=None:
raise ValueError, 'Two lines starting with dn: in one record.'
if not is_dn(attr_value):
raise ValueError, 'No valid string-representation of distinguished name %s.' % (repr(attr_value))
dn = attr_value
elif attr_type=='version' and dn is None:
version = 1
elif attr_type=='changetype':
# attr type and value pair was changetype of LDIF record
if dn is None:
raise ValueError, 'Read changetype: before getting valid dn: line.'
if changetype!=None:
raise ValueError, 'Two lines starting with changetype: in one record.'
if not valid_changetype_dict.has_key(attr_value):
raise ValueError, 'changetype value %s is invalid.' % (repr(attr_value))
changetype = attr_value
elif attr_value!=None and \
not self._ignored_attr_types.has_key(attr_type.lower()):
# Add the attribute to the entry if not ignored attribute
if entry.has_key(attr_type):
entry[attr_type].append(attr_value)
else:
entry[attr_type]=[attr_value]
# Read the next line within an entry
attr_type,attr_value = self._parseAttrTypeandValue()
if entry:
# append entry to result list
self.handle(dn,entry)
self.records_read = self.records_read+1
return # parse()
class LDIFRecordList(LDIFParser):
"""
Collect all records of LDIF input into a single list
of 2-tuples (dn,entry). It can be a memory hog!
"""
def __init__(
self,
input_file,
ignored_attr_types=None,max_entries=0,process_url_schemes=None
):
"""
See LDIFParser.__init__()
Additional Parameters:
all_records
List instance for storing parsed records
"""
LDIFParser.__init__(self,input_file,ignored_attr_types,max_entries,process_url_schemes)
self.all_records = []
def handle(self,dn,entry):
"""
Append single record to dictionary of all records.
"""
self.all_records.append((dn,entry))
class LDIFCopy(LDIFParser):
"""
Copy LDIF input to LDIF output containing all data retrieved
via URLs
"""
def __init__(
self,
input_file,output_file,
ignored_attr_types=None,max_entries=0,process_url_schemes=None,
base64_attrs=None,cols=76,line_sep='\n'
):
"""
See LDIFParser.__init__() and LDIFWriter.__init__()
"""
LDIFParser.__init__(self,input_file,ignored_attr_types,max_entries,process_url_schemes)
self._output_ldif = LDIFWriter(output_file,base64_attrs,cols,line_sep)
def handle(self,dn,entry):
"""
Write single LDIF record to output file.
"""
self._output_ldif.unparse(dn,entry)
def ParseLDIF(f,ignore_attrs=None,maxentries=0):
"""
Parse LDIF records read from file.
This is a compatibility function. Its use is deprecated!
"""
ldif_parser = LDIFRecordList(
f,ignored_attr_types=ignore_attrs,max_entries=maxentries,process_url_schemes=0
)
ldif_parser.parse()
return ldif_parser.all_records
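# Illustrative round trip (hypothetical file name):
#   for dn, entry in ParseLDIF(open('entries.ldif', 'rb')):
#       print dn, entry.get('objectClass', [])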
| apache-2.0 |
smart-developerr/my-first-blog | Lib/site-packages/django/contrib/staticfiles/utils.py | 335 | 1976 | import fnmatch
import os
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
def matches_patterns(path, patterns=None):
"""
Return True or False depending on whether the ``path`` should be
ignored (if it matches any pattern in ``patterns``).
"""
if patterns is None:
patterns = []
for pattern in patterns:
if fnmatch.fnmatchcase(path, pattern):
return True
return False
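# Illustrative only (hypothetical paths):
#   matches_patterns('admin/css/base.css', ['*.css'])    # -> True
#   matches_patterns('admin/js/core.js', ['CVS', '*~'])  # -> False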
def get_files(storage, ignore_patterns=None, location=''):
"""
Recursively walk the storage directories yielding the paths
of all files that should be copied.
"""
if ignore_patterns is None:
ignore_patterns = []
directories, files = storage.listdir(location)
for fn in files:
if matches_patterns(fn, ignore_patterns):
continue
if location:
fn = os.path.join(location, fn)
yield fn
for dir in directories:
if matches_patterns(dir, ignore_patterns):
continue
if location:
dir = os.path.join(location, dir)
for fn in get_files(storage, ignore_patterns, dir):
yield fn
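# A minimal usage sketch (hypothetical storage location):
#   from django.core.files.storage import FileSystemStorage
#   storage = FileSystemStorage(location='/srv/static')
#   for path in get_files(storage, ignore_patterns=['CVS', '*~']):
#       pass  # each path is relative to the storage root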
def check_settings(base_url=None):
"""
Checks if the staticfiles settings have sane values.
"""
if base_url is None:
base_url = settings.STATIC_URL
if not base_url:
raise ImproperlyConfigured(
"You're using the staticfiles app "
"without having set the required STATIC_URL setting.")
if settings.MEDIA_URL == base_url:
raise ImproperlyConfigured("The MEDIA_URL and STATIC_URL "
"settings must have different values")
if ((settings.MEDIA_ROOT and settings.STATIC_ROOT) and
(settings.MEDIA_ROOT == settings.STATIC_ROOT)):
raise ImproperlyConfigured("The MEDIA_ROOT and STATIC_ROOT "
"settings must have different values")
| gpl-3.0 |
kennethreitz/python-logplex | logplex/packages/requests/packages/charade/eucjpprober.py | 206 | 3768 | ######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
import sys
from . import constants
from .mbcharsetprober import MultiByteCharSetProber
from .codingstatemachine import CodingStateMachine
from .chardistribution import EUCJPDistributionAnalysis
from .jpcntx import EUCJPContextAnalysis
from .mbcssm import EUCJPSMModel
class EUCJPProber(MultiByteCharSetProber):
def __init__(self):
MultiByteCharSetProber.__init__(self)
self._mCodingSM = CodingStateMachine(EUCJPSMModel)
self._mDistributionAnalyzer = EUCJPDistributionAnalysis()
self._mContextAnalyzer = EUCJPContextAnalysis()
self.reset()
def reset(self):
MultiByteCharSetProber.reset(self)
self._mContextAnalyzer.reset()
def get_charset_name(self):
return "EUC-JP"
def feed(self, aBuf):
aLen = len(aBuf)
for i in range(0, aLen):
# PY3K: aBuf is a byte array, so aBuf[i] is an int, not a byte
codingState = self._mCodingSM.next_state(aBuf[i])
if codingState == constants.eError:
if constants._debug:
sys.stderr.write(self.get_charset_name()
+ ' prober hit error at byte ' + str(i)
+ '\n')
self._mState = constants.eNotMe
break
elif codingState == constants.eItsMe:
self._mState = constants.eFoundIt
break
elif codingState == constants.eStart:
charLen = self._mCodingSM.get_current_charlen()
if i == 0:
self._mLastChar[1] = aBuf[0]
self._mContextAnalyzer.feed(self._mLastChar, charLen)
self._mDistributionAnalyzer.feed(self._mLastChar, charLen)
else:
self._mContextAnalyzer.feed(aBuf[i - 1:i + 1], charLen)
self._mDistributionAnalyzer.feed(aBuf[i - 1:i + 1],
charLen)
self._mLastChar[0] = aBuf[aLen - 1]
if self.get_state() == constants.eDetecting:
if (self._mContextAnalyzer.got_enough_data() and
(self.get_confidence() > constants.SHORTCUT_THRESHOLD)):
self._mState = constants.eFoundIt
return self.get_state()
def get_confidence(self):
contxtCf = self._mContextAnalyzer.get_confidence()
distribCf = self._mDistributionAnalyzer.get_confidence()
return max(contxtCf, distribCf)
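# Illustrative usage (the byte string is "konnichiwa" in EUC-JP; it is only
# an example, not part of the upstream module):
#   prober = EUCJPProber()
#   prober.feed(b'\xa4\xb3\xa4\xf3\xa4\xcb\xa4\xc1\xa4\xcf')
#   prober.get_charset_name()   # -> 'EUC-JP'
#   prober.get_confidence()     # -> float between 0 and 1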
| bsd-2-clause |
xin3liang/platform_external_chromium_org_third_party_WebKit | Tools/Scripts/webkitpy/style/checkers/xcodeproj_unittest.py | 48 | 3070 | # Copyright (C) 2011 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Unit test for xcodeproj.py."""
import unittest
import xcodeproj
class TestErrorHandler(object):
"""Error handler for XcodeProjectFileChecker unittests"""
def __init__(self, handler):
self.handler = handler
def turn_off_line_filtering(self):
pass
def __call__(self, line_number, category, confidence, message):
self.handler(self, line_number, category, confidence, message)
return True
class XcodeProjectFileCheckerTest(unittest.TestCase):
"""Tests XcodeProjectFileChecker class."""
def assert_no_error(self, lines):
def handler(error_handler, line_number, category, confidence, message):
self.fail('Unexpected error: %d %s %d %s' % (line_number, category, confidence, message))
error_handler = TestErrorHandler(handler)
checker = xcodeproj.XcodeProjectFileChecker('', error_handler)
checker.check(lines)
def assert_error(self, lines, expected_message):
self.had_error = False
def handler(error_handler, line_number, category, confidence, message):
self.assertEqual(expected_message, message)
self.had_error = True
error_handler = TestErrorHandler(handler)
checker = xcodeproj.XcodeProjectFileChecker('', error_handler)
checker.check(lines)
self.assertTrue(self.had_error, '%s should have error: %s.' % (lines, expected_message))
def test_detect_development_region(self):
self.assert_no_error(['developmentRegion = English;'])
self.assert_error([''], 'Missing "developmentRegion = English".')
self.assert_error(['developmentRegion = Japanese;'],
'developmentRegion is not English.')
| bsd-3-clause |
Bysmyyr/blink-crosswalk | Tools/Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py | 17 | 38706 | # Copyright (C) 2012 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Unit tests for run_perf_tests."""
import StringIO
import datetime
import json
import re
import unittest
from webkitpy.common.host_mock import MockHost
from webkitpy.common.system.outputcapture import OutputCapture
from webkitpy.layout_tests.port.driver import DriverOutput
from webkitpy.layout_tests.port.test import TestPort
from webkitpy.performance_tests.perftest import ChromiumStylePerfTest
from webkitpy.performance_tests.perftest import DEFAULT_TEST_RUNNER_COUNT
from webkitpy.performance_tests.perftest import PerfTest
from webkitpy.performance_tests.perftestsrunner import PerfTestsRunner
class MainTest(unittest.TestCase):
def create_runner(self, args=[]):
options, parsed_args = PerfTestsRunner._parse_args(args)
test_port = TestPort(host=MockHost(), options=options)
runner = PerfTestsRunner(args=args, port=test_port)
runner._host.filesystem.maybe_make_directory(runner._base_path, 'inspector')
runner._host.filesystem.maybe_make_directory(runner._base_path, 'Bindings')
runner._host.filesystem.maybe_make_directory(runner._base_path, 'Parser')
return runner, test_port
def _add_file(self, runner, dirname, filename, content=True):
dirname = runner._host.filesystem.join(runner._base_path, dirname) if dirname else runner._base_path
runner._host.filesystem.maybe_make_directory(dirname)
runner._host.filesystem.files[runner._host.filesystem.join(dirname, filename)] = content
def test_collect_tests(self):
runner, port = self.create_runner()
self._add_file(runner, 'inspector', 'a_file.html', 'a content')
tests = runner._collect_tests()
self.assertEqual(len(tests), 1)
def _collect_tests_and_sort_test_name(self, runner):
return sorted([test.test_name() for test in runner._collect_tests()])
def test_collect_tests_with_multile_files(self):
runner, port = self.create_runner(args=['PerformanceTests/test1.html', 'test2.html'])
def add_file(filename):
port.host.filesystem.files[runner._host.filesystem.join(runner._base_path, filename)] = 'some content'
add_file('test1.html')
add_file('test2.html')
add_file('test3.html')
port.host.filesystem.chdir(runner._port.perf_tests_dir()[:runner._port.perf_tests_dir().rfind(runner._host.filesystem.sep)])
self.assertItemsEqual(self._collect_tests_and_sort_test_name(runner), ['test1.html', 'test2.html'])
def test_collect_tests_with_skipped_list(self):
runner, port = self.create_runner()
self._add_file(runner, 'inspector', 'test1.html')
self._add_file(runner, 'inspector', 'unsupported_test1.html')
self._add_file(runner, 'inspector', 'test2.html')
self._add_file(runner, 'inspector/resources', 'resource_file.html')
self._add_file(runner, 'unsupported', 'unsupported_test2.html')
port.skipped_perf_tests = lambda: ['inspector/unsupported_test1.html', 'unsupported']
self.assertItemsEqual(self._collect_tests_and_sort_test_name(runner), ['inspector/test1.html', 'inspector/test2.html'])
def test_collect_tests_with_skipped_list_and_files(self):
runner, port = self.create_runner(args=['Suite/Test1.html', 'Suite/SkippedTest1.html', 'SkippedSuite/Test1.html'])
self._add_file(runner, 'SkippedSuite', 'Test1.html')
self._add_file(runner, 'SkippedSuite', 'Test2.html')
self._add_file(runner, 'Suite', 'Test1.html')
self._add_file(runner, 'Suite', 'Test2.html')
self._add_file(runner, 'Suite', 'SkippedTest1.html')
self._add_file(runner, 'Suite', 'SkippedTest2.html')
port.skipped_perf_tests = lambda: ['Suite/SkippedTest1.html', 'Suite/SkippedTest1.html', 'SkippedSuite']
self.assertItemsEqual(self._collect_tests_and_sort_test_name(runner),
['SkippedSuite/Test1.html', 'Suite/SkippedTest1.html', 'Suite/Test1.html'])
def test_collect_tests_with_ignored_skipped_list(self):
runner, port = self.create_runner(args=['--force'])
self._add_file(runner, 'inspector', 'test1.html')
self._add_file(runner, 'inspector', 'unsupported_test1.html')
self._add_file(runner, 'inspector', 'test2.html')
self._add_file(runner, 'inspector/resources', 'resource_file.html')
self._add_file(runner, 'unsupported', 'unsupported_test2.html')
port.skipped_perf_tests = lambda: ['inspector/unsupported_test1.html', 'unsupported']
self.assertItemsEqual(self._collect_tests_and_sort_test_name(runner), ['inspector/test1.html', 'inspector/test2.html', 'inspector/unsupported_test1.html', 'unsupported/unsupported_test2.html'])
def test_default_args(self):
runner, port = self.create_runner()
options, args = PerfTestsRunner._parse_args([])
self.assertTrue(options.build)
self.assertEqual(options.time_out_ms, 600 * 1000)
self.assertTrue(options.generate_results)
self.assertTrue(options.show_results)
self.assertTrue(options.use_skipped_list)
self.assertEqual(options.repeat, 1)
self.assertEqual(options.test_runner_count, DEFAULT_TEST_RUNNER_COUNT)
def test_parse_args(self):
runner, port = self.create_runner()
options, args = PerfTestsRunner._parse_args([
'--build-directory=folder42',
'--platform=platform42',
'--builder-name', 'webkit-mac-1',
'--build-number=56',
'--time-out-ms=42',
'--no-show-results',
'--reset-results',
'--output-json-path=a/output.json',
'--slave-config-json-path=a/source.json',
'--test-results-server=somehost',
'--additional-driver-flag=--enable-threaded-parser',
'--additional-driver-flag=--awesomesauce',
'--repeat=5',
'--test-runner-count=5',
'--debug'])
self.assertTrue(options.build)
self.assertEqual(options.build_directory, 'folder42')
self.assertEqual(options.platform, 'platform42')
self.assertEqual(options.builder_name, 'webkit-mac-1')
self.assertEqual(options.build_number, '56')
self.assertEqual(options.time_out_ms, '42')
self.assertEqual(options.configuration, 'Debug')
self.assertFalse(options.show_results)
self.assertTrue(options.reset_results)
self.assertEqual(options.output_json_path, 'a/output.json')
self.assertEqual(options.slave_config_json_path, 'a/source.json')
self.assertEqual(options.test_results_server, 'somehost')
self.assertEqual(options.additional_driver_flag, ['--enable-threaded-parser', '--awesomesauce'])
self.assertEqual(options.repeat, 5)
self.assertEqual(options.test_runner_count, 5)
def test_upload_json(self):
runner, port = self.create_runner()
port.host.filesystem.files['/mock-checkout/some.json'] = 'some content'
class MockFileUploader:
called = []
upload_single_text_file_throws = False
upload_single_text_file_return_value = None
@classmethod
def reset(cls):
cls.called = []
cls.upload_single_text_file_throws = False
cls.upload_single_text_file_return_value = None
def __init__(mock, url, timeout):
self.assertEqual(url, 'https://some.host/some/path')
self.assertTrue(isinstance(timeout, int) and timeout)
mock.called.append('FileUploader')
def upload_single_text_file(mock, filesystem, content_type, filename):
self.assertEqual(filesystem, port.host.filesystem)
self.assertEqual(content_type, 'application/json')
self.assertEqual(filename, 'some.json')
mock.called.append('upload_single_text_file')
if mock.upload_single_text_file_throws:
raise Exception
return mock.upload_single_text_file_return_value
MockFileUploader.upload_single_text_file_return_value = StringIO.StringIO('OK')
self.assertTrue(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
self.assertEqual(MockFileUploader.called, ['FileUploader', 'upload_single_text_file'])
MockFileUploader.reset()
MockFileUploader.upload_single_text_file_return_value = StringIO.StringIO('Some error')
output = OutputCapture()
output.capture_output()
self.assertFalse(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
_, _, logs = output.restore_output()
self.assertEqual(logs, 'Uploaded JSON to https://some.host/some/path but got a bad response:\nSome error\n')
# Throwing an exception upload_single_text_file shouldn't blow up _upload_json
MockFileUploader.reset()
MockFileUploader.upload_single_text_file_throws = True
self.assertFalse(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
self.assertEqual(MockFileUploader.called, ['FileUploader', 'upload_single_text_file'])
MockFileUploader.reset()
MockFileUploader.upload_single_text_file_return_value = StringIO.StringIO('{"status": "OK"}')
self.assertTrue(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
self.assertEqual(MockFileUploader.called, ['FileUploader', 'upload_single_text_file'])
MockFileUploader.reset()
MockFileUploader.upload_single_text_file_return_value = StringIO.StringIO('{"status": "SomethingHasFailed", "failureStored": false}')
output = OutputCapture()
output.capture_output()
self.assertFalse(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
_, _, logs = output.restore_output()
serialized_json = json.dumps({'status': 'SomethingHasFailed', 'failureStored': False}, indent=4)
self.assertEqual(logs, 'Uploaded JSON to https://some.host/some/path but got an error:\n%s\n' % serialized_json)
class InspectorPassTestData:
text = 'RESULT group_name: test_name= 42 ms'
output = """Running inspector/pass.html (2 of 2)
RESULT group_name: test_name= 42 ms
Finished: 0.1 s
"""
class EventTargetWrapperTestData:
text = """Running 20 times
Ignoring warm-up run (1502)
1504
1505
1510
1504
1507
1509
1510
1487
1488
1472
1472
1488
1473
1472
1475
1487
1486
1486
1475
1471
Time:
values 1486, 1471, 1510, 1505, 1478, 1490 ms
avg 1490 ms
median 1488 ms
stdev 15.13935 ms
min 1471 ms
max 1510 ms
"""
output = """Running Bindings/event-target-wrapper.html (1 of 2)
RESULT Bindings: event-target-wrapper: Time= 1490.0 ms
median= 1488.0 ms, stdev= 14.11751 ms, min= 1471.0 ms, max= 1510.0 ms
Finished: 0.1 s
"""
results = {'url': 'https://src.chromium.org/viewvc/blink/trunk/PerformanceTests/Bindings/event-target-wrapper.html',
'metrics': {'Time': {'current': [[1486.0, 1471.0, 1510.0, 1505.0, 1478.0, 1490.0]] * 4}}}
class SomeParserTestData:
text = """Running 20 times
Ignoring warm-up run (1115)
Time:
values 1080, 1120, 1095, 1101, 1104 ms
avg 1100 ms
median 1101 ms
stdev 14.50861 ms
min 1080 ms
max 1120 ms
"""
output = """Running Parser/some-parser.html (2 of 2)
RESULT Parser: some-parser: Time= 1100.0 ms
median= 1101.0 ms, stdev= 13.31402 ms, min= 1080.0 ms, max= 1120.0 ms
Finished: 0.1 s
"""
class MemoryTestData:
text = """Running 20 times
Ignoring warm-up run (1115)
Time:
values 1080, 1120, 1095, 1101, 1104 ms
avg 1100 ms
median 1101 ms
stdev 14.50861 ms
min 1080 ms
max 1120 ms
JS Heap:
values 825000, 811000, 848000, 837000, 829000 bytes
avg 830000 bytes
median 829000 bytes
stdev 13784.04875 bytes
min 811000 bytes
max 848000 bytes
Malloc:
values 529000, 511000, 548000, 536000, 521000 bytes
avg 529000 bytes
median 529000 bytes
stdev 14124.44689 bytes
min 511000 bytes
max 548000 bytes
"""
output = """Running 1 tests
Running Parser/memory-test.html (1 of 1)
RESULT Parser: memory-test: Time= 1100.0 ms
median= 1101.0 ms, stdev= 13.31402 ms, min= 1080.0 ms, max= 1120.0 ms
RESULT Parser: memory-test: JSHeap= 830000.0 bytes
median= 829000.0 bytes, stdev= 12649.11064 bytes, min= 811000.0 bytes, max= 848000.0 bytes
RESULT Parser: memory-test: Malloc= 529000.0 bytes
median= 529000.0 bytes, stdev= 12961.48139 bytes, min= 511000.0 bytes, max= 548000.0 bytes
Finished: 0.1 s
"""
results = {'current': [[1080, 1120, 1095, 1101, 1104]] * 4}
js_heap_results = {'current': [[825000, 811000, 848000, 837000, 829000]] * 4}
malloc_results = {'current': [[529000, 511000, 548000, 536000, 521000]] * 4}
class TestDriver:
def run_test(self, driver_input, stop_when_done):
text = ''
timeout = False
crash = False
if driver_input.test_name.endswith('pass.html'):
text = InspectorPassTestData.text
elif driver_input.test_name.endswith('timeout.html'):
timeout = True
elif driver_input.test_name.endswith('failed.html'):
text = None
elif driver_input.test_name.endswith('tonguey.html'):
text = 'we are not expecting an output from perf tests but RESULT blablabla'
elif driver_input.test_name.endswith('crash.html'):
crash = True
elif driver_input.test_name.endswith('event-target-wrapper.html'):
text = EventTargetWrapperTestData.text
elif driver_input.test_name.endswith('some-parser.html'):
text = SomeParserTestData.text
elif driver_input.test_name.endswith('memory-test.html'):
text = MemoryTestData.text
return DriverOutput(text, '', '', '', crash=crash, timeout=timeout)
def start(self):
"""do nothing"""
def stop(self):
"""do nothing"""
class IntegrationTest(unittest.TestCase):
def _normalize_output(self, log):
return re.sub(r'(stdev=\s+\d+\.\d{5})\d+', r'\1', re.sub(r'Finished: [0-9\.]+ s', 'Finished: 0.1 s', log))
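# Illustrative only: _normalize_output() makes timing-dependent log output
# deterministic, e.g.
#   'Finished: 12.345 s'       -> 'Finished: 0.1 s'
#   'stdev= 14.1175123456 ms'  -> 'stdev= 14.11751 ms'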
def _load_output_json(self, runner):
json_content = runner._host.filesystem.read_text_file(runner._output_json_path())
return json.loads(re.sub(r'("stdev":\s*\d+\.\d{5})\d+', r'\1', json_content))
def create_runner(self, args=[], driver_class=TestDriver):
options, parsed_args = PerfTestsRunner._parse_args(args)
test_port = TestPort(host=MockHost(), options=options)
test_port.create_driver = lambda worker_number=None, no_timeout=False: driver_class()
runner = PerfTestsRunner(args=args, port=test_port)
runner._host.filesystem.maybe_make_directory(runner._base_path, 'inspector')
runner._host.filesystem.maybe_make_directory(runner._base_path, 'Bindings')
runner._host.filesystem.maybe_make_directory(runner._base_path, 'Parser')
return runner, test_port
def run_test(self, test_name):
runner, port = self.create_runner()
tests = [ChromiumStylePerfTest(port, test_name, runner._host.filesystem.join('some-dir', test_name))]
return runner._run_tests_set(tests) == 0
def test_run_passing_test(self):
self.assertTrue(self.run_test('pass.html'))
def test_run_silent_test(self):
self.assertFalse(self.run_test('silent.html'))
def test_run_failed_test(self):
self.assertFalse(self.run_test('failed.html'))
def test_run_tonguey_test(self):
self.assertFalse(self.run_test('tonguey.html'))
def test_run_timeout_test(self):
self.assertFalse(self.run_test('timeout.html'))
def test_run_crash_test(self):
self.assertFalse(self.run_test('crash.html'))
def _tests_for_runner(self, runner, test_names):
filesystem = runner._host.filesystem
tests = []
for test in test_names:
path = filesystem.join(runner._base_path, test)
dirname = filesystem.dirname(path)
if test.startswith('inspector/'):
tests.append(ChromiumStylePerfTest(runner._port, test, path))
else:
tests.append(PerfTest(runner._port, test, path))
return tests
def test_run_test_set(self):
runner, port = self.create_runner()
tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
output = OutputCapture()
output.capture_output()
try:
unexpected_result_count = runner._run_tests_set(tests)
finally:
stdout, stderr, log = output.restore_output()
self.assertEqual(unexpected_result_count, len(tests) - 1)
self.assertTrue('\nRESULT group_name: test_name= 42 ms\n' in log)
def test_run_test_set_kills_drt_per_run(self):
class TestDriverWithStopCount(TestDriver):
stop_count = 0
def stop(self):
TestDriverWithStopCount.stop_count += 1
runner, port = self.create_runner(driver_class=TestDriverWithStopCount)
tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
unexpected_result_count = runner._run_tests_set(tests)
self.assertEqual(TestDriverWithStopCount.stop_count, 6)
def test_run_test_set_for_parser_tests(self):
runner, port = self.create_runner()
tests = self._tests_for_runner(runner, ['Bindings/event-target-wrapper.html', 'Parser/some-parser.html'])
output = OutputCapture()
output.capture_output()
try:
unexpected_result_count = runner._run_tests_set(tests)
finally:
stdout, stderr, log = output.restore_output()
self.assertEqual(unexpected_result_count, 0)
self.assertEqual(self._normalize_output(log), EventTargetWrapperTestData.output + SomeParserTestData.output)
def test_run_memory_test(self):
runner, port = self.create_runner_and_setup_results_template()
runner._timestamp = 123456789
port.host.filesystem.write_text_file(runner._base_path + '/Parser/memory-test.html', 'some content')
output = OutputCapture()
output.capture_output()
try:
unexpected_result_count = runner.run()
finally:
stdout, stderr, log = output.restore_output()
self.assertEqual(unexpected_result_count, 0)
self.assertEqual(self._normalize_output(log), MemoryTestData.output + '\nMOCK: user.open_url: file://...\n')
parser_tests = self._load_output_json(runner)[0]['tests']['Parser']['tests']
self.assertEqual(parser_tests['memory-test']['metrics']['Time'], MemoryTestData.results)
self.assertEqual(parser_tests['memory-test']['metrics']['JSHeap'], MemoryTestData.js_heap_results)
self.assertEqual(parser_tests['memory-test']['metrics']['Malloc'], MemoryTestData.malloc_results)
def _test_run_with_json_output(self, runner, filesystem, upload_succeeds=False, results_shown=True, expected_exit_code=0, repeat=1, compare_logs=True):
filesystem.write_text_file(runner._base_path + '/inspector/pass.html', 'some content')
filesystem.write_text_file(runner._base_path + '/Bindings/event-target-wrapper.html', 'some content')
uploaded = [False]
def mock_upload_json(hostname, json_path, host_path=None):
# FIXME: Get rid of the hard-coded perf.webkit.org once we've completed the transition.
self.assertIn(hostname, ['some.host'])
self.assertIn(json_path, ['/mock-checkout/output.json'])
self.assertIn(host_path, [None, '/api/report'])
uploaded[0] = upload_succeeds
return upload_succeeds
runner._upload_json = mock_upload_json
runner._timestamp = 123456789
runner._utc_timestamp = datetime.datetime(2013, 2, 8, 15, 19, 37, 460000)
output_capture = OutputCapture()
output_capture.capture_output()
try:
self.assertEqual(runner.run(), expected_exit_code)
finally:
stdout, stderr, logs = output_capture.restore_output()
if not expected_exit_code and compare_logs:
expected_logs = ''
for i in xrange(repeat):
runs = ' (Run %d of %d)' % (i + 1, repeat) if repeat > 1 else ''
expected_logs += 'Running 2 tests%s\n' % runs + EventTargetWrapperTestData.output + InspectorPassTestData.output
if results_shown:
expected_logs += 'MOCK: user.open_url: file://...\n'
self.assertEqual(self._normalize_output(logs), expected_logs)
self.assertEqual(uploaded[0], upload_succeeds)
return logs
_event_target_wrapper_and_inspector_results = {
"Bindings":
{"url": "https://src.chromium.org/viewvc/blink/trunk/PerformanceTests/Bindings",
"tests": {"event-target-wrapper": EventTargetWrapperTestData.results}}}
def test_run_with_json_output(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server=some.host'])
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True)
self.assertEqual(self._load_output_json(runner), [{
"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}])
filesystem = port.host.filesystem
self.assertTrue(filesystem.isfile(runner._output_json_path()))
self.assertTrue(filesystem.isfile(filesystem.splitext(runner._output_json_path())[0] + '.html'))
def test_run_with_description(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server=some.host', '--description', 'some description'])
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True)
self.assertEqual(self._load_output_json(runner), [{
"buildTime": "2013-02-08T15:19:37.460000", "description": "some description",
"tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}])
def create_runner_and_setup_results_template(self, args=[]):
runner, port = self.create_runner(args)
filesystem = port.host.filesystem
filesystem.write_text_file(runner._base_path + '/resources/results-template.html',
'BEGIN<script src="%AbsolutePathToWebKitTrunk%/some.js"></script>'
'<script src="%AbsolutePathToWebKitTrunk%/other.js"></script><script>%PeformanceTestsResultsJSON%</script>END')
filesystem.write_text_file(runner._base_path + '/Dromaeo/resources/dromaeo/web/lib/jquery-1.6.4.js', 'jquery content')
return runner, port
def test_run_respects_no_results(self):
runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server=some.host', '--no-results'])
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=False, results_shown=False)
self.assertFalse(port.host.filesystem.isfile('/mock-checkout/output.json'))
def test_run_generates_json_by_default(self):
runner, port = self.create_runner_and_setup_results_template()
filesystem = port.host.filesystem
output_json_path = runner._output_json_path()
results_page_path = filesystem.splitext(output_json_path)[0] + '.html'
self.assertFalse(filesystem.isfile(output_json_path))
self.assertFalse(filesystem.isfile(results_page_path))
self._test_run_with_json_output(runner, port.host.filesystem)
self.assertEqual(self._load_output_json(runner), [{
"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}])
self.assertTrue(filesystem.isfile(output_json_path))
self.assertTrue(filesystem.isfile(results_page_path))
def test_run_merges_output_by_default(self):
runner, port = self.create_runner_and_setup_results_template()
filesystem = port.host.filesystem
output_json_path = runner._output_json_path()
filesystem.write_text_file(output_json_path, '[{"previous": "results"}]')
self._test_run_with_json_output(runner, port.host.filesystem)
self.assertEqual(self._load_output_json(runner), [{"previous": "results"}, {
"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}])
self.assertTrue(filesystem.isfile(filesystem.splitext(output_json_path)[0] + '.html'))
def test_run_respects_reset_results(self):
runner, port = self.create_runner_and_setup_results_template(args=["--reset-results"])
filesystem = port.host.filesystem
output_json_path = runner._output_json_path()
filesystem.write_text_file(output_json_path, '[{"previous": "results"}]')
self._test_run_with_json_output(runner, port.host.filesystem)
self.assertEqual(self._load_output_json(runner), [{
"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}])
self.assertTrue(filesystem.isfile(filesystem.splitext(output_json_path)[0] + '.html'))
def test_run_generates_and_show_results_page(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json'])
page_shown = []
port.show_results_html_file = lambda path: page_shown.append(path)
filesystem = port.host.filesystem
self._test_run_with_json_output(runner, filesystem, results_shown=False)
expected_entry = {"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}
self.maxDiff = None
self.assertEqual(runner._output_json_path(), '/mock-checkout/output.json')
self.assertEqual(self._load_output_json(runner), [expected_entry])
self.assertEqual(filesystem.read_text_file('/mock-checkout/output.html'),
'BEGIN<script src="/test.checkout/some.js"></script><script src="/test.checkout/other.js"></script>'
'<script>%s</script>END' % port.host.filesystem.read_text_file(runner._output_json_path()))
self.assertEqual(page_shown[0], '/mock-checkout/output.html')
self._test_run_with_json_output(runner, filesystem, results_shown=False)
self.assertEqual(runner._output_json_path(), '/mock-checkout/output.json')
self.assertEqual(self._load_output_json(runner), [expected_entry, expected_entry])
self.assertEqual(filesystem.read_text_file('/mock-checkout/output.html'),
'BEGIN<script src="/test.checkout/some.js"></script><script src="/test.checkout/other.js"></script>'
'<script>%s</script>END' % port.host.filesystem.read_text_file(runner._output_json_path()))
def test_run_respects_no_show_results(self):
show_results_html_file = lambda path: page_shown.append(path)
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json'])
page_shown = []
port.show_results_html_file = show_results_html_file
self._test_run_with_json_output(runner, port.host.filesystem, results_shown=False)
self.assertEqual(page_shown[0], '/mock-checkout/output.html')
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--no-show-results'])
page_shown = []
port.show_results_html_file = show_results_html_file
self._test_run_with_json_output(runner, port.host.filesystem, results_shown=False)
self.assertEqual(page_shown, [])
def test_run_with_bad_output_json(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json'])
port.host.filesystem.write_text_file('/mock-checkout/output.json', 'bad json')
self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_MERGE)
port.host.filesystem.write_text_file('/mock-checkout/output.json', '{"another bad json": "1"}')
self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_MERGE)
def test_run_with_slave_config_json(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--slave-config-json-path=/mock-checkout/slave-config.json', '--test-results-server=some.host'])
port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', '{"key": "value"}')
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True)
self.assertEqual(self._load_output_json(runner), [{
"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}, "builderKey": "value"}])
def test_run_with_bad_slave_config_json(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--slave-config-json-path=/mock-checkout/slave-config.json', '--test-results-server=some.host'])
logs = self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_SOURCE_JSON)
self.assertTrue('Missing slave configuration JSON file: /mock-checkout/slave-config.json' in logs)
port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', 'bad json')
self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_SOURCE_JSON)
port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', '["another bad json"]')
self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_SOURCE_JSON)
def test_run_with_multiple_repositories(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server=some.host'])
port.repository_paths = lambda: [('webkit', '/mock-checkout'), ('some', '/mock-checkout/some')]
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True)
self.assertEqual(self._load_output_json(runner), [{
"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"webkit": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"},
"some": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}])
def test_run_with_upload_json(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server', 'some.host', '--platform', 'platform1', '--builder-name', 'builder1', '--build-number', '123'])
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True)
generated_json = json.loads(port.host.filesystem.files['/mock-checkout/output.json'])
self.assertEqual(generated_json[0]['platform'], 'platform1')
self.assertEqual(generated_json[0]['builderName'], 'builder1')
self.assertEqual(generated_json[0]['buildNumber'], 123)
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=False, expected_exit_code=PerfTestsRunner.EXIT_CODE_FAILED_UPLOADING)
def test_run_with_upload_json_should_generate_perf_webkit_json(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server', 'some.host', '--platform', 'platform1', '--builder-name', 'builder1', '--build-number', '123',
'--slave-config-json-path=/mock-checkout/slave-config.json'])
port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', '{"key": "value1"}')
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True)
generated_json = json.loads(port.host.filesystem.files['/mock-checkout/output.json'])
self.assertTrue(isinstance(generated_json, list))
self.assertEqual(len(generated_json), 1)
output = generated_json[0]
self.maxDiff = None
self.assertEqual(output['platform'], 'platform1')
self.assertEqual(output['buildNumber'], 123)
self.assertEqual(output['buildTime'], '2013-02-08T15:19:37.460000')
self.assertEqual(output['builderName'], 'builder1')
self.assertEqual(output['builderKey'], 'value1')
self.assertEqual(output['revisions'], {'blink': {'revision': '5678', 'timestamp': '2013-02-01 08:48:05 +0000'}})
self.assertEqual(output['tests'].keys(), ['Bindings'])
self.assertEqual(sorted(output['tests']['Bindings'].keys()), ['tests', 'url'])
self.assertEqual(output['tests']['Bindings']['url'], 'https://src.chromium.org/viewvc/blink/trunk/PerformanceTests/Bindings')
self.assertEqual(output['tests']['Bindings']['tests'].keys(), ['event-target-wrapper'])
self.assertEqual(output['tests']['Bindings']['tests']['event-target-wrapper'], {
'url': 'https://src.chromium.org/viewvc/blink/trunk/PerformanceTests/Bindings/event-target-wrapper.html',
'metrics': {'Time': {'current': [[1486.0, 1471.0, 1510.0, 1505.0, 1478.0, 1490.0]] * 4}}})
def test_run_with_repeat(self):
self.maxDiff = None
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server=some.host', '--repeat', '5'])
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True, repeat=5)
self.assertEqual(self._load_output_json(runner), [
{"buildTime": "2013-02-08T15:19:37.460000",
"tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}},
{"buildTime": "2013-02-08T15:19:37.460000",
"tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}},
{"buildTime": "2013-02-08T15:19:37.460000",
"tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}},
{"buildTime": "2013-02-08T15:19:37.460000",
"tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}},
{"buildTime": "2013-02-08T15:19:37.460000",
"tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}])
def test_run_with_test_runner_count(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-runner-count=3'])
self._test_run_with_json_output(runner, port.host.filesystem, compare_logs=False)
generated_json = json.loads(port.host.filesystem.files['/mock-checkout/output.json'])
self.assertTrue(isinstance(generated_json, list))
self.assertEqual(len(generated_json), 1)
output = generated_json[0]['tests']['Bindings']['tests']['event-target-wrapper']['metrics']['Time']['current']
self.assertEqual(len(output), 3)
expectedMetrics = EventTargetWrapperTestData.results['metrics']['Time']['current'][0]
for metrics in output:
self.assertEqual(metrics, expectedMetrics)
| bsd-3-clause |
Lab603/PicEncyclopedias | jni-build/jni/include/tensorflow/python/client/session.py | 3 | 47046 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""A client interface for TensorFlow."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
import threading
import numpy as np
from tensorflow.core.protobuf import config_pb2
from tensorflow.python import pywrap_tensorflow as tf_session
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.ops import session_ops
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.util import compat
from tensorflow.python.util import nest
class SessionInterface(object):
"""Base class for implementations of TensorFlow client sessions."""
@property
def graph(self):
"""The underlying TensorFlow graph, to be used in building Operations."""
raise NotImplementedError('graph')
@property
def sess_str(self):
"""The TensorFlow process to which this session will connect."""
raise NotImplementedError('sess_str')
def run(self, fetches, feed_dict=None, options=None, run_metadata=None):
"""Runs operations in the session. See `Session.run()` for details."""
raise NotImplementedError('run')
def partial_run_setup(self, fetches, feeds=None):
"""Sets up the feeds and fetches for partial runs in the session."""
raise NotImplementedError('partial_run_setup')
def partial_run(self, handle, fetches, feed_dict=None):
"""Continues the execution with additional feeds and fetches."""
raise NotImplementedError('partial_run')
def _get_indexed_slices_value_from_fetches(fetched_vals):
return ops.IndexedSlicesValue(fetched_vals[0], fetched_vals[1],
fetched_vals[2]
if len(fetched_vals) == 3 else None)
def _get_feeds_for_indexed_slices(feed, feed_val):
return list(zip([feed.values, feed.indices] if feed.dense_shape is None else
[feed.values, feed.indices, feed.dense_shape], feed_val))
# List of extensions supported to convert run arguments into actual fetches and
# feeds.
#
# Each element in the list is a tuple of (Type, fetch_fn, feed_fn1, feed_fn2),
# where the function signatures are:
# fetch_fn : Type -> (list of Tensors,
# lambda: list of fetched np.ndarray -> TypeVal)
# feed_fn1 : Type, TypeVal -> list of (Tensor, value)
# feed_fn2 : Type -> list of Tensors
#
# `fetch_fn` describes how to expand fetch into its
# component Tensors and how to contract the fetched results back into
# a single return value.
#
# Each feed function describes how to unpack a single fed value and map it to
# feeds of one or more tensors and their corresponding values: `feed_fn1` is
# used to feed a run, `feed_fn2` to set up a partial run.
#
# TODO(touts): We could reimplement these as specialized _FeedMapper
# implementations after we refactor the feed handling code to use them.
#
# Eventually, this registration could be opened up to support custom Tensor
# expansions.
# pylint: disable=g-long-lambda
_REGISTERED_EXPANSIONS = [
# SparseTensors are fetched as SparseTensorValues. They can be fed
# SparseTensorValues or normal tuples.
(ops.SparseTensor,
lambda fetch: (
[fetch.indices, fetch.values, fetch.shape],
lambda fetched_vals: ops.SparseTensorValue(*fetched_vals)),
lambda feed, feed_val: list(zip(
[feed.indices, feed.values, feed.shape], feed_val)),
lambda feed: [feed.indices, feed.values, feed.shape]),
# IndexedSlices are fetched as IndexedSlicesValues. They can be fed
# IndexedSlicesValues or normal tuples.
(ops.IndexedSlices,
lambda fetch: (
[fetch.values, fetch.indices] if fetch.dense_shape is None
else [fetch.values, fetch.indices, fetch.dense_shape],
_get_indexed_slices_value_from_fetches),
_get_feeds_for_indexed_slices,
lambda feed: [feed.values, feed.indices] if feed.dense_shape is None
else [feed.values, feed.indices, feed.dense_shape]),
# The default catches all other types and performs no expansions.
(object,
lambda fetch: ([fetch], lambda fetched_vals: fetched_vals[0]),
lambda feed, feed_val: [(feed, feed_val)],
lambda feed: [feed])]
# pylint: enable=g-long-lambda
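# Hedged illustration -- the helper below is NOT part of the upstream API and
# exists only to demonstrate the table above. It shows how a fetch expands
# into component tensors and how the fetched ndarrays contract back into a
# single value, which is what _ElementFetchMapper (defined below) does.
def _example_expand_fetch(fetch, fetched_vals):
  """Expand `fetch` via _REGISTERED_EXPANSIONS and rebuild its value."""
  for tensor_type, fetch_fn, _, _ in _REGISTERED_EXPANSIONS:
    if isinstance(fetch, tensor_type):
      component_tensors, contraction_fn = fetch_fn(fetch)
      # `component_tensors` is what the low level run() would fetch;
      # `fetched_vals` stands in for the ndarrays it returned.
      return component_tensors, contraction_fn(fetched_vals)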
class _FetchMapper(object):
"""Definition of the interface provided by fetch mappers.
Fetch mappers are utility classes used by the _FetchHandler to handle
arbitrary structures for the `fetch` argument to `Session.run()`.
The `fetch` argument can be of various shapes: single tensor or op, list of
fetches, tuple of fetches, namedtuple of fetches, or dict of fetches. The
structures can be arbitrarily nested.
The low level run() API only wants a list of tensor or op names. The various
`_FetchMapper` subclasses below take care of handling the different shapes:
uniquifying the fetches, and constructing results with the original shape.
"""
def unique_fetches(self):
"""Return the list of unique tensors or ops needed by this fetch mapper.
Returns:
A list of tensors or ops.
"""
raise NotImplementedError('Must be implemented by subclasses')
def build_results(self, values):
"""Build results that match the original shape of the fetch.
Args:
values: List of values returned by run(). The values correspond
exactly to the list of tensors or ops returned by unique_fetches().
Returns:
A struct of the same shape as the original fetch object handled by
this fetch mapper. In the returned struct, the original fetches are
replaced by their fetched values.
"""
raise NotImplementedError('Must be implemented by subclasses')
@staticmethod
def for_fetch(fetch):
"""Creates fetch mapper that handles the structure of `fetch`.
The default graph must be the one from which we want to fetch values when
this function is called.
Args:
fetch: An arbitrary fetch structure: singleton, list, tuple,
namedtuple, or dict.
Returns:
An instance of a subclass of `_FetchMapper` that handles the shape.
"""
if fetch is None:
raise TypeError('Fetch argument %r has invalid type %r' %
(fetch, type(fetch)))
elif isinstance(fetch, (list, tuple)):
# NOTE(touts): This is also the code path for namedtuples.
return _ListFetchMapper(fetch)
elif isinstance(fetch, dict):
return _DictFetchMapper(fetch)
else:
# Look for a handler in the registered expansions.
for tensor_type, fetch_fn, _, _ in _REGISTERED_EXPANSIONS:
if isinstance(fetch, tensor_type):
fetches, contraction_fn = fetch_fn(fetch)
return _ElementFetchMapper(fetches, contraction_fn)
# Did not find anything.
raise TypeError('Fetch argument %r has invalid type %r' %
(fetch, type(fetch)))
class _ElementFetchMapper(_FetchMapper):
"""Fetch mapper for singleton tensors and ops."""
def __init__(self, fetches, contraction_fn):
"""Creates an _ElementFetchMapper.
This is the fetch mapper used for leaves in the fetch struct. Because of
the expansions mechanism, a leaf can actually fetch more than one tensor.
Also note that the fetches here can be just strings (tensor or op names) or
any other object that the graph knows how to convert to a tensor, such as a
Variable. So we have to run each fetch through `as_graph_element()` to get
the corresponding tensor or op.
Args:
fetches: List of objects, as returned by a fetch_fn defined
in _REGISTERED_EXPANSIONS.
contraction_fn: Callable as returned by a fetch_fn.
"""
self._unique_fetches = []
for fetch in fetches:
try:
self._unique_fetches.append(ops.get_default_graph().as_graph_element(
fetch, allow_tensor=True, allow_operation=True))
except TypeError as e:
raise TypeError('Fetch argument %r has invalid type %r, '
'must be a string or Tensor. (%s)'
% (fetch, type(fetch), str(e)))
except ValueError as e:
raise ValueError('Fetch argument %r cannot be interpreted as a '
'Tensor. (%s)' % (fetch, str(e)))
except KeyError as e:
raise ValueError('Fetch argument %r cannot be interpreted as a '
'Tensor. (%s)' % (fetch, str(e)))
self._contraction_fn = contraction_fn
def unique_fetches(self):
return self._unique_fetches
def build_results(self, values):
if not values:
# 'Operation' case
return None
else:
return self._contraction_fn(values)
def _uniquify_fetches(fetch_mappers):
"""Uniquifies fetches from a list of fetch_mappers.
This is a utility function used by _ListFetchMapper and _DictFetchMapper. It
gathers all the unique fetches from a list of mappers and builds a list
containing all of them but without duplicates (unique_fetches).
It also returns a 2-D list of integers (values_indices) indicating at which
index in unique_fetches the fetches of the mappers are located.
This list is as follows:
values_indices[mapper_index][mapper_fetch_index] = unique_fetches_index
Args:
fetch_mappers: list of fetch mappers.
Returns:
A list of fetches.
A 2-D list of integers.
"""
unique_fetches = []
value_indices = []
seen_fetches = {}
for m in fetch_mappers:
m_value_indices = []
for f in m.unique_fetches():
j = seen_fetches.get(f)
if j is None:
j = len(seen_fetches)
seen_fetches[f] = j
unique_fetches.append(f)
m_value_indices.append(j)
value_indices.append(m_value_indices)
return unique_fetches, value_indices
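# Hedged sketch, illustrative only (not part of the upstream API). Two
# mappers that share a fetch collapse to a single entry in `unique_fetches`;
# `value_indices` records where each mapper's fetches live in that list.
def _example_uniquify(shared_fetch, other_fetch):
  mappers = [_FetchMapper.for_fetch([shared_fetch, other_fetch]),
             _FetchMapper.for_fetch(shared_fetch)]
  unique_fetches, value_indices = _uniquify_fetches(mappers)
  # len(unique_fetches) == 2: `shared_fetch` is fetched only once even
  # though both mappers ask for it.
  return unique_fetches, value_indices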
class _ListFetchMapper(_FetchMapper):
"""Fetch mapper for lists, tuples, and namedtuples."""
def __init__(self, fetches):
"""Creates a _ListFetchMapper.
Args:
fetches: List, tuple, or namedtuple of fetches.
"""
self._fetch_type = type(fetches)
self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
def unique_fetches(self):
return self._unique_fetches
def build_results(self, values):
# Create the list of results for each mapper.
results = []
for m, vi in zip(self._mappers, self._value_indices):
results.append(m.build_results([values[j] for j in vi]))
# Return a value of the original type of the fetches.
if self._fetch_type == list:
return results
elif self._fetch_type == tuple:
return tuple(results)
else:
# This is the code path for namedtuple.
return self._fetch_type(*results)
class _DictFetchMapper(_FetchMapper):
"""Fetch mapper for dicts."""
def __init__(self, fetches):
"""Creates a _DictFetchMapper.
Args:
fetches: Dict of fetches.
"""
self._keys = fetches.keys()
self._mappers = [_FetchMapper.for_fetch(fetch)
for fetch in fetches.values()]
self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
def unique_fetches(self):
return self._unique_fetches
def build_results(self, values):
results = {}
for k, m, vi in zip(self._keys, self._mappers, self._value_indices):
results[k] = m.build_results([values[j] for j in vi])
return results
class _FetchHandler(object):
"""Handler for structured fetches.
Given a graph, a user-provided structure for fetches, and a feed dict, this
class takes care of generating a list of tensor names to fetch and op names
to run for a low level `run()` call.
Given the results of the low level run call, this class can also rebuild a
result structure matching the user-provided structure for fetches, but
containing the corresponding results.
"""
# TODO(touts): Make this class also take care of destructuring the feed
# dict instead of doing it in the callers.
def __init__(self, graph, fetches, feeds):
"""Creates a fetch handler.
Args:
graph: Graph of the fetches. Used to check for fetchability
and to convert all fetches to tensors or ops as needed.
fetches: An arbitrary fetch structure: singleton, list, tuple,
namedtuple, or dict.
feeds: A feed dict where keys are fully resolved tensor names.
"""
with graph.as_default():
self._fetch_mapper = _FetchMapper.for_fetch(fetches)
self._fetches = []
self._targets = []
self._feeds = feeds
self._ops = []
self._fetch_handles = {}
for fetch in self._fetch_mapper.unique_fetches():
fetch_name = compat.as_bytes(fetch.name)
if isinstance(fetch, ops.Operation):
self._assert_fetchable(graph, fetch)
self._targets.append(fetch_name)
self._ops.append(True)
else:
self._assert_fetchable(graph, fetch.op)
self._fetches.append(fetch_name)
self._ops.append(False)
# Remember the fetch if it is for a tensor handle.
if isinstance(fetch, ops.Tensor) and fetch.op.type == 'GetSessionHandle':
self._fetch_handles[fetch_name] = fetch.op.inputs[0].dtype
self._final_fetches = [x for x in self._fetches if x not in feeds]
def _assert_fetchable(self, graph, op):
if not graph.is_fetchable(op):
raise ValueError(
'Operation %r has been marked as not fetchable.' % op.name)
def fetches(self):
"""Return the unique names of tensors to fetch.
Returns:
A list of strings.
"""
return self._final_fetches
def targets(self):
"""Return the unique names of ops to run.
Returns:
A list of strings.
"""
return self._targets
def build_results(self, session, tensor_values):
"""Build results matching the original fetch shape.
`tensor_values` must be a list of the same length as
the one returned by `fetches()`, holding the requested
fetch values.
This method builds a struct with the same shape as the original `fetches`
passed to the constructor, in which the fetches are replaced by their
fetched value.
Args:
session: The enclosing session. Used for tensor handles.
tensor_values: List of values matching the list returned
by fetches().
Returns:
A structure of the same shape as the original `fetches` argument but
containing tensors or None (for fetched ops).
"""
full_values = []
assert len(self._final_fetches) == len(tensor_values)
i = 0
j = 0
for is_op in self._ops:
if is_op:
full_values.append(None)
else:
# If the fetch was in the feeds, use the fed value, otherwise
# use the returned value.
value = self._feeds.get(self._fetches[i])
if value is None:
value = tensor_values[j]
j += 1
dtype = self._fetch_handles.get(self._fetches[i])
if dtype:
full_values.append(session_ops.TensorHandle(value, dtype, session))
else:
full_values.append(value)
i += 1
assert j == len(tensor_values)
return self._fetch_mapper.build_results(full_values)
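# Hedged sketch, illustrative only; `tensor_values` stands in for the list
# the low level run() returned. This is the life cycle of a _FetchHandler
# as BaseSession._run() (below) drives it.
def _example_fetch_handler(graph, fetches, session, tensor_values):
  handler = _FetchHandler(graph, fetches, {})
  # handler.fetches() and handler.targets() are the tensor and op names a
  # real call would hand to the C API; build_results() then restores the
  # caller's original fetch structure.
  return handler.build_results(session, tensor_values)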
class BaseSession(SessionInterface):
"""A class for interacting with a TensorFlow computation.
The BaseSession enables incremental graph building with inline
execution of Operations and evaluation of Tensors.
"""
def __init__(self, target='', graph=None, config=None):
"""Constructs a new TensorFlow session.
Args:
target: (Optional) The TensorFlow execution engine to connect to.
graph: (Optional) The graph to be used. If this argument is None,
the default graph will be used.
config: (Optional) ConfigProto proto used to configure the session.
Raises:
tf.errors.OpError: Or one of its subclasses if an error occurs while
creating the TensorFlow session.
TypeError: If one of the arguments has the wrong type.
"""
if graph is None:
self._graph = ops.get_default_graph()
else:
if not isinstance(graph, ops.Graph):
raise TypeError('graph must be a tf.Graph, but got %s' % type(graph))
self._graph = graph
self._opened = False
self._closed = False
self._current_version = 0
self._extend_lock = threading.Lock()
if target is not None:
try:
self._target = compat.as_bytes(target)
except TypeError:
raise TypeError('target must be a string, but got %s' % type(target))
else:
self._target = None
self._delete_lock = threading.Lock()
self._dead_handles = []
if config is not None:
if not isinstance(config, config_pb2.ConfigProto):
raise TypeError('config must be a tf.ConfigProto, but got %s'
% type(config))
self._config = config
self._add_shapes = config.graph_options.infer_shapes
else:
self._config = None
self._add_shapes = False
self._session = None
opts = tf_session.TF_NewSessionOptions(target=self._target, config=config)
try:
with errors.raise_exception_on_not_ok_status() as status:
self._session = tf_session.TF_NewSession(opts, status)
finally:
tf_session.TF_DeleteSessionOptions(opts)
def close(self):
"""Closes this session.
Calling this method frees all resources associated with the session.
Raises:
tf.errors.OpError: Or one of its subclasses if an error occurs while
closing the TensorFlow session.
"""
with self._extend_lock:
if self._opened and not self._closed:
self._closed = True
with errors.raise_exception_on_not_ok_status() as status:
tf_session.TF_CloseSession(self._session, status)
def __del__(self):
self.close()
if self._session is not None:
with errors.raise_exception_on_not_ok_status() as status:
tf_session.TF_DeleteSession(self._session, status)
self._session = None
@property
def graph(self):
"""The graph that was launched in this session."""
return self._graph
@property
def graph_def(self):
"""A serializable version of the underlying TensorFlow graph.
Returns:
A graph_pb2.GraphDef proto containing nodes for all of the Operations in
the underlying TensorFlow graph.
"""
return self._graph.as_graph_def(add_shapes=self._add_shapes)
@property
def sess_str(self):
return self._target
def as_default(self):
"""Returns a context manager that makes this object the default session.
Use with the `with` keyword to specify that calls to
[`Operation.run()`](../../api_docs/python/framework.md#Operation.run) or
[`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval) should be
executed in this session.
```python
c = tf.constant(...)
sess = tf.Session()
with sess.as_default():
assert tf.get_default_session() is sess
print(c.eval())
```
To get the current default session, use
[`tf.get_default_session()`](#get_default_session).
*N.B.* The `as_default` context manager *does not* close the
session when you exit the context, and you must close the session
explicitly.
```python
c = tf.constant(...)
sess = tf.Session()
with sess.as_default():
print(c.eval())
# ...
with sess.as_default():
print(c.eval())
sess.close()
```
Alternatively, you can use `with tf.Session():` to create a
session that is automatically closed on exiting the context,
including when an uncaught exception is raised.
*N.B.* The default graph is a property of the current thread. If you
create a new thread, and wish to use the default session in that
thread, you must explicitly add a `with sess.as_default():` in that
thread's function.
Returns:
A context manager using this session as the default session.
"""
return ops.default_session(self)
def run(self, fetches, feed_dict=None, options=None, run_metadata=None):
"""Runs operations and evaluates tensors in `fetches`.
This method runs one "step" of TensorFlow computation, by
running the necessary graph fragment to execute every `Operation`
and evaluate every `Tensor` in `fetches`, substituting the values in
`feed_dict` for the corresponding input values.
The `fetches` argument may be a single graph element, or an arbitrarily
nested list, tuple, namedtuple, or dict containing graph elements at its
leaves. A graph element can be one of the following types:
* An [`Operation`](../../api_docs/python/framework.md#Operation).
The corresponding fetched value will be `None`.
* A [`Tensor`](../../api_docs/python/framework.md#Tensor).
The corresponding fetched value will be a numpy ndarray containing the
value of that tensor.
* A [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor).
The corresponding fetched value will be a
[`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue)
containing the value of that sparse tensor.
* A `get_tensor_handle` op. The corresponding fetched value will be a
numpy ndarray containing the handle of that tensor.
* A `string` which is the name of a tensor or operation in the graph.
The value returned by `run()` has the same shape as the `fetches` argument,
where the leaves are replaced by the corresponding values returned by
TensorFlow.
Example:
```python
a = tf.constant([10, 20])
b = tf.constant([1.0, 2.0])
# 'fetches' can be a singleton
v = session.run(a)
# v is the numpy array [10, 20]
# 'fetches' can be a list.
v = session.run([a, b])
# v is a Python list with 2 numpy arrays: the numpy array [10, 20] and the
# 1-D array [1.0, 2.0]
# 'fetches' can be arbitrary lists, tuples, namedtuple, dicts:
MyData = collections.namedtuple('MyData', ['a', 'b'])
v = session.run({'k1': MyData(a, b), 'k2': [b, a]})
# v is a dict with
# v['k1'] is a MyData namedtuple with 'a' the numpy array [10, 20] and
# 'b' the numpy array [1.0, 2.0]
# v['k2'] is a list with the numpy array [1.0, 2.0] and the numpy array
# [10, 20].
```
The optional `feed_dict` argument allows the caller to override
the value of tensors in the graph. Each key in `feed_dict` can be
one of the following types:
* If the key is a [`Tensor`](../../api_docs/python/framework.md#Tensor), the
value may be a Python scalar, string, list, or numpy ndarray
that can be converted to the same `dtype` as that
tensor. Additionally, if the key is a
[placeholder](../../api_docs/python/io_ops.md#placeholder), the shape of
the value will be checked for compatibility with the placeholder.
* If the key is a
[`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor),
the value should be a
[`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue).
* If the key is a nested tuple of `Tensor`s or `SparseTensor`s, the value
should be a nested tuple with the same structure that maps to their
corresponding values as above.
Each value in `feed_dict` must be convertible to a numpy array of the dtype
of the corresponding key.
The optional `options` argument expects a [`RunOptions`] proto. The options
allow controlling the behavior of this particular step (e.g. turning tracing
on).
The optional `run_metadata` argument expects a [`RunMetadata`] proto. When
appropriate, the non-Tensor output of this step will be collected there. For
example, when users turn on tracing in `options`, the profiled info will be
collected into this argument and passed back.
Args:
fetches: A single graph element, a list of graph elements,
or a dictionary whose values are graph elements or lists of graph
elements (described above).
feed_dict: A dictionary that maps graph elements to values
(described above).
options: A [`RunOptions`] protocol buffer
run_metadata: A [`RunMetadata`] protocol buffer
Returns:
Either a single value if `fetches` is a single graph element, or
a list of values if `fetches` is a list, or a dictionary with the
same keys as `fetches` if that is a dictionary (described above).
Raises:
RuntimeError: If this `Session` is in an invalid state (e.g. has been
closed).
TypeError: If `fetches` or `feed_dict` keys are of an inappropriate type.
ValueError: If `fetches` or `feed_dict` keys are invalid or refer to a
`Tensor` that doesn't exist.
"""
run_metadata_ptr = tf_session.TF_NewBuffer()
if options:
options_ptr = tf_session.TF_NewBufferFromString(
compat.as_bytes(options.SerializeToString()))
else:
options_ptr = None
try:
result = self._run(None, fetches, feed_dict, options_ptr,
run_metadata_ptr)
if run_metadata:
proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
run_metadata.ParseFromString(compat.as_bytes(proto_data))
finally:
tf_session.TF_DeleteBuffer(run_metadata_ptr)
if options:
tf_session.TF_DeleteBuffer(options_ptr)
return result
def partial_run(self, handle, fetches, feed_dict=None):
"""Continues the execution with more feeds and fetches.
This is EXPERIMENTAL and subject to change.
To use partial execution, a user first calls `partial_run_setup()` and
then a sequence of `partial_run()`. `partial_run_setup` specifies the
list of feeds and fetches that will be used in the subsequent
`partial_run` calls.
The optional `feed_dict` argument allows the caller to override
the value of tensors in the graph. See run() for more information.
Below is a simple example:
```python
a = array_ops.placeholder(dtypes.float32, shape=[])
b = array_ops.placeholder(dtypes.float32, shape=[])
c = array_ops.placeholder(dtypes.float32, shape=[])
r1 = math_ops.add(a, b)
r2 = math_ops.mul(r1, c)
h = sess.partial_run_setup([r1, r2], [a, b, c])
res = sess.partial_run(h, r1, feed_dict={a: 1, b: 2})
res = sess.partial_run(h, r2, feed_dict={c: res})
```
Args:
handle: A handle for a sequence of partial runs.
fetches: A single graph element, a list of graph elements,
or a dictionary whose values are graph elements or lists of graph
elements (see documentation for `run`).
feed_dict: A dictionary that maps graph elements to values
(described above).
Returns:
Either a single value if `fetches` is a single graph element, or
a list of values if `fetches` is a list, or a dictionary with the
same keys as `fetches` if that is a dictionary
(see documentation for `run`).
Raises:
tf.errors.OpError: Or one of its subclasses on error.
"""
# TODO(touts): Support feeding and fetching the same tensor.
return self._run(handle, fetches, feed_dict, None, None)
def partial_run_setup(self, fetches, feeds=None):
"""Sets up a graph with feeds and fetches for partial run.
This is EXPERIMENTAL and subject to change.
Note that, in contrast to `run`, `feeds` only specifies the graph elements.
The tensors will be supplied by the subsequent `partial_run` calls.
Args:
fetches: A single graph element, or a list of graph elements.
feeds: A single graph element, or a list of graph elements.
Returns:
A handle for partial run.
Raises:
RuntimeError: If this `Session` is in an invalid state (e.g. has been
closed).
TypeError: If `fetches` or `feed_dict` keys are of an inappropriate type.
tf.errors.OpError: Or one of its subclasses if a TensorFlow error happens.
"""
def _feed_fn(feed):
for tensor_type, _, _, feed_fn in _REGISTERED_EXPANSIONS:
if isinstance(feed, tensor_type):
return feed_fn(feed)
raise TypeError('Feed argument %r has invalid type %r'
% (feed, type(feed)))
# Check session.
if self._closed:
raise RuntimeError('Attempted to use a closed Session.')
if self.graph.version == 0:
raise RuntimeError('The Session graph is empty. Add operations to the '
'graph before calling run().')
# Create request.
feed_list = []
# Validate and process feed_list.
is_list_feed = isinstance(feeds, (list, tuple))
if not is_list_feed:
feeds = [feeds]
for feed in feeds:
for subfeed in _feed_fn(feed):
try:
subfeed_t = self.graph.as_graph_element(subfeed, allow_tensor=True,
allow_operation=False)
feed_list.append(compat.as_bytes(subfeed_t.name))
except Exception as e:
e.message = ('Cannot interpret feed_list key as Tensor: '
+ e.message)
e.args = (e.message,)
raise e
# Validate and process fetches.
# TODO(touts): Support feeding and fetching the same tensor.
fetch_handler = _FetchHandler(self._graph, fetches, {})
# Set up a graph with feeds and fetches for partial run.
def _setup_fn(session, feed_list, fetch_list, target_list):
self._extend_graph()
with errors.raise_exception_on_not_ok_status() as status:
return tf_session.TF_PRunSetup(session, feed_list, fetch_list,
target_list, status)
return self._do_call(_setup_fn, self._session, feed_list,
fetch_handler.fetches(), fetch_handler.targets())
def _run(self, handle, fetches, feed_dict, options, run_metadata):
"""Perform either run or partial_run, depending the presence of `handle`."""
def _feed_fn(feed, feed_val):
for tensor_type, _, feed_fn, _ in _REGISTERED_EXPANSIONS:
if isinstance(feed, tensor_type):
return feed_fn(feed, feed_val)
raise TypeError('Feed argument %r has invalid type %r'
% (feed, type(feed)))
# Check session.
if self._closed:
raise RuntimeError('Attempted to use a closed Session.')
if self.graph.version == 0:
raise RuntimeError('The Session graph is empty. Add operations to the '
'graph before calling run().')
# Create request.
feed_dict_string = {}
feed_map = {}
# Validate and process feed_dict.
if feed_dict:
feed_dict = nest.flatten_dict_items(feed_dict)
for feed, feed_val in feed_dict.items():
for subfeed, subfeed_val in _feed_fn(feed, feed_val):
try:
subfeed_t = self.graph.as_graph_element(subfeed, allow_tensor=True,
allow_operation=False)
except Exception as e:
raise TypeError('Cannot interpret feed_dict key as Tensor: '
+ e.args[0])
if isinstance(subfeed_val, ops.Tensor):
raise TypeError('The value of a feed cannot be a tf.Tensor object. '
'Acceptable feed values include Python scalars, '
'strings, lists, or numpy ndarrays.')
subfeed_dtype = subfeed_t.dtype.as_numpy_dtype
if isinstance(subfeed_val,
int) and subfeed_dtype(subfeed_val) != subfeed_val:
raise TypeError(
'Type of feed value ' + str(subfeed_val) + ' is not'
' compatible with Tensor type ' + str(subfeed_dtype) + '.'
' Try explicitly setting the type of the feed tensor'
' to a larger type (e.g. int64).')
np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
if not subfeed_t.get_shape().is_compatible_with(np_val.shape):
raise ValueError(
'Cannot feed value of shape %r for Tensor %r, '
'which has shape %r'
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
if not self.graph.is_feedable(subfeed_t):
raise ValueError('Tensor %s may not be fed.' % subfeed_t)
subfeed_name = compat.as_bytes(subfeed_t.name)
feed_dict_string[subfeed_name] = np_val
feed_map[subfeed_name] = (subfeed_t, subfeed_val)
# Create a fetch handler to take care of the structure of fetches.
fetch_handler = _FetchHandler(self._graph, fetches, feed_dict_string)
# Run request and get response.
# We need to keep the movers alive for the following _do_run().
# These movers are no longer needed when _do_run() completes, and
# are deleted when `movers` goes out of scope when this _run() ends.
# TODO(yuanbyu, keveman): Revisit whether we should just treat feeding
# of a handle from a different device as an error.
movers = self._update_with_movers(feed_dict_string, feed_map)
final_fetches = fetch_handler.fetches()
final_targets = fetch_handler.targets()
if final_fetches or final_targets:
results = self._do_run(handle, final_targets, final_fetches,
feed_dict_string, options, run_metadata)
else:
results = []
return fetch_handler.build_results(self, results)
# Captures the name of a node in an error status.
_NODEDEF_NAME_RE = re.compile(r'\[\[Node: ([^ ]*?) =')
def _do_run(self, handle, target_list, fetch_list, feed_dict,
options, run_metadata):
"""Runs a step based on the given fetches and feeds.
Args:
handle: a handle for partial_run. None if this is just a call to run().
target_list: A list of byte arrays corresponding to names of tensors
or operations to be run to, but not fetched.
fetch_list: A list of byte arrays corresponding to names of tensors to
be fetched and operations to be run.
feed_dict: A dictionary that maps tensor names (as byte arrays) to
numpy ndarrays.
options: A (pointer to a) [`RunOptions`] protocol buffer, or None
run_metadata: A (pointer to a) [`RunMetadata`] protocol buffer, or None
Returns:
A list of numpy ndarrays, corresponding to the elements of
`fetch_list`. If the ith element of `fetch_list` contains the
name of an operation, the first Tensor output of that operation
will be returned for that element.
Raises:
tf.errors.OpError: Or one of its subclasses on error.
"""
def _run_fn(session, feed_dict, fetch_list, target_list, options,
run_metadata):
# Ensure any changes to the graph are reflected in the runtime.
self._extend_graph()
with errors.raise_exception_on_not_ok_status() as status:
return tf_session.TF_Run(session, options,
feed_dict, fetch_list, target_list,
status, run_metadata)
def _prun_fn(session, handle, feed_dict, fetch_list):
if target_list:
raise RuntimeError('partial_run() requires empty target_list.')
with errors.raise_exception_on_not_ok_status() as status:
return tf_session.TF_PRun(session, handle, feed_dict, fetch_list,
status)
if handle is None:
return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
target_list, options, run_metadata)
else:
return self._do_call(_prun_fn, self._session, handle, feed_dict,
fetch_list)
def _do_call(self, fn, *args):
try:
return fn(*args)
except errors.OpError as e:
message = compat.as_text(e.message)
m = BaseSession._NODEDEF_NAME_RE.search(message)
node_def = None
op = None
if m is not None:
node_name = m.group(1)
try:
op = self._graph.get_operation_by_name(node_name)
node_def = op.node_def
except KeyError:
pass
raise type(e)(node_def, op, message)
def _extend_graph(self):
# Ensure any changes to the graph are reflected in the runtime.
with self._extend_lock:
if self._graph.version > self._current_version:
# pylint: disable=protected-access
graph_def, self._current_version = self._graph._as_graph_def(
from_version=self._current_version,
add_shapes=self._add_shapes)
# pylint: enable=protected-access
with errors.raise_exception_on_not_ok_status() as status:
tf_session.TF_ExtendGraph(
self._session, graph_def.SerializeToString(), status)
self._opened = True
# The threshold to run garbage collection to delete dead tensors.
_DEAD_HANDLES_THRESHOLD = 10
def _register_dead_handle(self, handle):
# Register a dead handle in the session. Delete the dead tensors when
# the number of dead tensors exceeds certain threshold.
tensors_to_delete = None
with self._delete_lock:
self._dead_handles.append(handle)
if len(self._dead_handles) == BaseSession._DEAD_HANDLES_THRESHOLD:
tensors_to_delete = self._dead_handles
self._dead_handles = []
# Delete the dead tensors.
# TODO(yuanbyu): For now we use a sequence of runs to minimize the graph
# size and the overhead of graph construction/partitioning.
if tensors_to_delete:
for tensor_handle in tensors_to_delete:
feeds = {}
fetches = []
holder, deleter = session_ops._get_handle_deleter(self.graph,
tensor_handle)
feeds[holder] = tensor_handle
fetches.append(deleter)
self.run(fetches, feed_dict=feeds)
def _update_with_movers(self, feed_dict, feed_map):
# If a tensor handle is fed to a device-incompatible placeholder,
# we move the tensor to the right device, generate a new tensor handle,
# and update `feed_dict` to use the new handle.
handle_movers = []
for feed_name, val in feed_map.items():
mover = session_ops._get_handle_mover(self.graph, *val)
if mover:
handle_movers.append((feed_name, val[1], mover))
# Transfer a tensor to the right device if needed.
if not handle_movers:
return []
else:
feeds = {}
fetches = []
for _, handle, mover in handle_movers:
feeds[mover[0]] = handle
fetches.append(mover[1])
handles = self.run(fetches, feed_dict=feeds)
for handle_mover, handle in zip(handle_movers, handles):
np_val = np.array(handle.handle, dtype=np.object)
feed_dict[handle_mover[0]] = np_val
return handles
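# Hedged sketch in comments only: session_ops signatures varied across
# releases, so the exact calls below are assumptions. A persisted handle
# created via get_session_handle can be fed back through a placeholder;
# when that placeholder lives on a different device, _update_with_movers
# above transparently moves the underlying tensor first.
#
#   h = sess.run(session_ops.get_session_handle(tf.constant(42.0)))
#   p, x = session_ops.get_session_tensor(h.handle, tf.float32)
#   sess.run(x, feed_dict={p: h.handle})  # feeds the handle, not the value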
class Session(BaseSession):
"""A class for running TensorFlow operations.
A `Session` object encapsulates the environment in which `Operation`
objects are executed, and `Tensor` objects are evaluated. For
example:
```python
# Build a graph.
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
# Launch the graph in a session.
sess = tf.Session()
# Evaluate the tensor `c`.
print(sess.run(c))
```
A session may own resources, such as
[variables](../../api_docs/python/state_ops.md#Variable), [queues](../../api_docs/python/io_ops.md#QueueBase),
and [readers](../../api_docs/python/io_ops.md#ReaderBase). It is important to release
these resources when they are no longer required. To do this, either
invoke the [`close()`](#Session.close) method on the session, or use
the session as a context manager. The following two examples are
equivalent:
```python
# Using the `close()` method.
sess = tf.Session()
sess.run(...)
sess.close()
# Using the context manager.
with tf.Session() as sess:
sess.run(...)
```
The [`ConfigProto`]
(https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto)
protocol buffer exposes various configuration options for a
session. For example, to create a session that uses soft constraints
for device placement, and log the resulting placement decisions,
create a session as follows:
```python
# Launch the graph in a session that allows soft device placement and
# logs the placement decisions.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
log_device_placement=True))
```
@@__init__
@@run
@@close
@@graph
@@as_default
@@reset
"""
def __init__(self, target='', graph=None, config=None):
"""Creates a new TensorFlow session.
If no `graph` argument is specified when constructing the session,
the default graph will be launched in the session. If you are
using more than one graph (created with `tf.Graph()`) in the same
process, you will have to use different sessions for each graph,
but each graph can be used in multiple sessions. In this case, it
is often clearer to pass the graph to be launched explicitly to
the session constructor.
Args:
target: (Optional.) The execution engine to connect to.
Defaults to using an in-process engine. See [Distributed Tensorflow]
(https://www.tensorflow.org/how_tos/distributed/index.html)
for more examples.
graph: (Optional.) The `Graph` to be launched (described above).
config: (Optional.) A [`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto)
protocol buffer with configuration options for the session.
"""
super(Session, self).__init__(target, graph, config=config)
self._context_managers = [self.graph.as_default(), self.as_default()]
def __enter__(self):
for context_manager in self._context_managers:
context_manager.__enter__()
return self
def __exit__(self, exec_type, exec_value, exec_tb):
if exec_type is errors.OpError:
logging.error('Session closing due to OpError: %s', (exec_value,))
for context_manager in reversed(self._context_managers):
context_manager.__exit__(exec_type, exec_value, exec_tb)
self.close()
@staticmethod
def reset(target, containers=None, config=None):
"""Resets resource containers on `target`, and close all connected sessions.
A resource container is distributed across all workers in the
same cluster as `target`. When a resource container on `target`
is reset, resources associated with that container will be cleared.
In particular, all Variables in the container will become undefined:
they lose their values and shapes.
NOTE:
(i) reset() is currently only implemented for distributed sessions.
(ii) Any sessions on the master named by `target` will be closed.
If no resource containers are provided, all containers are reset.
Args:
target: The execution engine to connect to.
containers: A list of resource container name strings, or `None` if
all the containers are to be reset.
config: (Optional.) Protocol buffer with configuration options.
Raises:
tf.errors.OpError: Or one of its subclasses if an error occurs while
resetting containers.
"""
if target is not None:
target = compat.as_bytes(target)
if containers is not None:
containers = [compat.as_bytes(c) for c in containers]
else:
containers = []
tf_session.TF_Reset(target, containers, config)
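# Hedged usage sketch; the grpc target and container name are assumptions:
#
#   tf.Session.reset('grpc://master.example:2222', containers=['exp0'])
#
# This clears the named resource container across the cluster and closes
# any sessions connected to that master.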
class InteractiveSession(BaseSession):
"""A TensorFlow `Session` for use in interactive contexts, such as a shell.
The only difference with a regular `Session` is that an `InteractiveSession`
installs itself as the default session on construction.
The methods [`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval)
and [`Operation.run()`](../../api_docs/python/framework.md#Operation.run)
will use that session to run ops.
This is convenient in interactive shells and [IPython
notebooks](http://ipython.org), as it avoids having to pass an explicit
`Session` object to run ops.
For example:
```python
sess = tf.InteractiveSession()
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
# We can just use 'c.eval()' without passing 'sess'
print(c.eval())
sess.close()
```
Note that a regular session installs itself as the default session when it
is created in a `with` statement. The common usage in non-interactive
programs is to follow that pattern:
```python
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
with tf.Session():
# We can also use 'c.eval()' here.
print(c.eval())
```
@@__init__
@@close
"""
def __init__(self, target='', graph=None, config=None):
"""Creates a new interactive TensorFlow session.
If no `graph` argument is specified when constructing the session,
the default graph will be launched in the session. If you are
using more than one graph (created with `tf.Graph()`) in the same
process, you will have to use different sessions for each graph,
but each graph can be used in multiple sessions. In this case, it
is often clearer to pass the graph to be launched explicitly to
the session constructor.
Args:
target: (Optional.) The execution engine to connect to.
Defaults to using an in-process engine.
graph: (Optional.) The `Graph` to be launched (described above).
config: (Optional) `ConfigProto` proto used to configure the session.
"""
if not config:
config = config_pb2.ConfigProto()
# Interactive sessions always place pruned graphs.
config.graph_options.place_pruned_graph = True
super(InteractiveSession, self).__init__(target, graph, config)
self._default_session = self.as_default()
self._default_session.enforce_nesting = False
self._default_session.__enter__()
self._explicit_graph = graph
if self._explicit_graph is not None:
self._default_graph = graph.as_default()
self._default_graph.enforce_nesting = False
self._default_graph.__enter__()
def close(self):
"""Closes an `InteractiveSession`."""
super(InteractiveSession, self).close()
if self._explicit_graph is not None:
self._default_graph.__exit__(None, None, None)
self._default_session.__exit__(None, None, None)
| mit |
nischalsheth/contrail-controller | src/config/schema-transformer/logger.py | 3 | 5204 | # vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Copyright (c) 2016 Juniper Networks, Inc. All rights reserved.
#
"""
Schema Transformer monitor logger
"""
from sandesh_common.vns.ttypes import Module
from cfgm_common.vnc_logger import ConfigServiceLogger
from schema_transformer.config_db import DBBaseST, VirtualNetworkST,\
RoutingInstanceST, ServiceChain
from schema_transformer.sandesh.st_introspect import ttypes as sandesh
class SchemaTransformerLogger(ConfigServiceLogger):
def __init__(self, args=None, http_server_port=None):
module = Module.SCHEMA_TRANSFORMER
module_pkg = "schema_transformer"
self.context = "to_bgp"
super(SchemaTransformerLogger, self).__init__(
module, module_pkg, args, http_server_port)
def sandesh_init(self, http_server_port=None):
super(SchemaTransformerLogger, self).sandesh_init(http_server_port)
self._sandesh.trace_buffer_create(name="MessageBusNotifyTraceBuf",
size=1000)
def redefine_sandesh_handles(self):
sandesh.VnList.handle_request = self.sandesh_vn_handle_request
sandesh.RoutingInstanceList.handle_request = \
self.sandesh_ri_handle_request
sandesh.ServiceChainList.handle_request = \
self.sandesh_sc_handle_request
sandesh.StObjectReq.handle_request = \
self.sandesh_st_object_handle_request
def sandesh_ri_build(self, vn_name, ri_name):
vn = VirtualNetworkST.get(vn_name)
sandesh_ri_list = []
for riname in vn.routing_instances:
ri = RoutingInstanceST.get(riname)
if ri is None:
continue
sandesh_ri = sandesh.RoutingInstance(name=ri.obj.get_fq_name_str())
sandesh_ri.service_chain = ri.service_chain
sandesh_ri.connections = list(ri.connections)
sandesh_ri_list.append(sandesh_ri)
return sandesh_ri_list
# end sandesh_ri_build
def sandesh_ri_handle_request(self, req):
# Return the list of VNs
ri_resp = sandesh.RoutingInstanceListResp(routing_instances=[])
if req.vn_name is None:
for vn in VirtualNetworkST:
sandesh_ri = self.sandesh_ri_build(vn, req.ri_name)
ri_resp.routing_instances.extend(sandesh_ri)
elif req.vn_name in VirtualNetworkST:
sandesh_ri = self.sandesh_ri_build(req.vn_name, req.ri_name)
ri_resp.routing_instances.extend(sandesh_ri)
ri_resp.response(req.context())
# end sandesh_ri_handle_request
def sandesh_vn_build(self, vn_name):
vn = VirtualNetworkST.get(vn_name)
sandesh_vn = sandesh.VirtualNetwork(name=vn_name)
sandesh_vn.policies = vn.network_policys.keys()
sandesh_vn.connections = list(vn.connections)
sandesh_vn.routing_instances = vn.routing_instances
if vn.acl:
sandesh_vn.acl = vn.acl.get_fq_name_str()
if vn.dynamic_acl:
sandesh_vn.dynamic_acl = vn.dynamic_acl.get_fq_name_str()
return sandesh_vn
# end sandesh_vn_build
def sandesh_vn_handle_request(self, req):
# Return the list of VNs
vn_resp = sandesh.VnListResp(vn_names=[])
if req.vn_name is None:
for vn in VirtualNetworkST:
sandesh_vn = self.sandesh_vn_build(vn)
vn_resp.vn_names.append(sandesh_vn)
elif req.vn_name in VirtualNetworkST:
sandesh_vn = self.sandesh_vn_build(req.vn_name)
vn_resp.vn_names.append(sandesh_vn)
vn_resp.response(req.context())
# end sandesh_vn_handle_request
def sandesh_sc_handle_request(self, req):
sc_resp = sandesh.ServiceChainListResp(service_chains=[])
if req.sc_name is None:
for sc in ServiceChain.values():
sandesh_sc = sc.build_introspect()
sc_resp.service_chains.append(sandesh_sc)
elif req.sc_name in ServiceChain:
sandesh_sc = ServiceChain.get(req.sc_name).build_introspect()
sc_resp.service_chains.append(sandesh_sc)
sc_resp.response(req.context())
# end sandesh_sc_handle_request
def sandesh_st_object_handle_request(self, req):
st_resp = sandesh.StObjectListResp(objects=[])
obj_type_map = DBBaseST.get_obj_type_map()
if req.object_type is not None:
if req.object_type not in obj_type_map:
return st_resp
obj_cls_list = [obj_type_map[req.object_type]]
else:
obj_cls_list = obj_type_map.values()
for obj_cls in obj_cls_list:
id_or_name = req.object_id_or_fq_name
if id_or_name:
obj = obj_cls.get(id_or_name) or \
obj_cls.get_by_uuid(id_or_name)
if obj is None:
continue
st_resp.objects.append(obj.handle_st_object_req())
else:
for obj in obj_cls.values():
st_resp.objects.append(obj.handle_st_object_req())
st_resp.response(req.context())
# end sandesh_st_object_handle_request
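# Hedged usage sketch; the introspect port value is an assumption:
#
#   logger = SchemaTransformerLogger(args=None, http_server_port=8087)
#   logger.redefine_sandesh_handles()  # route introspect requests above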
| apache-2.0 |
promptworks/keystone | keystone/cli.py | 2 | 22753 | # Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import
from __future__ import print_function
import os
from oslo_config import cfg
from oslo_log import log
import pbr.version
from keystone import assignment
from keystone.common import driver_hints
from keystone.common import openssl
from keystone.common import sql
from keystone.common.sql import migration_helpers
from keystone.common import utils
from keystone import config
from keystone import exception
from keystone.i18n import _, _LW
from keystone import identity
from keystone import resource
from keystone import token
from keystone.token.providers.fernet import utils as fernet
CONF = cfg.CONF
LOG = log.getLogger(__name__)
class BaseApp(object):
name = None
@classmethod
def add_argument_parser(cls, subparsers):
parser = subparsers.add_parser(cls.name, help=cls.__doc__)
parser.set_defaults(cmd_class=cls)
return parser
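# Hedged sketch, illustrative only: this command is not registered with the
# real keystone-manage dispatcher. A new command subclasses BaseApp, sets
# `name`, and implements main(); add_argument_parser above wires cmd_class
# so the dispatcher can invoke it.
class _ExampleNoop(BaseApp):
    """Do nothing; illustrative only."""
    name = '_example_noop'
    @staticmethod
    def main():
        print('noop')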
class DbSync(BaseApp):
"""Sync the database."""
name = 'db_sync'
@classmethod
def add_argument_parser(cls, subparsers):
parser = super(DbSync, cls).add_argument_parser(subparsers)
parser.add_argument('version', default=None, nargs='?',
help=('Migrate the database up to a specified '
'version. If not provided, db_sync will '
'migrate the database to the latest known '
'version.'))
parser.add_argument('--extension', default=None,
help=('Migrate the database for the specified '
'extension. If not provided, db_sync will '
'migrate the common repository.'))
return parser
@staticmethod
def main():
version = CONF.command.version
extension = CONF.command.extension
migration_helpers.sync_database_to_version(extension, version)
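# Hedged usage sketch (illustrative command lines; the version number and
# extension name are assumptions):
#
#   $ keystone-manage db_sync
#   $ keystone-manage db_sync 44
#   $ keystone-manage db_sync --extension federation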
class DbVersion(BaseApp):
"""Print the current migration version of the database."""
name = 'db_version'
@classmethod
def add_argument_parser(cls, subparsers):
parser = super(DbVersion, cls).add_argument_parser(subparsers)
parser.add_argument('--extension', default=None,
help=('Print the migration version of the '
'database for the specified extension. If '
'not provided, print it for the common '
'repository.'))
@staticmethod
def main():
extension = CONF.command.extension
migration_helpers.print_db_version(extension)
class BasePermissionsSetup(BaseApp):
"""Common user/group setup for file permissions."""
@classmethod
def add_argument_parser(cls, subparsers):
parser = super(BasePermissionsSetup,
cls).add_argument_parser(subparsers)
running_as_root = (os.geteuid() == 0)
parser.add_argument('--keystone-user', required=running_as_root)
parser.add_argument('--keystone-group', required=running_as_root)
return parser
@staticmethod
def get_user_group():
keystone_user_id = None
keystone_group_id = None
try:
a = CONF.command.keystone_user
if a:
keystone_user_id = utils.get_unix_user(a)[0]
except KeyError:
raise ValueError("Unknown user '%s' in --keystone-user" % a)
try:
a = CONF.command.keystone_group
if a:
keystone_group_id = utils.get_unix_group(a)[0]
except KeyError:
raise ValueError("Unknown group '%s' in --keystone-group" % a)
return keystone_user_id, keystone_group_id
class BaseCertificateSetup(BasePermissionsSetup):
"""Provides common options for certificate setup."""
@classmethod
def add_argument_parser(cls, subparsers):
parser = super(BaseCertificateSetup,
cls).add_argument_parser(subparsers)
parser.add_argument('--rebuild', default=False, action='store_true',
help=('Rebuild certificate files: erase previous '
'files and regenerate them.'))
return parser
class PKISetup(BaseCertificateSetup):
"""Set up Key pairs and certificates for token signing and verification.
This is NOT intended for production use; see the Keystone Configuration
documentation for details.
"""
name = 'pki_setup'
@classmethod
def main(cls):
LOG.warn(_LW('keystone-manage pki_setup is not recommended for '
'production use.'))
keystone_user_id, keystone_group_id = cls.get_user_group()
conf_pki = openssl.ConfigurePKI(keystone_user_id, keystone_group_id,
rebuild=CONF.command.rebuild)
conf_pki.run()
class SSLSetup(BaseCertificateSetup):
"""Create key pairs and certificates for HTTPS connections.
This is NOT intended for production use; see the Keystone Configuration
documentation for details.
"""
name = 'ssl_setup'
@classmethod
def main(cls):
LOG.warn(_LW('keystone-manage ssl_setup is not recommended for '
'production use.'))
keystone_user_id, keystone_group_id = cls.get_user_group()
conf_ssl = openssl.ConfigureSSL(keystone_user_id, keystone_group_id,
rebuild=CONF.command.rebuild)
conf_ssl.run()
class FernetSetup(BasePermissionsSetup):
"""Setup a key repository for Fernet tokens.
This also creates a primary key used for both creating and validating
Keystone Lightweight tokens. To improve security, you should rotate your
keys (using keystone-manage fernet_rotate, for example).
"""
name = 'fernet_setup'
@classmethod
def main(cls):
keystone_user_id, keystone_group_id = cls.get_user_group()
fernet.create_key_directory(keystone_user_id, keystone_group_id)
if fernet.validate_key_repository():
fernet.initialize_key_repository(
keystone_user_id, keystone_group_id)
class FernetRotate(BasePermissionsSetup):
"""Rotate Fernet encryption keys.
This assumes you have already run keystone-manage fernet_setup.
A new primary key is placed into rotation, which is used for new tokens.
The old primary key is demoted to secondary, which can then still be used
for validating tokens. Excess secondary keys (beyond [fernet_tokens]
max_active_keys) are revoked. Revoked keys are permanently deleted. A new
staged key will be created and used to validate tokens. The next time key
rotation takes place, the staged key will be put into rotation as the
primary key.
Rotating keys too frequently, or with [fernet_tokens] max_active_keys set
too low, will cause tokens to become invalid prior to their expiration.
"""
name = 'fernet_rotate'
@classmethod
def main(cls):
keystone_user_id, keystone_group_id = cls.get_user_group()
if fernet.validate_key_repository():
fernet.rotate_keys(keystone_user_id, keystone_group_id)
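# Hedged usage sketch (illustrative; the user/group names are assumptions
# and are required only when running as root, per BasePermissionsSetup):
#
#   $ keystone-manage fernet_setup --keystone-user keystone \
#         --keystone-group keystone
#   $ keystone-manage fernet_rotate --keystone-user keystone \
#         --keystone-group keystone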
class TokenFlush(BaseApp):
"""Flush expired tokens from the backend."""
name = 'token_flush'
@classmethod
def main(cls):
token_manager = token.persistence.PersistenceManager()
token_manager.driver.flush_expired_tokens()
class MappingPurge(BaseApp):
"""Purge the mapping table."""
name = 'mapping_purge'
@classmethod
def add_argument_parser(cls, subparsers):
parser = super(MappingPurge, cls).add_argument_parser(subparsers)
parser.add_argument('--all', default=False, action='store_true',
help=('Purge all mappings.'))
parser.add_argument('--domain-name', default=None,
help=('Purge any mappings for the domain '
'specified.'))
parser.add_argument('--public-id', default=None,
help=('Purge the mapping for the Public ID '
'specified.'))
parser.add_argument('--local-id', default=None,
help=('Purge the mappings for the Local ID '
'specified.'))
parser.add_argument('--type', default=None, choices=['user', 'group'],
help=('Purge any mappings for the type '
'specified.'))
return parser
@staticmethod
def main():
def validate_options():
# NOTE(henry-nash): It would be nice to use the argparse automated
# checking for this validation, but the only way I can see doing
# that is to make the default (i.e. if no optional parameters
# are specified) to purge all mappings - and that sounds too
# dangerous as a default. So we use it in a slightly
# unconventional way, where all parameters are optional, but you
# must specify at least one.
if (CONF.command.all is False and
CONF.command.domain_name is None and
CONF.command.public_id is None and
CONF.command.local_id is None and
CONF.command.type is None):
raise ValueError(_('At least one option must be provided'))
if (CONF.command.all is True and
(CONF.command.domain_name is not None or
CONF.command.public_id is not None or
CONF.command.local_id is not None or
CONF.command.type is not None)):
raise ValueError(_('--all option cannot be mixed with '
'other options'))
def get_domain_id(name):
try:
identity.Manager()
# init assignment manager to avoid KeyError in resource.core
assignment.Manager()
resource_manager = resource.Manager()
return resource_manager.driver.get_domain_by_name(name)['id']
except KeyError:
raise ValueError(_("Unknown domain '%(name)s' specified by "
"--domain-name") % {'name': name})
validate_options()
# Now that we have validated the options, we know that at least one
# option has been specified, and if it was the --all option then this
# was the only option specified.
#
# The mapping dict is used to filter which mappings are purged, so
# leaving it empty means purge them all
mapping = {}
if CONF.command.domain_name is not None:
mapping['domain_id'] = get_domain_id(CONF.command.domain_name)
if CONF.command.public_id is not None:
mapping['public_id'] = CONF.command.public_id
if CONF.command.local_id is not None:
mapping['local_id'] = CONF.command.local_id
if CONF.command.type is not None:
mapping['type'] = CONF.command.type
mapping_manager = identity.MappingManager()
mapping_manager.driver.purge_mappings(mapping)
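# Editor's note: illustrative invocations of the validation rules above --
# either --all on its own, or at least one of the filter options:
#
#   keystone-manage mapping_purge --all
#   keystone-manage mapping_purge --domain-name Default
#   keystone-manage mapping_purge --type user --local-id some_local_id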
DOMAIN_CONF_FHEAD = 'keystone.'
DOMAIN_CONF_FTAIL = '.conf'
class DomainConfigUploadFiles(object):
def __init__(self):
super(DomainConfigUploadFiles, self).__init__()
self.load_backends()
def load_backends(self):
"""Load the backends needed for uploading domain configs.
We only need the resource and domain_config managers, but there are
some dependencies which mean we have to load the assignment and
identity managers as well.
The order of loading the backends is important, since the resource
manager depends on the assignment manager, which in turn depends on
the identity manager.
"""
identity.Manager()
assignment.Manager()
self.resource_manager = resource.Manager()
self.domain_config_manager = resource.DomainConfigManager()
def valid_options(self):
"""Validate the options, returning True if they are indeed valid.
It would be nice to use the argparse automated checking for this
validation, but the only way I can see doing that is to make the
default (i.e. if no optional parameters are specified) to upload
all configuration files - and that sounds too dangerous as a
default. So we use it in a slightly unconventional way, where all
parameters are optional, but you must specify at least one.
"""
if (CONF.command.all is False and
CONF.command.domain_name is None):
print(_('At least one option must be provided, use either '
'--all or --domain-name'))
raise ValueError
if (CONF.command.all is True and
CONF.command.domain_name is not None):
print(_('The --all option cannot be used with '
'the --domain-name option'))
raise ValueError
def upload_config_to_database(self, file_name, domain_name):
"""Upload a single config file to the database.
:param file_name: the file containing the config options
:param domain_name: the domain name
:raises: ValueError: the domain does not exist or already has domain
specific configurations defined
:raises: Exceptions from oslo config: there is an issue with options
defined in the config file or its
format
The caller of this method should catch the errors raised and handle
them appropriately, so that the best user experience is provided both
when a user has asked for a specific config file to be uploaded and
when all config files in a directory are being uploaded.
"""
try:
domain_ref = (
self.resource_manager.driver.get_domain_by_name(domain_name))
except exception.DomainNotFound:
print(_('Invalid domain name: %(domain)s found in config file '
'name: %(file)s - ignoring this file.') % {
'domain': domain_name,
'file': file_name})
raise ValueError
if self.domain_config_manager.get_config_with_sensitive_info(
domain_ref['id']):
print(_('Domain: %(domain)s already has a configuration '
'defined - ignoring file: %(file)s.') % {
'domain': domain_name,
'file': file_name})
raise ValueError
sections = {}
try:
parser = cfg.ConfigParser(file_name, sections)
parser.parse()
except Exception:
# We explicitly don't try and differentiate the error cases, in
# order to keep the code in this tool more robust as oslo.config
# changes.
print(_('Error parsing configuration file for domain: %(domain)s, '
'file: %(file)s.') % {
'domain': domain_name,
'file': file_name})
raise
for group in sections:
for option in sections[group]:
sections[group][option] = sections[group][option][0]
self.domain_config_manager.create_config(domain_ref['id'], sections)
def upload_configs_to_database(self, file_name, domain_name):
"""Upload configs from file and load into database.
This method will be called repeatedly for all the config files in the
config directory. To provide a better UX, we differentiate the error
handling in this case (versus when the user has asked for a single
config file to be uploaded).
"""
try:
self.upload_config_to_database(file_name, domain_name)
except ValueError:
# We've already given all the info we can in a message, so carry
# on to the next one
pass
except Exception:
# Some other error occurred relating to this specific config file
# or domain. Since we are trying to upload all the config files,
# we'll continue and hide this exception. However, we tell the
# user how to get more info about this error by re-running with
# just the domain at fault. When we run in single-domain mode we
# will NOT hide the exception.
print(_('To get more detailed information on this error, re-run '
'this command for the specific domain, i.e.: '
'keystone-manage domain_config_upload --domain-name %s') %
domain_name)
pass
def read_domain_configs_from_files(self):
"""Read configs from file(s) and load into database.
The command line parameters have already been parsed and the CONF
command option will have been set. It is either set to the name of an
explicit domain, or it's None to indicate that we want all domain
config files.
"""
domain_name = CONF.command.domain_name
conf_dir = CONF.identity.domain_config_dir
if not os.path.exists(conf_dir):
print(_('Unable to locate domain config directory: %s') % conf_dir)
raise ValueError
if domain_name:
# Request is to upload the configs for just one domain
fname = DOMAIN_CONF_FHEAD + domain_name + DOMAIN_CONF_FTAIL
self.upload_config_to_database(
os.path.join(conf_dir, fname), domain_name)
return
# Request is to transfer all config files, so let's read all the
# files in the config directory, and transfer those that match the
# filename pattern of 'keystone.<domain_name>.conf'
for r, d, f in os.walk(conf_dir):
for fname in f:
if (fname.startswith(DOMAIN_CONF_FHEAD) and
fname.endswith(DOMAIN_CONF_FTAIL)):
if fname.count('.') >= 2:
self.upload_configs_to_database(
os.path.join(r, fname),
fname[len(DOMAIN_CONF_FHEAD):
-len(DOMAIN_CONF_FTAIL)])
else:
LOG.warn(_LW('Ignoring file (%s) while scanning '
'domain config directory'), fname)
def run(self):
# First off, let's just check we can talk to the domain database
try:
self.resource_manager.driver.list_domains(driver_hints.Hints())
except Exception:
# It is likely that there is some SQL or other backend error
# related to set up
print(_('Unable to access the keystone database, please check it '
'is configured correctly.'))
raise
try:
self.valid_options()
self.read_domain_configs_from_files()
except ValueError:
# We will already have printed out a nice message, so indicate
# to caller the non-success error code to be used.
return 1
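# Editor's sketch, not keystone code: the filename convention handled by
# read_domain_configs_from_files() above -- 'keystone.<domain_name>.conf'
# maps to '<domain_name>', with dots inside the domain name preserved.
def _demo_domain_from_filename(fname):
    assert fname.startswith(DOMAIN_CONF_FHEAD)
    assert fname.endswith(DOMAIN_CONF_FTAIL)
    return fname[len(DOMAIN_CONF_FHEAD):-len(DOMAIN_CONF_FTAIL)]
# _demo_domain_from_filename('keystone.my.domain.conf') == 'my.domain'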
class DomainConfigUpload(BaseApp):
"""Upload the domain specific configuration files to the database."""
name = 'domain_config_upload'
@classmethod
def add_argument_parser(cls, subparsers):
parser = super(DomainConfigUpload, cls).add_argument_parser(subparsers)
parser.add_argument('--all', default=False, action='store_true',
help='Upload contents of all domain specific '
'configuration files. Either use this option '
'or use the --domain-name option to choose a '
'specific domain.')
parser.add_argument('--domain-name', default=None,
help='Upload contents of the specific '
'configuration file for the given domain. '
'Either use this option or use the --all '
'option to upload contents for all domains.')
return parser
@staticmethod
def main():
dcu = DomainConfigUploadFiles()
status = dcu.run()
if status is not None:
exit(status)
class SamlIdentityProviderMetadata(BaseApp):
"""Generate Identity Provider metadata."""
name = 'saml_idp_metadata'
@staticmethod
def main():
# NOTE(marek-denis): Since federation is currently an extension import
# corresponding modules only when they are really going to be used.
from keystone.contrib.federation import idp
metadata = idp.MetadataGenerator().generate_metadata()
print(metadata.to_string())
CMDS = [
DbSync,
DbVersion,
DomainConfigUpload,
FernetRotate,
FernetSetup,
MappingPurge,
PKISetup,
SamlIdentityProviderMetadata,
SSLSetup,
TokenFlush,
]
def add_command_parsers(subparsers):
for cmd in CMDS:
cmd.add_argument_parser(subparsers)
command_opt = cfg.SubCommandOpt('command',
title='Commands',
help='Available commands',
handler=add_command_parsers)
def main(argv=None, config_files=None):
CONF.register_cli_opt(command_opt)
config.configure()
sql.initialize()
config.set_default_for_default_log_levels()
CONF(args=argv[1:],
project='keystone',
version=pbr.version.VersionInfo('keystone').version_string(),
usage='%(prog)s [' + '|'.join([cmd.name for cmd in CMDS]) + ']',
default_config_files=config_files)
config.setup_logging()
CONF.command.cmd_class.main()
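# Editor's note: after CONF(...) parses argv, CONF.command.cmd_class resolves
# to the chosen command class (presumably bound via parser.set_defaults() in
# BaseApp, defined earlier in this file), so the line above dispatches to the
# selected command's main().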
| apache-2.0 |
phalt/django | tests/forms_tests/tests/tests.py | 6 | 16659 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import datetime
from django.core.files.uploadedfile import SimpleUploadedFile
from django.db import models
from django.forms import (
CharField, FileField, Form, ModelChoiceField, ModelForm,
)
from django.forms.models import ModelFormMetaclass
from django.test import SimpleTestCase, TestCase
from django.utils import six
from ..models import (
BoundaryModel, ChoiceFieldModel, ChoiceModel, ChoiceOptionModel, Defaults,
FileModel, Group, OptionalMultiChoiceModel,
)
class ChoiceFieldForm(ModelForm):
class Meta:
model = ChoiceFieldModel
fields = '__all__'
class OptionalMultiChoiceModelForm(ModelForm):
class Meta:
model = OptionalMultiChoiceModel
fields = '__all__'
class ChoiceFieldExclusionForm(ModelForm):
multi_choice = CharField(max_length=50)
class Meta:
exclude = ['multi_choice']
model = ChoiceFieldModel
class EmptyCharLabelChoiceForm(ModelForm):
class Meta:
model = ChoiceModel
fields = ['name', 'choice']
class EmptyIntegerLabelChoiceForm(ModelForm):
class Meta:
model = ChoiceModel
fields = ['name', 'choice_integer']
class EmptyCharLabelNoneChoiceForm(ModelForm):
class Meta:
model = ChoiceModel
fields = ['name', 'choice_string_w_none']
class FileForm(Form):
file1 = FileField()
class TestModelChoiceField(TestCase):
def test_choices_not_fetched_when_not_rendering(self):
"""
Generating choices for ModelChoiceField should require 1 query (#12510).
"""
self.groups = [Group.objects.create(name=name) for name in 'abc']
# only one query is required to pull the model from DB
with self.assertNumQueries(1):
field = ModelChoiceField(Group.objects.order_by('-name'))
self.assertEqual('a', field.clean(self.groups[0].pk).name)
def test_queryset_manager(self):
f = ModelChoiceField(ChoiceOptionModel.objects)
choice = ChoiceOptionModel.objects.create(name="choice 1")
self.assertEqual(list(f.choices), [('', '---------'), (choice.pk, str(choice))])
class TestTicket14567(TestCase):
"""
The return values of ModelMultipleChoiceFields are QuerySets
"""
def test_empty_queryset_return(self):
"If a model's ManyToManyField has blank=True and is saved with no data, a queryset is returned."
option = ChoiceOptionModel.objects.create(name='default')
form = OptionalMultiChoiceModelForm({'multi_choice_optional': '', 'multi_choice': [option.pk]})
self.assertTrue(form.is_valid())
# The empty value is a QuerySet
self.assertIsInstance(form.cleaned_data['multi_choice_optional'], models.query.QuerySet)
# While we're at it, test whether a QuerySet is returned if there *is* a value.
self.assertIsInstance(form.cleaned_data['multi_choice'], models.query.QuerySet)
class ModelFormCallableModelDefault(TestCase):
def test_no_empty_option(self):
"If a model's ForeignKey has blank=False and a default, no empty option is created (Refs #10792)."
option = ChoiceOptionModel.objects.create(name='default')
choices = list(ChoiceFieldForm().fields['choice'].choices)
self.assertEqual(len(choices), 1)
self.assertEqual(choices[0], (option.pk, six.text_type(option)))
def test_callable_initial_value(self):
"The initial value for a callable default returning a queryset is the pk (refs #13769)"
ChoiceOptionModel.objects.create(id=1, name='default')
ChoiceOptionModel.objects.create(id=2, name='option 2')
ChoiceOptionModel.objects.create(id=3, name='option 3')
self.assertHTMLEqual(
ChoiceFieldForm().as_p(),
"""<p><label for="id_choice">Choice:</label> <select name="choice" id="id_choice" required>
<option value="1" selected>ChoiceOption 1</option>
<option value="2">ChoiceOption 2</option>
<option value="3">ChoiceOption 3</option>
</select><input type="hidden" name="initial-choice" value="1" id="initial-id_choice" /></p>
<p><label for="id_choice_int">Choice int:</label> <select name="choice_int" id="id_choice_int" required>
<option value="1" selected>ChoiceOption 1</option>
<option value="2">ChoiceOption 2</option>
<option value="3">ChoiceOption 3</option>
</select><input type="hidden" name="initial-choice_int" value="1" id="initial-id_choice_int" /></p>
<p><label for="id_multi_choice">Multi choice:</label>
<select multiple="multiple" name="multi_choice" id="id_multi_choice" required>
<option value="1" selected>ChoiceOption 1</option>
<option value="2">ChoiceOption 2</option>
<option value="3">ChoiceOption 3</option>
</select><input type="hidden" name="initial-multi_choice" value="1" id="initial-id_multi_choice_0" /></p>
<p><label for="id_multi_choice_int">Multi choice int:</label>
<select multiple="multiple" name="multi_choice_int" id="id_multi_choice_int" required>
<option value="1" selected>ChoiceOption 1</option>
<option value="2">ChoiceOption 2</option>
<option value="3">ChoiceOption 3</option>
</select><input type="hidden" name="initial-multi_choice_int" value="1" id="initial-id_multi_choice_int_0" /></p>"""
)
def test_initial_instance_value(self):
"Initial instances for model fields may also be instances (refs #7287)"
ChoiceOptionModel.objects.create(id=1, name='default')
obj2 = ChoiceOptionModel.objects.create(id=2, name='option 2')
obj3 = ChoiceOptionModel.objects.create(id=3, name='option 3')
self.assertHTMLEqual(
ChoiceFieldForm(initial={
'choice': obj2,
'choice_int': obj2,
'multi_choice': [obj2, obj3],
'multi_choice_int': ChoiceOptionModel.objects.exclude(name="default"),
}).as_p(),
"""<p><label for="id_choice">Choice:</label> <select name="choice" id="id_choice" required>
<option value="1">ChoiceOption 1</option>
<option value="2" selected>ChoiceOption 2</option>
<option value="3">ChoiceOption 3</option>
</select><input type="hidden" name="initial-choice" value="2" id="initial-id_choice" /></p>
<p><label for="id_choice_int">Choice int:</label> <select name="choice_int" id="id_choice_int" required>
<option value="1">ChoiceOption 1</option>
<option value="2" selected>ChoiceOption 2</option>
<option value="3">ChoiceOption 3</option>
</select><input type="hidden" name="initial-choice_int" value="2" id="initial-id_choice_int" /></p>
<p><label for="id_multi_choice">Multi choice:</label>
<select multiple="multiple" name="multi_choice" id="id_multi_choice" required>
<option value="1">ChoiceOption 1</option>
<option value="2" selected>ChoiceOption 2</option>
<option value="3" selected>ChoiceOption 3</option>
</select><input type="hidden" name="initial-multi_choice" value="2" id="initial-id_multi_choice_0" />
<input type="hidden" name="initial-multi_choice" value="3" id="initial-id_multi_choice_1" /></p>
<p><label for="id_multi_choice_int">Multi choice int:</label>
<select multiple="multiple" name="multi_choice_int" id="id_multi_choice_int" required>
<option value="1">ChoiceOption 1</option>
<option value="2" selected>ChoiceOption 2</option>
<option value="3" selected>ChoiceOption 3</option>
</select><input type="hidden" name="initial-multi_choice_int" value="2" id="initial-id_multi_choice_int_0" />
<input type="hidden" name="initial-multi_choice_int" value="3" id="initial-id_multi_choice_int_1" /></p>"""
)
class FormsModelTestCase(TestCase):
def test_unicode_filename(self):
# FileModel with unicode filename and data #########################
file1 = SimpleUploadedFile('我隻氣墊船裝滿晒鱔.txt', 'मेरी मँडराने वाली नाव सर्पमीनों से भरी है'.encode('utf-8'))
f = FileForm(data={}, files={'file1': file1}, auto_id=False)
self.assertTrue(f.is_valid())
self.assertIn('file1', f.cleaned_data)
m = FileModel.objects.create(file=f.cleaned_data['file1'])
self.assertEqual(m.file.name, 'tests/\u6211\u96bb\u6c23\u588a\u8239\u88dd\u6eff\u6652\u9c54.txt')
m.delete()
def test_boundary_conditions(self):
# Boundary conditions on a PositiveIntegerField #########################
class BoundaryForm(ModelForm):
class Meta:
model = BoundaryModel
fields = '__all__'
f = BoundaryForm({'positive_integer': 100})
self.assertTrue(f.is_valid())
f = BoundaryForm({'positive_integer': 0})
self.assertTrue(f.is_valid())
f = BoundaryForm({'positive_integer': -100})
self.assertFalse(f.is_valid())
def test_formfield_initial(self):
# Formfield initial values ########
# If the model has default values for some fields, they are used as the formfield
# initial values.
class DefaultsForm(ModelForm):
class Meta:
model = Defaults
fields = '__all__'
self.assertEqual(DefaultsForm().fields['name'].initial, 'class default value')
self.assertEqual(DefaultsForm().fields['def_date'].initial, datetime.date(1980, 1, 1))
self.assertEqual(DefaultsForm().fields['value'].initial, 42)
r1 = DefaultsForm()['callable_default'].as_widget()
r2 = DefaultsForm()['callable_default'].as_widget()
self.assertNotEqual(r1, r2)
# In a ModelForm that is passed an instance, the initial values come from the
# instance's values, not the model's defaults.
foo_instance = Defaults(name='instance value', def_date=datetime.date(1969, 4, 4), value=12)
instance_form = DefaultsForm(instance=foo_instance)
self.assertEqual(instance_form.initial['name'], 'instance value')
self.assertEqual(instance_form.initial['def_date'], datetime.date(1969, 4, 4))
self.assertEqual(instance_form.initial['value'], 12)
from django.forms import CharField
class ExcludingForm(ModelForm):
name = CharField(max_length=255)
class Meta:
model = Defaults
exclude = ['name', 'callable_default']
f = ExcludingForm({'name': 'Hello', 'value': 99, 'def_date': datetime.date(1999, 3, 2)})
self.assertTrue(f.is_valid())
self.assertEqual(f.cleaned_data['name'], 'Hello')
obj = f.save()
self.assertEqual(obj.name, 'class default value')
self.assertEqual(obj.value, 99)
self.assertEqual(obj.def_date, datetime.date(1999, 3, 2))
class RelatedModelFormTests(SimpleTestCase):
def test_invalid_loading_order(self):
"""
Test for issue 10405
"""
class A(models.Model):
ref = models.ForeignKey("B", models.CASCADE)
class Meta:
model = A
fields = '__all__'
with self.assertRaises(ValueError):
ModelFormMetaclass(str('Form'), (ModelForm,), {'Meta': Meta})
class B(models.Model):
pass
def test_valid_loading_order(self):
"""
Test for issue 10405
"""
class C(models.Model):
ref = models.ForeignKey("D", models.CASCADE)
class D(models.Model):
pass
class Meta:
model = C
fields = '__all__'
self.assertTrue(issubclass(ModelFormMetaclass(str('Form'), (ModelForm,), {'Meta': Meta}), ModelForm))
class ManyToManyExclusionTestCase(TestCase):
def test_m2m_field_exclusion(self):
# Issue 12337. save_instance should honor the passed-in exclude keyword.
opt1 = ChoiceOptionModel.objects.create(id=1, name='default')
opt2 = ChoiceOptionModel.objects.create(id=2, name='option 2')
opt3 = ChoiceOptionModel.objects.create(id=3, name='option 3')
initial = {
'choice': opt1,
'choice_int': opt1,
}
data = {
'choice': opt2.pk,
'choice_int': opt2.pk,
'multi_choice': 'string data!',
'multi_choice_int': [opt1.pk],
}
instance = ChoiceFieldModel.objects.create(**initial)
instance.multi_choice.set([opt2, opt3])
instance.multi_choice_int.set([opt2, opt3])
form = ChoiceFieldExclusionForm(data=data, instance=instance)
self.assertTrue(form.is_valid())
self.assertEqual(form.cleaned_data['multi_choice'], data['multi_choice'])
form.save()
self.assertEqual(form.instance.choice.pk, data['choice'])
self.assertEqual(form.instance.choice_int.pk, data['choice_int'])
self.assertEqual(list(form.instance.multi_choice.all()), [opt2, opt3])
self.assertEqual([obj.pk for obj in form.instance.multi_choice_int.all()], data['multi_choice_int'])
class EmptyLabelTestCase(TestCase):
def test_empty_field_char(self):
f = EmptyCharLabelChoiceForm()
self.assertHTMLEqual(
f.as_p(),
"""<p><label for="id_name">Name:</label> <input id="id_name" maxlength="10" name="name" type="text" required /></p>
<p><label for="id_choice">Choice:</label> <select id="id_choice" name="choice">
<option value="" selected>No Preference</option>
<option value="f">Foo</option>
<option value="b">Bar</option>
</select></p>"""
)
def test_empty_field_char_none(self):
f = EmptyCharLabelNoneChoiceForm()
self.assertHTMLEqual(
f.as_p(),
"""<p><label for="id_name">Name:</label> <input id="id_name" maxlength="10" name="name" type="text" required /></p>
<p><label for="id_choice_string_w_none">Choice string w none:</label>
<select id="id_choice_string_w_none" name="choice_string_w_none">
<option value="" selected>No Preference</option>
<option value="f">Foo</option>
<option value="b">Bar</option>
</select></p>"""
)
def test_save_empty_label_forms(self):
# Saving a form with a blank choice results in the expected
# value being stored in the database.
tests = [
(EmptyCharLabelNoneChoiceForm, 'choice_string_w_none', None),
(EmptyIntegerLabelChoiceForm, 'choice_integer', None),
(EmptyCharLabelChoiceForm, 'choice', ''),
]
for form, key, expected in tests:
f = form({'name': 'some-key', key: ''})
self.assertTrue(f.is_valid())
m = f.save()
self.assertEqual(expected, getattr(m, key))
self.assertEqual('No Preference',
getattr(m, 'get_{}_display'.format(key))())
def test_empty_field_integer(self):
f = EmptyIntegerLabelChoiceForm()
self.assertHTMLEqual(
f.as_p(),
"""<p><label for="id_name">Name:</label> <input id="id_name" maxlength="10" name="name" type="text" required /></p>
<p><label for="id_choice_integer">Choice integer:</label>
<select id="id_choice_integer" name="choice_integer">
<option value="" selected>No Preference</option>
<option value="1">Foo</option>
<option value="2">Bar</option>
</select></p>"""
)
def test_get_display_value_on_none(self):
m = ChoiceModel.objects.create(name='test', choice='', choice_integer=None)
self.assertIsNone(m.choice_integer)
self.assertEqual('No Preference', m.get_choice_integer_display())
def test_html_rendering_of_prepopulated_models(self):
none_model = ChoiceModel(name='none-test', choice_integer=None)
f = EmptyIntegerLabelChoiceForm(instance=none_model)
self.assertHTMLEqual(
f.as_p(),
"""<p><label for="id_name">Name:</label>
<input id="id_name" maxlength="10" name="name" type="text" value="none-test" required /></p>
<p><label for="id_choice_integer">Choice integer:</label>
<select id="id_choice_integer" name="choice_integer">
<option value="" selected>No Preference</option>
<option value="1">Foo</option>
<option value="2">Bar</option>
</select></p>"""
)
foo_model = ChoiceModel(name='foo-test', choice_integer=1)
f = EmptyIntegerLabelChoiceForm(instance=foo_model)
self.assertHTMLEqual(
f.as_p(),
"""<p><label for="id_name">Name:</label>
<input id="id_name" maxlength="10" name="name" type="text" value="foo-test" required /></p>
<p><label for="id_choice_integer">Choice integer:</label>
<select id="id_choice_integer" name="choice_integer">
<option value="">No Preference</option>
<option value="1" selected>Foo</option>
<option value="2">Bar</option>
</select></p>"""
)
| bsd-3-clause |
femtotrader/rabbit4mt4 | receive/Python/receive_logs.py | 1 | 2274 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import traceback
import logging.config
import pika
import sys
import argparse
import datetime
def get_logging_level_from_name(name):
try:
    name = name.upper()
except AttributeError:
    # name is None (or not a string); treat it as an unknown level
    name = "CRITICAL"
level = logging.getLevelName(name)
if isinstance(level, int):
    return level
else:
    return logging.CRITICAL
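# Editor's note: logging.getLevelName() maps both ways in the stdlib -- a
# known level name yields its numeric value, while an unknown name yields the
# string 'Level <name>', hence the isinstance(level, int) guard above.
# Illustrative values:
#   get_logging_level_from_name("debug")    -> 10 (logging.DEBUG)
#   get_logging_level_from_name("warning")  -> 30 (logging.WARNING)
#   get_logging_level_from_name("nonsense") -> 50 (falls back to CRITICAL)
#   get_logging_level_from_name(None)       -> 50 (falls back to CRITICAL)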
def callback(ch, method, properties, body):
try:
routing_key = method.routing_key
t_routing_key = routing_key.split(".")
terminal_id = t_routing_key[0]
level = get_logging_level_from_name(t_routing_key[-1])
#logging.info("%s - %s" % (routing_key, body))
logging.log(level, "%s - %s" % (terminal_id, body))
except Exception:
logging.error(traceback.format_exc())
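# Editor's note: an illustrative routing key, taken from the --binding_keys
# help text below:
#
#   "mt4_demo01_123456.events.logs.main.warning".split(".")
#   -> ['mt4_demo01_123456', 'events', 'logs', 'main', 'warning']
#
# so terminal_id is the first component and the level name is the last.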
def main(args):
connection = pika.BlockingConnection(pika.ConnectionParameters(
host='localhost'))
channel = connection.channel()
exchange = 'topic_logs'
channel.exchange_declare(exchange=exchange,
type='topic')
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue
binding_keys = args.binding_keys.split(',')
for binding_key in binding_keys:
channel.queue_bind(exchange=exchange,
queue=queue_name,
routing_key=binding_key)
logging.info(' [*] Waiting for logs. To exit press CTRL+C')
channel.basic_consume(callback,
queue=queue_name,
no_ack=True)
channel.start_consuming()
if __name__ == '__main__':
logging.config.fileConfig("logging.conf")
logger = logging.getLogger("simpleExample")
parser = argparse.ArgumentParser()
parser.add_argument("--binding_keys", help="binding keys (use comma ',' to split several binding keys), default is '#' to receive any message, binding_key can be 'mt4_demo01_123456.events.logs.*.*' or 'mt4_demo01_123456.events.logs.main.debug', 'mt4_demo01_123456.events.logs.main.info', 'mt4_demo01_123456.events.logs.main.warning', 'mt4_demo01_123456.events.logs.main.error' or 'mt4_demo01_123456.events.logs.main.critical'", default="#")
args = parser.parse_args()
main(args) | gpl-2.0 |
pganssle/bdateutil | tests.py | 1 | 17387 | # bdateutil
# ---------
# Adds business day logic and improved data type flexibility to
# python-dateutil. 100% backwards compatible with python-dateutil,
# simply replace dateutil imports with bdateutil.
#
# Author: ryanss <[email protected]>
# Website: https://github.com/ryanss/bdateutil
# License: MIT (see LICENSE file)
import datetime as dt
import unittest
import holidays
from bdateutil import isbday
from bdateutil import relativedelta
from bdateutil import parse
from dateutil.parser import parserinfo  # explicit import; parserinfo is used below
from bdateutil.rrule import *
from bdateutil import date, datetime, time
from testdateutil import *
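# Editor's note: a minimal usage sketch of the behaviour exercised below,
# mirroring assertions from these tests (2014-01-03 is a Friday, so two
# business days later is Tuesday the 7th; 2014-01-04 is a Saturday):
#
#   date(2014, 1, 3) + relativedelta(bdays=2) == date(2014, 1, 7)  # True
#   isbday("2014-01-04")                                           # False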
class TestIsBday(unittest.TestCase):
def test_isbday(self):
self.assertFalse(isbday(date(2014, 1, 4)))
self.assertFalse(isbday("2014-01-04"))
self.assertTrue(isbday(date(2014, 1, 1)))
self.assertTrue(isbday("2014-01-01"))
self.assertFalse(isbday(date(2014, 1, 1), holidays=holidays.US()))
self.assertTrue(isbday(datetime(2014, 1, 1, 16, 30)))
self.assertTrue(isbday(datetime(2014, 1, 1, 17, 30)))
self.assertFalse(isbday(datetime(2014, 1, 1, 16, 30),
holidays=holidays.US()))
self.assertFalse(isbday(datetime(2014, 1, 1, 17, 30),
holidays=holidays.US()))
isbday.holidays = holidays.US()
self.assertFalse(isbday(date(2014, 1, 1)))
self.assertFalse(isbday(date(2014, 7, 4)))
self.assertTrue(isbday(date(2014, 7, 4), holidays=holidays.CA()))
class TestRelativeDelta(unittest.TestCase):
def test_init(self):
self.assertEqual(relativedelta(date(2014, 1, 7), date(2014, 1, 3)),
relativedelta(days=4, bdays=2))
self.assertEqual(relativedelta(date(2014, 1, 31), date(2014, 1, 1)),
relativedelta(days=30, bdays=22))
self.assertEqual(relativedelta(date(2014, 2, 1), date(2014, 1, 1)),
relativedelta(months=1, bdays=23))
self.assertEqual(relativedelta(date(2014, 2, 2), date(2014, 1, 1)),
relativedelta(months=1, days=1, bdays=23))
self.assertEqual(relativedelta(date(2014, 1, 1), date(2014, 2, 2)),
relativedelta(months=-1, days=-1, bdays=-23))
def test_init_time(self):
self.assertEqual(relativedelta(datetime(2015, 1, 5, 9, 15),
datetime(2015, 1, 2, 16, 45)),
relativedelta(days=2, hours=16, minutes=30,
bminutes=30))
self.assertEqual(relativedelta(datetime(2015, 1, 20, 21, 22),
datetime(2015, 1, 9, 3, 0)),
relativedelta(days=11, hours=18, minutes=22,
bdays=7, bhours=8, bminutes=0))
self.assertEqual(relativedelta(datetime(2015, 1, 20, 21, 22),
datetime(2015, 1, 9, 3, 0),
holidays=holidays.US()),
relativedelta(days=11, hours=18, minutes=22,
bdays=6, bhours=8, bminutes=0))
relativedelta.holidays = holidays.US()
self.assertEqual(relativedelta(datetime(2015, 1, 20, 21, 22),
datetime(2015, 1, 9, 3, 0)),
relativedelta(days=11, hours=18, minutes=22,
bdays=6, bhours=8, bminutes=0))
del relativedelta.holidays
self.assertEqual(relativedelta(time(3, 40), time(2, 37)),
relativedelta(hours=1, minutes=3))
def test_add(self):
rd1 = relativedelta(years=+1, months=+2, bdays=+3, days=+4,
bhours=+5, bminutes=+6, bseconds=+7,
hours=+8, minutes=+9, seconds=+10)
rd2 = relativedelta(years=+10, months=-9, bdays=+8, days=-7,
bhours=+6, bminutes=-5, bseconds=+4,
hours=-3, minutes=+2, seconds=-1)
rd3 = relativedelta(years=+11, months=-7, bdays=+11, days=-3,
bhours=+11, bminutes=+1, bseconds=+11,
hours=+5, minutes=+11, seconds=+9)
self.assertEqual(rd1 + rd2, rd3)
self.assertEqual(relativedelta(bdays=3) + date(2014, 1, 3),
date(2014, 1, 8))
rd4 = relativedelta(years=+1, months=+2, days=+1)
rd5 = relativedelta(years=+12, months=-5, bdays=+11, days=-2,
bhours=+11, bminutes=+1, bseconds=+11,
hours=+5, minutes=+11, seconds=+9)
self.assertEqual(rd3 + rd4, rd5)
self.assertEqual("2014-01-01" + relativedelta(weekday=FR),
datetime(2014, 1, 3))
self.assertEqual("2014-11-15" + relativedelta(bdays=1),
datetime(2014, 11, 18))
def test_bdays_zero(self):
self.assertEqual("2014-11-15" + relativedelta(bdays=0),
datetime(2014, 11, 17))
self.assertEqual("2014-11-17" + relativedelta(bdays=0),
datetime(2014, 11, 17))
self.assertEqual("2014-11-15" - relativedelta(bdays=0),
datetime(2014, 11, 14))
self.assertEqual("2014-11-14" - relativedelta(bdays=0),
datetime(2014, 11, 14))
def test_radd(self):
self.assertEqual(date(2014, 1, 3) + relativedelta(bdays=2),
date(2014, 1, 7))
self.assertEqual(date(2014, 1, 7) + relativedelta(bdays=-2),
date(2014, 1, 3))
self.assertEqual(date(2014, 2, 3) + relativedelta(bdays=-19),
date(2014, 1, 7))
self.assertEqual(date(2014, 1, 3) + relativedelta(bdays=1.5),
datetime(2014, 1, 6, 13, 0))
def test_radd_time(self):
self.assertEqual("2015-01-02 16:45" + relativedelta(bminutes=+30),
datetime(2015, 1, 5, 9, 15))
self.assertEqual(date(2015, 1, 2) + relativedelta(bminutes=+30),
datetime(2015, 1, 2, 9, 30))
self.assertEqual(date(2014, 1, 3) + relativedelta(bdays=1, bhours=4),
datetime(2014, 1, 6, 13, 0))
relativedelta.btstart = time(7, 30)
self.assertEqual("2015-01-02 16:45" + relativedelta(bminutes=+30),
datetime(2015, 1, 5, 7, 45))
self.assertEqual("2015-01-02 16:45" + relativedelta(bhours=+0.5),
datetime(2015, 1, 5, 7, 45))
del relativedelta.btstart
def test_sub(self):
rd1 = relativedelta(years=+1, months=+2, bdays=+3, days=+4,
bhours=+5, bminutes=+6, bseconds=+7,
hours=+8, minutes=+9, seconds=+10)
rd2 = relativedelta(years=+10, months=-9, bdays=+8, days=-7,
bhours=+6, bminutes=-5, bseconds=+4,
hours=-3, minutes=+2, seconds=-1)
rd3 = relativedelta(years=-9, months=+11, bdays=-5, days=+11,
bhours=-1, bminutes=+11, bseconds=+3,
hours=+11, minutes=+7, seconds=+11)
self.assertEqual(rd1 - rd2, rd3)
def test_rsub(self):
self.assertEqual(date(2014, 1, 7) - relativedelta(bdays=2),
date(2014, 1, 3))
self.assertEqual(date(2014, 1, 3) - relativedelta(bdays=-2),
date(2014, 1, 7))
self.assertEqual(date(2014, 2, 3) - relativedelta(bdays=19),
date(2014, 1, 7))
self.assertEqual("2014-11-15" - relativedelta(bdays=1),
datetime(2014, 11, 14))
self.assertEqual(date.today() - relativedelta(bdays=+45),
date.today() + relativedelta(bdays=-45))
def test_neg(self):
self.assertEqual(-relativedelta(years=+1, bdays=-3),
relativedelta(years=-1, bdays=+3))
def test_bool(self):
self.assertTrue(relativedelta(bdays=1))
self.assertTrue(relativedelta(days=1))
self.assertFalse(relativedelta())
def test_mul(self):
self.assertEqual(relativedelta(years=+1, bdays=-3) * 3,
relativedelta(years=+3, bdays=-9))
self.assertEqual(relativedelta(years=+1, bdays=-3) * -3,
relativedelta(years=-3, bdays=+9))
self.assertEqual(relativedelta(years=+1, bdays=-3) * 0,
relativedelta(years=0, bdays=0))
def test_rmul(self):
self.assertEqual(3 * relativedelta(years=+1, bdays=-3),
relativedelta(years=+3, bdays=-9))
self.assertEqual(-3 * relativedelta(years=+1, bdays=-3),
relativedelta(years=-3, bdays=+9))
self.assertEqual(0 * relativedelta(years=+1, bdays=-3),
relativedelta(years=0, bdays=0))
def test_eq(self):
r1 = relativedelta(years=1, months=2, days=3, bdays=1,
hours=4, minutes=5, seconds=6, microseconds=7)
r2 = relativedelta(years=1, months=2, days=3, bdays=1,
hours=4, minutes=5, seconds=6, microseconds=7)
self.assertEqual(r1, r2)
self.assertTrue(r1 == r2)
r2.days = 4
self.assertNotEqual(r1, r2)
self.assertFalse(r1 == r2)
r2.days = 3
r2.bdays = 0
self.assertNotEqual(r1, r2)
self.assertFalse(r1 == r2)
self.assertEqual(relativedelta(), relativedelta())
self.assertTrue(relativedelta() == relativedelta())
self.assertNotEqual(relativedelta(days=1), relativedelta(bdays=1))
self.assertFalse(relativedelta() == relativedelta(months=1))
def test_ne(self):
r1 = relativedelta(years=1, months=2, days=3, bdays=1,
hours=4, minutes=5, seconds=6, microseconds=7)
r2 = relativedelta(years=1, months=2, days=3, bdays=1,
hours=4, minutes=5, seconds=6, microseconds=7)
self.assertFalse(r1 != r2)
r2.days = 4
self.assertTrue(r1 != r2)
r2.days = 3
r2.bdays = 0
self.assertTrue(r1 != r2)
self.assertFalse(relativedelta() != relativedelta())
self.assertTrue(relativedelta() != relativedelta(months=1))
def test_div(self):
self.assertEqual(relativedelta(years=+3, bdays=-9) / 3,
relativedelta(years=+1, bdays=-3))
self.assertEqual(relativedelta(years=+3, bdays=-9) / -3,
relativedelta(years=-1, bdays=+3))
self.assertRaises(ZeroDivisionError,
lambda: relativedelta(bdays=-3) / 0)
def test_truediv(self):
self.assertEqual(relativedelta(years=+4, bdays=-10) / 3.0,
relativedelta(years=+1, bdays=-3))
def test_repr(self):
rd1 = relativedelta(years=+1, months=+2, days=-3)
self.assertEqual(str(rd1),
"relativedelta(years=+1, months=+2, days=-3)")
rd2 = relativedelta(years=+1, months=+2, bdays=-7)
self.assertEqual(str(rd2),
"relativedelta(years=+1, months=+2, bdays=-7)")
rd3 = relativedelta(years=-1, months=-2, bdays=+7)
self.assertEqual(str(rd3),
"relativedelta(years=-1, months=-2, bdays=+7)")
rd4 = relativedelta(year=2014, month=1, day=2)
self.assertEqual(str(rd4),
"relativedelta(year=2014, month=1, day=2)")
class TestParser(unittest.TestCase):
def test_timestamp(self):
self.assertEqual(parse(1388577600).date(), date(2014, 1, 1))
def test_parserinfo(self):
self.assertEqual(parse("1/2/2014"), datetime(2014, 1, 2))
self.assertEqual(parse(b"1/2/2014"), datetime(2014, 1, 2))
self.assertEqual(parse("1/2/2014", dayfirst=True),
datetime(2014, 2, 1))
self.assertEqual(parse("1/2/2014", parserinfo(dayfirst=True)),
datetime(2014, 2, 1))
def test_exceptions(self):
self.assertRaises(ValueError, lambda: parse("abc"))
self.assertRaises(TypeError, lambda: parse(['a', 'b', 'c']))
class TestRRule(unittest.TestCase):
def test_bdaily(self):
start = parse("2014-01-01")
self.assertEqual(list(rrule(BDAILY, count=4, dtstart=start)),
[datetime(2014, 1, 1, 0, 0),
datetime(2014, 1, 2, 0, 0),
datetime(2014, 1, 3, 0, 0),
datetime(2014, 1, 6, 0, 0)])
until = parse("2014-01-09")
self.assertEqual(list(rrule(BDAILY, dtstart=start, until=until)),
[datetime(2014, 1, 1, 0, 0),
datetime(2014, 1, 2, 0, 0),
datetime(2014, 1, 3, 0, 0),
datetime(2014, 1, 6, 0, 0),
datetime(2014, 1, 7, 0, 0),
datetime(2014, 1, 8, 0, 0),
datetime(2014, 1, 9, 0, 0)])
def test_parse(self):
self.assertEqual(list(rrule(BDAILY, count=4, dtstart="2014-01-01")),
[datetime(2014, 1, 1, 0, 0),
datetime(2014, 1, 2, 0, 0),
datetime(2014, 1, 3, 0, 0),
datetime(2014, 1, 6, 0, 0)])
self.assertEqual(list(rrule(BDAILY, count=4, dtstart="2014-01-01",
until="01/04/2014")),
[datetime(2014, 1, 1, 0, 0),
datetime(2014, 1, 2, 0, 0),
datetime(2014, 1, 3, 0, 0)])
def test_holidays(self):
self.assertEqual(list(rrule(BDAILY, count=4, dtstart="2015-07-01")),
[datetime(2015, 7, 1, 0, 0),
datetime(2015, 7, 2, 0, 0),
datetime(2015, 7, 3, 0, 0),
datetime(2015, 7, 6, 0, 0)])
rrule.holidays = holidays.US()
self.assertEqual(list(rrule(BDAILY, count=4, dtstart="2015-07-01")),
[datetime(2015, 7, 1, 0, 0),
datetime(2015, 7, 2, 0, 0),
datetime(2015, 7, 6, 0, 0),
datetime(2015, 7, 7, 0, 0)])
self.assertEqual(list(rrule(BDAILY, count=4, dtstart="2015-07-01",
holidays=holidays.CA())),
[datetime(2015, 7, 2, 0, 0),
datetime(2015, 7, 3, 0, 0),
datetime(2015, 7, 6, 0, 0),
datetime(2015, 7, 7, 0, 0)])
del rrule.holidays
class TestDateTime(unittest.TestCase):
def test_date(self):
self.assertEqual(date("2015-03-25"), dt.date(2015, 3, 25))
self.assertEqual(date("1/2/2014"), dt.date(2014, 1, 2))
self.assertEqual(date(1388577600), dt.date(2014, 1, 1))
self.assertRaises(ValueError, lambda: date("abc"))
self.assertRaises(TypeError, lambda: date(['a', 'b', 'c']))
self.assertEqual(date(2015, 2, 99), date(2015, 2, 28))
self.assertEqual(date.today(), dt.date.today())
self.assertEqual(date.today(days=+1),
dt.date.today() + relativedelta(days=+1))
self.assertEqual(date.today(bdays=+200, holidays=holidays.US()),
dt.date.today()
+ relativedelta(bdays=+200, holidays=holidays.US()))
relativedelta.holidays = holidays.US()
self.assertEqual(date.today(bdays=+200),
dt.date.today() + relativedelta(bdays=+200))
del relativedelta.holidays
def test_datetime(self):
self.assertEqual(datetime("2015-03-25 12:34"),
dt.datetime(2015, 3, 25, 12, 34))
self.assertEqual(datetime(2015, 3, 99, 23, 45),
datetime(2015, 3, 31, 23, 45))
self.assertEqual(datetime.now().date(), dt.datetime.now().date())
self.assertEqual(datetime.now(bdays=-45).date(),
(dt.datetime.now() - relativedelta(bdays=45)).date())
def test_time(self):
self.assertEqual(time("12:45:54"), time(12, 45, 54))
self.assertEqual(time("2:30 PM"), time(14, 30))
self.assertEqual(relativedelta(time("3:40"), time(2, 30)),
relativedelta(hours=1, minutes=10))
self.assertEqual(relativedelta("3:40", time(2, 30)),
relativedelta(hours=1, minutes=10))
self.assertEqual(relativedelta(time(2, 30), time(3, 40)),
relativedelta(hours=-1, minutes=-10))
def test_eomday(self):
self.assertEqual(date("2015-02-15").eomday, dt.date(2015, 2, 28))
self.assertEqual(datetime("2015-03-01 12:34").eomday,
dt.datetime(2015, 3, 31, 12, 34))
if __name__ == "__main__":
unittest.main()
| mit |
ernw/dizzy | dizzy/interaction_state.py | 1 | 2155 | # interaction_state.py
#
# Copyright 2018 Daniel Mende <[email protected]>
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of the nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from dizzy.state import State
class InteractionState(State):
def __init__(self, obj):
State.__init__(self, obj)
def next(self):
# Mutate the dizz object before the dizz functions are called
self.bak = self.iter.mutate()
# Call the dizz functions and store the resulting state
self.cur = self.iter.call_functions()
def reset(self):
self.iter.reset()
self.cur = self.bak
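# Editor's note on the assumed call pattern (names beyond those used above
# are hypothetical): next() snapshots the freshly mutated value in self.bak
# and exposes the post-function value in self.cur; reset() rewinds the
# iterator and restores the pre-function snapshot:
#
#   state = InteractionState(dizz_obj)
#   state.next()    # state.cur is the value after the dizz functions ran
#   state.reset()   # state.cur is back to the mutated, pre-function value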
| bsd-3-clause |
xuxiao19910803/edx | lms/djangoapps/branding/__init__.py | 45 | 2858 | from xmodule.modulestore.django import modulestore
from xmodule.course_module import CourseDescriptor
from django.conf import settings
from opaque_keys.edx.locations import SlashSeparatedCourseKey
from microsite_configuration import microsite
def get_visible_courses():
"""
Return the set of CourseDescriptors that should be visible in this branded instance
"""
filtered_by_org = microsite.get_value('course_org_filter')
_courses = modulestore().get_courses(org=filtered_by_org)
courses = [c for c in _courses
if isinstance(c, CourseDescriptor)]
courses = sorted(courses, key=lambda course: course.number)
subdomain = microsite.get_value('subdomain', 'default')
# See if we have filtered course listings in this domain
filtered_visible_ids = None
# this is legacy format which is outside of the microsite feature -- also handle dev case, which should not filter
if hasattr(settings, 'COURSE_LISTINGS') and subdomain in settings.COURSE_LISTINGS and not settings.DEBUG:
filtered_visible_ids = frozenset([SlashSeparatedCourseKey.from_deprecated_string(c) for c in settings.COURSE_LISTINGS[subdomain]])
if filtered_by_org:
return [course for course in courses if course.location.org == filtered_by_org]
if filtered_visible_ids:
return [course for course in courses if course.id in filtered_visible_ids]
else:
# Let's filter out any courses in an "org" that has been declared to be
# in a Microsite
org_filter_out_set = microsite.get_all_orgs()
return [course for course in courses if course.location.org not in org_filter_out_set]
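# Editor's note: the precedence implemented above, top to bottom --
# 1. a microsite 'course_org_filter' keeps only that org's courses;
# 2. otherwise a legacy settings.COURSE_LISTINGS entry for the subdomain
#    keeps only the whitelisted course ids (skipped when settings.DEBUG);
# 3. otherwise courses belonging to any declared microsite org are dropped.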
def get_university_for_request():
"""
Return the university name specified for the domain, or None
if no university was specified
"""
return microsite.get_value('university')
def get_logo_url():
"""
Return the url for the branded logo image to be used
"""
# if the MicrositeConfiguration has a value for the logo_image_url
# let's use that
image_url = microsite.get_value('logo_image_url')
if image_url:
return '{static_url}{image_url}'.format(
static_url=settings.STATIC_URL,
image_url=image_url
)
# otherwise, use the legacy means to configure this
university = microsite.get_value('university')
if university is None and settings.FEATURES.get('IS_EDX_DOMAIN', False):
return '{static_url}images/edx-theme/edx-logo-77x36.png'.format(
static_url=settings.STATIC_URL
)
elif university:
return '{static_url}images/{uni}-on-edx-logo.png'.format(
static_url=settings.STATIC_URL, uni=university
)
else:
return '{static_url}images/default-theme/logo.png'.format(
static_url=settings.STATIC_URL
)
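# Editor's note: lookup order above -- a microsite 'logo_image_url' wins;
# failing that, the edX default logo when IS_EDX_DOMAIN is set and no
# university is configured; then a per-university logo; finally the
# default-theme logo.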
| agpl-3.0 |
vitaly4uk/django | tests/aggregation_regress/tests.py | 66 | 53789 | from __future__ import unicode_literals
import datetime
import pickle
from decimal import Decimal
from operator import attrgetter
from django.contrib.contenttypes.models import ContentType
from django.core.exceptions import FieldError
from django.db import connection
from django.db.models import (
F, Q, Avg, Count, Max, StdDev, Sum, Value, Variance,
)
from django.test import TestCase, skipUnlessAnyDBFeature, skipUnlessDBFeature
from django.test.utils import Approximate
from django.utils import six
from .models import (
Alfa, Author, Book, Bravo, Charlie, Clues, Entries, HardbackBook, ItemTag,
Publisher, SelfRefFK, Store, WithManualPK,
)
class AggregationTests(TestCase):
@classmethod
def setUpTestData(cls):
cls.a1 = Author.objects.create(name='Adrian Holovaty', age=34)
cls.a2 = Author.objects.create(name='Jacob Kaplan-Moss', age=35)
cls.a3 = Author.objects.create(name='Brad Dayley', age=45)
cls.a4 = Author.objects.create(name='James Bennett', age=29)
cls.a5 = Author.objects.create(name='Jeffrey Forcier', age=37)
cls.a6 = Author.objects.create(name='Paul Bissex', age=29)
cls.a7 = Author.objects.create(name='Wesley J. Chun', age=25)
cls.a8 = Author.objects.create(name='Peter Norvig', age=57)
cls.a9 = Author.objects.create(name='Stuart Russell', age=46)
cls.a1.friends.add(cls.a2, cls.a4)
cls.a2.friends.add(cls.a1, cls.a7)
cls.a4.friends.add(cls.a1)
cls.a5.friends.add(cls.a6, cls.a7)
cls.a6.friends.add(cls.a5, cls.a7)
cls.a7.friends.add(cls.a2, cls.a5, cls.a6)
cls.a8.friends.add(cls.a9)
cls.a9.friends.add(cls.a8)
cls.p1 = Publisher.objects.create(name='Apress', num_awards=3)
cls.p2 = Publisher.objects.create(name='Sams', num_awards=1)
cls.p3 = Publisher.objects.create(name='Prentice Hall', num_awards=7)
cls.p4 = Publisher.objects.create(name='Morgan Kaufmann', num_awards=9)
cls.p5 = Publisher.objects.create(name="Jonno's House of Books", num_awards=0)
cls.b1 = Book.objects.create(
isbn='159059725', name='The Definitive Guide to Django: Web Development Done Right',
pages=447, rating=4.5, price=Decimal('30.00'), contact=cls.a1, publisher=cls.p1,
pubdate=datetime.date(2007, 12, 6)
)
cls.b2 = Book.objects.create(
isbn='067232959', name='Sams Teach Yourself Django in 24 Hours',
pages=528, rating=3.0, price=Decimal('23.09'), contact=cls.a3, publisher=cls.p2,
pubdate=datetime.date(2008, 3, 3)
)
cls.b3 = Book.objects.create(
isbn='159059996', name='Practical Django Projects',
pages=300, rating=4.0, price=Decimal('29.69'), contact=cls.a4, publisher=cls.p1,
pubdate=datetime.date(2008, 6, 23)
)
cls.b4 = Book.objects.create(
isbn='013235613', name='Python Web Development with Django',
pages=350, rating=4.0, price=Decimal('29.69'), contact=cls.a5, publisher=cls.p3,
pubdate=datetime.date(2008, 11, 3)
)
cls.b5 = HardbackBook.objects.create(
isbn='013790395', name='Artificial Intelligence: A Modern Approach',
pages=1132, rating=4.0, price=Decimal('82.80'), contact=cls.a8, publisher=cls.p3,
pubdate=datetime.date(1995, 1, 15), weight=4.5)
cls.b6 = HardbackBook.objects.create(
isbn='155860191', name='Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp',
pages=946, rating=5.0, price=Decimal('75.00'), contact=cls.a8, publisher=cls.p4,
pubdate=datetime.date(1991, 10, 15), weight=3.7)
cls.b1.authors.add(cls.a1, cls.a2)
cls.b2.authors.add(cls.a3)
cls.b3.authors.add(cls.a4)
cls.b4.authors.add(cls.a5, cls.a6, cls.a7)
cls.b5.authors.add(cls.a8, cls.a9)
cls.b6.authors.add(cls.a8)
s1 = Store.objects.create(
name='Amazon.com',
original_opening=datetime.datetime(1994, 4, 23, 9, 17, 42),
friday_night_closing=datetime.time(23, 59, 59)
)
s2 = Store.objects.create(
name='Books.com',
original_opening=datetime.datetime(2001, 3, 15, 11, 23, 37),
friday_night_closing=datetime.time(23, 59, 59)
)
s3 = Store.objects.create(
name="Mamma and Pappa's Books",
original_opening=datetime.datetime(1945, 4, 25, 16, 24, 14),
friday_night_closing=datetime.time(21, 30)
)
s1.books.add(cls.b1, cls.b2, cls.b3, cls.b4, cls.b5, cls.b6)
s2.books.add(cls.b1, cls.b3, cls.b5, cls.b6)
s3.books.add(cls.b3, cls.b4, cls.b6)
def assertObjectAttrs(self, obj, **kwargs):
for attr, value in six.iteritems(kwargs):
self.assertEqual(getattr(obj, attr), value)
def test_aggregates_in_where_clause(self):
"""
Regression test for #12822: DatabaseError: aggregates not allowed in
WHERE clause
Tests that the subselect works and returns results equivalent to a
query with the IDs listed.
Before the corresponding fix for this bug, this test passed in 1.1 and
failed in 1.2-beta (trunk).
"""
qs = Book.objects.values('contact').annotate(Max('id'))
qs = qs.order_by('contact').values_list('id__max', flat=True)
# don't do anything with the queryset (qs) before including it as a
# subquery
books = Book.objects.order_by('id')
qs1 = books.filter(id__in=qs)
qs2 = books.filter(id__in=list(qs))
self.assertEqual(list(qs1), list(qs2))
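# Editor's note: an illustrative (backend-dependent) shape of the query
# under test -- the aggregate lives in a subselect, not the outer WHERE:
#
#   SELECT ... FROM aggregation_regress_book
#   WHERE id IN (SELECT MAX(id) FROM aggregation_regress_book
#                GROUP BY contact_id)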
def test_aggregates_in_where_clause_pre_eval(self):
"""
Regression test for #12822: DatabaseError: aggregates not allowed in
WHERE clause
Same as the above test, but evaluates the queryset for the subquery
before it's used as a subquery.
Before the corresponding fix for this bug, this test failed in both
1.1 and 1.2-beta (trunk).
"""
qs = Book.objects.values('contact').annotate(Max('id'))
qs = qs.order_by('contact').values_list('id__max', flat=True)
# force the queryset (qs) for the subquery to be evaluated in its
# current state
list(qs)
books = Book.objects.order_by('id')
qs1 = books.filter(id__in=qs)
qs2 = books.filter(id__in=list(qs))
self.assertEqual(list(qs1), list(qs2))
@skipUnlessDBFeature('supports_subqueries_in_group_by')
def test_annotate_with_extra(self):
"""
Regression test for #11916: Extra params + aggregation creates
incorrect SQL.
"""
# Oracle doesn't support subqueries in group by clause
shortest_book_sql = """
SELECT name
FROM aggregation_regress_book b
WHERE b.publisher_id = aggregation_regress_publisher.id
ORDER BY b.pages
LIMIT 1
"""
# tests that this query does not raise a DatabaseError due to the full
# subselect being (erroneously) added to the GROUP BY parameters
qs = Publisher.objects.extra(select={
'name_of_shortest_book': shortest_book_sql,
}).annotate(total_books=Count('book'))
# force execution of the query
list(qs)
def test_aggregate(self):
# Ordering requests are ignored
self.assertEqual(
Author.objects.order_by("name").aggregate(Avg("age")),
{"age__avg": Approximate(37.444, places=1)}
)
# Implicit ordering is also ignored
self.assertEqual(
Book.objects.aggregate(Sum("pages")),
{"pages__sum": 3703},
)
# Baseline results
self.assertEqual(
Book.objects.aggregate(Sum('pages'), Avg('pages')),
{'pages__sum': 3703, 'pages__avg': Approximate(617.166, places=2)}
)
# Empty values query doesn't affect grouping or results
self.assertEqual(
Book.objects.values().aggregate(Sum('pages'), Avg('pages')),
{'pages__sum': 3703, 'pages__avg': Approximate(617.166, places=2)}
)
# Aggregate overrides extra selected column
self.assertEqual(
Book.objects.extra(select={'price_per_page': 'price / pages'}).aggregate(Sum('pages')),
{'pages__sum': 3703}
)
def test_annotation(self):
# Annotations get combined with extra select clauses
obj = Book.objects.annotate(mean_auth_age=Avg("authors__age")).extra(
select={"manufacture_cost": "price * .5"}).get(pk=self.b2.pk)
self.assertObjectAttrs(
obj,
contact_id=self.a3.id,
isbn='067232959',
mean_auth_age=45.0,
name='Sams Teach Yourself Django in 24 Hours',
pages=528,
price=Decimal("23.09"),
pubdate=datetime.date(2008, 3, 3),
publisher_id=self.p2.id,
rating=3.0
)
# Different DB backends return different types for the extra select computation
self.assertIn(obj.manufacture_cost, (11.545, Decimal('11.545')))
# Order of the annotate/extra in the query doesn't matter
obj = Book.objects.extra(select={'manufacture_cost': 'price * .5'}).annotate(
mean_auth_age=Avg('authors__age')).get(pk=self.b2.pk)
self.assertObjectAttrs(
obj,
contact_id=self.a3.id,
isbn='067232959',
mean_auth_age=45.0,
name='Sams Teach Yourself Django in 24 Hours',
pages=528,
price=Decimal("23.09"),
pubdate=datetime.date(2008, 3, 3),
publisher_id=self.p2.id,
rating=3.0
)
# Different DB backends return different types for the extra select computation
self.assertIn(obj.manufacture_cost, (11.545, Decimal('11.545')))
# Values queries can be combined with annotate and extra
obj = Book.objects.annotate(mean_auth_age=Avg('authors__age')).extra(
select={'manufacture_cost': 'price * .5'}).values().get(pk=self.b2.pk)
manufacture_cost = obj['manufacture_cost']
self.assertIn(manufacture_cost, (11.545, Decimal('11.545')))
del obj['manufacture_cost']
self.assertEqual(obj, {
'id': self.b2.id,
'contact_id': self.a3.id,
'isbn': '067232959',
'mean_auth_age': 45.0,
'name': 'Sams Teach Yourself Django in 24 Hours',
'pages': 528,
'price': Decimal('23.09'),
'pubdate': datetime.date(2008, 3, 3),
'publisher_id': self.p2.id,
'rating': 3.0,
})
# The order of the (empty) values, annotate and extra clauses doesn't
# matter
obj = Book.objects.values().annotate(mean_auth_age=Avg('authors__age')).extra(
select={'manufacture_cost': 'price * .5'}).get(pk=self.b2.pk)
manufacture_cost = obj['manufacture_cost']
self.assertIn(manufacture_cost, (11.545, Decimal('11.545')))
del obj['manufacture_cost']
self.assertEqual(obj, {
'id': self.b2.id,
'contact_id': self.a3.id,
'isbn': '067232959',
'mean_auth_age': 45.0,
'name': 'Sams Teach Yourself Django in 24 Hours',
'pages': 528,
'price': Decimal('23.09'),
'pubdate': datetime.date(2008, 3, 3),
'publisher_id': self.p2.id,
'rating': 3.0
})
# If the annotation precedes the values clause, it won't be included
# unless it is explicitly named
obj = Book.objects.annotate(mean_auth_age=Avg('authors__age')).extra(
select={'price_per_page': 'price / pages'}).values('name').get(pk=self.b1.pk)
self.assertEqual(obj, {
"name": 'The Definitive Guide to Django: Web Development Done Right',
})
obj = Book.objects.annotate(mean_auth_age=Avg('authors__age')).extra(
select={'price_per_page': 'price / pages'}).values('name', 'mean_auth_age').get(pk=self.b1.pk)
self.assertEqual(obj, {
'mean_auth_age': 34.5,
'name': 'The Definitive Guide to Django: Web Development Done Right',
})
# If an annotation isn't included in the values, it can still be used
# in a filter
qs = Book.objects.annotate(n_authors=Count('authors')).values('name').filter(n_authors__gt=2)
self.assertQuerysetEqual(
qs, [
{"name": 'Python Web Development with Django'}
],
lambda b: b,
)
# The annotations are added to values output if values() precedes
# annotate()
obj = Book.objects.values('name').annotate(mean_auth_age=Avg('authors__age')).extra(
select={'price_per_page': 'price / pages'}).get(pk=self.b1.pk)
self.assertEqual(obj, {
'mean_auth_age': 34.5,
'name': 'The Definitive Guide to Django: Web Development Done Right',
})
# Check that all of the objects are getting counted (allow_nulls) and
# that values respects the amount of objects
self.assertEqual(
len(Author.objects.annotate(Avg('friends__age')).values()),
9
)
# Check that consecutive calls to annotate accumulate in the query
qs = Book.objects.values('price').annotate(oldest=Max('authors__age')).order_by('oldest', 'price').annotate(Max('publisher__num_awards'))
self.assertQuerysetEqual(
qs, [
{'price': Decimal("30"), 'oldest': 35, 'publisher__num_awards__max': 3},
{'price': Decimal("29.69"), 'oldest': 37, 'publisher__num_awards__max': 7},
{'price': Decimal("23.09"), 'oldest': 45, 'publisher__num_awards__max': 1},
{'price': Decimal("75"), 'oldest': 57, 'publisher__num_awards__max': 9},
{'price': Decimal("82.8"), 'oldest': 57, 'publisher__num_awards__max': 7}
],
lambda b: b,
)
def test_aggregate_annotation(self):
# Aggregates can be composed over annotations.
# The return type is derived from the composed aggregate
vals = Book.objects.all().annotate(num_authors=Count('authors__id')).aggregate(Max('pages'), Max('price'), Sum('num_authors'), Avg('num_authors'))
self.assertEqual(vals, {
'num_authors__sum': 10,
'num_authors__avg': Approximate(1.666, places=2),
'pages__max': 1132,
'price__max': Decimal("82.80")
})
# Regression for #15624 - Missing SELECT columns when using values, annotate
# and aggregate in a single query
self.assertEqual(
Book.objects.annotate(c=Count('authors')).values('c').aggregate(Max('c')),
{'c__max': 3}
)
def test_decimal_aggregate_annotation_filter(self):
"""
Filtering on an aggregate annotation with Decimal values should work.
Requires special handling on SQLite (#18247).
"""
self.assertEqual(
len(Author.objects.annotate(sum=Sum('book_contact_set__price')).filter(sum__gt=Decimal(40))),
1
)
self.assertEqual(
len(Author.objects.annotate(sum=Sum('book_contact_set__price')).filter(sum__lte=Decimal(40))),
4
)
def test_field_error(self):
# Bad field requests in aggregates are caught and reported
self.assertRaises(
FieldError,
lambda: Book.objects.all().aggregate(num_authors=Count('foo'))
)
self.assertRaises(
FieldError,
lambda: Book.objects.all().annotate(num_authors=Count('foo'))
)
self.assertRaises(
FieldError,
lambda: Book.objects.all().annotate(num_authors=Count('authors__id')).aggregate(Max('foo'))
)
def test_more(self):
# Old-style count aggregations can be mixed with new-style
self.assertEqual(
Book.objects.annotate(num_authors=Count('authors')).count(),
6
)
        # Aggregates that are themselves neither ordinal nor computed
        # correctly inherit the annotation's internal type if the annotation
        # is ordinal or computed
vals = Book.objects.annotate(num_authors=Count('authors')).aggregate(Max('num_authors'))
self.assertEqual(
vals,
{'num_authors__max': 3}
)
vals = Publisher.objects.annotate(avg_price=Avg('book__price')).aggregate(Max('avg_price'))
self.assertEqual(
vals,
{'avg_price__max': 75.0}
)
        # Aliases are quoted to protect aliases that might be reserved names
vals = Book.objects.aggregate(number=Max('pages'), select=Max('pages'))
self.assertEqual(
vals,
{'number': 1132, 'select': 1132}
)
# Regression for #10064: select_related() plays nice with aggregates
obj = Book.objects.select_related('publisher').annotate(
num_authors=Count('authors')).values().get(isbn='013790395')
self.assertEqual(obj, {
'contact_id': self.a8.id,
'id': self.b5.id,
'isbn': '013790395',
'name': 'Artificial Intelligence: A Modern Approach',
'num_authors': 2,
'pages': 1132,
'price': Decimal("82.8"),
'pubdate': datetime.date(1995, 1, 15),
'publisher_id': self.p3.id,
'rating': 4.0,
})
# Regression for #10010: exclude on an aggregate field is correctly
# negated
self.assertEqual(
len(Book.objects.annotate(num_authors=Count('authors'))),
6
)
self.assertEqual(
len(Book.objects.annotate(num_authors=Count('authors')).filter(num_authors__gt=2)),
1
)
self.assertEqual(
len(Book.objects.annotate(num_authors=Count('authors')).exclude(num_authors__gt=2)),
5
)
self.assertEqual(
len(Book.objects.annotate(num_authors=Count('authors')).filter(num_authors__lt=3).exclude(num_authors__lt=2)),
2
)
self.assertEqual(
len(Book.objects.annotate(num_authors=Count('authors')).exclude(num_authors__lt=2).filter(num_authors__lt=3)),
2
)
def test_aggregate_fexpr(self):
# Aggregates can be used with F() expressions
# ... where the F() is pushed into the HAVING clause
qs = Publisher.objects.annotate(num_books=Count('book')).filter(num_books__lt=F('num_awards') / 2).order_by('name').values('name', 'num_books', 'num_awards')
self.assertQuerysetEqual(
qs, [
{'num_books': 1, 'name': 'Morgan Kaufmann', 'num_awards': 9},
{'num_books': 2, 'name': 'Prentice Hall', 'num_awards': 7}
],
lambda p: p,
)
qs = Publisher.objects.annotate(num_books=Count('book')).exclude(num_books__lt=F('num_awards') / 2).order_by('name').values('name', 'num_books', 'num_awards')
self.assertQuerysetEqual(
qs, [
{'num_books': 2, 'name': 'Apress', 'num_awards': 3},
{'num_books': 0, 'name': "Jonno's House of Books", 'num_awards': 0},
{'num_books': 1, 'name': 'Sams', 'num_awards': 1}
],
lambda p: p,
)
# ... and where the F() references an aggregate
qs = Publisher.objects.annotate(num_books=Count('book')).filter(num_awards__gt=2 * F('num_books')).order_by('name').values('name', 'num_books', 'num_awards')
self.assertQuerysetEqual(
qs, [
{'num_books': 1, 'name': 'Morgan Kaufmann', 'num_awards': 9},
{'num_books': 2, 'name': 'Prentice Hall', 'num_awards': 7}
],
lambda p: p,
)
qs = Publisher.objects.annotate(num_books=Count('book')).exclude(num_books__lt=F('num_awards') / 2).order_by('name').values('name', 'num_books', 'num_awards')
self.assertQuerysetEqual(
qs, [
{'num_books': 2, 'name': 'Apress', 'num_awards': 3},
{'num_books': 0, 'name': "Jonno's House of Books", 'num_awards': 0},
{'num_books': 1, 'name': 'Sams', 'num_awards': 1}
],
lambda p: p,
)
def test_db_col_table(self):
# Tests on fields with non-default table and column names.
qs = Clues.objects.values('EntryID__Entry').annotate(Appearances=Count('EntryID'), Distinct_Clues=Count('Clue', distinct=True))
self.assertQuerysetEqual(qs, [])
qs = Entries.objects.annotate(clue_count=Count('clues__ID'))
self.assertQuerysetEqual(qs, [])
def test_boolean_conversion(self):
        # Aggregates mixed up the ordering of columns for the backend's
        # convert_values method. Refs #21126.
e = Entries.objects.create(Entry='foo')
c = Clues.objects.create(EntryID=e, Clue='bar')
qs = Clues.objects.select_related('EntryID').annotate(Count('ID'))
self.assertQuerysetEqual(
qs, [c], lambda x: x)
self.assertEqual(qs[0].EntryID, e)
self.assertIs(qs[0].EntryID.Exclude, False)
def test_empty(self):
# Regression for #10089: Check handling of empty result sets with
# aggregates
self.assertEqual(
Book.objects.filter(id__in=[]).count(),
0
)
vals = Book.objects.filter(id__in=[]).aggregate(num_authors=Count('authors'), avg_authors=Avg('authors'), max_authors=Max('authors'), max_price=Max('price'), max_rating=Max('rating'))
self.assertEqual(
vals,
{'max_authors': None, 'max_rating': None, 'num_authors': 0, 'avg_authors': None, 'max_price': None}
)
qs = Publisher.objects.filter(name="Jonno's House of Books").annotate(num_authors=Count('book__authors'), avg_authors=Avg('book__authors'), max_authors=Max('book__authors'), max_price=Max('book__price'), max_rating=Max('book__rating')).values()
self.assertQuerysetEqual(
qs, [
{'max_authors': None, 'name': "Jonno's House of Books", 'num_awards': 0, 'max_price': None, 'num_authors': 0, 'max_rating': None, 'id': self.p5.id, 'avg_authors': None}
],
lambda p: p
)
def test_more_more(self):
# Regression for #10113 - Fields mentioned in order_by() must be
# included in the GROUP BY. This only becomes a problem when the
# order_by introduces a new join.
self.assertQuerysetEqual(
Book.objects.annotate(num_authors=Count('authors')).order_by('publisher__name', 'name'), [
"Practical Django Projects",
"The Definitive Guide to Django: Web Development Done Right",
"Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp",
"Artificial Intelligence: A Modern Approach",
"Python Web Development with Django",
"Sams Teach Yourself Django in 24 Hours",
],
lambda b: b.name
)
# Regression for #10127 - Empty select_related() works with annotate
qs = Book.objects.filter(rating__lt=4.5).select_related().annotate(Avg('authors__age'))
self.assertQuerysetEqual(
qs, [
('Artificial Intelligence: A Modern Approach', 51.5, 'Prentice Hall', 'Peter Norvig'),
('Practical Django Projects', 29.0, 'Apress', 'James Bennett'),
('Python Web Development with Django', Approximate(30.333, places=2), 'Prentice Hall', 'Jeffrey Forcier'),
('Sams Teach Yourself Django in 24 Hours', 45.0, 'Sams', 'Brad Dayley')
],
lambda b: (b.name, b.authors__age__avg, b.publisher.name, b.contact.name)
)
        # Regression for #10132 - If the values() clause only mentions extra
        # (select=) columns, those columns are used for grouping
qs = Book.objects.extra(select={'pub': 'publisher_id'}).values('pub').annotate(Count('id')).order_by('pub')
self.assertQuerysetEqual(
qs, [
{'pub': self.b1.id, 'id__count': 2},
{'pub': self.b2.id, 'id__count': 1},
{'pub': self.b3.id, 'id__count': 2},
{'pub': self.b4.id, 'id__count': 1}
],
lambda b: b
)
qs = Book.objects.extra(select={'pub': 'publisher_id', 'foo': 'pages'}).values('pub').annotate(Count('id')).order_by('pub')
self.assertQuerysetEqual(
qs, [
{'pub': self.p1.id, 'id__count': 2},
{'pub': self.p2.id, 'id__count': 1},
{'pub': self.p3.id, 'id__count': 2},
{'pub': self.p4.id, 'id__count': 1}
],
lambda b: b
)
# Regression for #10182 - Queries with aggregate calls are correctly
# realiased when used in a subquery
ids = Book.objects.filter(pages__gt=100).annotate(n_authors=Count('authors')).filter(n_authors__gt=2).order_by('n_authors')
self.assertQuerysetEqual(
Book.objects.filter(id__in=ids), [
"Python Web Development with Django",
],
lambda b: b.name
)
# Regression for #15709 - Ensure each group_by field only exists once
# per query
qstr = str(Book.objects.values('publisher').annotate(max_pages=Max('pages')).order_by().query)
        # Check that there is just one GROUP BY clause (zero commas after
        # 'GROUP BY' means at most one grouping column)
self.assertEqual(qstr[qstr.index('GROUP BY'):].count(', '), 0)
def test_duplicate_alias(self):
# Regression for #11256 - duplicating a default alias raises ValueError.
self.assertRaises(ValueError, Book.objects.all().annotate, Avg('authors__age'), authors__age__avg=Avg('authors__age'))
def test_field_name_conflict(self):
# Regression for #11256 - providing an aggregate name that conflicts with a field name on the model raises ValueError
self.assertRaises(ValueError, Author.objects.annotate, age=Avg('friends__age'))
def test_m2m_name_conflict(self):
# Regression for #11256 - providing an aggregate name that conflicts with an m2m name on the model raises ValueError
self.assertRaises(ValueError, Author.objects.annotate, friends=Count('friends'))
def test_values_queryset_non_conflict(self):
        # Regression for #14707 -- If you're using a values() queryset, some
        # potential conflicts are avoided.
        # age is a field on Author, so it shouldn't normally be allowed as an
        # aggregate name. But age isn't included in values(), so here it is.
results = Author.objects.values('name').annotate(age=Count('book_contact_set')).order_by('name')
self.assertEqual(len(results), 9)
self.assertEqual(results[0]['name'], 'Adrian Holovaty')
self.assertEqual(results[0]['age'], 1)
# Same problem, but aggregating over m2m fields
results = Author.objects.values('name').annotate(age=Avg('friends__age')).order_by('name')
self.assertEqual(len(results), 9)
self.assertEqual(results[0]['name'], 'Adrian Holovaty')
self.assertEqual(results[0]['age'], 32.0)
# Same problem, but colliding with an m2m field
results = Author.objects.values('name').annotate(friends=Count('friends')).order_by('name')
self.assertEqual(len(results), 9)
self.assertEqual(results[0]['name'], 'Adrian Holovaty')
self.assertEqual(results[0]['friends'], 2)
def test_reverse_relation_name_conflict(self):
# Regression for #11256 - providing an aggregate name that conflicts with a reverse-related name on the model raises ValueError
self.assertRaises(ValueError, Author.objects.annotate, book_contact_set=Avg('friends__age'))
def test_pickle(self):
# Regression for #10197 -- Queries with aggregates can be pickled.
# First check that pickling is possible at all. No crash = success
qs = Book.objects.annotate(num_authors=Count('authors'))
pickle.dumps(qs)
# Then check that the round trip works.
query = qs.query.get_compiler(qs.db).as_sql()[0]
qs2 = pickle.loads(pickle.dumps(qs))
self.assertEqual(
qs2.query.get_compiler(qs2.db).as_sql()[0],
query,
)
def test_more_more_more(self):
# Regression for #10199 - Aggregate calls clone the original query so
# the original query can still be used
books = Book.objects.all()
books.aggregate(Avg("authors__age"))
self.assertQuerysetEqual(
books.all(), [
'Artificial Intelligence: A Modern Approach',
'Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp',
'Practical Django Projects',
'Python Web Development with Django',
'Sams Teach Yourself Django in 24 Hours',
'The Definitive Guide to Django: Web Development Done Right'
],
lambda b: b.name
)
# Regression for #10248 - Annotations work with DateQuerySets
qs = Book.objects.annotate(num_authors=Count('authors')).filter(num_authors=2).dates('pubdate', 'day')
self.assertQuerysetEqual(
qs, [
datetime.date(1995, 1, 15),
datetime.date(2007, 12, 6),
],
lambda b: b
)
# Regression for #10290 - extra selects with parameters can be used for
# grouping.
qs = Book.objects.annotate(mean_auth_age=Avg('authors__age')).extra(select={'sheets': '(pages + %s) / %s'}, select_params=[1, 2]).order_by('sheets').values('sheets')
self.assertQuerysetEqual(
qs, [
150,
175,
224,
264,
473,
566
],
lambda b: int(b["sheets"])
)
        # Regression for #10425 - annotations don't get in the way of a
        # count() clause
self.assertEqual(
Book.objects.values('publisher').annotate(Count('publisher')).count(),
4
)
self.assertEqual(
Book.objects.annotate(Count('publisher')).values('publisher').count(),
6
)
# Note: intentionally no order_by(), that case needs tests, too.
publishers = Publisher.objects.filter(id__in=[1, 2])
self.assertEqual(
sorted(p.name for p in publishers),
[
"Apress",
"Sams"
]
)
publishers = publishers.annotate(n_books=Count("book"))
sorted_publishers = sorted(publishers, key=lambda x: x.name)
self.assertEqual(
sorted_publishers[0].n_books,
2
)
self.assertEqual(
sorted_publishers[1].n_books,
1
)
self.assertEqual(
sorted(p.name for p in publishers),
[
"Apress",
"Sams"
]
)
books = Book.objects.filter(publisher__in=publishers)
self.assertQuerysetEqual(
books, [
"Practical Django Projects",
"Sams Teach Yourself Django in 24 Hours",
"The Definitive Guide to Django: Web Development Done Right",
],
lambda b: b.name
)
self.assertEqual(
sorted(p.name for p in publishers),
[
"Apress",
"Sams"
]
)
        # Regression for #10666 - inherited fields work with annotations and
        # aggregations
self.assertEqual(
HardbackBook.objects.aggregate(n_pages=Sum('book_ptr__pages')),
{'n_pages': 2078}
)
self.assertEqual(
HardbackBook.objects.aggregate(n_pages=Sum('pages')),
{'n_pages': 2078},
)
qs = HardbackBook.objects.annotate(n_authors=Count('book_ptr__authors')).values('name', 'n_authors')
self.assertQuerysetEqual(
qs, [
{'n_authors': 2, 'name': 'Artificial Intelligence: A Modern Approach'},
{'n_authors': 1, 'name': 'Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp'}
],
lambda h: h
)
qs = HardbackBook.objects.annotate(n_authors=Count('authors')).values('name', 'n_authors')
self.assertQuerysetEqual(
qs, [
{'n_authors': 2, 'name': 'Artificial Intelligence: A Modern Approach'},
{'n_authors': 1, 'name': 'Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp'}
],
lambda h: h,
)
        # Regression for #10766 - Shouldn't be able to reference aggregate
        # fields in an aggregate() call.
self.assertRaises(
FieldError,
lambda: Book.objects.annotate(mean_age=Avg('authors__age')).annotate(Avg('mean_age'))
)
def test_empty_filter_count(self):
self.assertEqual(
Author.objects.filter(id__in=[]).annotate(Count("friends")).count(),
0
)
def test_empty_filter_aggregate(self):
self.assertEqual(
Author.objects.filter(id__in=[]).annotate(Count("friends")).aggregate(Count("pk")),
{"pk__count": None}
)
def test_none_call_before_aggregate(self):
# Regression for #11789
self.assertEqual(
Author.objects.none().aggregate(Avg('age')),
{'age__avg': None}
)
def test_annotate_and_join(self):
self.assertEqual(
Author.objects.annotate(c=Count("friends__name")).exclude(friends__name="Joe").count(),
Author.objects.count()
)
def test_f_expression_annotation(self):
# Books with less than 200 pages per author.
qs = Book.objects.values("name").annotate(
n_authors=Count("authors")
).filter(
pages__lt=F("n_authors") * 200
).values_list("pk")
self.assertQuerysetEqual(
Book.objects.filter(pk__in=qs), [
"Python Web Development with Django"
],
attrgetter("name")
)
def test_values_annotate_values(self):
qs = Book.objects.values("name").annotate(
n_authors=Count("authors")
).values_list("pk", flat=True)
self.assertEqual(list(qs), list(Book.objects.values_list("pk", flat=True)))
def test_having_group_by(self):
        # Test that when a field occurs on the LHS of a HAVING clause, it
        # appears correctly in the GROUP BY clause
qs = Book.objects.values_list("name").annotate(
n_authors=Count("authors")
).filter(
pages__gt=F("n_authors")
).values_list("name", flat=True)
# Results should be the same, all Books have more pages than authors
self.assertEqual(
list(qs), list(Book.objects.values_list("name", flat=True))
)
def test_values_list_annotation_args_ordering(self):
"""
Annotate *args ordering should be preserved in values_list results.
**kwargs comes after *args.
Regression test for #23659.
"""
books = Book.objects.values_list("publisher__name").annotate(
Count("id"), Avg("price"), Avg("authors__age"), avg_pgs=Avg("pages")
).order_by("-publisher__name")
self.assertEqual(books[0], ('Sams', 1, 23.09, 45.0, 528.0))
def test_annotation_disjunction(self):
qs = Book.objects.annotate(n_authors=Count("authors")).filter(
Q(n_authors=2) | Q(name="Python Web Development with Django")
)
self.assertQuerysetEqual(
qs, [
"Artificial Intelligence: A Modern Approach",
"Python Web Development with Django",
"The Definitive Guide to Django: Web Development Done Right",
],
attrgetter("name")
)
qs = Book.objects.annotate(n_authors=Count("authors")).filter(
Q(name="The Definitive Guide to Django: Web Development Done Right") | (Q(name="Artificial Intelligence: A Modern Approach") & Q(n_authors=3))
)
self.assertQuerysetEqual(
qs, [
"The Definitive Guide to Django: Web Development Done Right",
],
attrgetter("name")
)
qs = Publisher.objects.annotate(
rating_sum=Sum("book__rating"),
book_count=Count("book")
).filter(
Q(rating_sum__gt=5.5) | Q(rating_sum__isnull=True)
).order_by('pk')
self.assertQuerysetEqual(
qs, [
"Apress",
"Prentice Hall",
"Jonno's House of Books",
],
attrgetter("name")
)
qs = Publisher.objects.annotate(
rating_sum=Sum("book__rating"),
book_count=Count("book")
).filter(
Q(rating_sum__gt=F("book_count")) | Q(rating_sum=None)
).order_by("num_awards")
self.assertQuerysetEqual(
qs, [
"Jonno's House of Books",
"Sams",
"Apress",
"Prentice Hall",
"Morgan Kaufmann"
],
attrgetter("name")
)
def test_quoting_aggregate_order_by(self):
qs = Book.objects.filter(
name="Python Web Development with Django"
).annotate(
authorCount=Count("authors")
).order_by("authorCount")
self.assertQuerysetEqual(
qs, [
("Python Web Development with Django", 3),
],
lambda b: (b.name, b.authorCount)
)
@skipUnlessDBFeature('supports_stddev')
def test_stddev(self):
self.assertEqual(
Book.objects.aggregate(StdDev('pages')),
{'pages__stddev': Approximate(311.46, 1)}
)
self.assertEqual(
Book.objects.aggregate(StdDev('rating')),
{'rating__stddev': Approximate(0.60, 1)}
)
self.assertEqual(
Book.objects.aggregate(StdDev('price')),
{'price__stddev': Approximate(24.16, 2)}
)
self.assertEqual(
Book.objects.aggregate(StdDev('pages', sample=True)),
{'pages__stddev': Approximate(341.19, 2)}
)
self.assertEqual(
Book.objects.aggregate(StdDev('rating', sample=True)),
{'rating__stddev': Approximate(0.66, 2)}
)
self.assertEqual(
Book.objects.aggregate(StdDev('price', sample=True)),
{'price__stddev': Approximate(26.46, 1)}
)
self.assertEqual(
Book.objects.aggregate(Variance('pages')),
{'pages__variance': Approximate(97010.80, 1)}
)
self.assertEqual(
Book.objects.aggregate(Variance('rating')),
{'rating__variance': Approximate(0.36, 1)}
)
self.assertEqual(
Book.objects.aggregate(Variance('price')),
{'price__variance': Approximate(583.77, 1)}
)
self.assertEqual(
Book.objects.aggregate(Variance('pages', sample=True)),
{'pages__variance': Approximate(116412.96, 1)}
)
self.assertEqual(
Book.objects.aggregate(Variance('rating', sample=True)),
{'rating__variance': Approximate(0.44, 2)}
)
self.assertEqual(
Book.objects.aggregate(Variance('price', sample=True)),
{'price__variance': Approximate(700.53, 2)}
)
def test_filtering_by_annotation_name(self):
# Regression test for #14476
        # The explicitly provided annotation name in this case poses no
        # problem
qs = Author.objects.annotate(book_cnt=Count('book')).filter(book_cnt=2).order_by('name')
self.assertQuerysetEqual(
qs,
['Peter Norvig'],
lambda b: b.name
)
# Neither in this case
qs = Author.objects.annotate(book_count=Count('book')).filter(book_count=2).order_by('name')
self.assertQuerysetEqual(
qs,
['Peter Norvig'],
lambda b: b.name
)
# This case used to fail because the ORM couldn't resolve the
# automatically generated annotation name `book__count`
qs = Author.objects.annotate(Count('book')).filter(book__count=2).order_by('name')
self.assertQuerysetEqual(
qs,
['Peter Norvig'],
lambda b: b.name
)
def test_annotate_joins(self):
"""
        Test that the base table's join isn't promoted to LOUTER. This could
        cause the query generation to fail if there is an exclude() on an FK
        field in the query, too. Refs #19087.
"""
qs = Book.objects.annotate(n=Count('pk'))
self.assertIs(qs.query.alias_map['aggregation_regress_book'].join_type, None)
# Check that the query executes without problems.
self.assertEqual(len(qs.exclude(publisher=-1)), 6)
@skipUnlessAnyDBFeature('allows_group_by_pk', 'allows_group_by_selected_pks')
def test_aggregate_duplicate_columns(self):
# Regression test for #17144
results = Author.objects.annotate(num_contacts=Count('book_contact_set'))
# There should only be one GROUP BY clause, for the `id` column.
# `name` and `age` should not be grouped on.
_, _, group_by = results.query.get_compiler(using='default').pre_sql_setup()
self.assertEqual(len(group_by), 1)
self.assertIn('id', group_by[0][0])
self.assertNotIn('name', group_by[0][0])
self.assertNotIn('age', group_by[0][0])
# Ensure that we get correct results.
self.assertEqual(
[(a.name, a.num_contacts) for a in results.order_by('name')],
[
('Adrian Holovaty', 1),
('Brad Dayley', 1),
('Jacob Kaplan-Moss', 0),
('James Bennett', 1),
('Jeffrey Forcier', 1),
('Paul Bissex', 0),
('Peter Norvig', 2),
('Stuart Russell', 0),
('Wesley J. Chun', 0),
]
)
@skipUnlessAnyDBFeature('allows_group_by_pk', 'allows_group_by_selected_pks')
def test_aggregate_duplicate_columns_only(self):
# Works with only() too.
results = Author.objects.only('id', 'name').annotate(num_contacts=Count('book_contact_set'))
_, _, grouping = results.query.get_compiler(using='default').pre_sql_setup()
self.assertEqual(len(grouping), 1)
self.assertIn('id', grouping[0][0])
self.assertNotIn('name', grouping[0][0])
self.assertNotIn('age', grouping[0][0])
# Ensure that we get correct results.
self.assertEqual(
[(a.name, a.num_contacts) for a in results.order_by('name')],
[
('Adrian Holovaty', 1),
('Brad Dayley', 1),
('Jacob Kaplan-Moss', 0),
('James Bennett', 1),
('Jeffrey Forcier', 1),
('Paul Bissex', 0),
('Peter Norvig', 2),
('Stuart Russell', 0),
('Wesley J. Chun', 0),
]
)
@skipUnlessAnyDBFeature('allows_group_by_pk', 'allows_group_by_selected_pks')
def test_aggregate_duplicate_columns_select_related(self):
# And select_related()
results = Book.objects.select_related('contact').annotate(
num_authors=Count('authors'))
_, _, grouping = results.query.get_compiler(using='default').pre_sql_setup()
# In the case of `group_by_selected_pks` we also group by contact.id because of the select_related.
self.assertEqual(len(grouping), 1 if connection.features.allows_group_by_pk else 2)
self.assertIn('id', grouping[0][0])
self.assertNotIn('name', grouping[0][0])
self.assertNotIn('contact', grouping[0][0])
# Ensure that we get correct results.
self.assertEqual(
[(b.name, b.num_authors) for b in results.order_by('name')],
[
('Artificial Intelligence: A Modern Approach', 2),
('Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp', 1),
('Practical Django Projects', 1),
('Python Web Development with Django', 3),
('Sams Teach Yourself Django in 24 Hours', 1),
('The Definitive Guide to Django: Web Development Done Right', 2)
]
)
def test_reverse_join_trimming(self):
qs = Author.objects.annotate(Count('book_contact_set__contact'))
self.assertIn(' JOIN ', str(qs.query))
def test_aggregation_with_generic_reverse_relation(self):
"""
        Regression test for #10870: Aggregates with joins ignore extra
        filters provided by setup_joins.
        Tests aggregations with generic reverse relations.
"""
django_book = Book.objects.get(name='Practical Django Projects')
ItemTag.objects.create(object_id=django_book.id, tag='intermediate',
content_type=ContentType.objects.get_for_model(django_book))
ItemTag.objects.create(object_id=django_book.id, tag='django',
content_type=ContentType.objects.get_for_model(django_book))
        # Assign a tag to a model with the same PK as the book above. If the
        # JOIN used in the aggregation doesn't have the content type as part
        # of the condition, the annotation will also count the 'hi mom' tag
        # for the book.
wmpk = WithManualPK.objects.create(id=django_book.pk)
ItemTag.objects.create(object_id=wmpk.id, tag='hi mom',
content_type=ContentType.objects.get_for_model(wmpk))
ai_book = Book.objects.get(name__startswith='Paradigms of Artificial Intelligence')
ItemTag.objects.create(object_id=ai_book.id, tag='intermediate',
content_type=ContentType.objects.get_for_model(ai_book))
self.assertEqual(Book.objects.aggregate(Count('tags')), {'tags__count': 3})
results = Book.objects.annotate(Count('tags')).order_by('-tags__count', 'name')
self.assertEqual(
[(b.name, b.tags__count) for b in results],
[
('Practical Django Projects', 2),
('Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp', 1),
('Artificial Intelligence: A Modern Approach', 0),
('Python Web Development with Django', 0),
('Sams Teach Yourself Django in 24 Hours', 0),
('The Definitive Guide to Django: Web Development Done Right', 0)
]
)
def test_negated_aggregation(self):
expected_results = Author.objects.exclude(
pk__in=Author.objects.annotate(book_cnt=Count('book')).filter(book_cnt=2)
).order_by('name')
expected_results = [a.name for a in expected_results]
qs = Author.objects.annotate(book_cnt=Count('book')).exclude(
Q(book_cnt=2), Q(book_cnt=2)).order_by('name')
self.assertQuerysetEqual(
qs,
expected_results,
lambda b: b.name
)
expected_results = Author.objects.exclude(
pk__in=Author.objects.annotate(book_cnt=Count('book')).filter(book_cnt=2)
).order_by('name')
expected_results = [a.name for a in expected_results]
qs = Author.objects.annotate(book_cnt=Count('book')).exclude(Q(book_cnt=2) | Q(book_cnt=2)).order_by('name')
self.assertQuerysetEqual(
qs,
expected_results,
lambda b: b.name
)
def test_name_filters(self):
qs = Author.objects.annotate(Count('book')).filter(
Q(book__count__exact=2) | Q(name='Adrian Holovaty')
).order_by('name')
self.assertQuerysetEqual(
qs,
['Adrian Holovaty', 'Peter Norvig'],
lambda b: b.name
)
def test_name_expressions(self):
# Test that aggregates are spotted correctly from F objects.
        # Note that Adrian's age is 34 in the fixtures, and he has one book,
        # so both conditions each match one author.
qs = Author.objects.annotate(Count('book')).filter(
Q(name='Peter Norvig') | Q(age=F('book__count') + 33)
).order_by('name')
self.assertQuerysetEqual(
qs,
['Adrian Holovaty', 'Peter Norvig'],
lambda b: b.name
)
def test_ticket_11293(self):
q1 = Q(price__gt=50)
q2 = Q(authors__count__gt=1)
query = Book.objects.annotate(Count('authors')).filter(
q1 | q2).order_by('pk')
self.assertQuerysetEqual(
query, [1, 4, 5, 6],
lambda b: b.pk)
def test_ticket_11293_q_immutable(self):
"""
Check that splitting a q object to parts for where/having doesn't alter
the original q-object.
"""
q1 = Q(isbn='')
q2 = Q(authors__count__gt=1)
query = Book.objects.annotate(Count('authors'))
query.filter(q1 | q2)
self.assertEqual(len(q2.children), 1)
def test_fobj_group_by(self):
"""
Check that an F() object referring to related column works correctly
in group by.
"""
qs = Book.objects.annotate(
acount=Count('authors')
).filter(
acount=F('publisher__num_awards')
)
self.assertQuerysetEqual(
qs, ['Sams Teach Yourself Django in 24 Hours'],
lambda b: b.name)
def test_annotate_reserved_word(self):
"""
Regression #18333 - Ensure annotated column name is properly quoted.
"""
vals = Book.objects.annotate(select=Count('authors__id')).aggregate(Sum('select'), Avg('select'))
self.assertEqual(vals, {
'select__sum': 10,
'select__avg': Approximate(1.666, places=2),
})
def test_annotate_on_relation(self):
book = Book.objects.annotate(avg_price=Avg('price'), publisher_name=F('publisher__name')).get(pk=self.b1.pk)
self.assertEqual(book.avg_price, 30.00)
self.assertEqual(book.publisher_name, "Apress")
def test_aggregate_on_relation(self):
        # A query with an existing annotation and an aggregation on a
        # relation should succeed.
qs = Book.objects.annotate(avg_price=Avg('price')).aggregate(
publisher_awards=Sum('publisher__num_awards')
)
self.assertEqual(qs['publisher_awards'], 30)
def test_annotate_distinct_aggregate(self):
        # There are three books with a rating of 4.0 and two of the books
        # have the same price. Hence, distinct() on (rating, price) removes
        # one rating of 4.0 from the results.
vals1 = Book.objects.values('rating', 'price').distinct().aggregate(result=Sum('rating'))
vals2 = Book.objects.aggregate(result=Sum('rating') - Value(4.0))
self.assertEqual(vals1, vals2)
class JoinPromotionTests(TestCase):
def test_ticket_21150(self):
b = Bravo.objects.create()
c = Charlie.objects.create(bravo=b)
qs = Charlie.objects.select_related('alfa').annotate(Count('bravo__charlie'))
self.assertQuerysetEqual(
qs, [c], lambda x: x)
self.assertIs(qs[0].alfa, None)
a = Alfa.objects.create()
c.alfa = a
c.save()
# Force re-evaluation
qs = qs.all()
self.assertQuerysetEqual(
qs, [c], lambda x: x)
self.assertEqual(qs[0].alfa, a)
def test_existing_join_not_promoted(self):
# No promotion for existing joins
qs = Charlie.objects.filter(alfa__name__isnull=False).annotate(Count('alfa__name'))
self.assertIn(' INNER JOIN ', str(qs.query))
        # Also, the existing join is demoted back to INNER when filtering on
        # an already promoted join.
qs = Charlie.objects.annotate(Count('alfa__name')).filter(alfa__name__isnull=False)
self.assertIn(' INNER JOIN ', str(qs.query))
        # But, as the join is nullable, its first use by annotate() will be
        # LOUTER
qs = Charlie.objects.annotate(Count('alfa__name'))
self.assertIn(' LEFT OUTER JOIN ', str(qs.query))
def test_non_nullable_fk_not_promoted(self):
qs = Book.objects.annotate(Count('contact__name'))
self.assertIn(' INNER JOIN ', str(qs.query))
class SelfReferentialFKTests(TestCase):
def test_ticket_24748(self):
t1 = SelfRefFK.objects.create(name='t1')
SelfRefFK.objects.create(name='t2', parent=t1)
SelfRefFK.objects.create(name='t3', parent=t1)
self.assertQuerysetEqual(
SelfRefFK.objects.annotate(num_children=Count('children')).order_by('name'),
[('t1', 2), ('t2', 0), ('t3', 0)],
lambda x: (x.name, x.num_children)
)
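# --- Illustrative sketch (not part of the original test suite) ---
# A minimal distillation of the values()/annotate() ordering rule the tests
# above exercise; Book and Count are the same names used in those tests.
#
#   # annotate() after values(): results are grouped by the values() fields
#   # and the annotation is added to every result dict.
#   Book.objects.values('publisher').annotate(n=Count('id'))
#
#   # annotate() before values(): the annotation is computed per object and
#   # only shows up if values() names it explicitly.
#   Book.objects.annotate(n=Count('authors')).values('name', 'n')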
| bsd-3-clause |
M3nin0/supreme-broccoli | Web/Flask/site_/lib/python3.5/site-packages/sqlalchemy/ext/declarative/clsregistry.py | 55 | 10817 | # ext/declarative/clsregistry.py
# Copyright (C) 2005-2016 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""Routines to handle the string class registry used by declarative.
This system allows specification of classes and expressions used in
:func:`.relationship` using strings.
"""
from ...orm.properties import ColumnProperty, RelationshipProperty, \
SynonymProperty
from ...schema import _get_table_key
from ...orm import class_mapper, interfaces
from ... import util
from ... import inspection
from ... import exc
import weakref
# strong references to registries which we place in
# the _decl_class_registry, which is usually weak referencing.
# the internal registries here link to classes with weakrefs and remove
# themselves when all references to contained classes are removed.
_registries = set()
def add_class(classname, cls):
"""Add a class to the _decl_class_registry associated with the
given declarative class.
"""
if classname in cls._decl_class_registry:
# class already exists.
existing = cls._decl_class_registry[classname]
if not isinstance(existing, _MultipleClassMarker):
existing = \
cls._decl_class_registry[classname] = \
_MultipleClassMarker([cls, existing])
else:
cls._decl_class_registry[classname] = cls
try:
root_module = cls._decl_class_registry['_sa_module_registry']
except KeyError:
cls._decl_class_registry['_sa_module_registry'] = \
root_module = _ModuleMarker('_sa_module_registry', None)
tokens = cls.__module__.split(".")
# build up a tree like this:
# modulename: myapp.snacks.nuts
#
# myapp->snack->nuts->(classes)
# snack->nuts->(classes)
# nuts->(classes)
#
# this allows partial token paths to be used.
while tokens:
token = tokens.pop(0)
module = root_module.get_module(token)
for token in tokens:
module = module.get_module(token)
module.add_class(classname, cls)
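# Illustrative sketch (not part of the original module): the token tree built
# above is what makes partial module paths usable in relationship() strings.
# The module and class names below are hypothetical.
#
#   # with a class Almond defined in module myapp.snacks.nuts, each of
#   # these strings resolves to the same class:
#   #   relationship("myapp.snacks.nuts.Almond")
#   #   relationship("snacks.nuts.Almond")
#   #   relationship("nuts.Almond")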
class _MultipleClassMarker(object):
"""refers to multiple classes of the same name
within _decl_class_registry.
"""
__slots__ = 'on_remove', 'contents', '__weakref__'
def __init__(self, classes, on_remove=None):
self.on_remove = on_remove
self.contents = set([
weakref.ref(item, self._remove_item) for item in classes])
_registries.add(self)
def __iter__(self):
return (ref() for ref in self.contents)
def attempt_get(self, path, key):
if len(self.contents) > 1:
raise exc.InvalidRequestError(
"Multiple classes found for path \"%s\" "
"in the registry of this declarative "
"base. Please use a fully module-qualified path." %
(".".join(path + [key]))
)
else:
ref = list(self.contents)[0]
cls = ref()
if cls is None:
raise NameError(key)
return cls
def _remove_item(self, ref):
self.contents.remove(ref)
if not self.contents:
_registries.discard(self)
if self.on_remove:
self.on_remove()
def add_item(self, item):
        # protect against a class registration race condition with
        # asynchronous garbage collection calling _remove_item,
        # [ticket:3208]
modules = set([
cls.__module__ for cls in
[ref() for ref in self.contents] if cls is not None])
if item.__module__ in modules:
util.warn(
"This declarative base already contains a class with the "
"same class name and module name as %s.%s, and will "
"be replaced in the string-lookup table." % (
item.__module__,
item.__name__
)
)
self.contents.add(weakref.ref(item, self._remove_item))
class _ModuleMarker(object):
""""refers to a module name within
_decl_class_registry.
"""
__slots__ = 'parent', 'name', 'contents', 'mod_ns', 'path', '__weakref__'
def __init__(self, name, parent):
self.parent = parent
self.name = name
self.contents = {}
self.mod_ns = _ModNS(self)
if self.parent:
self.path = self.parent.path + [self.name]
else:
self.path = []
_registries.add(self)
def __contains__(self, name):
return name in self.contents
def __getitem__(self, name):
return self.contents[name]
def _remove_item(self, name):
self.contents.pop(name, None)
if not self.contents and self.parent is not None:
self.parent._remove_item(self.name)
_registries.discard(self)
def resolve_attr(self, key):
return getattr(self.mod_ns, key)
def get_module(self, name):
if name not in self.contents:
marker = _ModuleMarker(name, self)
self.contents[name] = marker
else:
marker = self.contents[name]
return marker
def add_class(self, name, cls):
if name in self.contents:
existing = self.contents[name]
existing.add_item(cls)
else:
existing = self.contents[name] = \
_MultipleClassMarker([cls],
on_remove=lambda: self._remove_item(name))
class _ModNS(object):
__slots__ = '__parent',
def __init__(self, parent):
self.__parent = parent
def __getattr__(self, key):
try:
value = self.__parent.contents[key]
except KeyError:
pass
else:
if value is not None:
if isinstance(value, _ModuleMarker):
return value.mod_ns
else:
assert isinstance(value, _MultipleClassMarker)
return value.attempt_get(self.__parent.path, key)
raise AttributeError("Module %r has no mapped classes "
"registered under the name %r" % (
self.__parent.name, key))
class _GetColumns(object):
__slots__ = 'cls',
def __init__(self, cls):
self.cls = cls
def __getattr__(self, key):
mp = class_mapper(self.cls, configure=False)
if mp:
if key not in mp.all_orm_descriptors:
raise exc.InvalidRequestError(
"Class %r does not have a mapped column named %r"
% (self.cls, key))
desc = mp.all_orm_descriptors[key]
if desc.extension_type is interfaces.NOT_EXTENSION:
prop = desc.property
if isinstance(prop, SynonymProperty):
key = prop.name
elif not isinstance(prop, ColumnProperty):
raise exc.InvalidRequestError(
"Property %r is not an instance of"
" ColumnProperty (i.e. does not correspond"
" directly to a Column)." % key)
return getattr(self.cls, key)
inspection._inspects(_GetColumns)(
lambda target: inspection.inspect(target.cls))
class _GetTable(object):
__slots__ = 'key', 'metadata'
def __init__(self, key, metadata):
self.key = key
self.metadata = metadata
def __getattr__(self, key):
return self.metadata.tables[
_get_table_key(key, self.key)
]
def _determine_container(key, value):
if isinstance(value, _MultipleClassMarker):
value = value.attempt_get([], key)
return _GetColumns(value)
class _class_resolver(object):
def __init__(self, cls, prop, fallback, arg):
self.cls = cls
self.prop = prop
self.arg = self._declarative_arg = arg
self.fallback = fallback
self._dict = util.PopulateDict(self._access_cls)
self._resolvers = ()
def _access_cls(self, key):
cls = self.cls
if key in cls._decl_class_registry:
return _determine_container(key, cls._decl_class_registry[key])
elif key in cls.metadata.tables:
return cls.metadata.tables[key]
elif key in cls.metadata._schemas:
return _GetTable(key, cls.metadata)
elif '_sa_module_registry' in cls._decl_class_registry and \
key in cls._decl_class_registry['_sa_module_registry']:
registry = cls._decl_class_registry['_sa_module_registry']
return registry.resolve_attr(key)
elif self._resolvers:
for resolv in self._resolvers:
value = resolv(key)
if value is not None:
return value
return self.fallback[key]
def __call__(self):
try:
x = eval(self.arg, globals(), self._dict)
if isinstance(x, _GetColumns):
return x.cls
else:
return x
except NameError as n:
raise exc.InvalidRequestError(
"When initializing mapper %s, expression %r failed to "
"locate a name (%r). If this is a class name, consider "
"adding this relationship() to the %r class after "
"both dependent classes have been defined." %
(self.prop.parent, self.arg, n.args[0], self.cls)
)
def _resolver(cls, prop):
import sqlalchemy
from sqlalchemy.orm import foreign, remote
fallback = sqlalchemy.__dict__.copy()
fallback.update({'foreign': foreign, 'remote': remote})
def resolve_arg(arg):
return _class_resolver(cls, prop, fallback, arg)
return resolve_arg
def _deferred_relationship(cls, prop):
if isinstance(prop, RelationshipProperty):
resolve_arg = _resolver(cls, prop)
for attr in ('argument', 'order_by', 'primaryjoin', 'secondaryjoin',
'secondary', '_user_defined_foreign_keys', 'remote_side'):
v = getattr(prop, attr)
if isinstance(v, util.string_types):
setattr(prop, attr, resolve_arg(v))
if prop.backref and isinstance(prop.backref, tuple):
key, kwargs = prop.backref
for attr in ('primaryjoin', 'secondaryjoin', 'secondary',
'foreign_keys', 'remote_side', 'order_by'):
if attr in kwargs and isinstance(kwargs[attr],
util.string_types):
kwargs[attr] = resolve_arg(kwargs[attr])
return prop
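# Hedged usage sketch (illustrative, not part of this module): how the
# resolver machinery above is typically exercised by declarative. The
# Parent/Child model names are hypothetical.
#
#   class Parent(Base):
#       __tablename__ = 'parent'
#       id = Column(Integer, primary_key=True)
#       # the string "Child" is stored as-is; _deferred_relationship() wraps
#       # it in a _class_resolver, and it is only eval'd against the class
#       # registry at mapper configuration time:
#       children = relationship("Child", order_by="Child.id")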
| apache-2.0 |
poo12138/gem5-stable | src/sim/probe/Probe.py | 62 | 2370 | # -*- mode:python -*-
# Copyright (c) 2013 ARM Limited
# All rights reserved.
#
# The license below extends only to copyright in the software and shall
# not be construed as granting a license to any other intellectual
# property including but not limited to intellectual property relating
# to a hardware implementation of the functionality of the software
# licensed hereunder. You may use the software subject to the license
# terms below provided that you ensure that this notice is replicated
# unmodified and in its entirety in all distributions of the software,
# modified or unmodified, in source code or in binary form.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Authors: Matt Horsnell
from m5.SimObject import SimObject
from m5.params import *
from m5.proxy import *
class ProbeListenerObject(SimObject):
type = 'ProbeListenerObject'
cxx_header = 'sim/probe/probe.hh'
manager = Param.SimObject(Parent.any, "ProbeManager")
| bsd-3-clause |
mattesno1/CouchPotatoServer | couchpotato/core/settings.py | 42 | 8457 | from __future__ import with_statement
import ConfigParser
from hashlib import md5
from CodernityDB.hash_index import HashIndex
from couchpotato.api import addApiView
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.helpers.encoding import toUnicode
from couchpotato.core.helpers.variable import mergeDicts, tryInt, tryFloat
class Settings(object):
options = {}
types = {}
def __init__(self):
addApiView('settings', self.view, docs = {
'desc': 'Return the options and its values of settings.conf. Including the default values and group ordering used on the settings page.',
'return': {'type': 'object', 'example': """{
// objects like in __init__.py of plugin
"options": {
"moovee" : {
"groups" : [{
"description" : "SD movies only",
"name" : "#alt.binaries.moovee",
"options" : [{
"default" : false,
"name" : "enabled",
"type" : "enabler"
}],
"tab" : "providers"
}],
"name" : "moovee"
}
},
// object structured like settings.conf
"values": {
"moovee": {
"enabled": false
}
}
}"""}
})
addApiView('settings.save', self.saveView, docs = {
'desc': 'Save setting to config file (settings.conf)',
'params': {
'section': {'desc': 'The section name in settings.conf'},
'name': {'desc': 'The option name'},
'value': {'desc': 'The value you want to save'},
}
})
addEvent('database.setup', self.databaseSetup)
self.file = None
self.p = None
self.log = None
def setFile(self, config_file):
self.file = config_file
self.p = ConfigParser.RawConfigParser()
self.p.read(config_file)
from couchpotato.core.logger import CPLog
self.log = CPLog(__name__)
self.connectEvents()
def databaseSetup(self):
fireEvent('database.setup_index', 'property', PropertyIndex)
def parser(self):
return self.p
def sections(self):
return self.p.sections()
def connectEvents(self):
addEvent('settings.options', self.addOptions)
addEvent('settings.register', self.registerDefaults)
addEvent('settings.save', self.save)
def registerDefaults(self, section_name, options = None, save = True):
if not options: options = {}
self.addSection(section_name)
for option_name, option in options.items():
self.setDefault(section_name, option_name, option.get('default', ''))
# Migrate old settings from old location to the new location
if option.get('migrate_from'):
if self.p.has_option(option.get('migrate_from'), option_name):
previous_value = self.p.get(option.get('migrate_from'), option_name)
self.p.set(section_name, option_name, previous_value)
self.p.remove_option(option.get('migrate_from'), option_name)
if option.get('type'):
self.setType(section_name, option_name, option.get('type'))
if save:
self.save()
def set(self, section, option, value):
return self.p.set(section, option, value)
def get(self, option = '', section = 'core', default = None, type = None):
try:
try: type = self.types[section][option]
except: type = 'unicode' if not type else type
if hasattr(self, 'get%s' % type.capitalize()):
return getattr(self, 'get%s' % type.capitalize())(section, option)
else:
return self.getUnicode(section, option)
except:
return default
def delete(self, option = '', section = 'core'):
self.p.remove_option(section, option)
self.save()
def getEnabler(self, section, option):
return self.getBool(section, option)
def getBool(self, section, option):
try:
return self.p.getboolean(section, option)
except:
return self.p.get(section, option) == 1
def getInt(self, section, option):
try:
return self.p.getint(section, option)
except:
return tryInt(self.p.get(section, option))
def getFloat(self, section, option):
try:
return self.p.getfloat(section, option)
except:
return tryFloat(self.p.get(section, option))
def getUnicode(self, section, option):
value = self.p.get(section, option).decode('unicode_escape')
return toUnicode(value).strip()
def getValues(self):
values = {}
for section in self.sections():
values[section] = {}
for option in self.p.items(section):
(option_name, option_value) = option
is_password = False
try: is_password = self.types[section][option_name] == 'password'
except: pass
values[section][option_name] = self.get(option_name, section)
if is_password and values[section][option_name]:
values[section][option_name] = len(values[section][option_name]) * '*'
return values
def save(self):
with open(self.file, 'wb') as configfile:
self.p.write(configfile)
self.log.debug('Saved settings')
def addSection(self, section):
if not self.p.has_section(section):
self.p.add_section(section)
def setDefault(self, section, option, value):
if not self.p.has_option(section, option):
self.p.set(section, option, value)
def setType(self, section, option, type):
if not self.types.get(section):
self.types[section] = {}
self.types[section][option] = type
def addOptions(self, section_name, options):
if not self.options.get(section_name):
self.options[section_name] = options
else:
self.options[section_name] = mergeDicts(self.options[section_name], options)
def getOptions(self):
return self.options
def view(self, **kwargs):
return {
'options': self.getOptions(),
'values': self.getValues()
}
def saveView(self, **kwargs):
section = kwargs.get('section')
option = kwargs.get('name')
value = kwargs.get('value')
        # See if a value handler is attached; if so, use its result as the value
new_value = fireEvent('setting.save.%s.%s' % (section, option), value, single = True)
self.set(section, option, (new_value if new_value else value).encode('unicode_escape'))
self.save()
# After save (for re-interval etc)
fireEvent('setting.save.%s.%s.after' % (section, option), single = True)
fireEvent('setting.save.%s.*.after' % section, single = True)
return {
'success': True,
}
def getProperty(self, identifier):
from couchpotato import get_db
db = get_db()
prop = None
try:
propert = db.get('property', identifier, with_doc = True)
prop = propert['doc']['value']
except:
pass # self.log.debug('Property "%s" doesn\'t exist: %s', (identifier, traceback.format_exc(0)))
return prop
def setProperty(self, identifier, value = ''):
from couchpotato import get_db
db = get_db()
try:
p = db.get('property', identifier, with_doc = True)
p['doc'].update({
'identifier': identifier,
'value': toUnicode(value),
})
db.update(p['doc'])
except:
db.insert({
'_t': 'property',
'identifier': identifier,
'value': toUnicode(value),
})
class PropertyIndex(HashIndex):
_version = 1
def __init__(self, *args, **kwargs):
kwargs['key_format'] = '32s'
super(PropertyIndex, self).__init__(*args, **kwargs)
def make_key(self, key):
return md5(key).hexdigest()
def make_key_value(self, data):
if data.get('_t') == 'property':
return md5(data['identifier']).hexdigest(), None
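# Hedged usage sketch (illustrative only, not part of CouchPotato): the
# intended flow through the Settings API defined above; the option name
# below is hypothetical.
#
#   settings = Settings()
#   settings.setFile('/tmp/settings.conf')      # load (or create) the file
#   settings.registerDefaults('core', options = {
#       'port': {'default': 5050, 'type': 'int'},
#   })
#   settings.get('port', section = 'core')      # -> 5050, coerced via getInt()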
| gpl-3.0 |
naokimiyasaka/sublime-text | Backup/20140106101521/ConvertToUTF8/chardet/langhungarianmodel.py | 2763 | 12536 | ######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
# 255: Control characters that usually do not exist in any text
# 254: Carriage/Return
# 253: symbols (punctuation) that do not belong to a word
# 252: 0 - 9
# Character Mapping Table:
Latin2_HungarianCharToOrderMap = (
255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
253, 28, 40, 54, 45, 32, 50, 49, 38, 39, 53, 36, 41, 34, 35, 47,
46, 71, 43, 33, 37, 57, 48, 64, 68, 55, 52,253,253,253,253,253,
253, 2, 18, 26, 17, 1, 27, 12, 20, 9, 22, 7, 6, 13, 4, 8,
23, 67, 10, 5, 3, 21, 19, 65, 62, 16, 11,253,253,253,253,253,
159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,
175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,
191,192,193,194,195,196,197, 75,198,199,200,201,202,203,204,205,
79,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,
221, 51, 81,222, 78,223,224,225,226, 44,227,228,229, 61,230,231,
232,233,234, 58,235, 66, 59,236,237,238, 60, 69, 63,239,240,241,
82, 14, 74,242, 70, 80,243, 72,244, 15, 83, 77, 84, 30, 76, 85,
245,246,247, 25, 73, 42, 24,248,249,250, 31, 56, 29,251,252,253,
)
win1250HungarianCharToOrderMap = (
255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
253, 28, 40, 54, 45, 32, 50, 49, 38, 39, 53, 36, 41, 34, 35, 47,
46, 72, 43, 33, 37, 57, 48, 64, 68, 55, 52,253,253,253,253,253,
253, 2, 18, 26, 17, 1, 27, 12, 20, 9, 22, 7, 6, 13, 4, 8,
23, 67, 10, 5, 3, 21, 19, 65, 62, 16, 11,253,253,253,253,253,
161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,
177,178,179,180, 78,181, 69,182,183,184,185,186,187,188,189,190,
191,192,193,194,195,196,197, 76,198,199,200,201,202,203,204,205,
81,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,
221, 51, 83,222, 80,223,224,225,226, 44,227,228,229, 61,230,231,
232,233,234, 58,235, 66, 59,236,237,238, 60, 70, 63,239,240,241,
84, 14, 75,242, 71, 82,243, 73,244, 15, 85, 79, 86, 30, 77, 87,
245,246,247, 25, 74, 42, 24,248,249,250, 31, 56, 29,251,252,253,
)
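# Illustrative sketch (not part of chardet): how a prober consumes these
# tables. Each input byte indexes the char-to-order map; values >= 250 are
# the special classes documented above (252 digits, 253 symbols, 254 CR/LF,
# 255 control), while smaller values are frequency ranks that get fed
# pairwise into HungarianLangModel.
#
#   def order_of(byte_value, table=win1250HungarianCharToOrderMap):
#       return table[byte_value]
#
#   order_of(ord('e'))   # -> 1, a very low rank: 'e' is frequent in Hungarian
#   order_of(ord('0'))   # -> 252, the digit class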
# Model Table:
# total sequences: 100%
# first 512 sequences: 94.7368%
# first 1024 sequences: 5.2623%
# rest sequences: 0.8894%
# negative sequences: 0.0009%
HungarianLangModel = (
0,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,1,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,
3,3,3,3,3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,2,2,3,3,1,1,2,2,2,2,2,1,2,
3,2,2,3,3,3,3,3,2,3,3,3,3,3,3,1,2,3,3,3,3,2,3,3,1,1,3,3,0,1,1,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,
3,2,1,3,3,3,3,3,2,3,3,3,3,3,1,1,2,3,3,3,3,3,3,3,1,1,3,2,0,1,1,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,3,3,3,1,1,2,3,3,3,1,3,3,3,3,3,1,3,3,2,2,0,3,2,3,
0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,
3,3,3,3,3,3,2,3,3,3,2,3,3,2,3,3,3,3,3,2,3,3,2,2,3,2,3,2,0,3,2,2,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,
3,3,3,3,3,3,2,3,3,3,3,3,2,3,3,3,1,2,3,2,2,3,1,2,3,3,2,2,0,3,3,3,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,3,3,3,3,3,3,3,2,2,3,3,3,3,3,3,2,3,3,3,3,2,3,3,3,3,0,2,3,2,
0,0,0,1,1,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,3,3,3,3,3,3,3,3,1,1,1,3,3,2,1,3,2,2,3,2,1,3,2,2,1,0,3,3,1,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,2,2,3,3,3,3,3,1,2,3,3,3,3,1,2,1,3,3,3,3,2,2,3,1,1,3,2,0,1,1,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,2,2,3,3,3,3,3,2,1,3,3,3,3,3,2,2,1,3,3,3,0,1,1,2,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,2,3,3,2,3,3,3,2,0,3,2,3,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,1,0,
3,3,3,3,3,3,2,3,3,3,2,3,2,3,3,3,1,3,2,2,2,3,1,1,3,3,1,1,0,3,3,2,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,3,3,3,3,2,3,3,3,2,3,2,3,3,3,2,3,3,3,3,3,1,2,3,2,2,0,2,2,2,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,2,2,2,3,1,3,3,2,2,1,3,3,3,1,1,3,1,2,3,2,3,2,2,2,1,0,2,2,2,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,
3,1,1,3,3,3,3,3,1,2,3,3,3,3,1,2,1,3,3,3,2,2,3,2,1,0,3,2,0,1,1,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,1,1,3,3,3,3,3,1,2,3,3,3,3,1,1,0,3,3,3,3,0,2,3,0,0,2,1,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,3,3,3,3,3,2,2,3,3,2,2,2,2,3,3,0,1,2,3,2,3,2,2,3,2,1,2,0,2,2,2,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,
3,3,3,3,3,3,1,2,3,3,3,2,1,2,3,3,2,2,2,3,2,3,3,1,3,3,1,1,0,2,3,2,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,1,2,2,2,2,3,3,3,1,1,1,3,3,1,1,3,1,1,3,2,1,2,3,1,1,0,2,2,2,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,2,1,2,1,1,3,3,1,1,1,1,3,3,1,1,2,2,1,2,1,1,2,2,1,1,0,2,2,1,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,1,1,2,1,1,3,3,1,0,1,1,3,3,2,0,1,1,2,3,1,0,2,2,1,0,0,1,3,2,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,2,1,3,3,3,3,3,1,2,3,2,3,3,2,1,1,3,2,3,2,1,2,2,0,1,2,1,0,0,1,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
3,3,3,3,2,2,2,2,3,1,2,2,1,1,3,3,0,3,2,1,2,3,2,1,3,3,1,1,0,2,1,3,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,2,2,2,3,2,3,3,3,2,1,1,3,3,1,1,1,2,2,3,2,3,2,2,2,1,0,2,2,1,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
1,0,0,3,3,3,3,3,0,0,3,3,2,3,0,0,0,2,3,3,1,0,1,2,0,0,1,1,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,1,2,3,3,3,3,3,1,2,3,3,2,2,1,1,0,3,3,2,2,1,2,2,1,0,2,2,0,1,1,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,3,2,2,1,3,1,2,3,3,2,2,1,1,2,2,1,1,1,1,3,2,1,1,1,1,2,1,0,1,2,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,
2,3,3,1,1,1,1,1,3,3,3,0,1,1,3,3,1,1,1,1,1,2,2,0,3,1,1,2,0,2,1,1,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,1,0,1,2,1,2,2,0,1,2,3,1,2,0,0,0,2,1,1,1,1,1,2,0,0,1,1,0,0,0,0,
1,2,1,2,2,2,1,2,1,2,0,2,0,2,2,1,1,2,1,1,2,1,1,1,0,1,0,0,0,1,1,0,
1,1,1,2,3,2,3,3,0,1,2,2,3,1,0,1,0,2,1,2,2,0,1,1,0,0,1,1,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,3,3,2,2,1,0,0,3,2,3,2,0,0,0,1,1,3,0,0,1,1,0,0,2,1,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,1,1,2,2,3,3,1,0,1,3,2,3,1,1,1,0,1,1,1,1,1,3,1,0,0,2,2,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,1,1,1,2,2,2,1,0,1,2,3,3,2,0,0,0,2,1,1,1,2,1,1,1,0,1,1,1,0,0,0,
1,2,2,2,2,2,1,1,1,2,0,2,1,1,1,1,1,2,1,1,1,1,1,1,0,1,1,1,0,0,1,1,
3,2,2,1,0,0,1,1,2,2,0,3,0,1,2,1,1,0,0,1,1,1,0,1,1,1,1,0,2,1,1,1,
2,2,1,1,1,2,1,2,1,1,1,1,1,1,1,2,1,1,1,2,3,1,1,1,1,1,1,1,1,1,0,1,
2,3,3,0,1,0,0,0,3,3,1,0,0,1,2,2,1,0,0,0,0,2,0,0,1,1,1,0,2,1,1,1,
2,1,1,1,1,1,1,2,1,1,0,1,1,0,1,1,1,0,1,2,1,1,0,1,1,1,1,1,1,1,0,1,
2,3,3,0,1,0,0,0,2,2,0,0,0,0,1,2,2,0,0,0,0,1,0,0,1,1,0,0,2,0,1,0,
2,1,1,1,1,2,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,2,0,1,1,1,1,1,0,1,
3,2,2,0,1,0,1,0,2,3,2,0,0,1,2,2,1,0,0,1,1,1,0,0,2,1,0,1,2,2,1,1,
2,1,1,1,1,1,1,2,1,1,1,1,1,1,0,2,1,0,1,1,0,1,1,1,0,1,1,2,1,1,0,1,
2,2,2,0,0,1,0,0,2,2,1,1,0,0,2,1,1,0,0,0,1,2,0,0,2,1,0,0,2,1,1,1,
2,1,1,1,1,2,1,2,1,1,1,2,2,1,1,2,1,1,1,2,1,1,1,1,1,1,1,1,1,1,0,1,
1,2,3,0,0,0,1,0,3,2,1,0,0,1,2,1,1,0,0,0,0,2,1,0,1,1,0,0,2,1,2,1,
1,1,0,0,0,1,0,1,1,1,1,1,2,0,0,1,0,0,0,2,0,0,1,1,1,1,1,1,1,1,0,1,
3,0,0,2,1,2,2,1,0,0,2,1,2,2,0,0,0,2,1,1,1,0,1,1,0,0,1,1,2,0,0,0,
1,2,1,2,2,1,1,2,1,2,0,1,1,1,1,1,1,1,1,1,2,1,1,0,0,1,1,1,1,0,0,1,
1,3,2,0,0,0,1,0,2,2,2,0,0,0,2,2,1,0,0,0,0,3,1,1,1,1,0,0,2,1,1,1,
2,1,0,1,1,1,0,1,1,1,1,1,1,1,0,2,1,0,0,1,0,1,1,0,1,1,1,1,1,1,0,1,
2,3,2,0,0,0,1,0,2,2,0,0,0,0,2,1,1,0,0,0,0,2,1,0,1,1,0,0,2,1,1,0,
2,1,1,1,1,2,1,2,1,2,0,1,1,1,0,2,1,1,1,2,1,1,1,1,0,1,1,1,1,1,0,1,
3,1,1,2,2,2,3,2,1,1,2,2,1,1,0,1,0,2,2,1,1,1,1,1,0,0,1,1,0,1,1,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,2,2,0,0,0,0,0,2,2,0,0,0,0,2,2,1,0,0,0,1,1,0,0,1,2,0,0,2,1,1,1,
2,2,1,1,1,2,1,2,1,1,0,1,1,1,1,2,1,1,1,2,1,1,1,1,0,1,2,1,1,1,0,1,
1,0,0,1,2,3,2,1,0,0,2,0,1,1,0,0,0,1,1,1,1,0,1,1,0,0,1,0,0,0,0,0,
1,2,1,2,1,2,1,1,1,2,0,2,1,1,1,0,1,2,0,0,1,1,1,0,0,0,0,0,0,0,0,0,
2,3,2,0,0,0,0,0,1,1,2,1,0,0,1,1,1,0,0,0,0,2,0,0,1,1,0,0,2,1,1,1,
2,1,1,1,1,1,1,2,1,0,1,1,1,1,0,2,1,1,1,1,1,1,0,1,0,1,1,1,1,1,0,1,
1,2,2,0,1,1,1,0,2,2,2,0,0,0,3,2,1,0,0,0,1,1,0,0,1,1,0,1,1,1,0,0,
1,1,0,1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,2,1,1,1,0,0,1,1,1,0,1,0,1,
2,1,0,2,1,1,2,2,1,1,2,1,1,1,0,0,0,1,1,0,1,1,1,1,0,0,1,1,1,0,0,0,
1,2,2,2,2,2,1,1,1,2,0,2,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,0,0,1,0,
1,2,3,0,0,0,1,0,2,2,0,0,0,0,2,2,0,0,0,0,0,1,0,0,1,0,0,0,2,0,1,0,
2,1,1,1,1,1,0,2,0,0,0,1,2,1,1,1,1,0,1,2,0,1,0,1,0,1,1,1,0,1,0,1,
2,2,2,0,0,0,1,0,2,1,2,0,0,0,1,1,2,0,0,0,0,1,0,0,1,1,0,0,2,1,0,1,
2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,0,1,1,1,1,1,0,1,
1,2,2,0,0,0,1,0,2,2,2,0,0,0,1,1,0,0,0,0,0,1,1,0,2,0,0,1,1,1,0,1,
1,0,1,1,1,1,1,1,0,1,1,1,1,0,0,1,0,0,1,1,0,1,0,1,1,1,1,1,0,0,0,1,
1,0,0,1,0,1,2,1,0,0,1,1,1,2,0,0,0,1,1,0,1,0,1,1,0,0,1,0,0,0,0,0,
0,2,1,2,1,1,1,1,1,2,0,2,0,1,1,0,1,2,1,0,1,1,1,0,0,0,0,0,0,1,0,0,
2,1,1,0,1,2,0,0,1,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,1,0,0,0,2,1,0,1,
2,2,1,1,1,1,1,2,1,1,0,1,1,1,1,2,1,1,1,2,1,1,0,1,0,1,1,1,1,1,0,1,
1,2,2,0,0,0,0,0,1,1,0,0,0,0,2,1,0,0,0,0,0,2,0,0,2,2,0,0,2,0,0,1,
2,1,1,1,1,1,1,1,0,1,1,0,1,1,0,1,0,0,0,1,1,1,1,0,0,1,1,1,1,0,0,1,
1,1,2,0,0,3,1,0,2,1,1,1,0,0,1,1,1,0,0,0,1,1,0,0,0,1,0,0,1,0,1,0,
1,2,1,0,1,1,1,2,1,1,0,1,1,1,1,1,0,0,0,1,1,1,1,1,0,1,0,0,0,1,0,0,
2,1,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,1,0,0,0,1,0,0,0,0,2,0,0,0,
2,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,2,1,1,0,0,1,1,1,1,1,0,1,
2,1,1,1,2,1,1,1,0,1,1,2,1,0,0,0,0,1,1,1,1,0,1,0,0,0,0,1,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,1,0,1,1,1,1,1,0,0,1,1,2,1,0,0,0,1,1,0,0,0,1,1,0,0,1,0,1,0,0,0,
1,2,1,1,1,1,1,1,1,1,0,1,0,1,1,1,1,1,1,0,1,1,1,0,0,0,0,0,0,1,0,0,
2,0,0,0,1,1,1,1,0,0,1,1,0,0,0,0,0,1,1,1,2,0,0,1,0,0,1,0,1,0,0,0,
0,1,1,1,1,1,1,1,1,2,0,1,1,1,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,
1,0,0,1,1,1,1,1,0,0,2,1,0,1,0,0,0,1,0,1,0,0,0,0,0,0,1,0,0,0,0,0,
0,1,1,1,1,1,1,0,1,1,0,1,0,1,1,0,1,1,0,0,1,1,1,0,0,0,0,0,0,0,0,0,
1,0,0,1,1,1,0,0,0,0,1,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,
0,1,1,1,1,1,0,0,1,1,0,1,0,1,0,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,
0,0,0,1,0,0,0,0,0,0,1,1,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,1,1,1,0,1,0,0,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,
2,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,0,0,1,0,0,1,0,1,0,1,1,1,0,0,1,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,1,1,1,1,0,0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,
0,1,1,1,1,1,1,0,1,1,0,1,0,1,0,0,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,
)
Latin2HungarianModel = {
'charToOrderMap': Latin2_HungarianCharToOrderMap,
'precedenceMatrix': HungarianLangModel,
'mTypicalPositiveRatio': 0.947368,
'keepEnglishLetter': True,
'charsetName': "ISO-8859-2"
}
Win1250HungarianModel = {
'charToOrderMap': win1250HungarianCharToOrderMap,
'precedenceMatrix': HungarianLangModel,
'mTypicalPositiveRatio': 0.947368,
'keepEnglishLetter': True,
'charsetName': "windows-1250"
}
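# Usage sketch (an assumption, not part of the original file): in the
# legacy chardet/charade API, model dicts like the two above are fed to
# a single-byte charset prober, roughly:
#
#   from chardet.sbcharsetprober import SingleByteCharSetProber
#   prober = SingleByteCharSetProber(Win1250HungarianModel)
#   prober.feed(byte_string)
#   print(prober.get_confidence())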
# flake8: noqa
| mit |
Kazade/NeHe-Website | google_appengine/lib/cherrypy/cherrypy/process/win32.py | 93 | 5870 | """Windows service. Requires pywin32."""
import os
import win32api
import win32con
import win32event
import win32service
import win32serviceutil
from cherrypy.process import wspbus, plugins
class ConsoleCtrlHandler(plugins.SimplePlugin):
"""A WSPBus plugin for handling Win32 console events (like Ctrl-C)."""
def __init__(self, bus):
self.is_set = False
plugins.SimplePlugin.__init__(self, bus)
def start(self):
if self.is_set:
self.bus.log('Handler for console events already set.', level=40)
return
result = win32api.SetConsoleCtrlHandler(self.handle, 1)
if result == 0:
self.bus.log('Could not SetConsoleCtrlHandler (error %r)' %
win32api.GetLastError(), level=40)
else:
self.bus.log('Set handler for console events.', level=40)
self.is_set = True
def stop(self):
if not self.is_set:
self.bus.log('Handler for console events already off.', level=40)
return
try:
result = win32api.SetConsoleCtrlHandler(self.handle, 0)
except ValueError:
# "ValueError: The object has not been registered"
result = 1
if result == 0:
self.bus.log('Could not remove SetConsoleCtrlHandler (error %r)' %
win32api.GetLastError(), level=40)
else:
self.bus.log('Removed handler for console events.', level=40)
self.is_set = False
def handle(self, event):
"""Handle console control events (like Ctrl-C)."""
if event in (win32con.CTRL_C_EVENT, win32con.CTRL_LOGOFF_EVENT,
win32con.CTRL_BREAK_EVENT, win32con.CTRL_SHUTDOWN_EVENT,
win32con.CTRL_CLOSE_EVENT):
self.bus.log('Console event %s: shutting down bus' % event)
# Remove self immediately so repeated Ctrl-C doesn't re-call it.
try:
self.stop()
except ValueError:
pass
self.bus.exit()
# 'First to return True stops the calls'
return 1
return 0
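# Usage sketch (an assumption, not in the original file): like any
# SimplePlugin, the handler is attached to a bus with subscribe(), after
# which bus lifecycle transitions drive start()/stop():
#
#   from cherrypy.process import wspbus
#   bus = wspbus.Bus()
#   ConsoleCtrlHandler(bus).subscribe()
#   bus.start()  # installs the Win32 console handler via SetConsoleCtrlHandler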
class Win32Bus(wspbus.Bus):
"""A Web Site Process Bus implementation for Win32.
Instead of time.sleep, this bus blocks using native win32event objects.
"""
def __init__(self):
self.events = {}
wspbus.Bus.__init__(self)
def _get_state_event(self, state):
"""Return a win32event for the given state (creating it if needed)."""
try:
return self.events[state]
except KeyError:
event = win32event.CreateEvent(None, 0, 0,
"WSPBus %s Event (pid=%r)" %
(state.name, os.getpid()))
self.events[state] = event
return event
def _get_state(self):
return self._state
def _set_state(self, value):
self._state = value
event = self._get_state_event(value)
win32event.PulseEvent(event)
state = property(_get_state, _set_state)
def wait(self, state, interval=0.1, channel=None):
"""Wait for the given state(s), KeyboardInterrupt or SystemExit.
Since this class uses native win32event objects, the interval
argument is ignored.
"""
if isinstance(state, (tuple, list)):
# Don't wait for an event that beat us to the punch ;)
if self.state not in state:
events = tuple([self._get_state_event(s) for s in state])
win32event.WaitForMultipleObjects(events, 0, win32event.INFINITE)
else:
# Don't wait for an event that beat us to the punch ;)
if self.state != state:
event = self._get_state_event(state)
win32event.WaitForSingleObject(event, win32event.INFINITE)
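# Usage sketch (an assumption): Win32Bus is a drop-in replacement for
# wspbus.Bus whose wait() blocks on a named kernel event instead of
# polling with time.sleep:
#
#   bus = Win32Bus()
#   bus.start()
#   bus.wait(wspbus.states.STARTED)  # returns via WaitForSingleObject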
class _ControlCodes(dict):
"""Control codes used to "signal" a service via ControlService.
User-defined control codes are in the range 128-255. We generally use
the standard Python value for the Linux signal and add 128. Example:
>>> signal.SIGUSR1
10
control_codes['graceful'] = 128 + 10
"""
def key_for(self, obj):
"""For the given value, return its corresponding key."""
for key, val in self.items():
if val is obj:
return key
raise ValueError("The given object could not be found: %r" % obj)
control_codes = _ControlCodes({'graceful': 138})
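# Illustration of the mapping described in the _ControlCodes docstring
# (values derived from it):
#
#   control_codes['graceful']    # -> 138, i.e. 128 + signal.SIGUSR1
#   control_codes.key_for(138)   # -> 'graceful'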
def signal_child(service, command):
if command == 'stop':
win32serviceutil.StopService(service)
elif command == 'restart':
win32serviceutil.RestartService(service)
else:
win32serviceutil.ControlService(service, control_codes[command])
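# Usage sketch (the service name is hypothetical):
#
#   signal_child('Python Web Service', 'restart')   # standard SCM restart
#   signal_child('Python Web Service', 'graceful')  # sends control code 138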
class PyWebService(win32serviceutil.ServiceFramework):
"""Python Web Service."""
_svc_name_ = "Python Web Service"
_svc_display_name_ = "Python Web Service"
_svc_deps_ = None # sequence of service names on which this depends
_exe_name_ = "pywebsvc"
_exe_args_ = None # Default to no arguments
# Only exists on Windows 2000 or later, ignored on windows NT
_svc_description_ = "Python Web Service"
def SvcDoRun(self):
from cherrypy import process
process.bus.start()
process.bus.block()
def SvcStop(self):
from cherrypy import process
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
process.bus.exit()
    def SvcOther(self, control):
        from cherrypy import process  # imported locally, as in SvcDoRun/SvcStop
        process.bus.publish(control_codes.key_for(control))
if __name__ == '__main__':
win32serviceutil.HandleCommandLine(PyWebService)
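# CLI sketch (an assumption based on pywin32 conventions):
# HandleCommandLine exposes the standard service verbs when this module
# is run directly, e.g.
#
#   python win32.py install
#   python win32.py start
#   python win32.py stop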
| bsd-3-clause |
Cuuuurzel/KiPyCalc | sympy/diffgeom/tests/test_hyperbolic_space.py | 74 | 2447 | '''
unit test describing the hyperbolic half-plane with the Poincare metric. This
is a basic model of hyperbolic geometry on the (positive) half-space
{(x,y) \in R^2 | y > 0}
with the Riemannian metric
ds^2 = (dx^2 + dy^2)/y^2
It has constant negative scalar curvature = -2
https://en.wikipedia.org/wiki/Poincare_half-plane_model
'''
from sympy import diag
from sympy.diffgeom import (twoform_to_matrix,
metric_to_Christoffel_1st, metric_to_Christoffel_2nd,
metric_to_Riemann_components, metric_to_Ricci_components)
import sympy.diffgeom.rn
def test_H2():
TP = sympy.diffgeom.TensorProduct
R2 = sympy.diffgeom.rn.R2
y = R2.y
dy = R2.dy
dx = R2.dx
g = (TP(dx, dx) + TP(dy, dy))*y**(-2)
automat = twoform_to_matrix(g)
mat = diag(y**(-2), y**(-2))
assert mat == automat
gamma1 = metric_to_Christoffel_1st(g)
assert gamma1[0][0][0] == 0
assert gamma1[0][0][1] == -y**(-3)
assert gamma1[0][1][0] == -y**(-3)
assert gamma1[0][1][1] == 0
assert gamma1[1][1][1] == -y**(-3)
assert gamma1[1][1][0] == 0
assert gamma1[1][0][1] == 0
assert gamma1[1][0][0] == y**(-3)
gamma2 = metric_to_Christoffel_2nd(g)
assert gamma2[0][0][0] == 0
assert gamma2[0][0][1] == -y**(-1)
assert gamma2[0][1][0] == -y**(-1)
assert gamma2[0][1][1] == 0
assert gamma2[1][1][1] == -y**(-1)
assert gamma2[1][1][0] == 0
assert gamma2[1][0][1] == 0
assert gamma2[1][0][0] == y**(-1)
Rm = metric_to_Riemann_components(g)
assert Rm[0][0][0][0] == 0
assert Rm[0][0][0][1] == 0
assert Rm[0][0][1][0] == 0
assert Rm[0][0][1][1] == 0
assert Rm[0][1][0][0] == 0
assert Rm[0][1][0][1] == -y**(-2)
assert Rm[0][1][1][0] == y**(-2)
assert Rm[0][1][1][1] == 0
assert Rm[1][0][0][0] == 0
assert Rm[1][0][0][1] == y**(-2)
assert Rm[1][0][1][0] == -y**(-2)
assert Rm[1][0][1][1] == 0
assert Rm[1][1][0][0] == 0
assert Rm[1][1][0][1] == 0
assert Rm[1][1][1][0] == 0
assert Rm[1][1][1][1] == 0
Ric = metric_to_Ricci_components(g)
assert Ric[0][0] == -y**(-2)
assert Ric[0][1] == 0
assert Ric[1][0] == 0
    assert Ric[1][1] == -y**(-2)
## scalar curvature is -2
#TODO - it would be nice to have index contraction built-in
R = (Ric[0][0] + Ric[1][1])*y**2
assert R == -2
## Gauss curvature is -1
assert R/2 == -1
| mit |
1974kpkpkp/pygments.rb | vendor/pygments-main/pygments/styles/bw.py | 364 | 1355 | # -*- coding: utf-8 -*-
"""
pygments.styles.bw
~~~~~~~~~~~~~~~~~~
Simple black/white only style.
:copyright: Copyright 2006-2013 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.style import Style
from pygments.token import Keyword, Name, Comment, String, Error, \
Operator, Generic
class BlackWhiteStyle(Style):
background_color = "#ffffff"
default_style = ""
styles = {
Comment: "italic",
Comment.Preproc: "noitalic",
Keyword: "bold",
Keyword.Pseudo: "nobold",
Keyword.Type: "nobold",
Operator.Word: "bold",
Name.Class: "bold",
Name.Namespace: "bold",
Name.Exception: "bold",
Name.Entity: "bold",
Name.Tag: "bold",
String: "italic",
String.Interpol: "bold",
String.Escape: "bold",
Generic.Heading: "bold",
Generic.Subheading: "bold",
Generic.Emph: "italic",
Generic.Strong: "bold",
Generic.Prompt: "bold",
Error: "border:#FF0000"
}
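# Usage sketch (an assumption, not part of the original file): a Style
# subclass can be passed to any Pygments formatter, directly or via its
# registered short name 'bw':
#
#   from pygments import highlight
#   from pygments.lexers import PythonLexer
#   from pygments.formatters import HtmlFormatter
#   print(highlight('print(42)', PythonLexer(), HtmlFormatter(style=BlackWhiteStyle)))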
| mit |
ak2703/edx-platform | common/lib/symmath/symmath/symmath_check.py | 126 | 12542 | #!/usr/bin/python
# -*- coding: utf-8 -*-
#
# File: symmath_check.py
# Date: 02-May-12 (creation)
#
# Symbolic mathematical expression checker for edX. Uses sympy to check for expression equality.
#
# Takes in math expressions given as Presentation MathML (from ASCIIMathML), converts to Content MathML using SnuggleTeX
import traceback
from .formula import *
import logging
log = logging.getLogger(__name__)
#-----------------------------------------------------------------------------
# check function interface
#
# This is one of the main entry points to call.
def symmath_check_simple(expect, ans, adict={}, symtab=None, extra_options=None):
"""
Check a symbolic mathematical expression using sympy.
The input is an ascii string (not MathML) converted to math using sympy.sympify.
"""
options = {'__MATRIX__': False, '__ABC__': False, '__LOWER__': False}
if extra_options:
options.update(extra_options)
for op in options: # find options in expect string
if op in expect:
expect = expect.replace(op, '')
options[op] = True
expect = expect.replace('__OR__', '__or__') # backwards compatibility
if options['__LOWER__']:
expect = expect.lower()
ans = ans.lower()
try:
ret = check(expect, ans,
matrix=options['__MATRIX__'],
abcsym=options['__ABC__'],
symtab=symtab,
)
except Exception, err:
return {'ok': False,
'msg': 'Error %s<br/>Failed in evaluating check(%s,%s)' % (err, expect, ans)
}
return ret
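# Usage sketch (an assumption): inline flags embedded in the expect
# string are stripped before comparison, so the call below lowercases
# both sides before checking equality:
#
#   symmath_check_simple('__LOWER__X^2', 'x^2')
#   # -> {'ok': True, 'msg': ...}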
#-----------------------------------------------------------------------------
# pretty generic checking function
def check(expect, given, numerical=False, matrix=False, normphase=False, abcsym=False, do_qubit=True, symtab=None, dosimplify=False):
"""
Returns dict with
'ok': True if check is good, False otherwise
'msg': response message (in HTML)
"expect" may have multiple possible acceptable answers, separated by "__OR__"
"""
if "__or__" in expect: # if multiple acceptable answers
eset = expect.split('__or__') # then see if any match
for eone in eset:
ret = check(eone, given, numerical, matrix, normphase, abcsym, do_qubit, symtab, dosimplify)
if ret['ok']:
return ret
return ret
flags = {}
if "__autonorm__" in expect:
flags['autonorm'] = True
expect = expect.replace('__autonorm__', '')
matrix = True
threshold = 1.0e-3
if "__threshold__" in expect:
(expect, st) = expect.split('__threshold__')
threshold = float(st)
numerical = True
if str(given) == '' and not str(expect) == '':
return {'ok': False, 'msg': ''}
try:
xgiven = my_sympify(given, normphase, matrix, do_qubit=do_qubit, abcsym=abcsym, symtab=symtab)
except Exception, err:
return {'ok': False, 'msg': 'Error %s<br/> in evaluating your expression "%s"' % (err, given)}
try:
xexpect = my_sympify(expect, normphase, matrix, do_qubit=do_qubit, abcsym=abcsym, symtab=symtab)
except Exception, err:
return {'ok': False, 'msg': 'Error %s<br/> in evaluating OUR expression "%s"' % (err, expect)}
if 'autonorm' in flags: # normalize trace of matrices
try:
xgiven /= xgiven.trace()
except Exception, err:
return {'ok': False, 'msg': 'Error %s<br/> in normalizing trace of your expression %s' % (err, to_latex(xgiven))}
try:
xexpect /= xexpect.trace()
except Exception, err:
return {'ok': False, 'msg': 'Error %s<br/> in normalizing trace of OUR expression %s' % (err, to_latex(xexpect))}
msg = 'Your expression was evaluated as ' + to_latex(xgiven)
# msg += '<br/>Expected ' + to_latex(xexpect)
# msg += "<br/>flags=%s" % flags
if matrix and numerical:
xgiven = my_evalf(xgiven, chop=True)
dm = my_evalf(sympy.Matrix(xexpect) - sympy.Matrix(xgiven), chop=True)
msg += " = " + to_latex(xgiven)
if abs(dm.vec().norm().evalf()) < threshold:
return {'ok': True, 'msg': msg}
else:
pass
#msg += "dm = " + to_latex(dm) + " diff = " + str(abs(dm.vec().norm().evalf()))
#msg += "expect = " + to_latex(xexpect)
elif dosimplify:
if sympy.simplify(xexpect) == sympy.simplify(xgiven):
return {'ok': True, 'msg': msg}
elif numerical:
if abs((xexpect - xgiven).evalf(chop=True)) < threshold:
return {'ok': True, 'msg': msg}
elif xexpect == xgiven:
return {'ok': True, 'msg': msg}
#msg += "<p/>expect='%s', given='%s'" % (expect,given) # debugging
# msg += "<p/> dot test " + to_latex(dot(sympy.Symbol('x'),sympy.Symbol('y')))
return {'ok': False, 'msg': msg}
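# Usage sketch (an assumption): '__or__' lets expect carry several
# acceptable forms, any one of which passes:
#
#   check('2*x__or__x+x', 'x + x')   # -> {'ok': True, ...}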
#-----------------------------------------------------------------------------
# helper function to convert all <p> to <span class='inline-error'>
def make_error_message(msg):
# msg = msg.replace('<p>','<p><span class="inline-error">').replace('</p>','</span></p>')
msg = '<div class="capa_alert">%s</div>' % msg
return msg
def is_within_tolerance(expected, actual, tolerance):
if expected == 0:
return abs(actual) < tolerance
else:
return abs(abs(actual - expected) / expected) < tolerance
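# Worked examples (assumptions): the tolerance is absolute when the
# expected value is zero and relative otherwise:
#
#   is_within_tolerance(0, 0.0005, 1e-3)    # True:  |0.0005| < 1e-3
#   is_within_tolerance(100, 100.05, 1e-3)  # True:  relative error 5e-4
#   is_within_tolerance(100, 100.2, 1e-3)   # False: relative error 2e-3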
#-----------------------------------------------------------------------------
# Check function interface, which takes pmathml input
#
# This is one of the main entry points to call.
def symmath_check(expect, ans, dynamath=None, options=None, debug=None, xml=None):
"""
Check a symbolic mathematical expression using sympy.
The input may be presentation MathML. Uses formula.
This is the default Symbolic Response checking function
Desc of args:
expect is a sympy string representing the correct answer. It is interpreted
using my_sympify (from formula.py), which reads strings as sympy input
    (e.g. 'integrate(x^2, (x,1,2))' would be valid, and evaluate to give 7/3)
ans is student-typed answer. It is expected to be ascii math, but the code
below would support a sympy string.
dynamath is the PMathML string converted by MathJax. It is used if
evaluation with ans is not sufficient.
options is a string with these possible substrings, set as an xml property
of the problem:
-matrix - make a sympy matrix, rather than a list of lists, if possible
-qubit - passed to my_sympify
    -imaginary - used in formula, presumably to signal to use i as sqrt(-1)?
-numerical - force numerical comparison.
"""
msg = ''
# msg += '<p/>abname=%s' % abname
    # msg += '<p/>adict=%s' % (repr(adict).replace('<','&lt;'))
threshold = 1.0e-3 # for numerical comparison (also with matrices)
DEBUG = debug
if xml is not None:
DEBUG = xml.get('debug', False) # override debug flag using attribute in symbolicmath xml
if DEBUG in ['0', 'False']:
DEBUG = False
# options
if options is None:
options = ''
do_matrix = 'matrix' in options
do_qubit = 'qubit' in options
do_numerical = 'numerical' in options
# parse expected answer
try:
fexpect = my_sympify(str(expect), matrix=do_matrix, do_qubit=do_qubit)
except Exception, err:
msg += '<p>Error %s in parsing OUR expected answer "%s"</p>' % (err, expect)
return {'ok': False, 'msg': make_error_message(msg)}
###### Sympy input #######
# if expected answer is a number, try parsing provided answer as a number also
try:
fans = my_sympify(str(ans), matrix=do_matrix, do_qubit=do_qubit)
except Exception, err:
fans = None
# do a numerical comparison if both expected and answer are numbers
if hasattr(fexpect, 'is_number') and fexpect.is_number \
and hasattr(fans, 'is_number') and fans.is_number:
if is_within_tolerance(fexpect, fans, threshold):
return {'ok': True, 'msg': msg}
else:
msg += '<p>You entered: %s</p>' % to_latex(fans)
return {'ok': False, 'msg': msg}
if do_numerical: # numerical answer expected - force numerical comparison
if is_within_tolerance(fexpect, fans, threshold):
return {'ok': True, 'msg': msg}
else:
msg += '<p>You entered: %s (note that a numerical answer is expected)</p>' % to_latex(fans)
return {'ok': False, 'msg': msg}
if fexpect == fans:
msg += '<p>You entered: %s</p>' % to_latex(fans)
return {'ok': True, 'msg': msg}
###### PMathML input ######
# convert mathml answer to formula
try:
mmlans = dynamath[0] if dynamath else None
except Exception, err:
mmlans = None
if not mmlans:
return {'ok': False, 'msg': '[symmath_check] failed to get MathML for input; dynamath=%s' % dynamath}
f = formula(mmlans, options=options)
# get sympy representation of the formula
    # if DEBUG: msg += '<p/> mmlans=%s' % repr(mmlans).replace('<','&lt;')
try:
fsym = f.sympy
msg += '<p>You entered: %s</p>' % to_latex(f.sympy)
except Exception, err:
log.exception("Error evaluating expression '%s' as a valid equation", ans)
msg += "<p>Error in evaluating your expression '%s' as a valid equation</p>" % (ans)
if "Illegal math" in str(err):
msg += "<p>Illegal math expression</p>"
if DEBUG:
            msg += 'Error: %s' % str(err).replace('<', '&lt;')
msg += '<hr>'
msg += '<p><font color="blue">DEBUG messages:</p>'
msg += "<p><pre>%s</pre></p>" % traceback.format_exc()
            msg += '<p>cmathml=<pre>%s</pre></p>' % f.cmathml.replace('<', '&lt;')
            msg += '<p>pmathml=<pre>%s</pre></p>' % mmlans.replace('<', '&lt;')
msg += '<hr>'
return {'ok': False, 'msg': make_error_message(msg)}
# do numerical comparison with expected
if hasattr(fexpect, 'is_number') and fexpect.is_number:
if hasattr(fsym, 'is_number') and fsym.is_number:
            if is_within_tolerance(fexpect, fsym, threshold):  # guards against fexpect == 0
return {'ok': True, 'msg': msg}
return {'ok': False, 'msg': msg}
msg += "<p>Expecting a numerical answer!</p>"
msg += "<p>given = %s</p>" % repr(ans)
msg += "<p>fsym = %s</p>" % repr(fsym)
# msg += "<p>cmathml = <pre>%s</pre></p>" % str(f.cmathml).replace('<','<')
return {'ok': False, 'msg': make_error_message(msg)}
# Here is a good spot for adding calls to X.simplify() or X.expand(),
# allowing equivalence over binomial expansion or trig identities
# exactly the same?
if fexpect == fsym:
return {'ok': True, 'msg': msg}
if isinstance(fexpect, list):
try:
xgiven = my_evalf(fsym, chop=True)
dm = my_evalf(sympy.Matrix(fexpect) - sympy.Matrix(xgiven), chop=True)
if abs(dm.vec().norm().evalf()) < threshold:
return {'ok': True, 'msg': msg}
except sympy.ShapeError:
msg += "<p>Error - your input vector or matrix has the wrong dimensions"
return {'ok': False, 'msg': make_error_message(msg)}
except Exception, err:
msg += "<p>Error %s in comparing expected (a list) and your answer</p>" % str(err).replace('<', '<')
if DEBUG:
msg += "<p/><pre>%s</pre>" % traceback.format_exc()
return {'ok': False, 'msg': make_error_message(msg)}
#diff = (fexpect-fsym).simplify()
#fsym = fsym.simplify()
#fexpect = fexpect.simplify()
try:
diff = (fexpect - fsym)
except Exception, err:
diff = None
if DEBUG:
msg += '<hr>'
msg += '<p><font color="blue">DEBUG messages:</p>'
msg += "<p>Got: %s</p>" % repr(fsym)
# msg += "<p/>Got: %s" % str([type(x) for x in fsym.atoms()]).replace('<','<')
msg += "<p>Expecting: %s</p>" % repr(fexpect).replace('**', '^').replace('hat(I)', 'hat(i)')
# msg += "<p/>Expecting: %s" % str([type(x) for x in fexpect.atoms()]).replace('<','<')
if diff:
msg += "<p>Difference: %s</p>" % to_latex(diff)
msg += '<hr>'
# Used to return more keys: 'ex': fexpect, 'got': fsym
return {'ok': False, 'msg': msg}
| agpl-3.0 |
blaggacao/OpenUpgrade | addons/mrp_byproduct/__openerp__.py | 259 | 1819 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
{
'name': 'MRP Byproducts',
'version': '1.0',
'category': 'Manufacturing',
'description': """
This module allows you to produce several products from one production order.
=============================================================================
You can configure by-products in the bill of material.
Without this module:
--------------------
A + B + C -> D
With this module:
-----------------
A + B + C -> D + E
""",
'author': 'OpenERP SA',
'website': 'https://www.odoo.com/page/manufacturing',
'depends': ['base', 'mrp'],
'data': [
'security/ir.model.access.csv',
'mrp_byproduct_view.xml'
],
'demo': [],
'test': ['test/mrp_byproduct.yml'],
'installable': True,
'auto_install': False,
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
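# Hedged illustration (record IDs and refs are hypothetical; field names
# follow this module's mrp.subproduct model): a by-product line "E" on
# the BoM for "D" would be declared in XML data roughly as
#
#   <record id="mrp_subproduct_e" model="mrp.subproduct">
#       <field name="bom_id" ref="mrp_bom_d"/>
#       <field name="product_id" ref="product_e"/>
#       <field name="product_qty">1.0</field>
#   </record>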
| agpl-3.0 |