# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Xcode project file generator.
This module is both an Xcode project file generator and a documentation of the
Xcode project file format. Knowledge of the project file format was gained
based on extensive experience with Xcode, and by making changes to projects in
Xcode.app and observing the resultant changes in the associated project files.
XCODE PROJECT FILES
The generator targets the file format as written by Xcode 3.2 (specifically,
3.2.6), but past experience has taught that the format has not changed
significantly in the past several years, and future versions of Xcode are able
to read older project files.
Xcode project files are "bundled": the project "file" from an end-user's
perspective is actually a directory with an ".xcodeproj" extension. The
project file from this module's perspective is actually a file inside this
directory, always named "project.pbxproj". This file contains a complete
description of the project and is all that is needed to use the xcodeproj.
Other files contained in the xcodeproj directory are simply used to store
per-user settings, such as the state of various UI elements in the Xcode
application.
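For illustration only (the bundle name here is made up), an xcodeproj bundle
typically looks like:
  Example.xcodeproj/
    project.pbxproj      the complete project description; the only file this
                         module reads or writes
    (other files)        per-user settings such as UI state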
The project.pbxproj file is a property list, stored in a format almost
identical to the NeXTstep property list format. The file is able to carry
Unicode data, and is encoded in UTF-8. The root element in the property list
is a dictionary that contains several properties of minimal interest, and two
properties of immense interest. The most important property is a dictionary
named "objects". The entire structure of the project is represented by the
children of this property. The objects dictionary is keyed by unique 96-bit
values represented by 24 uppercase hexadecimal characters. Each value in the
objects dictionary is itself a dictionary, describing an individual object.
Each object in the dictionary is a member of a class, which is identified by
the "isa" property of each object. A variety of classes are represented in a
project file. Objects can refer to other objects by ID, using the 24-character
hexadecimal object key. A project's objects form a tree, with a root object
of class PBXProject at the root. As an example, the PBXProject object serves
as parent to an XCConfigurationList object defining the build configurations
used in the project, a PBXGroup object serving as a container for all files
referenced in the project, and a list of target objects, each of which defines
a target in the project. There are several different types of target object,
such as PBXNativeTarget and PBXAggregateTarget. In this module, this
relationship is expressed by having each target type derive from an abstract
base named XCTarget.
The project.pbxproj file's root dictionary also contains a property, sibling to
the "objects" dictionary, named "rootObject". The value of rootObject is a
24-character object key referring to the root PBXProject object in the
objects dictionary.
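A heavily abbreviated sketch of this structure (the object IDs and the set of
properties shown are made up for illustration, not taken from a real project):
  {
    objects = {
      0123456789ABCDEF01234567 = {
        isa = PBXProject;
        mainGroup = 89ABCDEF0123456789ABCDEF;
      };
      89ABCDEF0123456789ABCDEF = {
        isa = PBXGroup;
        children = ();
      };
    };
    rootObject = 0123456789ABCDEF01234567;
  }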
In Xcode, every file used as input to a target or produced as a final product
of a target must appear somewhere in the hierarchy rooted at the PBXGroup
object referenced by the PBXProject's mainGroup property. A PBXGroup is
generally represented as a folder in the Xcode application. PBXGroups can
contain other PBXGroups as well as PBXFileReferences, which are pointers to
actual files.
Each XCTarget contains a list of build phases, represented in this module by
the abstract base XCBuildPhase. Examples of concrete XCBuildPhase derivations
are PBXSourcesBuildPhase and PBXFrameworksBuildPhase, which correspond to the
"Compile Sources" and "Link Binary With Libraries" phases displayed in the
Xcode application. Files used as input to these phases (for example, source
files in the former case and libraries and frameworks in the latter) are
represented by PBXBuildFile objects, referenced by elements of "files" lists
in XCBuildPhase objects. Each PBXBuildFile object refers to a PBXFileReference
object as a "weak" reference: it does not "own" the PBXFileReference, which is
owned by the root object's mainGroup or a descendant group. In most cases, the
layer of indirection between an XCBuildPhase and a PBXFileReference via a
PBXBuildFile appears extraneous, but there's actually one reason for this:
file-specific compiler flags are added to the PBXBuildFile object so as to
allow a single file to be a member of multiple targets while having distinct
compiler flags for each. These flags can be modified in the Xcode application
in the "Build" tab of a File Info window.
When a project is open in the Xcode application, Xcode will rewrite it. As
such, this module is careful to adhere to the formatting used by Xcode, to
avoid insignificant changes appearing in the file when it is used in the
Xcode application. This will keep version control repositories happy, and
makes it possible to compare a project file used in Xcode to one generated by
this module to determine if any significant changes were made in the
application.
Xcode has its own way of assigning 24-character identifiers to each object,
which is not duplicated here. Because the identifier is only generated
once, when an object is created, and is then left unchanged, there is no need
to attempt to duplicate Xcode's behavior in this area. The generator is free
to select any identifier, even at random, to refer to the objects it creates,
and Xcode will retain those identifiers and use them when subsequently
rewriting the project file. However, if the generator chose new random
identifiers each time the project files were generated, it would be difficult
to compare "used" project files to "pristine" ones produced by this module,
and every object identifier would appear to change whenever updated projects
were checked in to a version control repository. To
mitigate this problem, this module chooses identifiers in a more deterministic
way, by hashing a description of each object as well as its parent and ancestor
objects. This strategy should result in minimal "shift" in IDs as successive
generations of project files are produced.
THIS MODULE
This module introduces several classes, all derived from the XCObject class.
Nearly all of the "brains" are built into the XCObject class, which understands
how to create and modify objects, maintain the proper tree structure, compute
identifiers, and print objects. For the most part, classes derived from
XCObject need only provide a _schema class object, a dictionary that
expresses what properties objects of the class may contain.
Given this structure, it's possible to build a minimal project file by creating
objects of the appropriate types and making the proper connections:
config_list = XCConfigurationList()
group = PBXGroup()
project = PBXProject({'buildConfigurationList': config_list,
'mainGroup': group})
With the project object set up, it can be added to an XCProjectFile object.
XCProjectFile is a pseudo-class in the sense that it is a concrete XCObject
subclass that does not actually correspond to a class type found in a project
file. Rather, it is used to represent the project file's root dictionary.
Printing an XCProjectFile will print the entire project file, including the
full "objects" dictionary.
project_file = XCProjectFile({'rootObject': project})
project_file.ComputeIDs()
project_file.Print()
Xcode project files are always encoded in UTF-8. This module will accept
strings of either the str class or the unicode class. Strings of class str
are assumed to already be encoded in UTF-8. Obviously, if you're just using
ASCII, you won't encounter difficulties because ASCII is a UTF-8 subset.
Strings of class unicode are handled properly and encoded in UTF-8 when
a project file is output.
"""
import gyp.common
import posixpath
import re
import struct
import sys
# hashlib is supplied as of Python 2.5 as the replacement interface for sha
# and other secure hashes. In 2.6, sha is deprecated. Import hashlib if
# available, avoiding a deprecation warning under 2.6. Import sha otherwise,
# preserving 2.4 compatibility.
try:
import hashlib
_new_sha1 = hashlib.sha1
except ImportError:
import sha
_new_sha1 = sha.new
# See XCObject._EncodeString. This pattern is used to determine when a string
# can be printed unquoted. Strings that match this pattern may be printed
# unquoted. Strings that do not match must be quoted and may be further
# transformed to be properly encoded. Note that this expression matches the
# characters listed with "+", for 1 or more occurrences: if a string is empty,
# it must not match this pattern, because it needs to be encoded as "".
_unquoted = re.compile('^[A-Za-z0-9$./_]+$')
# Strings that match this pattern are quoted regardless of what _unquoted says.
# Oddly, Xcode will quote any string with a run of three or more underscores.
_quoted = re.compile('___')
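# A few illustrative cases (these example strings are made up):
#   'Group/file_1.c' matches _unquoted and not _quoted, so it prints unquoted.
#   'has space' does not match _unquoted, so it prints quoted: "has space".
#   'weird___name' matches _quoted, so it is quoted even though it also
#   matches _unquoted.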
# This pattern should match any character that needs to be escaped by
# XCObject._EncodeString. See that function.
_escaped = re.compile('[\\\\"]|[\x00-\x1f]')
# Used by SourceTreeAndPathFromPath
_path_leading_variable = re.compile('^\$\((.*?)\)(/(.*))?$')
def SourceTreeAndPathFromPath(input_path):
"""Given input_path, returns a tuple with sourceTree and path values.
Examples:
input_path (source_tree, output_path)
'$(VAR)/path' ('VAR', 'path')
'$(VAR)' ('VAR', None)
'path' (None, 'path')
"""
source_group_match = _path_leading_variable.match(input_path)
if source_group_match:
source_tree = source_group_match.group(1)
output_path = source_group_match.group(3) # This may be None.
else:
source_tree = None
output_path = input_path
return (source_tree, output_path)
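# ConvertVariablesToShellSyntax rewrites Xcode-style variable references as
# shell-style ones; for example (an illustrative input, not from any real
# project), '$(SRCROOT)/foo' becomes '${SRCROOT}/foo'.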
def ConvertVariablesToShellSyntax(input_string):
return re.sub('\$\((.*?)\)', '${\\1}', input_string)
class XCObject(object):
"""The abstract base of all class types used in Xcode project files.
Class variables:
_schema: A dictionary defining the properties of this class. The keys to
_schema are string property keys as used in project files. Values
are a list of four or five elements:
[ is_list, property_type, is_strong, is_required, default ]
is_list: True if the property described is a list, as opposed
to a single element.
property_type: The type to use as the value of the property,
or if is_list is True, the type to use for each
element of the value's list. property_type must
be an XCObject subclass, or one of the built-in
types str, int, or dict.
is_strong: If property_type is an XCObject subclass, is_strong
is True to assert that this class "owns," or serves
as parent, to the property value (or, if is_list is
True, values). is_strong must be False if
property_type is not an XCObject subclass.
is_required: True if the property is required for the class.
Note that is_required being True does not preclude
an empty string ("", in the case of property_type
str) or list ([], in the case of is_list True) from
being set for the property.
default: Optional. If is_required is True, default may be set
to provide a default value for objects that do not supply
their own value. If is_required is True and default
is not provided, users of the class must supply their own
value for the property.
Note that although the values of the array are expressed in
boolean terms, subclasses provide values as integers to conserve
horizontal space.
_should_print_single_line: False in XCObject. Subclasses whose objects
should be written to the project file in the
alternate single-line format, such as
PBXFileReference and PBXBuildFile, should
set this to True.
_encode_transforms: Used by _EncodeString to encode unprintable characters.
The index into this list is the ordinal of the
character to transform; each value is a string
used to represent the character in the output. XCObject
provides an _encode_transforms list suitable for most
XCObject subclasses.
_alternate_encode_transforms: Provided for subclasses that wish to use
the alternate encoding rules. Xcode seems
to use these rules when printing objects in
single-line format. Subclasses that desire
this behavior should set _encode_transforms
to _alternate_encode_transforms.
_hashables: A list of additional hashable values hashed by ComputeIDs
when constructing this object's ID. Most classes that need custom
hashing behavior should do it by overriding Hashables,
but in some cases an object's parent may wish to push a
hashable value into its child, and it can do so by appending
to _hashables.
Attributes:
id: The object's identifier, a 24-character uppercase hexadecimal string.
Usually, objects being created should not set id until the entire
project file structure is built. At that point, ComputeIDs() should
be called on the root object to assign deterministic values for id to
each object in the tree.
parent: The object's parent. This is set by a parent XCObject when a child
object is added to it.
_properties: The object's property dictionary. An object's properties are
described by its class' _schema variable.
"""
_schema = {}
_should_print_single_line = False
# See _EncodeString.
_encode_transforms = []
i = 0
while i < ord(' '):
_encode_transforms.append('\\U%04x' % i)
i = i + 1
_encode_transforms[7] = '\\a'
_encode_transforms[8] = '\\b'
_encode_transforms[9] = '\\t'
_encode_transforms[10] = '\\n'
_encode_transforms[11] = '\\v'
_encode_transforms[12] = '\\f'
_encode_transforms[13] = '\\n'
_alternate_encode_transforms = list(_encode_transforms)
_alternate_encode_transforms[9] = chr(9)
_alternate_encode_transforms[10] = chr(10)
_alternate_encode_transforms[11] = chr(11)
def __init__(self, properties=None, id=None, parent=None):
self.id = id
self.parent = parent
self._properties = {}
self._hashables = []
self._SetDefaultsFromSchema()
self.UpdateProperties(properties)
def __repr__(self):
try:
name = self.Name()
except NotImplementedError:
return '<%s at 0x%x>' % (self.__class__.__name__, id(self))
return '<%s %r at 0x%x>' % (self.__class__.__name__, name, id(self))
def Copy(self):
"""Make a copy of this object.
The new object will have its own copy of lists and dicts. Any XCObject
objects owned by this object (marked "strong") will be copied in the
new object, even those found in lists. If this object has any weak
references to other XCObjects, the same references are added to the new
object without making a copy.
"""
that = self.__class__(id=self.id, parent=self.parent)
for key, value in self._properties.iteritems():
is_strong = self._schema[key][2]
if isinstance(value, XCObject):
if is_strong:
new_value = value.Copy()
new_value.parent = that
that._properties[key] = new_value
else:
that._properties[key] = value
elif isinstance(value, str) or isinstance(value, unicode) or \
isinstance(value, int):
that._properties[key] = value
elif isinstance(value, list):
if is_strong:
# If is_strong is True, each element is an XCObject, so it's safe to
# call Copy.
that._properties[key] = []
for item in value:
new_item = item.Copy()
new_item.parent = that
that._properties[key].append(new_item)
else:
that._properties[key] = value[:]
elif isinstance(value, dict):
# dicts are never strong.
if is_strong:
raise TypeError, 'Strong dict for key ' + key + ' in ' + \
self.__class__.__name__
else:
that._properties[key] = value.copy()
else:
raise TypeError, 'Unexpected type ' + value.__class__.__name__ + \
' for key ' + key + ' in ' + self.__class__.__name__
return that
def Name(self):
"""Return the name corresponding to an object.
Not all objects necessarily need to be nameable, and not all that do have
a "name" property. Override as needed.
"""
# If the schema indicates that "name" is required, try to access the
# property even if it doesn't exist. This will result in a KeyError
# being raised for the property that should be present, which seems more
# appropriate than NotImplementedError in this case.
if 'name' in self._properties or \
('name' in self._schema and self._schema['name'][3]):
return self._properties['name']
raise NotImplementedError, \
self.__class__.__name__ + ' must implement Name'
def Comment(self):
"""Return a comment string for the object.
Most objects just use their name as the comment, but PBXProject uses
different values.
The returned comment is not escaped and does not have any comment marker
strings applied to it.
"""
return self.Name()
def Hashables(self):
hashables = [self.__class__.__name__]
name = self.Name()
if name != None:
hashables.append(name)
hashables.extend(self._hashables)
return hashables
def HashablesForChild(self):
return None
def ComputeIDs(self, recursive=True, overwrite=True, seed_hash=None):
"""Set "id" properties deterministically.
An object's "id" property is set based on a hash of its class type and
name, as well as the class type and name of all ancestor objects. As
such, it is only advisable to call ComputeIDs once an entire project file
tree is built.
If recursive is True, recurse into all descendant objects and update their
hashes.
If overwrite is True, any existing value set in the "id" property will be
replaced.
"""
def _HashUpdate(hash, data):
"""Update hash with data's length and contents.
If the hash were updated only with the value of data, it would be
possible for clowns to induce collisions by manipulating the names of
their objects. By adding the length, it's exceedingly less likely that
ID collisions will be encountered, intentionally or not.
"""
hash.update(struct.pack('>i', len(data)))
hash.update(data)
if seed_hash is None:
seed_hash = _new_sha1()
hash = seed_hash.copy()
hashables = self.Hashables()
assert len(hashables) > 0
for hashable in hashables:
_HashUpdate(hash, hashable)
if recursive:
hashables_for_child = self.HashablesForChild()
if hashables_for_child is None:
child_hash = hash
else:
assert len(hashables_for_child) > 0
child_hash = seed_hash.copy()
for hashable in hashables_for_child:
_HashUpdate(child_hash, hashable)
for child in self.Children():
child.ComputeIDs(recursive, overwrite, child_hash)
if overwrite or self.id is None:
# Xcode IDs are only 96 bits (24 hex characters), but a SHA-1 digest is
# 160 bits. Instead of throwing out 64 bits of the digest, xor them
# into the portion that gets used.
assert hash.digest_size % 4 == 0
digest_int_count = hash.digest_size / 4
digest_ints = struct.unpack('>' + 'I' * digest_int_count, hash.digest())
id_ints = [0, 0, 0]
for index in xrange(0, digest_int_count):
id_ints[index % 3] ^= digest_ints[index]
self.id = '%08X%08X%08X' % tuple(id_ints)
def EnsureNoIDCollisions(self):
"""Verifies that no two objects have the same ID. Checks all descendants.
"""
ids = {}
descendants = self.Descendants()
for descendant in descendants:
if descendant.id in ids:
other = ids[descendant.id]
raise KeyError, \
'Duplicate ID %s, objects "%s" and "%s" in "%s"' % \
(descendant.id, str(descendant._properties),
str(other._properties), self._properties['rootObject'].Name())
ids[descendant.id] = descendant
def Children(self):
"""Returns a list of all of this object's owned (strong) children."""
children = []
for property, attributes in self._schema.iteritems():
(is_list, property_type, is_strong) = attributes[0:3]
if is_strong and property in self._properties:
if not is_list:
children.append(self._properties[property])
else:
children.extend(self._properties[property])
return children
def Descendants(self):
"""Returns a list of all of this object's descendants, including this
object.
"""
children = self.Children()
descendants = [self]
for child in children:
descendants.extend(child.Descendants())
return descendants
def PBXProjectAncestor(self):
# The base case for recursion is defined at PBXProject.PBXProjectAncestor.
if self.parent:
return self.parent.PBXProjectAncestor()
return None
def _EncodeComment(self, comment):
"""Encodes a comment to be placed in the project file output, mimicing
Xcode behavior.
"""
# This mimics Xcode behavior by wrapping the comment in "/*" and "*/". If
# the string already contains a "*/", it is turned into "(*)/". This keeps
# the file writer from outputting something that would be treated as the
# end of a comment in the middle of something intended to be entirely a
# comment.
return '/* ' + comment.replace('*/', '(*)/') + ' */'
def _EncodeTransform(self, match):
# This function works closely with _EncodeString. It will only be called
# by re.sub with match.group(0) containing a character matched by the
# _escaped expression.
char = match.group(0)
# Backslashes (\) and quotation marks (") are always replaced with a
# backslash-escaped version of the same. Everything else gets its
# replacement from the class' _encode_transforms array.
if char == '\\':
return '\\\\'
if char == '"':
return '\\"'
return self._encode_transforms[ord(char)]
def _EncodeString(self, value):
"""Encodes a string to be placed in the project file output, mimicing
Xcode behavior.
"""
# Use quotation marks when any character outside of the range A-Z, a-z, 0-9,
# $ (dollar sign), . (period), and _ (underscore) is present. Also use
# quotation marks to represent empty strings.
#
# Escape " (double-quote) and \ (backslash) by preceding them with a
# backslash.
#
# Some characters below the printable ASCII range are encoded specially:
# 7 ^G BEL is encoded as "\a"
# 8 ^H BS is encoded as "\b"
# 11 ^K VT is encoded as "\v"
# 12 ^L NP is encoded as "\f"
# 127 ^? DEL is passed through as-is without escaping
# - In PBXFileReference and PBXBuildFile objects:
# 9 ^I HT is passed through as-is without escaping
# 10 ^J NL is passed through as-is without escaping
# 13 ^M CR is passed through as-is without escaping
# - In other objects:
# 9 ^I HT is encoded as "\t"
# 10 ^J NL is encoded as "\n"
# 13 ^M CR is encoded as "\n" rendering it indistinguishable from
# 10 ^J NL
# All other characters within the ASCII control character range (0 through
# 31 inclusive) are encoded as "\U001f" referring to the Unicode code point
# in hexadecimal. For example, character 14 (^N SO) is encoded as "\U000e".
# Characters above the ASCII range are passed through to the output encoded
# as UTF-8 without any escaping. These mappings are contained in the
# class' _encode_transforms list.
if _unquoted.search(value) and not _quoted.search(value):
return value
return '"' + _escaped.sub(self._EncodeTransform, value) + '"'
def _XCPrint(self, file, tabs, line):
file.write('\t' * tabs + line)
def _XCPrintableValue(self, tabs, value, flatten_list=False):
"""Returns a representation of value that may be printed in a project file,
mimicking Xcode's behavior.
_XCPrintableValue can handle str and int values, XCObjects (which are
made printable by returning their id property), and list and dict objects
composed of any of the above types. When printing a list or dict, and
_should_print_single_line is False, the tabs parameter is used to determine
how much to indent the lines corresponding to the items in the list or
dict.
If flatten_list is True, single-element lists will be transformed into
strings.
"""
printable = ''
comment = None
if self._should_print_single_line:
sep = ' '
element_tabs = ''
end_tabs = ''
else:
sep = '\n'
element_tabs = '\t' * (tabs + 1)
end_tabs = '\t' * tabs
if isinstance(value, XCObject):
printable += value.id
comment = value.Comment()
elif isinstance(value, str):
printable += self._EncodeString(value)
elif isinstance(value, unicode):
printable += self._EncodeString(value.encode('utf-8'))
elif isinstance(value, int):
printable += str(value)
elif isinstance(value, list):
if flatten_list and len(value) <= 1:
if len(value) == 0:
printable += self._EncodeString('')
else:
printable += self._EncodeString(value[0])
else:
printable = '(' + sep
for item in value:
printable += element_tabs + \
self._XCPrintableValue(tabs + 1, item, flatten_list) + \
',' + sep
printable += end_tabs + ')'
elif isinstance(value, dict):
printable = '{' + sep
for item_key, item_value in sorted(value.iteritems()):
printable += element_tabs + \
self._XCPrintableValue(tabs + 1, item_key, flatten_list) + ' = ' + \
self._XCPrintableValue(tabs + 1, item_value, flatten_list) + ';' + \
sep
printable += end_tabs + '}'
else:
raise TypeError, "Can't make " + value.__class__.__name__ + ' printable'
if comment != None:
printable += ' ' + self._EncodeComment(comment)
return printable
def _XCKVPrint(self, file, tabs, key, value):
"""Prints a key and value, members of an XCObject's _properties dictionary,
to file.
tabs is an int identifying the indentation level. If the class'
_should_print_single_line variable is True, tabs is ignored and the
key-value pair will be followed by a space instead of a newline.
"""
if self._should_print_single_line:
printable = ''
after_kv = ' '
else:
printable = '\t' * tabs
after_kv = '\n'
# Xcode usually prints remoteGlobalIDString values in PBXContainerItemProxy
# objects without comments. Sometimes it prints them with comments, but
# the majority of the time, it doesn't. To avoid unnecessary changes to
# the project file after Xcode opens it, don't write comments for
# remoteGlobalIDString. This is a sucky hack and it would certainly be
# cleaner to extend the schema to indicate whether or not a comment should
# be printed, but since this is the only case where the problem occurs and
# Xcode itself can't seem to make up its mind, the hack will suffice.
#
# Also see PBXContainerItemProxy._schema['remoteGlobalIDString'].
if key == 'remoteGlobalIDString' and isinstance(self,
PBXContainerItemProxy):
value_to_print = value.id
else:
value_to_print = value
# PBXBuildFile's settings property is represented in the output as a dict,
# but a hack here has it represented as a string. Arrange to strip off the
# quotes so that it shows up in the output as expected.
if key == 'settings' and isinstance(self, PBXBuildFile):
strip_value_quotes = True
else:
strip_value_quotes = False
# In another one-off, let's set flatten_list on buildSettings properties
# of XCBuildConfiguration objects, because that's how Xcode treats them.
if key == 'buildSettings' and isinstance(self, XCBuildConfiguration):
flatten_list = True
else:
flatten_list = False
try:
printable_key = self._XCPrintableValue(tabs, key, flatten_list)
printable_value = self._XCPrintableValue(tabs, value_to_print,
flatten_list)
if strip_value_quotes and len(printable_value) > 1 and \
printable_value[0] == '"' and printable_value[-1] == '"':
printable_value = printable_value[1:-1]
printable += printable_key + ' = ' + printable_value + ';' + after_kv
except TypeError, e:
gyp.common.ExceptionAppend(e,
'while printing key "%s"' % key)
raise
self._XCPrint(file, 0, printable)
def Print(self, file=sys.stdout):
"""Prints a reprentation of this object to file, adhering to Xcode output
formatting.
"""
self.VerifyHasRequiredProperties()
if self._should_print_single_line:
# When printing an object in a single line, Xcode doesn't put any space
# between the beginning of a dictionary (or presumably a list) and the
# first contained item, so you wind up with snippets like
# ...CDEF = {isa = PBXFileReference; fileRef = 0123...
# If it were me, I would have put a space in there after the opening
# curly, but I guess this is just another one of those inconsistencies
# between how Xcode prints PBXFileReference and PBXBuildFile objects as
# compared to other objects. Mimic Xcode's behavior here by using an
# empty string for sep.
sep = ''
end_tabs = 0
else:
sep = '\n'
end_tabs = 2
# Start the object. For example, '\t\tPBXProject = {\n'.
self._XCPrint(file, 2, self._XCPrintableValue(2, self) + ' = {' + sep)
# "isa" isn't in the _properties dictionary, it's an intrinsic property
# of the class which the object belongs to. Xcode always outputs "isa"
# as the first element of an object dictionary.
self._XCKVPrint(file, 3, 'isa', self.__class__.__name__)
# The remaining elements of an object dictionary are sorted alphabetically.
for property, value in sorted(self._properties.iteritems()):
self._XCKVPrint(file, 3, property, value)
# End the object.
self._XCPrint(file, end_tabs, '};\n')
def UpdateProperties(self, properties, do_copy=False):
"""Merge the supplied properties into the _properties dictionary.
The input properties must adhere to the class schema or a KeyError or
TypeError exception will be raised. If adding an object of an XCObject
subclass and the schema indicates a strong relationship, the object's
parent will be set to this object.
If do_copy is True, then lists, dicts, strong-owned XCObjects, and
strong-owned XCObjects in lists will be copied instead of having their
references added.
"""
if properties is None:
return
for property, value in properties.iteritems():
# Make sure the property is in the schema.
if not property in self._schema:
raise KeyError, property + ' not in ' + self.__class__.__name__
# Make sure the property conforms to the schema.
(is_list, property_type, is_strong) = self._schema[property][0:3]
if is_list:
if value.__class__ != list:
raise TypeError, \
property + ' of ' + self.__class__.__name__ + \
' must be list, not ' + value.__class__.__name__
for item in value:
if not isinstance(item, property_type) and \
not (item.__class__ == unicode and property_type == str):
# Accept unicode where str is specified. str is treated as
# UTF-8-encoded.
raise TypeError, \
'item of ' + property + ' of ' + self.__class__.__name__ + \
' must be ' + property_type.__name__ + ', not ' + \
item.__class__.__name__
elif not isinstance(value, property_type) and \
not (value.__class__ == unicode and property_type == str):
# Accept unicode where str is specified. str is treated as
# UTF-8-encoded.
raise TypeError, \
property + ' of ' + self.__class__.__name__ + ' must be ' + \
property_type.__name__ + ', not ' + value.__class__.__name__
# Checks passed, perform the assignment.
if do_copy:
if isinstance(value, XCObject):
if is_strong:
self._properties[property] = value.Copy()
else:
self._properties[property] = value
elif isinstance(value, str) or isinstance(value, unicode) or \
isinstance(value, int):
self._properties[property] = value
elif isinstance(value, list):
if is_strong:
# If is_strong is True, each element is an XCObject, so it's safe
# to call Copy.
self._properties[property] = []
for item in value:
self._properties[property].append(item.Copy())
else:
self._properties[property] = value[:]
elif isinstance(value, dict):
self._properties[property] = value.copy()
else:
raise TypeError, "Don't know how to copy a " + \
value.__class__.__name__ + ' object for ' + \
property + ' in ' + self.__class__.__name__
else:
self._properties[property] = value
# Set up the child's back-reference to this object. Don't use |value|
# any more because it may not be right if do_copy is true.
if is_strong:
if not is_list:
self._properties[property].parent = self
else:
for item in self._properties[property]:
item.parent = self
def HasProperty(self, key):
return key in self._properties
def GetProperty(self, key):
return self._properties[key]
def SetProperty(self, key, value):
self.UpdateProperties({key: value})
def DelProperty(self, key):
if key in self._properties:
del self._properties[key]
def AppendProperty(self, key, value):
# TODO(mark): Support ExtendProperty too (and make this call that)?
# Schema validation.
if not key in self._schema:
raise KeyError, key + ' not in ' + self.__class__.__name__
(is_list, property_type, is_strong) = self._schema[key][0:3]
if not is_list:
raise TypeError, key + ' of ' + self.__class__.__name__ + ' must be list'
if not isinstance(value, property_type):
raise TypeError, 'item of ' + key + ' of ' + self.__class__.__name__ + \
' must be ' + property_type.__name__ + ', not ' + \
value.__class__.__name__
# If the property doesn't exist yet, create a new empty list to receive the
# item.
if not key in self._properties:
self._properties[key] = []
# Set up the ownership link.
if is_strong:
value.parent = self
# Store the item.
self._properties[key].append(value)
def VerifyHasRequiredProperties(self):
"""Ensure that all properties identified as required by the schema are
set.
"""
# TODO(mark): A stronger verification mechanism is needed. Some
# subclasses need to perform validation beyond what the schema can enforce.
for property, attributes in self._schema.iteritems():
(is_list, property_type, is_strong, is_required) = attributes[0:4]
if is_required and not property in self._properties:
raise KeyError, self.__class__.__name__ + ' requires ' + property
def _SetDefaultsFromSchema(self):
"""Assign object default values according to the schema. This will not
overwrite properties that have already been set."""
defaults = {}
for property, attributes in self._schema.iteritems():
(is_list, property_type, is_strong, is_required) = attributes[0:4]
if is_required and len(attributes) >= 5 and \
not property in self._properties:
default = attributes[4]
defaults[property] = default
if len(defaults) > 0:
# Use do_copy=True so that each new object gets its own copy of strong
# objects, lists, and dicts.
self.UpdateProperties(defaults, do_copy=True)
class XCHierarchicalElement(XCObject):
"""Abstract base for PBXGroup and PBXFileReference. Not represented in a
project file."""
# TODO(mark): Do name and path belong here? Probably so.
# If path is set and name is not, name may have a default value. Name will
# be set to the basename of path, if the basename of path is different from
# the full value of path. If path is already just a leaf name, name will
# not be set.
_schema = XCObject._schema.copy()
_schema.update({
'comments': [0, str, 0, 0],
'fileEncoding': [0, str, 0, 0],
'includeInIndex': [0, int, 0, 0],
'indentWidth': [0, int, 0, 0],
'lineEnding': [0, int, 0, 0],
'sourceTree': [0, str, 0, 1, '<group>'],
'tabWidth': [0, int, 0, 0],
'usesTabs': [0, int, 0, 0],
'wrapsLines': [0, int, 0, 0],
})
def __init__(self, properties=None, id=None, parent=None):
# super
XCObject.__init__(self, properties, id, parent)
if 'path' in self._properties and not 'name' in self._properties:
path = self._properties['path']
name = posixpath.basename(path)
if name != '' and path != name:
self.SetProperty('name', name)
if 'path' in self._properties and \
(not 'sourceTree' in self._properties or \
self._properties['sourceTree'] == '<group>'):
# If the pathname begins with an Xcode variable like "$(SDKROOT)/", take
# the variable out and make the path be relative to that variable by
# assigning the variable name as the sourceTree.
(source_tree, path) = SourceTreeAndPathFromPath(self._properties['path'])
if source_tree != None:
self._properties['sourceTree'] = source_tree
if path != None:
self._properties['path'] = path
if source_tree != None and path is None and \
not 'name' in self._properties:
# The path was of the form "$(SDKROOT)" with no path following it.
# This object is now relative to that variable, so it has no path
# attribute of its own. It does, however, keep a name.
del self._properties['path']
self._properties['name'] = source_tree
def Name(self):
if 'name' in self._properties:
return self._properties['name']
elif 'path' in self._properties:
return self._properties['path']
else:
# This happens in the case of the root PBXGroup.
return None
def Hashables(self):
"""Custom hashables for XCHierarchicalElements.
XCHierarchicalElements are special. Generally, their hashes shouldn't
change if the paths don't change. The normal XCObject implementation of
Hashables adds a hashable for each object, which means that if
the hierarchical structure changes (possibly due to changes caused when
TakeOverOnlyChild runs and encounters slight changes in the hierarchy),
the hashes will change. For example, if a project file initially contains
a/b/f1 and the groups a and b become collapsed into a single group a/b, f1
will have a single parent a/b. If someone later adds a/f2 to the project
file, a/b can no longer be
collapsed, and f1 winds up with parent b and grandparent a. That would
be sufficient to change f1's hash.
To counteract this problem, hashables for all XCHierarchicalElements except
for the main group (which has neither a name nor a path) are taken to be
just the set of path components. Because hashables are inherited from
parents, this provides assurance that a/b/f1 has the same set of hashables
whether its parent is b or a/b.
The main group is a special case. As it is permitted to have no name or
path, it is permitted to use the standard XCObject hash mechanism. This
is not considered a problem because there can be only one main group.
"""
if self == self.PBXProjectAncestor()._properties['mainGroup']:
# super
return XCObject.Hashables(self)
hashables = []
# Put the name in first, ensuring that if TakeOverOnlyChild collapses
# children into a top-level group like "Source", the name always goes
# into the list of hashables without interfering with path components.
if 'name' in self._properties:
# Make it less likely for people to manipulate hashes by following the
# pattern of always pushing an object type value onto the list first.
hashables.append(self.__class__.__name__ + '.name')
hashables.append(self._properties['name'])
# NOTE: This still has the problem that if an absolute path is encountered,
# including paths with a sourceTree, they'll still inherit their parents'
# hashables, even though the paths aren't relative to their parents. This
# is not expected to be much of a problem in practice.
path = self.PathFromSourceTreeAndPath()
if path != None:
components = path.split(posixpath.sep)
for component in components:
hashables.append(self.__class__.__name__ + '.path')
hashables.append(component)
hashables.extend(self._hashables)
return hashables
def Compare(self, other):
# Allow comparison of these types. PBXGroup has the highest sort rank;
# PBXVariantGroup is treated as equal to PBXFileReference.
valid_class_types = {
PBXFileReference: 'file',
PBXGroup: 'group',
PBXVariantGroup: 'file',
}
self_type = valid_class_types[self.__class__]
other_type = valid_class_types[other.__class__]
if self_type == other_type:
# If the two objects are of the same sort rank, compare their names.
return cmp(self.Name(), other.Name())
# Otherwise, sort groups before everything else.
if self_type == 'group':
return -1
return 1
def CompareRootGroup(self, other):
# This function should be used only to compare direct children of the
# containing PBXProject's mainGroup. These groups should appear in the
# listed order.
# TODO(mark): "Build" is used by gyp.generator.xcode, perhaps the
# generator should have a way of influencing this list rather than having
# to hardcode for the generator here.
order = ['Source', 'Intermediates', 'Projects', 'Frameworks', 'Products',
'Build']
# If the groups aren't in the listed order, do a name comparison.
# Otherwise, groups in the listed order should come before those that
# aren't.
self_name = self.Name()
other_name = other.Name()
self_in = isinstance(self, PBXGroup) and self_name in order
other_in = isinstance(other, PBXGroup) and other_name in order
if not self_in and not other_in:
return self.Compare(other)
if self_name in order and not other_name in order:
return -1
if other_name in order and not self_name in order:
return 1
# If both groups are in the listed order, go by the defined order.
self_index = order.index(self_name)
other_index = order.index(other_name)
if self_index < other_index:
return -1
if self_index > other_index:
return 1
return 0
def PathFromSourceTreeAndPath(self):
# Turn the object's sourceTree and path properties into a single flat
# string of a form comparable to the path parameter. If there's a
# sourceTree property other than "<group>", wrap it in $(...) for the
# comparison.
components = []
if self._properties['sourceTree'] != '<group>':
components.append('$(' + self._properties['sourceTree'] + ')')
if 'path' in self._properties:
components.append(self._properties['path'])
if len(components) > 0:
return posixpath.join(*components)
return None
def FullPath(self):
# Returns a full path to self relative to the project file, or relative
# to some other source tree. Start with self, and walk up the chain of
# parents prepending their paths, if any, until no more parents are
# available (project-relative path) or until a path relative to some
# source tree is found.
xche = self
path = None
while isinstance(xche, XCHierarchicalElement) and \
(path is None or \
(not path.startswith('/') and not path.startswith('$'))):
this_path = xche.PathFromSourceTreeAndPath()
if this_path != None and path != None:
path = posixpath.join(this_path, path)
elif this_path != None:
path = this_path
xche = xche.parent
return path
class PBXGroup(XCHierarchicalElement):
"""
Attributes:
_children_by_path: Maps pathnames of children of this PBXGroup to the
actual child XCHierarchicalElement objects.
_variant_children_by_name_and_path: Maps (name, path) tuples of
PBXVariantGroup children to the actual child PBXVariantGroup objects.
"""
_schema = XCHierarchicalElement._schema.copy()
_schema.update({
'children': [1, XCHierarchicalElement, 1, 1, []],
'name': [0, str, 0, 0],
'path': [0, str, 0, 0],
})
def __init__(self, properties=None, id=None, parent=None):
# super
XCHierarchicalElement.__init__(self, properties, id, parent)
self._children_by_path = {}
self._variant_children_by_name_and_path = {}
for child in self._properties.get('children', []):
self._AddChildToDicts(child)
def Hashables(self):
# super
hashables = XCHierarchicalElement.Hashables(self)
# It is not sufficient to just rely on name and parent to build a unique
# hashable: a node could have two child PBXGroups sharing a common name.
# To add entropy, the hashable is enhanced with the names of all its
# children.
for child in self._properties.get('children', []):
child_name = child.Name()
if child_name != None:
hashables.append(child_name)
return hashables
def HashablesForChild(self):
# To avoid a circular reference, the hashables used to compute a child id do
# not include the child names.
return XCHierarchicalElement.Hashables(self)
def _AddChildToDicts(self, child):
# Sets up this PBXGroup object's dicts to reference the child properly.
child_path = child.PathFromSourceTreeAndPath()
if child_path:
if child_path in self._children_by_path:
raise ValueError, 'Found multiple children with path ' + child_path
self._children_by_path[child_path] = child
if isinstance(child, PBXVariantGroup):
child_name = child._properties.get('name', None)
key = (child_name, child_path)
if key in self._variant_children_by_name_and_path:
raise ValueError, 'Found multiple PBXVariantGroup children with ' + \
'name ' + str(child_name) + ' and path ' + \
str(child_path)
self._variant_children_by_name_and_path[key] = child
def AppendChild(self, child):
# Callers should use this instead of calling
# AppendProperty('children', child) directly because this function
# maintains the group's dicts.
self.AppendProperty('children', child)
self._AddChildToDicts(child)
def GetChildByName(self, name):
# This is not currently optimized with a dict as GetChildByPath is because
# it has few callers. Most callers probably want GetChildByPath. This
# function is only useful to get children that have names but no paths,
# which is rare. The children of the main group ("Source", "Products",
# etc.) are pretty much the only case where this is likely to come up.
#
# TODO(mark): Maybe this should raise an error if more than one child is
# present with the same name.
if not 'children' in self._properties:
return None
for child in self._properties['children']:
if child.Name() == name:
return child
return None
def GetChildByPath(self, path):
if not path:
return None
if path in self._children_by_path:
return self._children_by_path[path]
return None
def GetChildByRemoteObject(self, remote_object):
# This method is a little bit esoteric. Given a remote_object, which
# should be a PBXFileReference in another project file, this method will
# return this group's PBXReferenceProxy object serving as a local proxy
# for the remote PBXFileReference.
#
# This function might benefit from a dict optimization as GetChildByPath
# for some workloads, but profiling shows that it's not currently a
# problem.
if not 'children' in self._properties:
return None
for child in self._properties['children']:
if not isinstance(child, PBXReferenceProxy):
continue
container_proxy = child._properties['remoteRef']
if container_proxy._properties['remoteGlobalIDString'] == remote_object:
return child
return None
def AddOrGetFileByPath(self, path, hierarchical):
"""Returns an existing or new file reference corresponding to path.
If hierarchical is True, this method will create or use the necessary
hierarchical group structure corresponding to path. Otherwise, it will
look in and create an item in the current group only.
If an existing matching reference is found, it is returned, otherwise, a
new one will be created, added to the correct group, and returned.
If path identifies a directory by virtue of carrying a trailing slash,
this method returns a PBXFileReference of "folder" type. If path
identifies a variant, by virtue of it identifying a file inside a directory
with an ".lproj" extension, this method returns a PBXVariantGroup
containing the variant named by path, and possibly other variants. For
all other paths, a "normal" PBXFileReference will be returned.
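For illustration (these example paths are made up):
  group.AddOrGetFileByPath('src/widgets/button.cc', True) adds or reuses
  PBXGroups for 'src' and 'widgets' and returns a PBXFileReference for
  'button.cc'. group.AddOrGetFileByPath('en.lproj/MainMenu.nib', False)
  returns a PBXVariantGroup named 'MainMenu.nib' containing a variant
  named 'en'.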
"""
# Adding or getting a directory? Directories end with a trailing slash.
is_dir = False
if path.endswith('/'):
is_dir = True
path = posixpath.normpath(path)
if is_dir:
path = path + '/'
# Adding or getting a variant? Variants are files inside directories
# with an ".lproj" extension. Xcode uses variants for localization. For
# a variant path/to/Language.lproj/MainMenu.nib, put a variant group named
# MainMenu.nib inside path/to, and give it a variant named Language. In
# this example, grandparent would be set to path/to and parent_root would
# be set to Language.
variant_name = None
parent = posixpath.dirname(path)
grandparent = posixpath.dirname(parent)
parent_basename = posixpath.basename(parent)
(parent_root, parent_ext) = posixpath.splitext(parent_basename)
if parent_ext == '.lproj':
variant_name = parent_root
if grandparent == '':
grandparent = None
# Putting a directory inside a variant group is not currently supported.
assert not is_dir or variant_name is None
path_split = path.split(posixpath.sep)
if len(path_split) == 1 or \
((is_dir or variant_name != None) and len(path_split) == 2) or \
not hierarchical:
# The PBXFileReference or PBXVariantGroup will be added to or gotten from
# this PBXGroup, no recursion necessary.
if variant_name is None:
# Add or get a PBXFileReference.
file_ref = self.GetChildByPath(path)
if file_ref != None:
assert file_ref.__class__ == PBXFileReference
else:
file_ref = PBXFileReference({'path': path})
self.AppendChild(file_ref)
else:
# Add or get a PBXVariantGroup. The variant group name is the same
# as the basename (MainMenu.nib in the example above). grandparent
# specifies the path to the variant group itself, and path_split[-2:]
# is the path of the specific variant relative to its group.
variant_group_name = posixpath.basename(path)
variant_group_ref = self.AddOrGetVariantGroupByNameAndPath(
variant_group_name, grandparent)
variant_path = posixpath.sep.join(path_split[-2:])
variant_ref = variant_group_ref.GetChildByPath(variant_path)
if variant_ref != None:
assert variant_ref.__class__ == PBXFileReference
else:
variant_ref = PBXFileReference({'name': variant_name,
'path': variant_path})
variant_group_ref.AppendChild(variant_ref)
# The caller is interested in the variant group, not the specific
# variant file.
file_ref = variant_group_ref
return file_ref
else:
# Hierarchical recursion. Add or get a PBXGroup corresponding to the
# outermost path component, and then recurse into it, chopping off that
# path component.
next_dir = path_split[0]
group_ref = self.GetChildByPath(next_dir)
if group_ref != None:
assert group_ref.__class__ == PBXGroup
else:
group_ref = PBXGroup({'path': next_dir})
self.AppendChild(group_ref)
return group_ref.AddOrGetFileByPath(posixpath.sep.join(path_split[1:]),
hierarchical)
def AddOrGetVariantGroupByNameAndPath(self, name, path):
"""Returns an existing or new PBXVariantGroup for name and path.
If a PBXVariantGroup identified by the name and path arguments is already
present as a child of this object, it is returned. Otherwise, a new
PBXVariantGroup with the correct properties is created, added as a child,
and returned.
This method will generally be called by AddOrGetFileByPath, which knows
when to create a variant group based on the structure of the pathnames
passed to it.
"""
key = (name, path)
if key in self._variant_children_by_name_and_path:
variant_group_ref = self._variant_children_by_name_and_path[key]
assert variant_group_ref.__class__ == PBXVariantGroup
return variant_group_ref
variant_group_properties = {'name': name}
if path != None:
variant_group_properties['path'] = path
variant_group_ref = PBXVariantGroup(variant_group_properties)
self.AppendChild(variant_group_ref)
return variant_group_ref
def TakeOverOnlyChild(self, recurse=False):
"""If this PBXGroup has only one child and it's also a PBXGroup, take
it over by making all of its children this object's children.
This function will continue to take over only children when those children
are groups. If there are three PBXGroups representing a, b, and c, with
c inside b and b inside a, and a and b have no other children, this will
result in a taking over both b and c, forming a PBXGroup for a/b/c.
If recurse is True, this function will recurse into children and ask them
to collapse themselves by taking over only children as well. Assuming
an example hierarchy with files at a/b/c/d1, a/b/c/d2, and a/b/c/d3/e/f
(d1, d2, and f are files, the rest are groups), recursion will result in
a group for a/b/c containing a group for d3/e.
"""
# At this stage, check that child class types are PBXGroup exactly,
# instead of using isinstance. The only subclass of PBXGroup,
# PBXVariantGroup, should not participate in reparenting in the same way:
# reparenting by merging different object types would be wrong.
while len(self._properties['children']) == 1 and \
self._properties['children'][0].__class__ == PBXGroup:
# Loop to take over the innermost only-child group possible.
child = self._properties['children'][0]
# Assume the child's properties, including its children. Save a copy
# of this object's old properties, because they'll still be needed.
# This object retains its existing id and parent attributes.
old_properties = self._properties
self._properties = child._properties
self._children_by_path = child._children_by_path
if not 'sourceTree' in self._properties or \
self._properties['sourceTree'] == '<group>':
# The child was relative to its parent. Fix up the path. Note that
# children with a sourceTree other than "<group>" are not relative to
# their parents, so no path fix-up is needed in that case.
if 'path' in old_properties:
if 'path' in self._properties:
# Both the original parent and child have paths set.
self._properties['path'] = posixpath.join(old_properties['path'],
self._properties['path'])
else:
# Only the original parent has a path, use it.
self._properties['path'] = old_properties['path']
if 'sourceTree' in old_properties:
# The original parent had a sourceTree set, use it.
self._properties['sourceTree'] = old_properties['sourceTree']
# If the original parent had a name set, keep using it. If the original
# parent didn't have a name but the child did, let the child's name
# live on. If the name attribute seems unnecessary now, get rid of it.
if 'name' in old_properties and old_properties['name'] != None and \
old_properties['name'] != self.Name():
self._properties['name'] = old_properties['name']
if 'name' in self._properties and 'path' in self._properties and \
self._properties['name'] == self._properties['path']:
del self._properties['name']
# Notify all children of their new parent.
for child in self._properties['children']:
child.parent = self
# If asked to recurse, recurse.
if recurse:
for child in self._properties['children']:
if child.__class__ == PBXGroup:
child.TakeOverOnlyChild(recurse)
def SortGroup(self):
self._properties['children'] = \
sorted(self._properties['children'], cmp=lambda x,y: x.Compare(y))
# Recurse.
for child in self._properties['children']:
if isinstance(child, PBXGroup):
child.SortGroup()
class XCFileLikeElement(XCHierarchicalElement):
# Abstract base for objects that can be used as the fileRef property of
# PBXBuildFile.
def PathHashables(self):
# A PBXBuildFile that refers to this object will call this method to
# obtain additional hashables specific to this XCFileLikeElement. Don't
# just use this object's hashables; they're not specific and unique enough
# on their own (without access to the parent hashables). Instead, provide
# hashables that identify this object by path by getting its hashables as
# well as the hashables of ancestor XCHierarchicalElement objects.
hashables = []
xche = self
while xche != None and isinstance(xche, XCHierarchicalElement):
xche_hashables = xche.Hashables()
for index in xrange(0, len(xche_hashables)):
hashables.insert(index, xche_hashables[index])
xche = xche.parent
return hashables
class XCContainerPortal(XCObject):
# Abstract base for objects that can be used as the containerPortal property
# of PBXContainerItemProxy.
pass
class XCRemoteObject(XCObject):
# Abstract base for objects that can be used as the remoteGlobalIDString
# property of PBXContainerItemProxy.
pass
class PBXFileReference(XCFileLikeElement, XCContainerPortal, XCRemoteObject):
_schema = XCFileLikeElement._schema.copy()
_schema.update({
'explicitFileType': [0, str, 0, 0],
'lastKnownFileType': [0, str, 0, 0],
'name': [0, str, 0, 0],
'path': [0, str, 0, 1],
})
# Weird output rules for PBXFileReference.
_should_print_single_line = True
# super
_encode_transforms = XCFileLikeElement._alternate_encode_transforms
def __init__(self, properties=None, id=None, parent=None):
# super
XCFileLikeElement.__init__(self, properties, id, parent)
if 'path' in self._properties and self._properties['path'].endswith('/'):
self._properties['path'] = self._properties['path'][:-1]
is_dir = True
else:
is_dir = False
if 'path' in self._properties and \
not 'lastKnownFileType' in self._properties and \
not 'explicitFileType' in self._properties:
# TODO(mark): This is the replacement for a replacement for a quick hack.
# It is no longer incredibly sucky, but this list needs to be extended.
extension_map = {
'a': 'archive.ar',
'app': 'wrapper.application',
'bdic': 'file',
'bundle': 'wrapper.cfbundle',
'c': 'sourcecode.c.c',
'cc': 'sourcecode.cpp.cpp',
'cpp': 'sourcecode.cpp.cpp',
'css': 'text.css',
'cxx': 'sourcecode.cpp.cpp',
'dart': 'sourcecode',
'dylib': 'compiled.mach-o.dylib',
'framework': 'wrapper.framework',
'gyp': 'sourcecode',
'gypi': 'sourcecode',
'h': 'sourcecode.c.h',
'hxx': 'sourcecode.cpp.h',
'icns': 'image.icns',
'java': 'sourcecode.java',
'js': 'sourcecode.javascript',
'm': 'sourcecode.c.objc',
'mm': 'sourcecode.cpp.objcpp',
'nib': 'wrapper.nib',
'o': 'compiled.mach-o.objfile',
'pdf': 'image.pdf',
'pl': 'text.script.perl',
'plist': 'text.plist.xml',
'pm': 'text.script.perl',
'png': 'image.png',
'py': 'text.script.python',
'r': 'sourcecode.rez',
'rez': 'sourcecode.rez',
's': 'sourcecode.asm',
'storyboard': 'file.storyboard',
'strings': 'text.plist.strings',
'ttf': 'file',
'xcconfig': 'text.xcconfig',
'xcdatamodel': 'wrapper.xcdatamodel',
'xib': 'file.xib',
'y': 'sourcecode.yacc',
}
prop_map = {
'dart': 'explicitFileType',
'gyp': 'explicitFileType',
'gypi': 'explicitFileType',
}
if is_dir:
file_type = 'folder'
prop_name = 'lastKnownFileType'
else:
basename = posixpath.basename(self._properties['path'])
(root, ext) = posixpath.splitext(basename)
# Check the map using a lowercase extension.
# TODO(mark): Maybe it should try with the original case first and fall
# back to lowercase, in case there are any instances where case
# matters. There currently aren't.
if ext != '':
ext = ext[1:].lower()
# TODO(mark): "text" is the default value, but "file" is appropriate
# for unrecognized files not containing text. Xcode seems to choose
# based on content.
file_type = extension_map.get(ext, 'text')
prop_name = prop_map.get(ext, 'lastKnownFileType')
self._properties[prop_name] = file_type
class PBXVariantGroup(PBXGroup, XCFileLikeElement):
"""PBXVariantGroup is used by Xcode to represent localizations."""
# No additions to the schema relative to PBXGroup.
pass
# PBXReferenceProxy is also an XCFileLikeElement subclass. It is defined below
# because it uses PBXContainerItemProxy, defined below.
class XCBuildConfiguration(XCObject):
_schema = XCObject._schema.copy()
_schema.update({
'baseConfigurationReference': [0, PBXFileReference, 0, 0],
'buildSettings': [0, dict, 0, 1, {}],
'name': [0, str, 0, 1],
})
def HasBuildSetting(self, key):
return key in self._properties['buildSettings']
def GetBuildSetting(self, key):
return self._properties['buildSettings'][key]
def SetBuildSetting(self, key, value):
# TODO(mark): If a list, copy?
self._properties['buildSettings'][key] = value
def AppendBuildSetting(self, key, value):
if not key in self._properties['buildSettings']:
self._properties['buildSettings'][key] = []
self._properties['buildSettings'][key].append(value)
def DelBuildSetting(self, key):
if key in self._properties['buildSettings']:
del self._properties['buildSettings'][key]
def SetBaseConfiguration(self, value):
self._properties['baseConfigurationReference'] = value
class XCConfigurationList(XCObject):
# _configs is the default list of configurations.
_configs = [ XCBuildConfiguration({'name': 'Debug'}),
XCBuildConfiguration({'name': 'Release'}) ]
_schema = XCObject._schema.copy()
_schema.update({
'buildConfigurations': [1, XCBuildConfiguration, 1, 1, _configs],
'defaultConfigurationIsVisible': [0, int, 0, 1, 1],
'defaultConfigurationName': [0, str, 0, 1, 'Release'],
})
def Name(self):
return 'Build configuration list for ' + \
self.parent.__class__.__name__ + ' "' + self.parent.Name() + '"'
def ConfigurationNamed(self, name):
"""Convenience accessor to obtain an XCBuildConfiguration by name."""
for configuration in self._properties['buildConfigurations']:
if configuration._properties['name'] == name:
return configuration
raise KeyError, name
def DefaultConfiguration(self):
"""Convenience accessor to obtain the default XCBuildConfiguration."""
return self.ConfigurationNamed(self._properties['defaultConfigurationName'])
def HasBuildSetting(self, key):
"""Determines the state of a build setting in all XCBuildConfiguration
child objects.
If all child objects have key in their build settings, and the value is the
same in all child objects, returns 1.
If no child objects have the key in their build settings, returns 0.
If some, but not all, child objects have the key in their build settings,
or if any children have different values for the key, returns -1.
"""
has = None
value = None
for configuration in self._properties['buildConfigurations']:
configuration_has = configuration.HasBuildSetting(key)
if has is None:
has = configuration_has
elif has != configuration_has:
return -1
if configuration_has:
configuration_value = configuration.GetBuildSetting(key)
if value is None:
value = configuration_value
elif value != configuration_value:
return -1
if not has:
return 0
return 1
def GetBuildSetting(self, key):
"""Gets the build setting for key.
All child XCConfiguration objects must have the same value set for the
setting, or a ValueError will be raised.
"""
# TODO(mark): This is wrong for build settings that are lists. The list
# contents should be compared (and a list copy returned?)
value = None
for configuration in self._properties['buildConfigurations']:
configuration_value = configuration.GetBuildSetting(key)
if value is None:
value = configuration_value
else:
if value != configuration_value:
raise ValueError, 'Variant values for ' + key
return value
def SetBuildSetting(self, key, value):
"""Sets the build setting for key to value in all child
XCBuildConfiguration objects.
"""
for configuration in self._properties['buildConfigurations']:
configuration.SetBuildSetting(key, value)
def AppendBuildSetting(self, key, value):
"""Appends value to the build setting for key, which is treated as a list,
in all child XCBuildConfiguration objects.
"""
for configuration in self._properties['buildConfigurations']:
configuration.AppendBuildSetting(key, value)
def DelBuildSetting(self, key):
"""Deletes the build setting key from all child XCBuildConfiguration
objects.
"""
for configuration in self._properties['buildConfigurations']:
configuration.DelBuildSetting(key)
def SetBaseConfiguration(self, value):
"""Sets the build configuration in all child XCBuildConfiguration objects.
"""
for configuration in self._properties['buildConfigurations']:
configuration.SetBaseConfiguration(value)
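# Illustrative sketch (not part of the original module): the tri-state
# HasBuildSetting contract documented above, exercised through the list-level
# helpers.  A fresh XCConfigurationList starts with the default Debug and
# Release configurations.
def _example_configuration_list():
  config_list = XCConfigurationList()
  # SetBuildSetting applies to every child configuration, so the setting is
  # present and identical everywhere: HasBuildSetting returns 1.
  config_list.SetBuildSetting('GCC_OPTIMIZATION_LEVEL', '0')
  assert config_list.HasBuildSetting('GCC_OPTIMIZATION_LEVEL') == 1
  # Diverging values across configurations turn the answer into -1.
  config_list.ConfigurationNamed('Release').SetBuildSetting(
      'GCC_OPTIMIZATION_LEVEL', 's')
  assert config_list.HasBuildSetting('GCC_OPTIMIZATION_LEVEL') == -1
  # A setting that no configuration defines reports 0.
  assert config_list.HasBuildSetting('OTHER_CFLAGS') == 0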
class PBXBuildFile(XCObject):
_schema = XCObject._schema.copy()
_schema.update({
'fileRef': [0, XCFileLikeElement, 0, 1],
'settings': [0, str, 0, 0], # hack, it's a dict
})
# Weird output rules for PBXBuildFile.
_should_print_single_line = True
_encode_transforms = XCObject._alternate_encode_transforms
def Name(self):
# Example: "main.cc in Sources"
return self._properties['fileRef'].Name() + ' in ' + self.parent.Name()
def Hashables(self):
# super
hashables = XCObject.Hashables(self)
# It is not sufficient to just rely on Name() to get the
# XCFileLikeElement's name, because that is not a complete pathname.
# PathHashables returns hashables unique enough that no two
# PBXBuildFiles should wind up with the same set of hashables, unless
# someone adds the same file multiple times to the same target. That
# would be considered invalid anyway.
hashables.extend(self._properties['fileRef'].PathHashables())
return hashables
class XCBuildPhase(XCObject):
"""Abstract base for build phase classes. Not represented in a project
file.
Attributes:
    _files_by_path: A dict mapping each path in the files list (keys) to the
      corresponding PBXBuildFile children (values).
_files_by_xcfilelikeelement: A dict mapping each XCFileLikeElement (keys)
to the corresponding PBXBuildFile children (values).
"""
# TODO(mark): Some build phase types, like PBXShellScriptBuildPhase, don't
# actually have a "files" list. XCBuildPhase should not have "files" but
# another abstract subclass of it should provide this, and concrete build
# phase types that do have "files" lists should be derived from that new
# abstract subclass. XCBuildPhase should only provide buildActionMask and
# runOnlyForDeploymentPostprocessing, and not files or the various
# file-related methods and attributes.
_schema = XCObject._schema.copy()
_schema.update({
'buildActionMask': [0, int, 0, 1, 0x7fffffff],
'files': [1, PBXBuildFile, 1, 1, []],
'runOnlyForDeploymentPostprocessing': [0, int, 0, 1, 0],
})
def __init__(self, properties=None, id=None, parent=None):
# super
XCObject.__init__(self, properties, id, parent)
self._files_by_path = {}
self._files_by_xcfilelikeelement = {}
for pbxbuildfile in self._properties.get('files', []):
self._AddBuildFileToDicts(pbxbuildfile)
def FileGroup(self, path):
# Subclasses must override this by returning a two-element tuple. The
# first item in the tuple should be the PBXGroup to which "path" should be
# added, either as a child or deeper descendant. The second item should
# be a boolean indicating whether files should be added into hierarchical
# groups or one single flat group.
raise NotImplementedError, \
self.__class__.__name__ + ' must implement FileGroup'
def _AddPathToDict(self, pbxbuildfile, path):
"""Adds path to the dict tracking paths belonging to this build phase.
If the path is already a member of this build phase, raises an exception.
"""
if path in self._files_by_path:
raise ValueError, 'Found multiple build files with path ' + path
self._files_by_path[path] = pbxbuildfile
def _AddBuildFileToDicts(self, pbxbuildfile, path=None):
"""Maintains the _files_by_path and _files_by_xcfilelikeelement dicts.
If path is specified, then it is the path that is being added to the
phase, and pbxbuildfile must contain either a PBXFileReference directly
referencing that path, or it must contain a PBXVariantGroup that itself
contains a PBXFileReference referencing the path.
If path is not specified, either the PBXFileReference's path or the paths
of all children of the PBXVariantGroup are taken as being added to the
phase.
If the path is already present in the phase, raises an exception.
If the PBXFileReference or PBXVariantGroup referenced by pbxbuildfile
are already present in the phase, referenced by a different PBXBuildFile
object, raises an exception. This does not raise an exception when
a PBXFileReference or PBXVariantGroup reappear and are referenced by the
same PBXBuildFile that has already introduced them, because in the case
of PBXVariantGroup objects, they may correspond to multiple paths that are
not all added simultaneously. When this situation occurs, the path needs
to be added to _files_by_path, but nothing needs to change in
_files_by_xcfilelikeelement, and the caller should have avoided adding
the PBXBuildFile if it is already present in the list of children.
"""
xcfilelikeelement = pbxbuildfile._properties['fileRef']
paths = []
if path != None:
# It's best when the caller provides the path.
if isinstance(xcfilelikeelement, PBXVariantGroup):
paths.append(path)
else:
# If the caller didn't provide a path, there can be either multiple
# paths (PBXVariantGroup) or one.
if isinstance(xcfilelikeelement, PBXVariantGroup):
for variant in xcfilelikeelement._properties['children']:
paths.append(variant.FullPath())
else:
paths.append(xcfilelikeelement.FullPath())
# Add the paths first, because if something's going to raise, the
# messages provided by _AddPathToDict are more useful owing to its
# having access to a real pathname and not just an object's Name().
for a_path in paths:
self._AddPathToDict(pbxbuildfile, a_path)
# If another PBXBuildFile references this XCFileLikeElement, there's a
# problem.
if xcfilelikeelement in self._files_by_xcfilelikeelement and \
self._files_by_xcfilelikeelement[xcfilelikeelement] != pbxbuildfile:
raise ValueError, 'Found multiple build files for ' + \
xcfilelikeelement.Name()
self._files_by_xcfilelikeelement[xcfilelikeelement] = pbxbuildfile
def AppendBuildFile(self, pbxbuildfile, path=None):
# Callers should use this instead of calling
# AppendProperty('files', pbxbuildfile) directly because this function
# maintains the object's dicts. Better yet, callers can just call AddFile
# with a pathname and not worry about building their own PBXBuildFile
# objects.
self.AppendProperty('files', pbxbuildfile)
self._AddBuildFileToDicts(pbxbuildfile, path)
def AddFile(self, path, settings=None):
(file_group, hierarchical) = self.FileGroup(path)
file_ref = file_group.AddOrGetFileByPath(path, hierarchical)
if file_ref in self._files_by_xcfilelikeelement and \
isinstance(file_ref, PBXVariantGroup):
# There's already a PBXBuildFile in this phase corresponding to the
# PBXVariantGroup. path just provides a new variant that belongs to
# the group. Add the path to the dict.
pbxbuildfile = self._files_by_xcfilelikeelement[file_ref]
self._AddBuildFileToDicts(pbxbuildfile, path)
else:
# Add a new PBXBuildFile to get file_ref into the phase.
if settings is None:
pbxbuildfile = PBXBuildFile({'fileRef': file_ref})
else:
pbxbuildfile = PBXBuildFile({'fileRef': file_ref, 'settings': settings})
self.AppendBuildFile(pbxbuildfile, path)
class PBXHeadersBuildPhase(XCBuildPhase):
# No additions to the schema relative to XCBuildPhase.
def Name(self):
return 'Headers'
def FileGroup(self, path):
return self.PBXProjectAncestor().RootGroupForPath(path)
class PBXResourcesBuildPhase(XCBuildPhase):
# No additions to the schema relative to XCBuildPhase.
def Name(self):
return 'Resources'
def FileGroup(self, path):
return self.PBXProjectAncestor().RootGroupForPath(path)
class PBXSourcesBuildPhase(XCBuildPhase):
# No additions to the schema relative to XCBuildPhase.
def Name(self):
return 'Sources'
def FileGroup(self, path):
return self.PBXProjectAncestor().RootGroupForPath(path)
class PBXFrameworksBuildPhase(XCBuildPhase):
# No additions to the schema relative to XCBuildPhase.
def Name(self):
return 'Frameworks'
def FileGroup(self, path):
(root, ext) = posixpath.splitext(path)
if ext != '':
ext = ext[1:].lower()
if ext == 'o':
# .o files are added to Xcode Frameworks phases, but conceptually aren't
# frameworks, they're more like sources or intermediates. Redirect them
# to show up in one of those other groups.
return self.PBXProjectAncestor().RootGroupForPath(path)
else:
return (self.PBXProjectAncestor().FrameworksGroup(), False)
class PBXShellScriptBuildPhase(XCBuildPhase):
_schema = XCBuildPhase._schema.copy()
_schema.update({
'inputPaths': [1, str, 0, 1, []],
'name': [0, str, 0, 0],
'outputPaths': [1, str, 0, 1, []],
'shellPath': [0, str, 0, 1, '/bin/sh'],
'shellScript': [0, str, 0, 1],
'showEnvVarsInLog': [0, int, 0, 0],
})
def Name(self):
if 'name' in self._properties:
return self._properties['name']
return 'ShellScript'
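# Illustrative sketch (not part of the original module): a script phase only
# strictly needs shellScript, since shellPath defaults to /bin/sh.  The name,
# script, and output path below are made-up values.
def _example_shell_script_phase():
  return PBXShellScriptBuildPhase({
      'name': 'Generate Version Header',
      'shellScript': './gen_version.sh > "${DERIVED_FILE_DIR}/version.h"',
      'outputPaths': ['$(DERIVED_FILE_DIR)/version.h'],
  })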
class PBXCopyFilesBuildPhase(XCBuildPhase):
_schema = XCBuildPhase._schema.copy()
_schema.update({
'dstPath': [0, str, 0, 1],
'dstSubfolderSpec': [0, int, 0, 1],
'name': [0, str, 0, 0],
})
# path_tree_re matches "$(DIR)/path" or just "$(DIR)". Match group 1 is
# "DIR", match group 3 is "path" or None.
path_tree_re = re.compile('^\\$\\((.*)\\)(/(.*)|)$')
# path_tree_to_subfolder maps names of Xcode variables to the associated
# dstSubfolderSpec property value used in a PBXCopyFilesBuildPhase object.
path_tree_to_subfolder = {
'BUILT_PRODUCTS_DIR': 16, # Products Directory
# Other types that can be chosen via the Xcode UI.
# TODO(mark): Map Xcode variable names to these.
# : 1, # Wrapper
    # : 6,  # Executables
# : 7, # Resources
# : 15, # Java Resources
# : 10, # Frameworks
# : 11, # Shared Frameworks
# : 12, # Shared Support
# : 13, # PlugIns
}
def Name(self):
if 'name' in self._properties:
return self._properties['name']
return 'CopyFiles'
def FileGroup(self, path):
return self.PBXProjectAncestor().RootGroupForPath(path)
def SetDestination(self, path):
"""Set the dstSubfolderSpec and dstPath properties from path.
path may be specified in the same notation used for XCHierarchicalElements,
specifically, "$(DIR)/path".
"""
path_tree_match = self.path_tree_re.search(path)
if path_tree_match:
# Everything else needs to be relative to an Xcode variable.
path_tree = path_tree_match.group(1)
relative_path = path_tree_match.group(3)
if path_tree in self.path_tree_to_subfolder:
subfolder = self.path_tree_to_subfolder[path_tree]
if relative_path is None:
relative_path = ''
else:
# The path starts with an unrecognized Xcode variable
# name like $(SRCROOT). Xcode will still handle this
# as an "absolute path" that starts with the variable.
subfolder = 0
relative_path = path
elif path.startswith('/'):
# Special case. Absolute paths are in dstSubfolderSpec 0.
subfolder = 0
relative_path = path[1:]
else:
raise ValueError, 'Can\'t use path %s in a %s' % \
(path, self.__class__.__name__)
self._properties['dstPath'] = relative_path
self._properties['dstSubfolderSpec'] = subfolder
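# Illustrative sketch (not part of the original module): the destination
# notations accepted by SetDestination above.  "$(BUILT_PRODUCTS_DIR)/..."
# maps to dstSubfolderSpec 16, while an absolute path maps to spec 0 with the
# leading slash stripped.
def _example_copy_files_destinations():
  phase = PBXCopyFilesBuildPhase({'name': 'Copy Plugins'})
  phase.SetDestination('$(BUILT_PRODUCTS_DIR)/plugins')
  assert phase.GetProperty('dstSubfolderSpec') == 16
  assert phase.GetProperty('dstPath') == 'plugins'
  phase.SetDestination('/Library/Frameworks')
  assert phase.GetProperty('dstSubfolderSpec') == 0
  assert phase.GetProperty('dstPath') == 'Library/Frameworks'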
class PBXBuildRule(XCObject):
_schema = XCObject._schema.copy()
_schema.update({
'compilerSpec': [0, str, 0, 1],
'filePatterns': [0, str, 0, 0],
'fileType': [0, str, 0, 1],
'isEditable': [0, int, 0, 1, 1],
'outputFiles': [1, str, 0, 1, []],
'script': [0, str, 0, 0],
})
def Name(self):
# Not very inspired, but it's what Xcode uses.
return self.__class__.__name__
def Hashables(self):
# super
hashables = XCObject.Hashables(self)
# Use the hashables of the weak objects that this object refers to.
hashables.append(self._properties['fileType'])
if 'filePatterns' in self._properties:
hashables.append(self._properties['filePatterns'])
return hashables
class PBXContainerItemProxy(XCObject):
# When referencing an item in this project file, containerPortal is the
# PBXProject root object of this project file. When referencing an item in
# another project file, containerPortal is a PBXFileReference identifying
# the other project file.
#
# When serving as a proxy to an XCTarget (in this project file or another),
# proxyType is 1. When serving as a proxy to a PBXFileReference (in another
# project file), proxyType is 2. Type 2 is used for references to the
  # products of the other project file's targets.
#
# Xcode is weird about remoteGlobalIDString. Usually, it's printed without
# a comment, indicating that it's tracked internally simply as a string, but
# sometimes it's printed with a comment (usually when the object is initially
# created), indicating that it's tracked as a project file object at least
# sometimes. This module always tracks it as an object, but contains a hack
# to prevent it from printing the comment in the project file output. See
# _XCKVPrint.
_schema = XCObject._schema.copy()
_schema.update({
'containerPortal': [0, XCContainerPortal, 0, 1],
'proxyType': [0, int, 0, 1],
'remoteGlobalIDString': [0, XCRemoteObject, 0, 1],
'remoteInfo': [0, str, 0, 1],
})
def __repr__(self):
props = self._properties
name = '%s.gyp:%s' % (props['containerPortal'].Name(), props['remoteInfo'])
return '<%s %r at 0x%x>' % (self.__class__.__name__, name, id(self))
def Name(self):
# Admittedly not the best name, but it's what Xcode uses.
return self.__class__.__name__
def Hashables(self):
# super
hashables = XCObject.Hashables(self)
# Use the hashables of the weak objects that this object refers to.
hashables.extend(self._properties['containerPortal'].Hashables())
hashables.extend(self._properties['remoteGlobalIDString'].Hashables())
return hashables
class PBXTargetDependency(XCObject):
# The "target" property accepts an XCTarget object, and obviously not
# NoneType. But XCTarget is defined below, so it can't be put into the
# schema yet. The definition of PBXTargetDependency can't be moved below
# XCTarget because XCTarget's own schema references PBXTargetDependency.
# Python doesn't deal well with this circular relationship, and doesn't have
# a real way to do forward declarations. To work around, the type of
# the "target" property is reset below, after XCTarget is defined.
#
# At least one of "name" and "target" is required.
_schema = XCObject._schema.copy()
_schema.update({
'name': [0, str, 0, 0],
'target': [0, None.__class__, 0, 0],
'targetProxy': [0, PBXContainerItemProxy, 1, 1],
})
def __repr__(self):
name = self._properties.get('name') or self._properties['target'].Name()
return '<%s %r at 0x%x>' % (self.__class__.__name__, name, id(self))
def Name(self):
# Admittedly not the best name, but it's what Xcode uses.
return self.__class__.__name__
def Hashables(self):
# super
hashables = XCObject.Hashables(self)
# Use the hashables of the weak objects that this object refers to.
hashables.extend(self._properties['targetProxy'].Hashables())
return hashables
class PBXReferenceProxy(XCFileLikeElement):
_schema = XCFileLikeElement._schema.copy()
_schema.update({
'fileType': [0, str, 0, 1],
'path': [0, str, 0, 1],
'remoteRef': [0, PBXContainerItemProxy, 1, 1],
})
class XCTarget(XCRemoteObject):
# An XCTarget is really just an XCObject, the XCRemoteObject thing is just
# to allow PBXProject to be used in the remoteGlobalIDString property of
# PBXContainerItemProxy.
#
# Setting a "name" property at instantiation may also affect "productName",
# which may in turn affect the "PRODUCT_NAME" build setting in children of
# "buildConfigurationList". See __init__ below.
_schema = XCRemoteObject._schema.copy()
_schema.update({
'buildConfigurationList': [0, XCConfigurationList, 1, 1,
XCConfigurationList()],
'buildPhases': [1, XCBuildPhase, 1, 1, []],
'dependencies': [1, PBXTargetDependency, 1, 1, []],
'name': [0, str, 0, 1],
'productName': [0, str, 0, 1],
})
def __init__(self, properties=None, id=None, parent=None,
force_outdir=None, force_prefix=None, force_extension=None):
# super
XCRemoteObject.__init__(self, properties, id, parent)
# Set up additional defaults not expressed in the schema. If a "name"
# property was supplied, set "productName" if it is not present. Also set
# the "PRODUCT_NAME" build setting in each configuration, but only if
# the setting is not present in any build configuration.
if 'name' in self._properties:
if not 'productName' in self._properties:
self.SetProperty('productName', self._properties['name'])
if 'productName' in self._properties:
if 'buildConfigurationList' in self._properties:
configs = self._properties['buildConfigurationList']
if configs.HasBuildSetting('PRODUCT_NAME') == 0:
configs.SetBuildSetting('PRODUCT_NAME',
self._properties['productName'])
def AddDependency(self, other):
pbxproject = self.PBXProjectAncestor()
other_pbxproject = other.PBXProjectAncestor()
if pbxproject == other_pbxproject:
# Add a dependency to another target in the same project file.
container = PBXContainerItemProxy({'containerPortal': pbxproject,
'proxyType': 1,
'remoteGlobalIDString': other,
'remoteInfo': other.Name()})
dependency = PBXTargetDependency({'target': other,
'targetProxy': container})
self.AppendProperty('dependencies', dependency)
else:
# Add a dependency to a target in a different project file.
other_project_ref = \
pbxproject.AddOrGetProjectReference(other_pbxproject)[1]
container = PBXContainerItemProxy({
'containerPortal': other_project_ref,
'proxyType': 1,
'remoteGlobalIDString': other,
'remoteInfo': other.Name(),
})
dependency = PBXTargetDependency({'name': other.Name(),
'targetProxy': container})
self.AppendProperty('dependencies', dependency)
# Proxy all of these through to the build configuration list.
def ConfigurationNamed(self, name):
return self._properties['buildConfigurationList'].ConfigurationNamed(name)
def DefaultConfiguration(self):
return self._properties['buildConfigurationList'].DefaultConfiguration()
def HasBuildSetting(self, key):
return self._properties['buildConfigurationList'].HasBuildSetting(key)
def GetBuildSetting(self, key):
return self._properties['buildConfigurationList'].GetBuildSetting(key)
def SetBuildSetting(self, key, value):
return self._properties['buildConfigurationList'].SetBuildSetting(key, \
value)
def AppendBuildSetting(self, key, value):
return self._properties['buildConfigurationList'].AppendBuildSetting(key, \
value)
def DelBuildSetting(self, key):
return self._properties['buildConfigurationList'].DelBuildSetting(key)
# Redefine the type of the "target" property. See PBXTargetDependency._schema
# above.
PBXTargetDependency._schema['target'][1] = XCTarget
class PBXNativeTarget(XCTarget):
# buildPhases is overridden in the schema to be able to set defaults.
#
# NOTE: Contrary to most objects, it is advisable to set parent when
# constructing PBXNativeTarget. A parent of an XCTarget must be a PBXProject
# object. A parent reference is required for a PBXNativeTarget during
# construction to be able to set up the target defaults for productReference,
# because a PBXBuildFile object must be created for the target and it must
# be added to the PBXProject's mainGroup hierarchy.
_schema = XCTarget._schema.copy()
_schema.update({
'buildPhases': [1, XCBuildPhase, 1, 1,
[PBXSourcesBuildPhase(), PBXFrameworksBuildPhase()]],
'buildRules': [1, PBXBuildRule, 1, 1, []],
'productReference': [0, PBXFileReference, 0, 1],
'productType': [0, str, 0, 1],
})
# Mapping from Xcode product-types to settings. The settings are:
# filetype : used for explicitFileType in the project file
# prefix : the prefix for the file name
  #  suffix : the suffix for the file name
_product_filetypes = {
'com.apple.product-type.application': ['wrapper.application',
'', '.app'],
'com.apple.product-type.bundle': ['wrapper.cfbundle',
'', '.bundle'],
'com.apple.product-type.framework': ['wrapper.framework',
'', '.framework'],
'com.apple.product-type.library.dynamic': ['compiled.mach-o.dylib',
'lib', '.dylib'],
'com.apple.product-type.library.static': ['archive.ar',
'lib', '.a'],
'com.apple.product-type.tool': ['compiled.mach-o.executable',
'', ''],
'com.apple.product-type.bundle.unit-test': ['wrapper.cfbundle',
'', '.xctest'],
'com.googlecode.gyp.xcode.bundle': ['compiled.mach-o.dylib',
'', '.so'],
}
def __init__(self, properties=None, id=None, parent=None,
force_outdir=None, force_prefix=None, force_extension=None):
# super
XCTarget.__init__(self, properties, id, parent)
if 'productName' in self._properties and \
'productType' in self._properties and \
not 'productReference' in self._properties and \
self._properties['productType'] in self._product_filetypes:
products_group = None
pbxproject = self.PBXProjectAncestor()
if pbxproject != None:
products_group = pbxproject.ProductsGroup()
if products_group != None:
(filetype, prefix, suffix) = \
self._product_filetypes[self._properties['productType']]
# Xcode does not have a distinct type for loadable modules that are
# pure BSD targets (not in a bundle wrapper). GYP allows such modules
# to be specified by setting a target type to loadable_module without
# having mac_bundle set. These are mapped to the pseudo-product type
# com.googlecode.gyp.xcode.bundle.
#
# By picking up this special type and converting it to a dynamic
# library (com.apple.product-type.library.dynamic) with fix-ups,
# single-file loadable modules can be produced.
#
# MACH_O_TYPE is changed to mh_bundle to produce the proper file type
# (as opposed to mh_dylib). In order for linking to succeed,
# DYLIB_CURRENT_VERSION and DYLIB_COMPATIBILITY_VERSION must be
# cleared. They are meaningless for type mh_bundle.
#
# Finally, the .so extension is forcibly applied over the default
# (.dylib), unless another forced extension is already selected.
# .dylib is plainly wrong, and .bundle is used by loadable_modules in
# bundle wrappers (com.apple.product-type.bundle). .so seems an odd
# choice because it's used as the extension on many other systems that
# don't distinguish between linkable shared libraries and non-linkable
# loadable modules, but there's precedent: Python loadable modules on
# Mac OS X use an .so extension.
if self._properties['productType'] == 'com.googlecode.gyp.xcode.bundle':
self._properties['productType'] = \
'com.apple.product-type.library.dynamic'
self.SetBuildSetting('MACH_O_TYPE', 'mh_bundle')
self.SetBuildSetting('DYLIB_CURRENT_VERSION', '')
self.SetBuildSetting('DYLIB_COMPATIBILITY_VERSION', '')
if force_extension is None:
force_extension = suffix[1:]
        if self._properties['productType'] == \
           'com.apple.product-type.bundle.unit-test':
if force_extension is None:
force_extension = suffix[1:]
if force_extension is not None:
# If it's a wrapper (bundle), set WRAPPER_EXTENSION.
if filetype.startswith('wrapper.'):
self.SetBuildSetting('WRAPPER_EXTENSION', force_extension)
else:
# Extension override.
suffix = '.' + force_extension
self.SetBuildSetting('EXECUTABLE_EXTENSION', force_extension)
if filetype.startswith('compiled.mach-o.executable'):
product_name = self._properties['productName']
product_name += suffix
suffix = ''
self.SetProperty('productName', product_name)
self.SetBuildSetting('PRODUCT_NAME', product_name)
# Xcode handles most prefixes based on the target type, however there
# are exceptions. If a "BSD Dynamic Library" target is added in the
# Xcode UI, Xcode sets EXECUTABLE_PREFIX. This check duplicates that
# behavior.
if force_prefix is not None:
prefix = force_prefix
if filetype.startswith('wrapper.'):
self.SetBuildSetting('WRAPPER_PREFIX', prefix)
else:
self.SetBuildSetting('EXECUTABLE_PREFIX', prefix)
if force_outdir is not None:
self.SetBuildSetting('TARGET_BUILD_DIR', force_outdir)
# TODO(tvl): Remove the below hack.
# http://code.google.com/p/gyp/issues/detail?id=122
# Some targets include the prefix in the target_name. These targets
# really should just add a product_name setting that doesn't include
# the prefix. For example:
# target_name = 'libevent', product_name = 'event'
# This check cleans up for them.
product_name = self._properties['productName']
prefix_len = len(prefix)
if prefix_len and (product_name[:prefix_len] == prefix):
product_name = product_name[prefix_len:]
self.SetProperty('productName', product_name)
self.SetBuildSetting('PRODUCT_NAME', product_name)
ref_props = {
'explicitFileType': filetype,
'includeInIndex': 0,
'path': prefix + product_name + suffix,
'sourceTree': 'BUILT_PRODUCTS_DIR',
}
file_ref = PBXFileReference(ref_props)
products_group.AppendChild(file_ref)
self.SetProperty('productReference', file_ref)
def GetBuildPhaseByType(self, type):
if not 'buildPhases' in self._properties:
return None
the_phase = None
for phase in self._properties['buildPhases']:
if isinstance(phase, type):
# Some phases may be present in multiples in a well-formed project file,
# but phases like PBXSourcesBuildPhase may only be present singly, and
        # callers of this function rely on that uniqueness.  Loop
# over the entire list of phases and assert if more than one of the
# desired type is found.
assert the_phase is None
the_phase = phase
return the_phase
def HeadersPhase(self):
headers_phase = self.GetBuildPhaseByType(PBXHeadersBuildPhase)
if headers_phase is None:
headers_phase = PBXHeadersBuildPhase()
# The headers phase should come before the resources, sources, and
# frameworks phases, if any.
insert_at = len(self._properties['buildPhases'])
for index in xrange(0, len(self._properties['buildPhases'])):
phase = self._properties['buildPhases'][index]
if isinstance(phase, PBXResourcesBuildPhase) or \
isinstance(phase, PBXSourcesBuildPhase) or \
isinstance(phase, PBXFrameworksBuildPhase):
insert_at = index
break
self._properties['buildPhases'].insert(insert_at, headers_phase)
headers_phase.parent = self
return headers_phase
def ResourcesPhase(self):
resources_phase = self.GetBuildPhaseByType(PBXResourcesBuildPhase)
if resources_phase is None:
resources_phase = PBXResourcesBuildPhase()
# The resources phase should come before the sources and frameworks
# phases, if any.
insert_at = len(self._properties['buildPhases'])
for index in xrange(0, len(self._properties['buildPhases'])):
phase = self._properties['buildPhases'][index]
if isinstance(phase, PBXSourcesBuildPhase) or \
isinstance(phase, PBXFrameworksBuildPhase):
insert_at = index
break
self._properties['buildPhases'].insert(insert_at, resources_phase)
resources_phase.parent = self
return resources_phase
def SourcesPhase(self):
sources_phase = self.GetBuildPhaseByType(PBXSourcesBuildPhase)
if sources_phase is None:
sources_phase = PBXSourcesBuildPhase()
self.AppendProperty('buildPhases', sources_phase)
return sources_phase
def FrameworksPhase(self):
frameworks_phase = self.GetBuildPhaseByType(PBXFrameworksBuildPhase)
if frameworks_phase is None:
frameworks_phase = PBXFrameworksBuildPhase()
self.AppendProperty('buildPhases', frameworks_phase)
return frameworks_phase
def AddDependency(self, other):
# super
XCTarget.AddDependency(self, other)
static_library_type = 'com.apple.product-type.library.static'
shared_library_type = 'com.apple.product-type.library.dynamic'
framework_type = 'com.apple.product-type.framework'
if isinstance(other, PBXNativeTarget) and \
'productType' in self._properties and \
self._properties['productType'] != static_library_type and \
'productType' in other._properties and \
(other._properties['productType'] == static_library_type or \
((other._properties['productType'] == shared_library_type or \
other._properties['productType'] == framework_type) and \
((not other.HasBuildSetting('MACH_O_TYPE')) or
other.GetBuildSetting('MACH_O_TYPE') != 'mh_bundle'))):
file_ref = other.GetProperty('productReference')
pbxproject = self.PBXProjectAncestor()
other_pbxproject = other.PBXProjectAncestor()
if pbxproject != other_pbxproject:
other_project_product_group = \
pbxproject.AddOrGetProjectReference(other_pbxproject)[0]
file_ref = other_project_product_group.GetChildByRemoteObject(file_ref)
self.FrameworksPhase().AppendProperty('files',
PBXBuildFile({'fileRef': file_ref}))
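# Illustrative sketch (not part of the original module): two PBXNativeTargets
# in one PBXProject, wired together with AddDependency.  Because the
# dependency is a static library, its product is also added to the depending
# target's Frameworks phase, per AddDependency above.  All names are made up,
# and PBXGroup helpers such as AddOrGetFileByPath are defined earlier in this
# file.
def _example_same_project_dependency():
  project = PBXProject(path='sample.xcodeproj')
  app = PBXNativeTarget(
      {'name': 'app', 'productType': 'com.apple.product-type.application'},
      parent=project)
  base = PBXNativeTarget(
      {'name': 'base',
       'productType': 'com.apple.product-type.library.static'},
      parent=project)
  project.AppendProperty('targets', app)
  project.AppendProperty('targets', base)
  # AddFile builds the PBXFileReference and PBXBuildFile objects itself.
  app.SourcesPhase().AddFile('main.cc')
  app.AddDependency(base)
  assert len(app.GetProperty('dependencies')) == 1
  return project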
class PBXAggregateTarget(XCTarget):
pass
class PBXProject(XCContainerPortal):
# A PBXProject is really just an XCObject, the XCContainerPortal thing is
# just to allow PBXProject to be used in the containerPortal property of
# PBXContainerItemProxy.
"""
Attributes:
path: "sample.xcodeproj". TODO(mark) Document me!
_other_pbxprojects: A dictionary, keyed by other PBXProject objects. Each
value is a reference to the dict in the
projectReferences list associated with the keyed
PBXProject.
"""
_schema = XCContainerPortal._schema.copy()
_schema.update({
'attributes': [0, dict, 0, 0],
'buildConfigurationList': [0, XCConfigurationList, 1, 1,
XCConfigurationList()],
'compatibilityVersion': [0, str, 0, 1, 'Xcode 3.2'],
'hasScannedForEncodings': [0, int, 0, 1, 1],
'mainGroup': [0, PBXGroup, 1, 1, PBXGroup()],
'projectDirPath': [0, str, 0, 1, ''],
'projectReferences': [1, dict, 0, 0],
'projectRoot': [0, str, 0, 1, ''],
'targets': [1, XCTarget, 1, 1, []],
})
def __init__(self, properties=None, id=None, parent=None, path=None):
self.path = path
self._other_pbxprojects = {}
# super
return XCContainerPortal.__init__(self, properties, id, parent)
def Name(self):
name = self.path
if name[-10:] == '.xcodeproj':
name = name[:-10]
return posixpath.basename(name)
def Path(self):
return self.path
def Comment(self):
return 'Project object'
def Children(self):
# super
children = XCContainerPortal.Children(self)
# Add children that the schema doesn't know about. Maybe there's a more
# elegant way around this, but this is the only case where we need to own
# objects in a dictionary (that is itself in a list), and three lines for
# a one-off isn't that big a deal.
if 'projectReferences' in self._properties:
for reference in self._properties['projectReferences']:
children.append(reference['ProductGroup'])
return children
def PBXProjectAncestor(self):
return self
def _GroupByName(self, name):
if not 'mainGroup' in self._properties:
self.SetProperty('mainGroup', PBXGroup())
main_group = self._properties['mainGroup']
group = main_group.GetChildByName(name)
if group is None:
group = PBXGroup({'name': name})
main_group.AppendChild(group)
return group
# SourceGroup and ProductsGroup are created by default in Xcode's own
# templates.
def SourceGroup(self):
return self._GroupByName('Source')
def ProductsGroup(self):
return self._GroupByName('Products')
# IntermediatesGroup is used to collect source-like files that are generated
# by rules or script phases and are placed in intermediate directories such
# as DerivedSources.
def IntermediatesGroup(self):
return self._GroupByName('Intermediates')
# FrameworksGroup and ProjectsGroup are top-level groups used to collect
# frameworks and projects.
def FrameworksGroup(self):
return self._GroupByName('Frameworks')
def ProjectsGroup(self):
return self._GroupByName('Projects')
def RootGroupForPath(self, path):
"""Returns a PBXGroup child of this object to which path should be added.
This method is intended to choose between SourceGroup and
IntermediatesGroup on the basis of whether path is present in a source
directory or an intermediates directory. For the purposes of this
determination, any path located within a derived file directory such as
PROJECT_DERIVED_FILE_DIR is treated as being in an intermediates
directory.
The returned value is a two-element tuple. The first element is the
PBXGroup, and the second element specifies whether that group should be
organized hierarchically (True) or as a single flat list (False).
"""
# TODO(mark): make this a class variable and bind to self on call?
# Also, this list is nowhere near exhaustive.
# INTERMEDIATE_DIR and SHARED_INTERMEDIATE_DIR are used by
# gyp.generator.xcode. There should probably be some way for that module
# to push the names in, rather than having to hard-code them here.
source_tree_groups = {
'DERIVED_FILE_DIR': (self.IntermediatesGroup, True),
'INTERMEDIATE_DIR': (self.IntermediatesGroup, True),
'PROJECT_DERIVED_FILE_DIR': (self.IntermediatesGroup, True),
'SHARED_INTERMEDIATE_DIR': (self.IntermediatesGroup, True),
}
(source_tree, path) = SourceTreeAndPathFromPath(path)
if source_tree != None and source_tree in source_tree_groups:
(group_func, hierarchical) = source_tree_groups[source_tree]
group = group_func()
return (group, hierarchical)
# TODO(mark): make additional choices based on file extension.
return (self.SourceGroup(), True)
def AddOrGetFileInRootGroup(self, path):
"""Returns a PBXFileReference corresponding to path in the correct group
according to RootGroupForPath's heuristics.
If an existing PBXFileReference for path exists, it will be returned.
Otherwise, one will be created and returned.
"""
(group, hierarchical) = self.RootGroupForPath(path)
return group.AddOrGetFileByPath(path, hierarchical)
def RootGroupsTakeOverOnlyChildren(self, recurse=False):
"""Calls TakeOverOnlyChild for all groups in the main group."""
for group in self._properties['mainGroup']._properties['children']:
if isinstance(group, PBXGroup):
group.TakeOverOnlyChild(recurse)
def SortGroups(self):
# Sort the children of the mainGroup (like "Source" and "Products")
# according to their defined order.
self._properties['mainGroup']._properties['children'] = \
sorted(self._properties['mainGroup']._properties['children'],
cmp=lambda x,y: x.CompareRootGroup(y))
# Sort everything else by putting group before files, and going
# alphabetically by name within sections of groups and files. SortGroup
# is recursive.
for group in self._properties['mainGroup']._properties['children']:
if not isinstance(group, PBXGroup):
continue
if group.Name() == 'Products':
# The Products group is a special case. Instead of sorting
# alphabetically, sort things in the order of the targets that
# produce the products. To do this, just build up a new list of
# products based on the targets.
products = []
for target in self._properties['targets']:
if not isinstance(target, PBXNativeTarget):
continue
product = target._properties['productReference']
# Make sure that the product is already in the products group.
assert product in group._properties['children']
products.append(product)
# Make sure that this process doesn't miss anything that was already
# in the products group.
assert len(products) == len(group._properties['children'])
group._properties['children'] = products
else:
group.SortGroup()
def AddOrGetProjectReference(self, other_pbxproject):
"""Add a reference to another project file (via PBXProject object) to this
one.
Returns [ProductGroup, ProjectRef]. ProductGroup is a PBXGroup object in
this project file that contains a PBXReferenceProxy object for each
product of each PBXNativeTarget in the other project file. ProjectRef is
a PBXFileReference to the other project file.
If this project file already references the other project file, the
existing ProductGroup and ProjectRef are returned. The ProductGroup will
still be updated if necessary.
"""
if not 'projectReferences' in self._properties:
self._properties['projectReferences'] = []
product_group = None
project_ref = None
if not other_pbxproject in self._other_pbxprojects:
# This project file isn't yet linked to the other one. Establish the
# link.
product_group = PBXGroup({'name': 'Products'})
# ProductGroup is strong.
product_group.parent = self
# There's nothing unique about this PBXGroup, and if left alone, it will
# wind up with the same set of hashables as all other PBXGroup objects
# owned by the projectReferences list. Add the hashables of the
# remote PBXProject that it's related to.
product_group._hashables.extend(other_pbxproject.Hashables())
# The other project reports its path as relative to the same directory
# that this project's path is relative to. The other project's path
# is not necessarily already relative to this project. Figure out the
# pathname that this project needs to use to refer to the other one.
this_path = posixpath.dirname(self.Path())
projectDirPath = self.GetProperty('projectDirPath')
if projectDirPath:
if posixpath.isabs(projectDirPath[0]):
this_path = projectDirPath
else:
this_path = posixpath.join(this_path, projectDirPath)
other_path = gyp.common.RelativePath(other_pbxproject.Path(), this_path)
# ProjectRef is weak (it's owned by the mainGroup hierarchy).
project_ref = PBXFileReference({
'lastKnownFileType': 'wrapper.pb-project',
'path': other_path,
'sourceTree': 'SOURCE_ROOT',
})
self.ProjectsGroup().AppendChild(project_ref)
ref_dict = {'ProductGroup': product_group, 'ProjectRef': project_ref}
self._other_pbxprojects[other_pbxproject] = ref_dict
self.AppendProperty('projectReferences', ref_dict)
# Xcode seems to sort this list case-insensitively
self._properties['projectReferences'] = \
sorted(self._properties['projectReferences'], cmp=lambda x,y:
cmp(x['ProjectRef'].Name().lower(),
y['ProjectRef'].Name().lower()))
else:
      # The link already exists. Pull out the relevant data.
project_ref_dict = self._other_pbxprojects[other_pbxproject]
product_group = project_ref_dict['ProductGroup']
project_ref = project_ref_dict['ProjectRef']
self._SetUpProductReferences(other_pbxproject, product_group, project_ref)
return [product_group, project_ref]
def _SetUpProductReferences(self, other_pbxproject, product_group,
project_ref):
# TODO(mark): This only adds references to products in other_pbxproject
# when they don't exist in this pbxproject. Perhaps it should also
# remove references from this pbxproject that are no longer present in
# other_pbxproject. Perhaps it should update various properties if they
# change.
for target in other_pbxproject._properties['targets']:
if not isinstance(target, PBXNativeTarget):
continue
other_fileref = target._properties['productReference']
if product_group.GetChildByRemoteObject(other_fileref) is None:
# Xcode sets remoteInfo to the name of the target and not the name
# of its product, despite this proxy being a reference to the product.
container_item = PBXContainerItemProxy({
'containerPortal': project_ref,
'proxyType': 2,
'remoteGlobalIDString': other_fileref,
'remoteInfo': target.Name()
})
# TODO(mark): Does sourceTree get copied straight over from the other
# project? Can the other project ever have lastKnownFileType here
# instead of explicitFileType? (Use it if so?) Can path ever be
# unset? (I don't think so.) Can other_fileref have name set, and
# does it impact the PBXReferenceProxy if so? These are the questions
# that perhaps will be answered one day.
reference_proxy = PBXReferenceProxy({
'fileType': other_fileref._properties['explicitFileType'],
'path': other_fileref._properties['path'],
'sourceTree': other_fileref._properties['sourceTree'],
'remoteRef': container_item,
})
product_group.AppendChild(reference_proxy)
def SortRemoteProductReferences(self):
# For each remote project file, sort the associated ProductGroup in the
# same order that the targets are sorted in the remote project file. This
# is the sort order used by Xcode.
def CompareProducts(x, y, remote_products):
# x and y are PBXReferenceProxy objects. Go through their associated
# PBXContainerItem to get the remote PBXFileReference, which will be
# present in the remote_products list.
x_remote = x._properties['remoteRef']._properties['remoteGlobalIDString']
y_remote = y._properties['remoteRef']._properties['remoteGlobalIDString']
x_index = remote_products.index(x_remote)
y_index = remote_products.index(y_remote)
# Use the order of each remote PBXFileReference in remote_products to
# determine the sort order.
return cmp(x_index, y_index)
for other_pbxproject, ref_dict in self._other_pbxprojects.iteritems():
# Build up a list of products in the remote project file, ordered the
# same as the targets that produce them.
remote_products = []
for target in other_pbxproject._properties['targets']:
if not isinstance(target, PBXNativeTarget):
continue
remote_products.append(target._properties['productReference'])
# Sort the PBXReferenceProxy children according to the list of remote
# products.
product_group = ref_dict['ProductGroup']
product_group._properties['children'] = sorted(
product_group._properties['children'],
cmp=lambda x, y: CompareProducts(x, y, remote_products))
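# Illustrative sketch (not part of the original module): referencing one
# project file from another with AddOrGetProjectReference, as described in
# its docstring above.  The paths are made-up values; the returned
# ProductGroup is populated from the other project's native targets.
def _example_cross_project_reference():
  this_project = PBXProject(path='app/app.xcodeproj')
  other_project = PBXProject(path='libs/libs.xcodeproj')
  product_group, project_ref = this_project.AddOrGetProjectReference(
      other_project)
  return (product_group, project_ref)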
class XCProjectFile(XCObject):
_schema = XCObject._schema.copy()
_schema.update({
'archiveVersion': [0, int, 0, 1, 1],
'classes': [0, dict, 0, 1, {}],
'objectVersion': [0, int, 0, 1, 45],
'rootObject': [0, PBXProject, 1, 1],
})
def SetXcodeVersion(self, version):
version_to_object_version = {
'2.4': 45,
'3.0': 45,
'3.1': 45,
'3.2': 46,
}
if not version in version_to_object_version:
supported_str = ', '.join(sorted(version_to_object_version.keys()))
raise Exception(
'Unsupported Xcode version %s (supported: %s)' %
( version, supported_str ) )
compatibility_version = 'Xcode %s' % version
self._properties['rootObject'].SetProperty('compatibilityVersion',
compatibility_version)
    self.SetProperty('objectVersion', version_to_object_version[version])
def ComputeIDs(self, recursive=True, overwrite=True, hash=None):
# Although XCProjectFile is implemented here as an XCObject, it's not a
# proper object in the Xcode sense, and it certainly doesn't have its own
# ID. Pass through an attempt to update IDs to the real root object.
if recursive:
self._properties['rootObject'].ComputeIDs(recursive, overwrite, hash)
def Print(self, file=sys.stdout):
self.VerifyHasRequiredProperties()
# Add the special "objects" property, which will be caught and handled
# separately during printing. This structure allows a fairly standard
# loop do the normal printing.
self._properties['objects'] = {}
self._XCPrint(file, 0, '// !$*UTF8*$!\n')
if self._should_print_single_line:
self._XCPrint(file, 0, '{ ')
else:
self._XCPrint(file, 0, '{\n')
for property, value in sorted(self._properties.iteritems(),
cmp=lambda x, y: cmp(x, y)):
if property == 'objects':
self._PrintObjects(file)
else:
self._XCKVPrint(file, 1, property, value)
self._XCPrint(file, 0, '}\n')
del self._properties['objects']
def _PrintObjects(self, file):
if self._should_print_single_line:
self._XCPrint(file, 0, 'objects = {')
else:
self._XCPrint(file, 1, 'objects = {\n')
objects_by_class = {}
for object in self.Descendants():
if object == self:
continue
class_name = object.__class__.__name__
if not class_name in objects_by_class:
objects_by_class[class_name] = []
objects_by_class[class_name].append(object)
for class_name in sorted(objects_by_class):
self._XCPrint(file, 0, '\n')
self._XCPrint(file, 0, '/* Begin ' + class_name + ' section */\n')
for object in sorted(objects_by_class[class_name],
cmp=lambda x, y: cmp(x.id, y.id)):
object.Print(file)
self._XCPrint(file, 0, '/* End ' + class_name + ' section */\n')
if self._should_print_single_line:
self._XCPrint(file, 0, '}; ')
else:
self._XCPrint(file, 1, '};\n')
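# Illustrative sketch (not part of the original module): the end-to-end flow a
# generator follows with these classes -- build a PBXProject, wrap it in an
# XCProjectFile, compute deterministic IDs, and print the pbxproj text.  The
# project name is made up; targets and files would normally be added before
# sorting and printing.
def _example_write_minimal_project(out_file):
  project = PBXProject(path='sample.xcodeproj')
  project_file = XCProjectFile({'rootObject': project})
  project_file.SetXcodeVersion('3.2')
  project.SortGroups()
  project_file.ComputeIDs()
  project_file.Print(out_file)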
|
mit
|
OCA/l10n-brazil
|
l10n_br_stock_account/__manifest__.py
|
1
|
1134
|
# Copyright (C) 2014 Renato Lima - Akretion
# License AGPL-3 - See http://www.gnu.org/licenses/agpl-3.0.html
{
"name": "Brazilian Localization WMS Accounting",
"category": "Localisation",
"license": "AGPL-3",
"author": "Akretion, Odoo Community Association (OCA)",
"website": "https://github.com/OCA/l10n-brazil",
"version": "12.0.4.0.0",
"depends": [
"stock_account",
"stock_picking_invoicing",
"l10n_br_stock",
"l10n_br_account",
],
"data": [
# Security
"security/ir.model.access.csv",
# Data
"data/l10n_br_stock_account_data.xml",
# Views
"views/stock_account_view.xml",
"views/stock_picking.xml",
"views/stock_rule_view.xml",
"views/stock_picking_type_view.xml",
"views/res_company_view.xml",
# Wizards
"wizards/stock_invoice_onshipping_view.xml",
],
"demo": [
# Demo
"demo/company_demo.xml",
"demo/l10n_br_stock_account_demo.xml",
],
"installable": True,
"post_init_hook": "post_init_hook",
"auto_install": True,
}
|
agpl-3.0
|
hastexo/edx-platform
|
lms/lib/xblock/test/test_mixin.py
|
8
|
21705
|
"""
Tests of the LMS XBlock Mixin
"""
import ddt
from nose.plugins.attrib import attr
from lms_xblock.mixin import (
INVALID_USER_PARTITION_GROUP_VALIDATION_COMPONENT,
INVALID_USER_PARTITION_GROUP_VALIDATION_UNIT,
INVALID_USER_PARTITION_VALIDATION_COMPONENT,
INVALID_USER_PARTITION_VALIDATION_UNIT,
NONSENSICAL_ACCESS_RESTRICTION
)
from xblock.validation import ValidationMessage
from xmodule.modulestore import ModuleStoreEnum
from xmodule.modulestore.tests.factories import CourseFactory, ToyCourseFactory, ItemFactory
from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase, TEST_DATA_MIXED_MODULESTORE
from xmodule.partitions.partitions import Group, UserPartition
class LmsXBlockMixinTestCase(ModuleStoreTestCase):
"""
    Base class for XBlock mixin test cases. A simple course with a single user partition is created
in setUp for all subclasses to use.
"""
def build_course(self):
"""
Build up a course tree with a UserPartition.
"""
# pylint: disable=attribute-defined-outside-init
self.user_partition = UserPartition(
0,
'first_partition',
'First Partition',
[
Group(0, 'alpha'),
Group(1, 'beta')
]
)
self.group1 = self.user_partition.groups[0]
self.group2 = self.user_partition.groups[1]
self.course = CourseFactory.create(user_partitions=[self.user_partition])
section = ItemFactory.create(parent=self.course, category='chapter', display_name='Test Section')
subsection = ItemFactory.create(parent=section, category='sequential', display_name='Test Subsection')
vertical = ItemFactory.create(parent=subsection, category='vertical', display_name='Test Unit')
video = ItemFactory.create(parent=vertical, category='video', display_name='Test Video 1')
split_test = ItemFactory.create(parent=vertical, category='split_test', display_name='Test Content Experiment')
child_vertical = ItemFactory.create(parent=split_test, category='vertical')
child_html_module = ItemFactory.create(parent=child_vertical, category='html')
self.section_location = section.location
self.subsection_location = subsection.location
self.vertical_location = vertical.location
self.video_location = video.location
self.split_test_location = split_test.location
self.child_vertical_location = child_vertical.location
self.child_html_module_location = child_html_module.location
def set_group_access(self, block_location, access_dict):
"""
Sets the group_access dict on the block referenced by block_location.
"""
block = self.store.get_item(block_location)
block.group_access = access_dict
self.store.update_item(block, 1)
class XBlockValidationTest(LmsXBlockMixinTestCase):
"""
Unit tests for XBlock validation
"""
def setUp(self):
super(XBlockValidationTest, self).setUp()
self.build_course()
def verify_validation_message(self, message, expected_message, expected_message_type):
"""
Verify that the validation message has the expected validation message and type.
"""
self.assertEqual(message.text, expected_message)
self.assertEqual(message.type, expected_message_type)
def test_validate_full_group_access(self):
"""
Test the validation messages produced for an xblock with full group access.
"""
validation = self.store.get_item(self.video_location).validate()
self.assertEqual(len(validation.messages), 0)
def test_validate_restricted_group_access(self):
"""
Test the validation messages produced for an xblock with a valid group access restriction
"""
self.set_group_access(self.video_location, {self.user_partition.id: [self.group1.id, self.group2.id]})
validation = self.store.get_item(self.video_location).validate()
self.assertEqual(len(validation.messages), 0)
def test_validate_invalid_user_partitions(self):
"""
Test the validation messages produced for a component referring to non-existent user partitions.
"""
self.set_group_access(self.video_location, {999: [self.group1.id]})
validation = self.store.get_item(self.video_location).validate()
self.assertEqual(len(validation.messages), 1)
self.verify_validation_message(
validation.messages[0],
INVALID_USER_PARTITION_VALIDATION_COMPONENT,
ValidationMessage.ERROR,
)
# Now add a second invalid user partition and validate again.
# Note that even though there are two invalid configurations,
# only a single error message will be returned.
self.set_group_access(self.video_location, {998: [self.group2.id]})
validation = self.store.get_item(self.video_location).validate()
self.assertEqual(len(validation.messages), 1)
self.verify_validation_message(
validation.messages[0],
INVALID_USER_PARTITION_VALIDATION_COMPONENT,
ValidationMessage.ERROR,
)
def test_validate_invalid_user_partitions_unit(self):
"""
Test the validation messages produced for a unit referring to non-existent user partitions.
"""
self.set_group_access(self.vertical_location, {999: [self.group1.id]})
validation = self.store.get_item(self.vertical_location).validate()
self.assertEqual(len(validation.messages), 1)
self.verify_validation_message(
validation.messages[0],
INVALID_USER_PARTITION_VALIDATION_UNIT,
ValidationMessage.ERROR,
)
# Now add a second invalid user partition and validate again.
# Note that even though there are two invalid configurations,
# only a single error message will be returned.
self.set_group_access(self.vertical_location, {998: [self.group2.id]})
validation = self.store.get_item(self.vertical_location).validate()
self.assertEqual(len(validation.messages), 1)
self.verify_validation_message(
validation.messages[0],
INVALID_USER_PARTITION_VALIDATION_UNIT,
ValidationMessage.ERROR,
)
def test_validate_invalid_groups(self):
"""
Test the validation messages produced for an xblock referring to non-existent groups.
"""
self.set_group_access(self.video_location, {self.user_partition.id: [self.group1.id, 999]})
validation = self.store.get_item(self.video_location).validate()
self.assertEqual(len(validation.messages), 1)
self.verify_validation_message(
validation.messages[0],
INVALID_USER_PARTITION_GROUP_VALIDATION_COMPONENT,
ValidationMessage.ERROR,
)
# Now try again with two invalid group ids
self.set_group_access(self.video_location, {self.user_partition.id: [self.group1.id, 998, 999]})
validation = self.store.get_item(self.video_location).validate()
self.assertEqual(len(validation.messages), 1)
self.verify_validation_message(
validation.messages[0],
INVALID_USER_PARTITION_GROUP_VALIDATION_COMPONENT,
ValidationMessage.ERROR,
)
def test_validate_nonsensical_access_for_split_test_children(self):
"""
Test the validation messages produced for components within
a content group experiment (also known as a split_test).
Ensures that children of split_test xblocks only validate
their access settings off the parent, rather than any
grandparent.
"""
# Test that no validation message is displayed on split_test child when child agrees with parent
self.set_group_access(self.vertical_location, {self.user_partition.id: [self.group1.id]})
self.set_group_access(self.split_test_location, {self.user_partition.id: [self.group2.id]})
self.set_group_access(self.child_vertical_location, {self.user_partition.id: [self.group2.id]})
self.set_group_access(self.child_html_module_location, {self.user_partition.id: [self.group2.id]})
validation = self.store.get_item(self.child_html_module_location).validate()
self.assertEqual(len(validation.messages), 0)
# Test that a validation message is displayed on split_test child when the child contradicts the parent,
# even though the child agrees with the grandparent unit.
self.set_group_access(self.child_html_module_location, {self.user_partition.id: [self.group1.id]})
validation = self.store.get_item(self.child_html_module_location).validate()
self.assertEqual(len(validation.messages), 1)
self.verify_validation_message(
validation.messages[0],
NONSENSICAL_ACCESS_RESTRICTION,
ValidationMessage.ERROR,
)
def test_validate_invalid_groups_for_unit(self):
"""
Test the validation messages produced for a unit-level xblock referring to non-existent groups.
"""
self.set_group_access(self.vertical_location, {self.user_partition.id: [self.group1.id, 999]})
validation = self.store.get_item(self.vertical_location).validate()
self.assertEqual(len(validation.messages), 1)
self.verify_validation_message(
validation.messages[0],
INVALID_USER_PARTITION_GROUP_VALIDATION_UNIT,
ValidationMessage.ERROR,
)
def test_validate_nonsensical_access_restriction(self):
"""
Test the validation messages produced for a component whose
access settings contradict the unit level access.
"""
# Test that there is no validation message for non-contradicting access restrictions
self.set_group_access(self.vertical_location, {self.user_partition.id: [self.group1.id]})
self.set_group_access(self.video_location, {self.user_partition.id: [self.group1.id]})
validation = self.store.get_item(self.video_location).validate()
self.assertEqual(len(validation.messages), 0)
# Now try again with opposing access restrictions
self.set_group_access(self.vertical_location, {self.user_partition.id: [self.group1.id]})
self.set_group_access(self.video_location, {self.user_partition.id: [self.group2.id]})
validation = self.store.get_item(self.video_location).validate()
self.assertEqual(len(validation.messages), 1)
self.verify_validation_message(
validation.messages[0],
NONSENSICAL_ACCESS_RESTRICTION,
ValidationMessage.ERROR,
)
# Now try again when the component restricts access to additional groups that the unit does not
self.set_group_access(self.vertical_location, {self.user_partition.id: [self.group1.id]})
self.set_group_access(self.video_location, {self.user_partition.id: [self.group1.id, self.group2.id]})
validation = self.store.get_item(self.video_location).validate()
self.assertEqual(len(validation.messages), 1)
self.verify_validation_message(
validation.messages[0],
NONSENSICAL_ACCESS_RESTRICTION,
ValidationMessage.ERROR,
)
# Now try again when the component tries to allow access to all learners and staff
self.set_group_access(self.vertical_location, {self.user_partition.id: [self.group1.id]})
self.set_group_access(self.video_location, {})
validation = self.store.get_item(self.video_location).validate()
self.assertEqual(len(validation.messages), 1)
self.verify_validation_message(
validation.messages[0],
NONSENSICAL_ACCESS_RESTRICTION,
ValidationMessage.ERROR,
)
def test_nonsensical_access_restriction_does_not_override(self):
"""
        Test that the validation message produced for a component
        whose access settings contradict the unit-level access doesn't
        override other messages but adds on to them.
"""
self.set_group_access(self.vertical_location, {self.user_partition.id: [self.group1.id]})
self.set_group_access(self.video_location, {self.user_partition.id: [self.group2.id, 999]})
validation = self.store.get_item(self.video_location).validate()
self.assertEqual(len(validation.messages), 2)
self.verify_validation_message(
validation.messages[0],
INVALID_USER_PARTITION_GROUP_VALIDATION_COMPONENT,
ValidationMessage.ERROR,
)
self.verify_validation_message(
validation.messages[1],
NONSENSICAL_ACCESS_RESTRICTION,
ValidationMessage.ERROR,
)
class OpenAssessmentBlockMixinTestCase(ModuleStoreTestCase):
"""
Tests for OpenAssessmentBlock mixin.
"""
def setUp(self):
super(OpenAssessmentBlockMixinTestCase, self).setUp()
self.course = CourseFactory.create()
self.section = ItemFactory.create(parent=self.course, category='chapter', display_name='Test Section')
self.open_assessment = ItemFactory.create(
parent=self.section,
category="openassessment",
display_name="untitled",
)
def test_has_score(self):
"""
Test has_score is true for ora2 problems.
"""
self.assertTrue(self.open_assessment.has_score)
@attr(shard=3)
@ddt.ddt
class XBlockGetParentTest(LmsXBlockMixinTestCase):
"""
Test that XBlock.get_parent returns correct results with each modulestore
backend.
"""
MODULESTORE = TEST_DATA_MIXED_MODULESTORE
@ddt.data(ModuleStoreEnum.Type.mongo, ModuleStoreEnum.Type.split)
def test_parents(self, modulestore_type):
with self.store.default_store(modulestore_type):
# setting up our own local course tree here, since it needs to be
# created with the correct modulestore type.
course_key = ToyCourseFactory.create().id
course = self.store.get_course(course_key)
self.assertIsNone(course.get_parent())
def recurse(parent):
"""
Descend the course tree and ensure the result of get_parent()
is the expected one.
"""
visited = []
for child in parent.get_children():
self.assertEqual(parent.location, child.get_parent().location)
visited.append(child)
visited += recurse(child)
return visited
visited = recurse(course)
self.assertEqual(len(visited), 28)
@ddt.data(ModuleStoreEnum.Type.mongo, ModuleStoreEnum.Type.split)
def test_parents_draft_content(self, modulestore_type):
# move the video to the new vertical
with self.store.default_store(modulestore_type):
self.build_course()
subsection = self.store.get_item(self.subsection_location)
new_vertical = ItemFactory.create(parent=subsection, category='vertical', display_name='New Test Unit')
child_to_move_location = self.video_location.for_branch(None)
new_parent_location = new_vertical.location.for_branch(None)
old_parent_location = self.vertical_location.for_branch(None)
with self.store.branch_setting(ModuleStoreEnum.Branch.draft_preferred):
self.assertIsNone(self.course.get_parent())
with self.store.bulk_operations(self.course.id):
user_id = ModuleStoreEnum.UserID.test
old_parent = self.store.get_item(old_parent_location)
old_parent.children.remove(child_to_move_location)
self.store.update_item(old_parent, user_id)
new_parent = self.store.get_item(new_parent_location)
new_parent.children.append(child_to_move_location)
self.store.update_item(new_parent, user_id)
# re-fetch video from draft store
video = self.store.get_item(child_to_move_location)
self.assertEqual(
new_parent_location,
video.get_parent().location
)
with self.store.branch_setting(ModuleStoreEnum.Branch.published_only):
# re-fetch video from published store
video = self.store.get_item(child_to_move_location)
self.assertEqual(
old_parent_location,
video.get_parent().location.for_branch(None)
)
class RenamedTuple(tuple):
"""
This class is only used to allow overriding __name__ on the tuples passed
through ddt, in order to have the generated test names make sense.
"""
pass
def ddt_named(parent, child):
"""
Helper to get more readable dynamically-generated test names from ddt.
"""
args = RenamedTuple([parent, child])
args.__name__ = 'parent_{}_child_{}'.format(parent, child) # pylint: disable=attribute-defined-outside-init
return args
@attr(shard=3)
@ddt.ddt
class XBlockMergedGroupAccessTest(LmsXBlockMixinTestCase):
"""
Test that XBlock.merged_group_access is computed correctly according to
our access control rules.
"""
PARTITION_1 = 1
PARTITION_1_GROUP_1 = 11
PARTITION_1_GROUP_2 = 12
PARTITION_2 = 2
PARTITION_2_GROUP_1 = 21
PARTITION_2_GROUP_2 = 22
PARENT_CHILD_PAIRS = (
ddt_named('section_location', 'subsection_location'),
ddt_named('section_location', 'vertical_location'),
ddt_named('section_location', 'video_location'),
ddt_named('subsection_location', 'vertical_location'),
ddt_named('subsection_location', 'video_location'),
)
def setUp(self):
super(XBlockMergedGroupAccessTest, self).setUp()
self.build_course()
def verify_group_access(self, block_location, expected_dict):
"""
Verify the expected value for the block's group_access.
"""
block = self.store.get_item(block_location)
self.assertEqual(block.merged_group_access, expected_dict)
@ddt.data(*PARENT_CHILD_PAIRS)
@ddt.unpack
def test_intersecting_groups(self, parent, child):
"""
        When merging group_access on a block, the resulting group IDs for each
        partition are the intersection of the group IDs defined for that
        partition across all ancestor blocks (including this one).
"""
parent_block = getattr(self, parent)
child_block = getattr(self, child)
self.set_group_access(parent_block, {self.PARTITION_1: [self.PARTITION_1_GROUP_1, self.PARTITION_1_GROUP_2]})
self.set_group_access(child_block, {self.PARTITION_1: [self.PARTITION_1_GROUP_2]})
self.verify_group_access(parent_block, {self.PARTITION_1: [self.PARTITION_1_GROUP_1, self.PARTITION_1_GROUP_2]})
self.verify_group_access(child_block, {self.PARTITION_1: [self.PARTITION_1_GROUP_2]})
@ddt.data(*PARENT_CHILD_PAIRS)
@ddt.unpack
def test_disjoint_groups(self, parent, child):
"""
When merging group_access on a block, if the intersection of group IDs
for a partition is empty, the merged value for that partition is False.
"""
parent_block = getattr(self, parent)
child_block = getattr(self, child)
self.set_group_access(parent_block, {self.PARTITION_1: [self.PARTITION_1_GROUP_1]})
self.set_group_access(child_block, {self.PARTITION_1: [self.PARTITION_1_GROUP_2]})
self.verify_group_access(parent_block, {self.PARTITION_1: [self.PARTITION_1_GROUP_1]})
self.verify_group_access(child_block, {self.PARTITION_1: False})
def test_disjoint_groups_no_override(self):
"""
Special case of the above test - ensures that `False` propagates down
to the block being queried even if blocks further down in the hierarchy
try to override it.
"""
self.set_group_access(self.section_location, {self.PARTITION_1: [self.PARTITION_1_GROUP_1]})
self.set_group_access(self.subsection_location, {self.PARTITION_1: [self.PARTITION_1_GROUP_2]})
self.set_group_access(
self.vertical_location, {self.PARTITION_1: [self.PARTITION_1_GROUP_1, self.PARTITION_1_GROUP_2]}
)
self.verify_group_access(self.vertical_location, {self.PARTITION_1: False})
self.verify_group_access(self.video_location, {self.PARTITION_1: False})
@ddt.data(*PARENT_CHILD_PAIRS)
@ddt.unpack
def test_union_partitions(self, parent, child):
"""
When merging group_access on a block, the result's keys (partitions)
are the union of all partitions specified across all ancestor blocks
(including this one).
"""
parent_block = getattr(self, parent)
child_block = getattr(self, child)
self.set_group_access(parent_block, {self.PARTITION_1: [self.PARTITION_1_GROUP_1]})
self.set_group_access(child_block, {self.PARTITION_2: [self.PARTITION_1_GROUP_2]})
self.verify_group_access(parent_block, {self.PARTITION_1: [self.PARTITION_1_GROUP_1]})
self.verify_group_access(
child_block, {self.PARTITION_1: [self.PARTITION_1_GROUP_1], self.PARTITION_2: [self.PARTITION_1_GROUP_2]}
)
|
agpl-3.0
|
Catweazz/catweazz
|
lib/utils/beta/t0mm0/common/addon.py
|
17
|
26782
|
'''
common XBMC Module
Copyright (C) 2011 t0mm0
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
'''
import re
import os
try:
import cPickle as pickle
except:
import pickle
import urllib
import urlparse
import xbmc
import xbmcaddon
import xbmcgui
import xbmcplugin
class Addon:
'''
This class provides a lot of code that is used across many XBMC addons
in the hope that it will simplify some of the common tasks an addon needs
to perform.
Mostly this is achieved by providing a wrapper around commonly used parts
of :mod:`xbmc`, :mod:`xbmcaddon`, :mod:`xbmcgui` and :mod:`xbmcplugin`.
You probably want to have exactly one instance of this class in your addon
which you can call from anywhere in your code.
Example::
import sys
from t0mm0.common.addon import Addon
addon = Addon('my.plugin.id', argv=sys.argv)
'''
def __init__(self, addon_id, argv=None):
'''
Args:
addon_id (str): Your addon's id (eg. 'plugin.video.t0mm0.test').
Kwargs:
argv (list): List of arguments passed to your addon if applicable
(eg. sys.argv).
'''
self.addon = xbmcaddon.Addon(id=addon_id)
if argv:
self.url = argv[0]
self.handle = int(argv[1])
self.queries = self.parse_query(argv[2][1:])
def get_author(self):
'''Returns the addon author as defined in ``addon.xml``.'''
return self.addon.getAddonInfo('author')
def get_changelog(self):
'''Returns the addon changelog.'''
return self.addon.getAddonInfo('changelog')
def get_description(self):
'''Returns the addon description as defined in ``addon.xml``.'''
return self.addon.getAddonInfo('description')
def get_disclaimer(self):
'''Returns the addon disclaimer as defined in ``addon.xml``.'''
return self.addon.getAddonInfo('disclaimer')
def get_fanart(self):
'''Returns the full path to the addon fanart.'''
return self.addon.getAddonInfo('fanart')
def get_icon(self):
'''Returns the full path to the addon icon.'''
return self.addon.getAddonInfo('icon')
def get_id(self):
'''Returns the addon id as defined in ``addon.xml``.'''
return self.addon.getAddonInfo('id')
def get_name(self):
'''Returns the addon name as defined in ``addon.xml``.'''
return self.addon.getAddonInfo('name')
def get_path(self):
'''Returns the full path to the addon directory.'''
return self.addon.getAddonInfo('path')
def get_profile(self):
'''
Returns the full path to the addon profile directory
(useful for storing files needed by the addon such as cookies).
'''
return xbmc.translatePath(self.addon.getAddonInfo('profile'))
def get_stars(self):
'''Returns the number of stars for this addon.'''
return self.addon.getAddonInfo('stars')
def get_summary(self):
'''Returns the addon summary as defined in ``addon.xml``.'''
return self.addon.getAddonInfo('summary')
def get_type(self):
'''
        Returns the addon type as defined in ``addon.xml``
(eg. xbmc.python.pluginsource).
'''
return self.addon.getAddonInfo('type')
def get_version(self):
'''Returns the addon version as defined in ``addon.xml``.'''
return self.addon.getAddonInfo('version')
def get_setting(self, setting):
'''
Returns an addon setting. Settings must be defined in your addon's
``resources/settings.xml`` file.
Args:
setting (str): Name of the setting to be retrieved.
Returns:
str containing the requested setting.
'''
return self.addon.getSetting(setting)
def get_string(self, string_id):
'''
Returns a localized string. Strings must be defined in your addon's
``resources/language/[lang_name]/strings.xml`` file.
Args:
string_id (int): id of the translated string to retrieve.
Returns:
str containing the localized requested string.
'''
return self.addon.getLocalizedString(string_id)
def parse_query(self, query, defaults={'mode': 'main'}):
'''
Parse a query string as used in a URL or passed to your addon by XBMC.
Example:
>>> addon.parse_query('name=test&type=basic')
{'mode': 'main', 'name': 'test', 'type': 'basic'}
Args:
query (str): A query string.
Kwargs:
            defaults (dict): A dictionary of default key/value pairs; values
            parsed from the query string override these defaults.
        Returns:
            A dict of key/value pairs parsed from the query string, merged
            with ``defaults``. If a key is repeated in the query string its
            value will be a list containing all of that key's values.
'''
queries = urlparse.parse_qs(query)
        # copy so that repeated calls do not mutate the shared default dict
        q = dict(defaults)
for key, value in queries.items():
if len(value) == 1:
q[key] = value[0]
else:
q[key] = value
return q
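    # A small illustrative sketch (assumes an Addon instance named ``addon``):
    # repeated keys come back as a list, and the defaults fill in anything the
    # query string does not provide.
    #
    #   addon.parse_query('tag=a&tag=b')
    #   -> {'mode': 'main', 'tag': ['a', 'b']}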
def build_plugin_url(self, queries):
'''
Returns a ``plugin://`` URL which can be used to call the addon with
the specified queries.
Example:
>>> addon.build_plugin_url({'name': 'test', 'type': 'basic'})
'plugin://your.plugin.id/?name=test&type=basic'
Args:
            queries (dict): A dictionary of keys/values to be added to the
            ``plugin://`` URL.
        Returns:
            A string containing a fully formed ``plugin://`` URL.
'''
out_dict = {}
for k, v in queries.iteritems():
if isinstance(v, unicode):
v = v.encode('utf8')
elif isinstance(v, str):
                # must already be UTF-8 encoded; decoding here only validates
                # that and raises if it is not
                v.decode('utf8')
out_dict[k] = v
return self.url + '?' + urllib.urlencode(out_dict)
def log(self, msg, level=xbmc.LOGDEBUG):
'''
Writes a string to the XBMC log file. The addon name is inserted into
        the beginning of the message automatically to help you find relevant
messages in the log file.
The available log levels are defined in the :mod:`xbmc` module and are
currently as follows::
xbmc.LOGDEBUG = 0
xbmc.LOGERROR = 4
xbmc.LOGFATAL = 6
xbmc.LOGINFO = 1
xbmc.LOGNONE = 7
xbmc.LOGNOTICE = 2
xbmc.LOGSEVERE = 5
xbmc.LOGWARNING = 3
Args:
msg (str or unicode): The message to be written to the log file.
Kwargs:
level (int): The XBMC log level to write at.
'''
#msg = unicodedata.normalize('NFKD', unicode(msg)).encode('ascii',
# 'ignore')
xbmc.log('%s: %s' % (self.get_name(), msg), level)
def log_error(self, msg):
'''
Convenience method to write to the XBMC log file at the
``xbmc.LOGERROR`` error level. Use when something has gone wrong in
your addon code. This will show up in the log prefixed with 'ERROR:'
whether you have debugging switched on or not.
'''
self.log(msg, xbmc.LOGERROR)
def log_debug(self, msg):
'''
Convenience method to write to the XBMC log file at the
        ``xbmc.LOGDEBUG`` log level. Use this when you want to print out lots
        of detailed information that is only useful for debugging. This will
show up in the log only when debugging is enabled in the XBMC settings,
and will be prefixed with 'DEBUG:'.
'''
self.log(msg, xbmc.LOGDEBUG)
def log_notice(self, msg):
'''
Convenience method to write to the XBMC log file at the
        ``xbmc.LOGNOTICE`` log level. Use for general log messages. This will
show up in the log prefixed with 'NOTICE:' whether you have debugging
switched on or not.
'''
self.log(msg, xbmc.LOGNOTICE)
def show_ok_dialog(self, msg, title=None, is_error=False):
'''
Display an XBMC dialog with a message and a single 'OK' button. The
message is also written to the XBMC log file at the appropriate log
level.
.. warning::
Don't forget that `msg` must be a list of strings and not just a
string even if you only want to display a single line!
Example::
addon.show_ok_dialog(['My message'], 'My Addon')
Args:
msg (list of strings): The message to be displayed in the dialog.
Only the first 3 list items will be displayed.
Kwargs:
title (str): String to be displayed as the title of the dialog box.
Defaults to the addon name.
is_error (bool): If ``True``, the log message will be written at
the ERROR log level, otherwise NOTICE will be used.
'''
if not title:
title = self.get_name()
log_msg = ' '.join(msg)
while len(msg) < 3:
msg.append('')
if is_error:
self.log_error(log_msg)
else:
self.log_notice(log_msg)
xbmcgui.Dialog().ok(title, msg[0], msg[1], msg[2])
def show_error_dialog(self, msg):
'''
Convenience method to show an XBMC dialog box with a single OK button
and also write the message to the log file at the ERROR log level.
The title of the dialog will be the addon's name with the prefix
'Error: '.
.. warning::
Don't forget that `msg` must be a list of strings and not just a
string even if you only want to display a single line!
Args:
msg (list of strings): The message to be displayed in the dialog.
Only the first 3 list items will be displayed.
'''
self.show_ok_dialog(msg, 'Error: %s' % self.get_name(), True)
def show_small_popup(self, title='', msg='', delay=5000, image=''):
'''
Displays a small popup box in the lower right corner. The default delay
is 5 seconds.
Code inspired by anarchintosh and daledude's Icefilms addon.
Example::
import os
logo = os.path.join(addon.get_path(), 'art','logo.jpg')
addon.show_small_popup('MyAddonName','Is now loaded enjoy', 5000, logo)
Kwargs:
title (str): title to be displayed at the top of the box
msg (str): Main message body
            delay (int): delay in milliseconds until it disappears
image (str): Path to the image you want to display
'''
xbmc.executebuiltin('XBMC.Notification("%s","%s",%d,"%s")' %
(title, msg, delay, image))
def show_countdown(self, time_to_wait, title='', text=''):
'''
Show a countdown dialog with a progress bar for XBMC while delaying
        execution. Necessary for some file hosters, e.g. megaupload.
The original version of this code came from Anarchintosh.
Args:
time_to_wait (int): number of seconds to pause for.
Kwargs:
title (str): Displayed in the title of the countdown dialog. Default
is blank.
text (str): A line of text to be displayed in the dialog. Default
is blank.
Returns:
``True`` if countdown is allowed to complete, ``False`` if the
user cancelled the countdown.
'''
dialog = xbmcgui.DialogProgress()
dialog.create(title)
self.log_notice('waiting %d secs' % time_to_wait)
secs = 0
increment = 100 / time_to_wait
cancelled = False
while secs <= time_to_wait:
if (dialog.iscanceled()):
cancelled = True
break
if secs != 0:
xbmc.sleep(1000)
secs_left = time_to_wait - secs
if secs_left == 0:
percent = 100
else:
percent = increment * secs
remaining_display = ('Wait %d seconds for the ' +
'video stream to activate...') % secs_left
dialog.update(percent, text, remaining_display)
secs += 1
        if cancelled:
self.log_notice('countdown cancelled')
return False
else:
self.log_debug('countdown finished waiting')
return True
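    # Illustrative sketch (assumes an Addon instance named ``addon``): wait ten
    # seconds before resolving a stream and bail out if the user cancels.
    #
    #   if not addon.show_countdown(10, title='Please wait',
    #                               text='Preparing stream...'):
    #       addon.resolve_url(False)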
def show_settings(self):
'''Shows the settings dialog for this addon.'''
self.addon.openSettings()
def resolve_url(self, stream_url):
'''
Tell XBMC that you have resolved a URL (or not!).
This method should be called as follows:
#. The user selects a list item that has previously had ``isPlayable``
set (this is true for items added with :meth:`add_item`,
            :meth:`add_music_item` or :meth:`add_video_item`)
#. Your code resolves the item requested by the user to a media URL
#. Your addon calls this method with the resolved URL
Args:
            stream_url (str or ``False``): If a string, tell XBMC that the
            media URL has been successfully resolved to stream_url. If ``False``
            or an empty string tell XBMC the resolving failed and pop up an
            error message.
'''
if stream_url:
self.log_debug('resolved to: %s' % stream_url)
xbmcplugin.setResolvedUrl(self.handle, True,
xbmcgui.ListItem(path=stream_url))
else:
self.show_error_dialog(['sorry, failed to resolve URL :('])
xbmcplugin.setResolvedUrl(self.handle, False, xbmcgui.ListItem())
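    # Illustrative sketch (assumes an Addon instance named ``addon`` and a
    # hypothetical ``my_resolver`` helper; the 'host' and 'media_id' query keys
    # are made up for the example): typical playback flow for an item that was
    # added with isPlayable set.
    #
    #   media_url = my_resolver(addon.queries.get('host'),
    #                           addon.queries.get('media_id'))
    #   addon.resolve_url(media_url)   # pass False if resolving failed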
def get_playlist(self, pl_type, new=False):
'''
Return a :class:`xbmc.Playlist` object of the specified type.
The available playlist types are defined in the :mod:`xbmc` module and
are currently as follows::
xbmc.PLAYLIST_MUSIC = 0
xbmc.PLAYLIST_VIDEO = 1
.. seealso::
:meth:`get_music_playlist`, :meth:`get_video_playlist`
Args:
pl_type (int): The type of playlist to get.
new (bool): If ``False`` (default), get the current
:class:`xbmc.Playlist` object of the type specified. If ``True``
then return a new blank :class:`xbmc.Playlist`.
Returns:
A :class:`xbmc.Playlist` object.
'''
pl = xbmc.PlayList(pl_type)
if new:
pl.clear()
return pl
def get_music_playlist(self, new=False):
'''
Convenience method to return a music :class:`xbmc.Playlist` object.
.. seealso::
:meth:`get_playlist`
Kwargs:
new (bool): If ``False`` (default), get the current music
:class:`xbmc.Playlist` object. If ``True`` then return a new blank
music :class:`xbmc.Playlist`.
Returns:
A :class:`xbmc.Playlist` object.
'''
        return self.get_playlist(xbmc.PLAYLIST_MUSIC, new)
def get_video_playlist(self, new=False):
'''
Convenience method to return a video :class:`xbmc.Playlist` object.
.. seealso::
:meth:`get_playlist`
Kwargs:
new (bool): If ``False`` (default), get the current video
:class:`xbmc.Playlist` object. If ``True`` then return a new blank
video :class:`xbmc.Playlist`.
Returns:
A :class:`xbmc.Playlist` object.
'''
        return self.get_playlist(xbmc.PLAYLIST_VIDEO, new)
def add_item(self, queries, infolabels, contextmenu_items='', context_replace=False, img='',
fanart='', resolved=False, total_items=0, playlist=False, item_type='video',
is_folder=False):
'''
Adds an item to the list of entries to be displayed in XBMC or to a
playlist.
Use this method when you want users to be able to select this item to
start playback of a media file. ``queries`` is a dict that will be sent
back to the addon when this item is selected::
add_item({'host': 'youtube.com', 'media_id': 'ABC123XYZ'},
{'title': 'A youtube vid'})
will add a link to::
plugin://your.plugin.id/?host=youtube.com&media_id=ABC123XYZ
.. seealso::
:meth:`add_music_item`, :meth:`add_video_item`,
:meth:`add_directory`
Args:
queries (dict): A set of keys/values to be sent to the addon when
the user selects this item.
infolabels (dict): A dictionary of information about this media
(see the `XBMC Wiki InfoLabels entry
<http://wiki.xbmc.org/?title=InfoLabels>`_).
Kwargs:
contextmenu_items (list): A list of contextmenu items
            context_replace (bool): If ``True``, replace the default XBMC
            context menu items with ``contextmenu_items``.
img (str): A URL to an image file to be used as an icon for this
entry.
fanart (str): A URL to a fanart image for this entry.
resolved (str): If not empty, ``queries`` will be ignored and
            instead the added item will be the exact contents of ``resolved``.
total_items (int): Total number of items to be added in this list.
If supplied it enables XBMC to show a progress bar as the list of
items is being built.
playlist (playlist object): If ``False`` (default), the item will
be added to the list of entries to be displayed in this directory.
If a playlist object is passed (see :meth:`get_playlist`) then
the item will be added to the playlist instead
            item_type (str): The type of item to add (eg. 'music', 'video' or
            'pictures')
            is_folder (bool): If ``True`` the item is added as a directory
            entry rather than a playable item.
'''
infolabels = self.unescape_dict(infolabels)
if not resolved:
if not is_folder:
queries['play'] = 'True'
play = self.build_plugin_url(queries)
else:
play = resolved
listitem = xbmcgui.ListItem(infolabels['title'])
listitem.setInfo(item_type, infolabels)
listitem.setProperty('IsPlayable', 'true')
listitem.setProperty('fanart_image', fanart)
try:
listitem.setArt({'thumb': img})
except:
listitem.setThumbnailImage(img)
self.log_debug('t0mm0-addon.py: setThumbnailImage is deprecated')
if contextmenu_items:
listitem.addContextMenuItems(contextmenu_items, replaceItems=context_replace)
if playlist is not False:
self.log_debug('adding item: %s - %s to playlist' % \
(infolabels['title'], play))
playlist.add(play, listitem)
else:
self.log_debug('adding item: %s - %s' % (infolabels['title'], play))
xbmcplugin.addDirectoryItem(self.handle, play, listitem,
isFolder=is_folder,
totalItems=total_items)
def add_video_item(self, queries, infolabels, contextmenu_items='', context_replace=False,
img='', fanart='', resolved=False, total_items=0, playlist=False):
'''
Convenience method to add a video item to the directory list or a
playlist.
        See :meth:`add_item` for full information
'''
self.add_item(queries, infolabels, contextmenu_items, context_replace, img, fanart,
resolved, total_items, playlist, item_type='video')
def add_music_item(self, queries, infolabels, contextmenu_items='', context_replace=False,
img='', fanart='', resolved=False, total_items=0, playlist=False):
'''
Convenience method to add a music item to the directory list or a
playlist.
        See :meth:`add_item` for full information
'''
        self.add_item(queries, infolabels, contextmenu_items, context_replace, img, fanart,
                      resolved, total_items, playlist, item_type='music')
def add_directory(self, queries, infolabels, contextmenu_items='', context_replace=False,
img='', fanart='', total_items=0, is_folder=True):
'''
Convenience method to add a directory to the display list or a
playlist.
        See :meth:`add_item` for full information
'''
self.add_item(queries, infolabels, contextmenu_items, context_replace, img, fanart,
total_items=total_items, resolved=self.build_plugin_url(queries),
is_folder=is_folder)
def end_of_directory(self):
'''Tell XBMC that we have finished adding items to this directory.'''
xbmcplugin.endOfDirectory(self.handle)
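    # Illustrative sketch (assumes an Addon instance named ``addon``; the
    # 'mode', 'section' and 'media_id' query keys are made up for the
    # example): build a simple listing and close it out.
    #
    #   addon.add_directory({'mode': 'list', 'section': 'movies'},
    #                       {'title': 'Movies'})
    #   addon.add_video_item({'mode': 'play', 'media_id': 'ABC123'},
    #                        {'title': 'A video'})
    #   addon.end_of_directory()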
def _decode_callback(self, matches):
'''Callback method used by :meth:`decode`.'''
_id = matches.group(1)
try:
return unichr(int(_id))
except:
return _id
def decode(self, data):
'''
        Uses a regular expression to convert numeric entities such as
        ``&#044;`` to the correct characters. It is called by :meth:`unescape`
        and so it is not required to call it directly.
This method was found `on the web <http://stackoverflow.com/questions/1208916/decoding-html-entities-with-python/1208931#1208931>`_
Args:
data (str): String to be cleaned.
Returns:
Cleaned string.
'''
return re.sub("&#(\d+)(;|(?=\s))", self._decode_callback, data).strip()
def unescape(self, text):
'''
Decodes HTML entities in a string.
You can add more entities to the ``rep`` dictionary.
Args:
text (str): String to be cleaned.
Returns:
Cleaned string.
'''
try:
text = self.decode(text)
            rep = {'&lt;': '<',
                   '&gt;': '>',
                   '&quot;': '"',
                   '&rsquo;': '\'',
                   '&acute;': '\'',
                   }
            for s, r in rep.items():
                text = text.replace(s, r)
            # this has to be last:
            text = text.replace("&amp;", "&")
#we don't want to fiddle with non-string types
except TypeError:
pass
return text
def unescape_dict(self, d):
'''
Calls :meth:`unescape` on all values in a dictionary.
Args:
d (dict): A dictionary containing string values
Returns:
A dictionary with HTML entities removed from the values.
'''
out = {}
for key, value in d.items():
out[key] = self.unescape(value)
return out
def save_data(self, filename, data):
'''
Saves the data structure using pickle. If the addon data path does
not exist it will be automatically created. This save function has
the same restrictions as the pickle module.
Args:
filename (string): name of the file you want to save data to. This
file will be saved in your addon's profile directory.
            data (object or string): The data you want to save.
Returns:
True on success
False on failure
'''
profile_path = self.get_profile()
try:
os.makedirs(profile_path)
        except OSError:
            # profile directory already exists
            pass
save_path = os.path.join(profile_path, filename)
try:
pickle.dump(data, open(save_path, 'wb'))
return True
except pickle.PickleError:
return False
    def load_data(self, filename):
'''
        Load the data that was saved with save_data() and return the
        data structure.
        Args:
            filename (string): Name of the file you want to load data from. This
            file will be loaded from your addon's profile directory.
        Returns:
            Data structure on success
            False on failure
'''
profile_path = self.get_profile()
load_path = os.path.join(profile_path, filename)
        self.log_debug('loading data from %s' % load_path)
if not os.path.isfile(load_path):
self.log_debug('%s does not exist' % load_path)
return False
try:
data = pickle.load(open(load_path))
        except:
            # corrupt or unreadable pickle data
            return False
return data
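    # Illustrative sketch (assumes an Addon instance named ``addon`` and a
    # made-up filename): persist a small dict to the addon profile directory
    # and read it back later, falling back to an empty dict on failure.
    #
    #   addon.save_data('cookies.dat', {'session': 'abc123'})
    #   cached = addon.load_data('cookies.dat')
    #   if cached is False:
    #       cached = {}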
|
gpl-2.0
|
AlanD88/website
|
web2py/gluon/contrib/simplejson/__init__.py
|
49
|
17565
|
r"""JSON (JavaScript Object Notation) <http://json.org> is a subset of
JavaScript syntax (ECMA-262 3rd edition) used as a lightweight data
interchange format.
:mod:`simplejson` exposes an API familiar to users of the standard library
:mod:`marshal` and :mod:`pickle` modules. It is the externally maintained
version of the :mod:`json` library contained in Python 2.6, but maintains
compatibility with Python 2.4 and Python 2.5 and (currently) has
significant performance advantages, even without using the optional C
extension for speedups.
Encoding basic Python object hierarchies::
>>> import simplejson as json
>>> json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}])
'["foo", {"bar": ["baz", null, 1.0, 2]}]'
>>> print json.dumps("\"foo\bar")
"\"foo\bar"
>>> print json.dumps(u'\u1234')
"\u1234"
>>> print json.dumps('\\')
"\\"
>>> print json.dumps({"c": 0, "b": 0, "a": 0}, sort_keys=True)
{"a": 0, "b": 0, "c": 0}
>>> from StringIO import StringIO
>>> io = StringIO()
>>> json.dump(['streaming API'], io)
>>> io.getvalue()
'["streaming API"]'
Compact encoding::
>>> import simplejson as json
>>> json.dumps([1,2,3,{'4': 5, '6': 7}], separators=(',',':'))
'[1,2,3,{"4":5,"6":7}]'
Pretty printing::
>>> import simplejson as json
>>> s = json.dumps({'4': 5, '6': 7}, sort_keys=True, indent=' ')
>>> print '\n'.join([l.rstrip() for l in s.splitlines()])
{
"4": 5,
"6": 7
}
Decoding JSON::
>>> import simplejson as json
>>> obj = [u'foo', {u'bar': [u'baz', None, 1.0, 2]}]
>>> json.loads('["foo", {"bar":["baz", null, 1.0, 2]}]') == obj
True
>>> json.loads('"\\"foo\\bar"') == u'"foo\x08ar'
True
>>> from StringIO import StringIO
>>> io = StringIO('["streaming API"]')
>>> json.load(io)[0] == 'streaming API'
True
Specializing JSON object decoding::
>>> import simplejson as json
>>> def as_complex(dct):
... if '__complex__' in dct:
... return complex(dct['real'], dct['imag'])
... return dct
...
>>> json.loads('{"__complex__": true, "real": 1, "imag": 2}',
... object_hook=as_complex)
(1+2j)
>>> from decimal import Decimal
>>> json.loads('1.1', parse_float=Decimal) == Decimal('1.1')
True
Specializing JSON object encoding::
>>> import simplejson as json
>>> def encode_complex(obj):
... if isinstance(obj, complex):
... return [obj.real, obj.imag]
    ...     raise TypeError(repr(obj) + " is not JSON serializable")
...
>>> json.dumps(2 + 1j, default=encode_complex)
'[2.0, 1.0]'
>>> json.JSONEncoder(default=encode_complex).encode(2 + 1j)
'[2.0, 1.0]'
>>> ''.join(json.JSONEncoder(default=encode_complex).iterencode(2 + 1j))
'[2.0, 1.0]'
Using simplejson.tool from the shell to validate and pretty-print::
$ echo '{"json":"obj"}' | python -m simplejson.tool
{
"json": "obj"
}
$ echo '{ 1.2:3.4}' | python -m simplejson.tool
Expecting property name: line 1 column 2 (char 2)
"""
__version__ = '2.1.3'
__all__ = [
'dump', 'dumps', 'load', 'loads',
'JSONDecoder', 'JSONDecodeError', 'JSONEncoder',
'OrderedDict',
]
__author__ = 'Bob Ippolito <[email protected]>'
from decimal import Decimal
from decoder import JSONDecoder, JSONDecodeError
from encoder import JSONEncoder
def _import_OrderedDict():
import collections
try:
return collections.OrderedDict
except AttributeError:
import ordered_dict
return ordered_dict.OrderedDict
OrderedDict = _import_OrderedDict()
def _import_c_make_encoder():
try:
        # intentionally disabled: the absolute import below assumes a
        # top-level simplejson package on the path, which is not the case here
        raise ImportError
from simplejson._speedups import make_encoder
return make_encoder
except ImportError:
return None
_default_encoder = JSONEncoder(
skipkeys=False,
ensure_ascii=True,
check_circular=True,
allow_nan=True,
indent=None,
separators=None,
encoding='utf-8',
default=None,
use_decimal=False,
)
def dump(obj, fp, skipkeys=False, ensure_ascii=True, check_circular=True,
allow_nan=True, cls=None, indent=None, separators=None,
encoding='utf-8', default=None, use_decimal=False, **kw):
"""Serialize ``obj`` as a JSON formatted stream to ``fp`` (a
``.write()``-supporting file-like object).
If ``skipkeys`` is true then ``dict`` keys that are not basic types
(``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
will be skipped instead of raising a ``TypeError``.
    If ``ensure_ascii`` is false, then some chunks written to ``fp``
may be ``unicode`` instances, subject to normal Python ``str`` to
``unicode`` coercion rules. Unless ``fp.write()`` explicitly
understands ``unicode`` (as in ``codecs.getwriter()``) this is likely
to cause an error.
If ``check_circular`` is false, then the circular reference check
for container types will be skipped and a circular reference will
result in an ``OverflowError`` (or worse).
If ``allow_nan`` is false, then it will be a ``ValueError`` to
serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``)
in strict compliance of the JSON specification, instead of using the
JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``).
If *indent* is a string, then JSON array elements and object members
will be pretty-printed with a newline followed by that string repeated
for each level of nesting. ``None`` (the default) selects the most compact
representation without any newlines. For backwards compatibility with
versions of simplejson earlier than 2.1.0, an integer is also accepted
and is converted to a string with that many spaces.
If ``separators`` is an ``(item_separator, dict_separator)`` tuple
then it will be used instead of the default ``(', ', ': ')`` separators.
``(',', ':')`` is the most compact JSON representation.
``encoding`` is the character encoding for str instances, default is UTF-8.
``default(obj)`` is a function that should return a serializable version
of obj or raise TypeError. The default simply raises TypeError.
If *use_decimal* is true (default: ``False``) then decimal.Decimal
will be natively serialized to JSON with full precision.
To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
``.default()`` method to serialize additional types), specify it with
the ``cls`` kwarg.
"""
# cached encoder
if (not skipkeys and ensure_ascii and
check_circular and allow_nan and
cls is None and indent is None and separators is None and
encoding == 'utf-8' and default is None and not use_decimal
and not kw):
iterable = _default_encoder.iterencode(obj)
else:
if cls is None:
cls = JSONEncoder
iterable = cls(skipkeys=skipkeys, ensure_ascii=ensure_ascii,
check_circular=check_circular, allow_nan=allow_nan, indent=indent,
separators=separators, encoding=encoding,
default=default, use_decimal=use_decimal, **kw).iterencode(obj)
# could accelerate with writelines in some versions of Python, at
# a debuggability cost
for chunk in iterable:
fp.write(chunk)
def dumps(obj, skipkeys=False, ensure_ascii=True, check_circular=True,
allow_nan=True, cls=None, indent=None, separators=None,
encoding='utf-8', default=None, use_decimal=False, **kw):
"""Serialize ``obj`` to a JSON formatted ``str``.
    If ``skipkeys`` is true then ``dict`` keys that are not basic types
(``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
will be skipped instead of raising a ``TypeError``.
If ``ensure_ascii`` is false, then the return value will be a
``unicode`` instance subject to normal Python ``str`` to ``unicode``
coercion rules instead of being escaped to an ASCII ``str``.
If ``check_circular`` is false, then the circular reference check
for container types will be skipped and a circular reference will
result in an ``OverflowError`` (or worse).
If ``allow_nan`` is false, then it will be a ``ValueError`` to
serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``) in
strict compliance of the JSON specification, instead of using the
JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``).
If ``indent`` is a string, then JSON array elements and object members
will be pretty-printed with a newline followed by that string repeated
for each level of nesting. ``None`` (the default) selects the most compact
representation without any newlines. For backwards compatibility with
versions of simplejson earlier than 2.1.0, an integer is also accepted
and is converted to a string with that many spaces.
If ``separators`` is an ``(item_separator, dict_separator)`` tuple
then it will be used instead of the default ``(', ', ': ')`` separators.
``(',', ':')`` is the most compact JSON representation.
``encoding`` is the character encoding for str instances, default is UTF-8.
``default(obj)`` is a function that should return a serializable version
of obj or raise TypeError. The default simply raises TypeError.
If *use_decimal* is true (default: ``False``) then decimal.Decimal
will be natively serialized to JSON with full precision.
To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
``.default()`` method to serialize additional types), specify it with
the ``cls`` kwarg.
"""
# cached encoder
if (not skipkeys and ensure_ascii and
check_circular and allow_nan and
cls is None and indent is None and separators is None and
encoding == 'utf-8' and default is None and not use_decimal
and not kw):
return _default_encoder.encode(obj)
if cls is None:
cls = JSONEncoder
return cls(
skipkeys=skipkeys, ensure_ascii=ensure_ascii,
check_circular=check_circular, allow_nan=allow_nan, indent=indent,
separators=separators, encoding=encoding, default=default,
use_decimal=use_decimal, **kw).encode(obj)
_default_decoder = JSONDecoder(encoding=None, object_hook=None,
object_pairs_hook=None)
def load(fp, encoding=None, cls=None, object_hook=None, parse_float=None,
parse_int=None, parse_constant=None, object_pairs_hook=None,
use_decimal=False, **kw):
"""Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
a JSON document) to a Python object.
*encoding* determines the encoding used to interpret any
:class:`str` objects decoded by this instance (``'utf-8'`` by
default). It has no effect when decoding :class:`unicode` objects.
Note that currently only encodings that are a superset of ASCII work,
strings of other encodings should be passed in as :class:`unicode`.
*object_hook*, if specified, will be called with the result of every
JSON object decoded and its return value will be used in place of the
given :class:`dict`. This can be used to provide custom
deserializations (e.g. to support JSON-RPC class hinting).
*object_pairs_hook* is an optional function that will be called with
the result of any object literal decode with an ordered list of pairs.
The return value of *object_pairs_hook* will be used instead of the
:class:`dict`. This feature can be used to implement custom decoders
that rely on the order that the key and value pairs are decoded (for
example, :func:`collections.OrderedDict` will remember the order of
insertion). If *object_hook* is also defined, the *object_pairs_hook*
takes priority.
*parse_float*, if specified, will be called with the string of every
JSON float to be decoded. By default, this is equivalent to
``float(num_str)``. This can be used to use another datatype or parser
for JSON floats (e.g. :class:`decimal.Decimal`).
*parse_int*, if specified, will be called with the string of every
JSON int to be decoded. By default, this is equivalent to
``int(num_str)``. This can be used to use another datatype or parser
for JSON integers (e.g. :class:`float`).
*parse_constant*, if specified, will be called with one of the
following strings: ``'-Infinity'``, ``'Infinity'``, ``'NaN'``. This
can be used to raise an exception if invalid JSON numbers are
encountered.
If *use_decimal* is true (default: ``False``) then it implies
parse_float=decimal.Decimal for parity with ``dump``.
To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
kwarg.
"""
return loads(fp.read(),
encoding=encoding, cls=cls, object_hook=object_hook,
parse_float=parse_float, parse_int=parse_int,
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook,
use_decimal=use_decimal, **kw)
def loads(s, encoding=None, cls=None, object_hook=None, parse_float=None,
parse_int=None, parse_constant=None, object_pairs_hook=None,
use_decimal=False, **kw):
"""Deserialize ``s`` (a ``str`` or ``unicode`` instance containing a JSON
document) to a Python object.
*encoding* determines the encoding used to interpret any
:class:`str` objects decoded by this instance (``'utf-8'`` by
default). It has no effect when decoding :class:`unicode` objects.
Note that currently only encodings that are a superset of ASCII work,
strings of other encodings should be passed in as :class:`unicode`.
*object_hook*, if specified, will be called with the result of every
JSON object decoded and its return value will be used in place of the
given :class:`dict`. This can be used to provide custom
deserializations (e.g. to support JSON-RPC class hinting).
*object_pairs_hook* is an optional function that will be called with
the result of any object literal decode with an ordered list of pairs.
The return value of *object_pairs_hook* will be used instead of the
:class:`dict`. This feature can be used to implement custom decoders
that rely on the order that the key and value pairs are decoded (for
example, :func:`collections.OrderedDict` will remember the order of
insertion). If *object_hook* is also defined, the *object_pairs_hook*
takes priority.
*parse_float*, if specified, will be called with the string of every
JSON float to be decoded. By default, this is equivalent to
``float(num_str)``. This can be used to use another datatype or parser
for JSON floats (e.g. :class:`decimal.Decimal`).
*parse_int*, if specified, will be called with the string of every
JSON int to be decoded. By default, this is equivalent to
``int(num_str)``. This can be used to use another datatype or parser
for JSON integers (e.g. :class:`float`).
*parse_constant*, if specified, will be called with one of the
following strings: ``'-Infinity'``, ``'Infinity'``, ``'NaN'``. This
can be used to raise an exception if invalid JSON numbers are
encountered.
If *use_decimal* is true (default: ``False``) then it implies
parse_float=decimal.Decimal for parity with ``dump``.
To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
kwarg.
"""
if (cls is None and encoding is None and object_hook is None and
parse_int is None and parse_float is None and
parse_constant is None and object_pairs_hook is None
and not use_decimal and not kw):
return _default_decoder.decode(s)
if cls is None:
cls = JSONDecoder
if object_hook is not None:
kw['object_hook'] = object_hook
if object_pairs_hook is not None:
kw['object_pairs_hook'] = object_pairs_hook
if parse_float is not None:
kw['parse_float'] = parse_float
if parse_int is not None:
kw['parse_int'] = parse_int
if parse_constant is not None:
kw['parse_constant'] = parse_constant
if use_decimal:
if parse_float is not None:
raise TypeError("use_decimal=True implies parse_float=Decimal")
kw['parse_float'] = Decimal
return cls(encoding=encoding, **kw).decode(s)
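# Illustrative sketch of the ``use_decimal`` round trip described above:
# encoding preserves the Decimal's full precision and decoding parses floats
# back into Decimal instances.
#
#   >>> from decimal import Decimal
#   >>> loads(dumps(Decimal('1.10'), use_decimal=True), use_decimal=True)
#   Decimal('1.10')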
def _toggle_speedups(enabled):
import decoder as dec
import encoder as enc
import scanner as scan
c_make_encoder = _import_c_make_encoder()
if enabled:
dec.scanstring = dec.c_scanstring or dec.py_scanstring
enc.c_make_encoder = c_make_encoder
enc.encode_basestring_ascii = (enc.c_encode_basestring_ascii or
enc.py_encode_basestring_ascii)
scan.make_scanner = scan.c_make_scanner or scan.py_make_scanner
else:
dec.scanstring = dec.py_scanstring
enc.c_make_encoder = None
enc.encode_basestring_ascii = enc.py_encode_basestring_ascii
scan.make_scanner = scan.py_make_scanner
dec.make_scanner = scan.make_scanner
global _default_decoder
_default_decoder = JSONDecoder(
encoding=None,
object_hook=None,
object_pairs_hook=None,
)
global _default_encoder
_default_encoder = JSONEncoder(
skipkeys=False,
ensure_ascii=True,
check_circular=True,
allow_nan=True,
indent=None,
separators=None,
encoding='utf-8',
default=None,
)
|
mit
|
jabesq/home-assistant
|
homeassistant/components/hitron_coda/device_tracker.py
|
7
|
4402
|
"""Support for the Hitron CODA-4582U, provided by Rogers."""
import logging
from collections import namedtuple
import requests
import voluptuous as vol
import homeassistant.helpers.config_validation as cv
from homeassistant.components.device_tracker import (
DOMAIN, PLATFORM_SCHEMA, DeviceScanner)
from homeassistant.const import (
CONF_HOST, CONF_PASSWORD, CONF_USERNAME, CONF_TYPE
)
_LOGGER = logging.getLogger(__name__)
DEFAULT_TYPE = "rogers"
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend({
vol.Required(CONF_HOST): cv.string,
vol.Required(CONF_USERNAME): cv.string,
vol.Required(CONF_PASSWORD): cv.string,
vol.Optional(CONF_TYPE, default=DEFAULT_TYPE): cv.string,
})
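# Illustrative configuration sketch (host and credentials are placeholders):
# the platform is configured from configuration.yaml with the keys declared in
# PLATFORM_SCHEMA above, for example:
#
#   device_tracker:
#     - platform: hitron_coda
#       host: 192.168.0.1
#       username: cusadmin
#       password: YOUR_PASSWORD
#       type: shaw    # optional; defaults to "rogers"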
def get_scanner(_hass, config):
"""Validate the configuration and return a Nmap scanner."""
scanner = HitronCODADeviceScanner(config[DOMAIN])
return scanner if scanner.success_init else None
Device = namedtuple('Device', ['mac', 'name'])
class HitronCODADeviceScanner(DeviceScanner):
"""This class scans for devices using the CODA's web interface."""
def __init__(self, config):
"""Initialize the scanner."""
self.last_results = []
host = config[CONF_HOST]
self._url = 'http://{}/data/getConnectInfo.asp'.format(host)
self._loginurl = 'http://{}/goform/login'.format(host)
self._username = config.get(CONF_USERNAME)
self._password = config.get(CONF_PASSWORD)
if config.get(CONF_TYPE) == "shaw":
self._type = 'pwd'
else:
self._type = 'pws'
self._userid = None
self.success_init = self._update_info()
_LOGGER.info("Scanner initialized")
def scan_devices(self):
"""Scan for new devices and return a list with found device IDs."""
self._update_info()
return [device.mac for device in self.last_results]
def get_device_name(self, device):
"""Return the name of the device with the given MAC address."""
name = next((
result.name for result in self.last_results
if result.mac == device), None)
return name
def _login(self):
"""Log in to the router. This is required for subsequent api calls."""
_LOGGER.info("Logging in to CODA...")
try:
data = [
('user', self._username),
(self._type, self._password),
]
res = requests.post(self._loginurl, data=data, timeout=10)
except requests.exceptions.Timeout:
_LOGGER.error(
"Connection to the router timed out at URL %s", self._url)
return False
if res.status_code != 200:
_LOGGER.error(
"Connection failed with http code %s", res.status_code)
return False
try:
self._userid = res.cookies['userid']
return True
except KeyError:
_LOGGER.error("Failed to log in to router")
return False
def _update_info(self):
"""Get ARP from router."""
_LOGGER.info("Fetching...")
if self._userid is None:
if not self._login():
_LOGGER.error("Could not obtain a user ID from the router")
return False
last_results = []
# doing a request
try:
res = requests.get(self._url, timeout=10, cookies={
'userid': self._userid
})
except requests.exceptions.Timeout:
_LOGGER.error(
"Connection to the router timed out at URL %s", self._url)
return False
if res.status_code != 200:
_LOGGER.error(
"Connection failed with http code %s", res.status_code)
return False
try:
result = res.json()
except ValueError:
# If json decoder could not parse the response
_LOGGER.error("Failed to parse response from router")
return False
# parsing response
for info in result:
mac = info['macAddr']
name = info['hostName']
# No address = no item :)
if mac is None:
continue
last_results.append(Device(mac.upper(), name))
self.last_results = last_results
_LOGGER.info("Request successful")
return True
|
apache-2.0
|
riklaunim/django-custom-multisite
|
tests/regressiontests/views/models.py
|
144
|
1202
|
"""
Regression tests for Django built-in views.
"""
from django.db import models
class Author(models.Model):
name = models.CharField(max_length=100)
def __unicode__(self):
return self.name
def get_absolute_url(self):
return '/views/authors/%s/' % self.id
class BaseArticle(models.Model):
"""
An abstract article Model so that we can create article models with and
without a get_absolute_url method (for create_update generic views tests).
"""
title = models.CharField(max_length=100)
slug = models.SlugField()
author = models.ForeignKey(Author)
class Meta:
abstract = True
def __unicode__(self):
return self.title
class Article(BaseArticle):
date_created = models.DateTimeField()
class UrlArticle(BaseArticle):
"""
An Article class with a get_absolute_url defined.
"""
date_created = models.DateTimeField()
def get_absolute_url(self):
return '/urlarticles/%s/' % self.slug
get_absolute_url.purge = True
class DateArticle(BaseArticle):
"""
An article Model with a DateField instead of DateTimeField,
for testing #7602
"""
date_created = models.DateField()
|
bsd-3-clause
|
rdo-management/ironic
|
ironic/db/api.py
|
5
|
13180
|
# -*- encoding: utf-8 -*-
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Base classes for storage engines
"""
import abc
from oslo_config import cfg
from oslo_db import api as db_api
import six
_BACKEND_MAPPING = {'sqlalchemy': 'ironic.db.sqlalchemy.api'}
IMPL = db_api.DBAPI.from_config(cfg.CONF, backend_mapping=_BACKEND_MAPPING,
lazy=True)
def get_instance():
"""Return a DB API instance."""
return IMPL
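# Illustrative sketch (``node_uuid`` is a placeholder): callers obtain the
# lazily loaded backend through get_instance() and use the abstract interface
# defined below, e.g.
#
#   dbapi = get_instance()
#   node = dbapi.get_node_by_uuid(node_uuid)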
@six.add_metaclass(abc.ABCMeta)
class Connection(object):
"""Base class for storage system connections."""
@abc.abstractmethod
def __init__(self):
"""Constructor."""
@abc.abstractmethod
def get_nodeinfo_list(self, columns=None, filters=None, limit=None,
marker=None, sort_key=None, sort_dir=None):
"""Get specific columns for matching nodes.
Return a list of the specified columns for all nodes that match the
specified filters.
:param columns: List of column names to return.
Defaults to 'id' column when columns == None.
:param filters: Filters to apply. Defaults to None.
:associated: True | False
:reserved: True | False
:maintenance: True | False
:chassis_uuid: uuid of chassis
:driver: driver's name
:provision_state: provision state of node
:provisioned_before:
nodes with provision_updated_at field before this
interval in seconds
:param limit: Maximum number of nodes to return.
:param marker: the last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: direction in which results should be sorted.
(asc, desc)
:returns: A list of tuples of the specified columns.
"""
@abc.abstractmethod
def get_node_list(self, filters=None, limit=None, marker=None,
sort_key=None, sort_dir=None):
"""Return a list of nodes.
:param filters: Filters to apply. Defaults to None.
:associated: True | False
:reserved: True | False
:maintenance: True | False
:chassis_uuid: uuid of chassis
:driver: driver's name
:provision_state: provision state of node
:provisioned_before:
nodes with provision_updated_at field before this
interval in seconds
:param limit: Maximum number of nodes to return.
:param marker: the last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: direction in which results should be sorted.
(asc, desc)
"""
@abc.abstractmethod
def reserve_node(self, tag, node_id):
"""Reserve a node.
To prevent other ManagerServices from manipulating the given
Node while a Task is performed, mark it reserved by this host.
:param tag: A string uniquely identifying the reservation holder.
:param node_id: A node id or uuid.
:returns: A Node object.
:raises: NodeNotFound if the node is not found.
:raises: NodeLocked if the node is already reserved.
"""
@abc.abstractmethod
def release_node(self, tag, node_id):
"""Release the reservation on a node.
:param tag: A string uniquely identifying the reservation holder.
:param node_id: A node id or uuid.
:raises: NodeNotFound if the node is not found.
:raises: NodeLocked if the node is reserved by another host.
:raises: NodeNotLocked if the node was found to not have a
reservation at all.
"""
@abc.abstractmethod
def create_node(self, values):
"""Create a new node.
:param values: A dict containing several items used to identify
and track the node, and several dicts which are passed
into the Drivers when managing this node. For example:
::
{
'uuid': uuidutils.generate_uuid(),
'instance_uuid': None,
'power_state': states.POWER_OFF,
'provision_state': states.AVAILABLE,
'driver': 'pxe_ipmitool',
'driver_info': { ... },
'properties': { ... },
'extra': { ... },
}
:returns: A node.
"""
@abc.abstractmethod
def get_node_by_id(self, node_id):
"""Return a node.
:param node_id: The id of a node.
:returns: A node.
"""
@abc.abstractmethod
def get_node_by_uuid(self, node_uuid):
"""Return a node.
:param node_uuid: The uuid of a node.
:returns: A node.
"""
@abc.abstractmethod
def get_node_by_name(self, node_name):
"""Return a node.
:param node_name: The logical name of a node.
:returns: A node.
"""
@abc.abstractmethod
def get_node_by_instance(self, instance):
"""Return a node.
:param instance: The instance name or uuid to search for.
:returns: A node.
"""
@abc.abstractmethod
def destroy_node(self, node_id):
"""Destroy a node and all associated interfaces.
:param node_id: The id or uuid of a node.
"""
@abc.abstractmethod
def update_node(self, node_id, values):
"""Update properties of a node.
:param node_id: The id or uuid of a node.
:param values: Dict of values to update.
May be a partial list, eg. when setting the
properties for a driver. For example:
::
{
'driver_info':
{
'my-field-1': val1,
'my-field-2': val2,
}
}
:returns: A node.
:raises: NodeAssociated
:raises: NodeNotFound
"""
@abc.abstractmethod
def get_port_by_id(self, port_id):
"""Return a network port representation.
:param port_id: The id of a port.
:returns: A port.
"""
@abc.abstractmethod
def get_port_by_uuid(self, port_uuid):
"""Return a network port representation.
:param port_uuid: The uuid of a port.
:returns: A port.
"""
@abc.abstractmethod
def get_port_by_address(self, address):
"""Return a network port representation.
:param address: The MAC address of a port.
:returns: A port.
"""
@abc.abstractmethod
def get_port_list(self, limit=None, marker=None,
sort_key=None, sort_dir=None):
"""Return a list of ports.
:param limit: Maximum number of ports to return.
:param marker: the last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: direction in which results should be sorted.
(asc, desc)
"""
@abc.abstractmethod
def get_ports_by_node_id(self, node_id, limit=None, marker=None,
sort_key=None, sort_dir=None):
"""List all the ports for a given node.
:param node_id: The integer node ID.
:param limit: Maximum number of ports to return.
:param marker: the last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted
:param sort_dir: direction in which results should be sorted
(asc, desc)
:returns: A list of ports.
"""
@abc.abstractmethod
def create_port(self, values):
"""Create a new port.
:param values: Dict of values.
"""
@abc.abstractmethod
def update_port(self, port_id, values):
"""Update properties of an port.
:param port_id: The id or MAC of a port.
:param values: Dict of values to update.
:returns: A port.
"""
@abc.abstractmethod
def destroy_port(self, port_id):
"""Destroy an port.
:param port_id: The id or MAC of a port.
"""
@abc.abstractmethod
def create_chassis(self, values):
"""Create a new chassis.
:param values: Dict of values.
"""
@abc.abstractmethod
def get_chassis_by_id(self, chassis_id):
"""Return a chassis representation.
:param chassis_id: The id of a chassis.
:returns: A chassis.
"""
@abc.abstractmethod
def get_chassis_by_uuid(self, chassis_uuid):
"""Return a chassis representation.
:param chassis_uuid: The uuid of a chassis.
:returns: A chassis.
"""
@abc.abstractmethod
def get_chassis_list(self, limit=None, marker=None,
sort_key=None, sort_dir=None):
"""Return a list of chassis.
:param limit: Maximum number of chassis to return.
:param marker: the last item of the previous page; we return the next
result set.
:param sort_key: Attribute by which results should be sorted.
:param sort_dir: direction in which results should be sorted.
(asc, desc)
"""
@abc.abstractmethod
def update_chassis(self, chassis_id, values):
"""Update properties of an chassis.
:param chassis_id: The id or the uuid of a chassis.
:param values: Dict of values to update.
:returns: A chassis.
"""
@abc.abstractmethod
def destroy_chassis(self, chassis_id):
"""Destroy a chassis.
:param chassis_id: The id or the uuid of a chassis.
"""
@abc.abstractmethod
def register_conductor(self, values, update_existing=False):
"""Register an active conductor with the cluster.
:param values: A dict of values which must contain the following:
::
{
'hostname': the unique hostname which identifies
this Conductor service.
'drivers': a list of supported drivers.
}
:param update_existing: When false, registration will raise an
exception when a conflicting online record
is found. When true, will overwrite the
existing record. Default: False.
:returns: A conductor.
:raises: ConductorAlreadyRegistered
"""
@abc.abstractmethod
def get_conductor(self, hostname):
"""Retrieve a conductor's service record from the database.
:param hostname: The hostname of the conductor service.
:returns: A conductor.
:raises: ConductorNotFound
"""
@abc.abstractmethod
def unregister_conductor(self, hostname):
"""Remove this conductor from the service registry immediately.
:param hostname: The hostname of this conductor service.
:raises: ConductorNotFound
"""
@abc.abstractmethod
def touch_conductor(self, hostname):
"""Mark a conductor as active by updating its 'updated_at' property.
:param hostname: The hostname of this conductor service.
:raises: ConductorNotFound
"""
@abc.abstractmethod
def get_active_driver_dict(self, interval):
"""Retrieve drivers for the registered and active conductors.
:param interval: Seconds since last check-in of a conductor.
:returns: A dict which maps driver names to the set of hosts
which support them. For example:
::
{driverA: set([host1, host2]),
driverB: set([host2, host3])}
"""
|
apache-2.0
|
openfun/edx-platform
|
openedx/core/djangoapps/content/course_structures/api/v0/serializers.py
|
65
|
1313
|
"""
API Serializers
"""
from rest_framework import serializers
class GradingPolicySerializer(serializers.Serializer):
""" Serializer for course grading policy. """
assignment_type = serializers.CharField(source='type')
count = serializers.IntegerField(source='min_count')
dropped = serializers.IntegerField(source='drop_count')
weight = serializers.FloatField()
# pylint: disable=invalid-name
class BlockSerializer(serializers.Serializer):
""" Serializer for course structure block. """
id = serializers.CharField(source='usage_key')
type = serializers.CharField(source='block_type')
parent = serializers.CharField(source='parent')
display_name = serializers.CharField()
graded = serializers.BooleanField(default=False)
format = serializers.CharField()
children = serializers.CharField()
class CourseStructureSerializer(serializers.Serializer):
""" Serializer for course structure. """
root = serializers.CharField(source='root')
blocks = serializers.SerializerMethodField('get_blocks')
def get_blocks(self, structure):
""" Serialize the individual blocks. """
serialized = {}
for key, block in structure['blocks'].iteritems():
serialized[key] = BlockSerializer(block).data
return serialized
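# --- Hedged usage sketch (not part of the original module) ---
# Serializing a hand-built grading policy entry with GradingPolicySerializer.
# The dict keys mirror the `source` arguments above; the values are
# illustrative only, not real course data.
def _grading_policy_example():
    policy = {'type': 'Homework', 'min_count': 12, 'drop_count': 2, 'weight': 0.15}
    # Expected output keys follow the serializer field names:
    # {'assignment_type': 'Homework', 'count': 12, 'dropped': 2, 'weight': 0.15}
    return GradingPolicySerializer(policy).data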
|
agpl-3.0
|
boomsbloom/dtm-fmri
|
DTM/for_gensim/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py
|
37
|
23222
|
"""Utilities for fast persistence of big data, with optional compression."""
# Author: Gael Varoquaux <gael dot varoquaux at normalesup dot org>
# Copyright (c) 2009 Gael Varoquaux
# License: BSD Style, 3 clauses.
import pickle
import os
import sys
import warnings
try:
from pathlib import Path
except ImportError:
Path = None
from .numpy_pickle_utils import _COMPRESSORS
from .numpy_pickle_utils import BinaryZlibFile
from .numpy_pickle_utils import Unpickler, Pickler
from .numpy_pickle_utils import _read_fileobject, _write_fileobject
from .numpy_pickle_utils import _read_bytes, BUFFER_SIZE
from .numpy_pickle_compat import load_compatibility
from .numpy_pickle_compat import NDArrayWrapper
# For compatibility with old versions of joblib, we need ZNDArrayWrapper
# to be visible in the current namespace.
# Explicitly skipping next line from flake8 as it triggers an F401 warning
# which we don't care about.
from .numpy_pickle_compat import ZNDArrayWrapper # noqa
from ._compat import _basestring, PY3_OR_LATER
###############################################################################
# Utility objects for persistence.
class NumpyArrayWrapper(object):
"""An object to be persisted instead of numpy arrays.
This object is used to hack into the pickle machinery and read numpy
array data from our custom persistence format.
More precisely, this object is used for:
* carrying the information of the persisted array: subclass, shape, order,
dtype. Those ndarray metadata are used to correctly reconstruct the array
with low level numpy functions.
* determining if memmap is allowed on the array.
* reading the array bytes from a file.
* reading the array using memorymap from a file.
* writing the array bytes to a file.
Attributes
----------
subclass: numpy.ndarray subclass
Determine the subclass of the wrapped array.
shape: numpy.ndarray shape
Determine the shape of the wrapped array.
order: {'C', 'F'}
Determine the order of wrapped array data. 'C' is for C order, 'F' is
for fortran order.
dtype: numpy.ndarray dtype
Determine the data type of the wrapped array.
allow_mmap: bool
Determine if memory mapping is allowed on the wrapped array.
Default: False.
"""
def __init__(self, subclass, shape, order, dtype, allow_mmap=False):
"""Constructor. Store the useful information for later."""
self.subclass = subclass
self.shape = shape
self.order = order
self.dtype = dtype
self.allow_mmap = allow_mmap
def write_array(self, array, pickler):
"""Write array bytes to pickler file handle.
This function is an adaptation of the numpy write_array function
available in version 1.10.1 in numpy/lib/format.py.
"""
# Set buffer size to 16 MiB to hide the Python loop overhead.
buffersize = max(16 * 1024 ** 2 // array.itemsize, 1)
if array.dtype.hasobject:
# We contain Python objects so we cannot write out the data
# directly. Instead, we will pickle it out with version 2 of the
# pickle protocol.
pickle.dump(array, pickler.file_handle, protocol=2)
else:
for chunk in pickler.np.nditer(array,
flags=['external_loop',
'buffered',
'zerosize_ok'],
buffersize=buffersize,
order=self.order):
pickler.file_handle.write(chunk.tostring('C'))
def read_array(self, unpickler):
"""Read array from unpickler file handle.
This function is an adaptation of the numpy read_array function
available in version 1.10.1 in numpy/lib/format.py.
"""
if len(self.shape) == 0:
count = 1
else:
count = unpickler.np.multiply.reduce(self.shape)
# Now read the actual data.
if self.dtype.hasobject:
# The array contained Python objects. We need to unpickle the data.
array = pickle.load(unpickler.file_handle)
else:
if (not PY3_OR_LATER and
unpickler.np.compat.isfileobj(unpickler.file_handle)):
                # In Python 2, gzip.GzipFile is considered a file, so one
# can use numpy.fromfile().
# For file objects, use np.fromfile function.
# This function is faster than the memory-intensive
# method below.
array = unpickler.np.fromfile(unpickler.file_handle,
dtype=self.dtype, count=count)
else:
# This is not a real file. We have to read it the
# memory-intensive way.
# crc32 module fails on reads greater than 2 ** 32 bytes,
# breaking large reads from gzip streams. Chunk reads to
# BUFFER_SIZE bytes to avoid issue and reduce memory overhead
# of the read. In non-chunked case count < max_read_count, so
# only one read is performed.
max_read_count = BUFFER_SIZE // min(BUFFER_SIZE,
self.dtype.itemsize)
array = unpickler.np.empty(count, dtype=self.dtype)
for i in range(0, count, max_read_count):
read_count = min(max_read_count, count - i)
read_size = int(read_count * self.dtype.itemsize)
data = _read_bytes(unpickler.file_handle,
read_size, "array data")
array[i:i + read_count] = \
unpickler.np.frombuffer(data, dtype=self.dtype,
count=read_count)
del data
if self.order == 'F':
array.shape = self.shape[::-1]
array = array.transpose()
else:
array.shape = self.shape
return array
def read_mmap(self, unpickler):
"""Read an array using numpy memmap."""
offset = unpickler.file_handle.tell()
if unpickler.mmap_mode == 'w+':
unpickler.mmap_mode = 'r+'
marray = unpickler.np.memmap(unpickler.filename,
dtype=self.dtype,
shape=self.shape,
order=self.order,
mode=unpickler.mmap_mode,
offset=offset)
# update the offset so that it corresponds to the end of the read array
unpickler.file_handle.seek(offset + marray.nbytes)
return marray
def read(self, unpickler):
"""Read the array corresponding to this wrapper.
Use the unpickler to get all information to correctly read the array.
Parameters
----------
unpickler: NumpyUnpickler
Returns
-------
array: numpy.ndarray
"""
# When requested, only use memmap mode if allowed.
if unpickler.mmap_mode is not None and self.allow_mmap:
array = self.read_mmap(unpickler)
else:
array = self.read_array(unpickler)
# Manage array subclass case
if (hasattr(array, '__array_prepare__') and
self.subclass not in (unpickler.np.ndarray,
unpickler.np.memmap)):
# We need to reconstruct another subclass
new_array = unpickler.np.core.multiarray._reconstruct(
self.subclass, (0,), 'b')
return new_array.__array_prepare__(array)
else:
return array
###############################################################################
# Pickler classes
class NumpyPickler(Pickler):
"""A pickler to persist big data efficiently.
The main features of this object are:
* persistence of numpy arrays in a single file.
* optional compression with a special care on avoiding memory copies.
Attributes
----------
fp: file
File object handle used for serializing the input object.
protocol: int
Pickle protocol used. Default is pickle.DEFAULT_PROTOCOL under
python 3, pickle.HIGHEST_PROTOCOL otherwise.
"""
dispatch = Pickler.dispatch.copy()
def __init__(self, fp, protocol=None):
self.file_handle = fp
self.buffered = isinstance(self.file_handle, BinaryZlibFile)
# By default we want a pickle protocol that only changes with
# the major python version and not the minor one
if protocol is None:
protocol = (pickle.DEFAULT_PROTOCOL if PY3_OR_LATER
else pickle.HIGHEST_PROTOCOL)
Pickler.__init__(self, self.file_handle, protocol=protocol)
# delayed import of numpy, to avoid tight coupling
try:
import numpy as np
except ImportError:
np = None
self.np = np
def _create_array_wrapper(self, array):
"""Create and returns a numpy array wrapper from a numpy array."""
order = 'F' if (array.flags.f_contiguous and
not array.flags.c_contiguous) else 'C'
allow_mmap = not self.buffered and not array.dtype.hasobject
wrapper = NumpyArrayWrapper(type(array),
array.shape, order, array.dtype,
allow_mmap=allow_mmap)
return wrapper
def save(self, obj):
"""Subclass the Pickler `save` method.
This is a total abuse of the Pickler class in order to use the numpy
persistence function `save` instead of the default pickle
implementation. The numpy array is replaced by a custom wrapper in the
pickle persistence stack and the serialized array is written right
after in the file. Warning: the file produced does not follow the
pickle format. As such it can not be read with `pickle.load`.
"""
if self.np is not None and type(obj) in (self.np.ndarray,
self.np.matrix,
self.np.memmap):
if type(obj) is self.np.memmap:
# Pickling doesn't work with memmapped arrays
obj = self.np.asanyarray(obj)
# The array wrapper is pickled instead of the real array.
wrapper = self._create_array_wrapper(obj)
Pickler.save(self, wrapper)
# A framer was introduced with pickle protocol 4 and we want to
# ensure the wrapper object is written before the numpy array
# buffer in the pickle file.
# See https://www.python.org/dev/peps/pep-3154/#framing to get
# more information on the framer behavior.
if self.proto >= 4:
self.framer.commit_frame(force=True)
# And then array bytes are written right after the wrapper.
wrapper.write_array(obj, self)
return
return Pickler.save(self, obj)
class NumpyUnpickler(Unpickler):
"""A subclass of the Unpickler to unpickle our numpy pickles.
Attributes
----------
mmap_mode: str
The memorymap mode to use for reading numpy arrays.
file_handle: file_like
File object to unpickle from.
filename: str
Name of the file to unpickle from. It should correspond to file_handle.
This parameter is required when using mmap_mode.
np: module
Reference to numpy module if numpy is installed else None.
"""
dispatch = Unpickler.dispatch.copy()
def __init__(self, filename, file_handle, mmap_mode=None):
# The next line is for backward compatibility with pickle generated
# with joblib versions less than 0.10.
self._dirname = os.path.dirname(filename)
self.mmap_mode = mmap_mode
self.file_handle = file_handle
# filename is required for numpy mmap mode.
self.filename = filename
self.compat_mode = False
Unpickler.__init__(self, self.file_handle)
try:
import numpy as np
except ImportError:
np = None
self.np = np
def load_build(self):
"""Called to set the state of a newly created object.
We capture it to replace our place-holder objects, NDArrayWrapper or
NumpyArrayWrapper, by the array we are interested in. We
replace them directly in the stack of pickler.
NDArrayWrapper is used for backward compatibility with joblib <= 0.9.
"""
Unpickler.load_build(self)
# For backward compatibility, we support NDArrayWrapper objects.
if isinstance(self.stack[-1], (NDArrayWrapper, NumpyArrayWrapper)):
if self.np is None:
raise ImportError("Trying to unpickle an ndarray, "
"but numpy didn't import correctly")
array_wrapper = self.stack.pop()
# If any NDArrayWrapper is found, we switch to compatibility mode,
# this will be used to raise a DeprecationWarning to the user at
# the end of the unpickling.
if isinstance(array_wrapper, NDArrayWrapper):
self.compat_mode = True
self.stack.append(array_wrapper.read(self))
# Be careful to register our new method.
if PY3_OR_LATER:
dispatch[pickle.BUILD[0]] = load_build
else:
dispatch[pickle.BUILD] = load_build
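# --- Hedged usage sketch (not part of the original module) ---
# A minimal round trip through NumpyPickler/NumpyUnpickler on an in-memory
# buffer, assuming numpy is installed. This mirrors what dump()/load() below
# do for uncompressed file objects; no memmapping is involved.
def _numpy_pickler_round_trip_demo():
    import io
    import numpy as np
    buf = io.BytesIO()
    # Protocol 2 keeps the sketch independent of the pickle-4 framing handled
    # in NumpyPickler.save above.
    NumpyPickler(buf, protocol=2).dump(np.arange(5))
    buf.seek(0)
    return NumpyUnpickler('', buf).load()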
###############################################################################
# Utility functions
def dump(value, filename, compress=0, protocol=None, cache_size=None):
"""Persist an arbitrary Python object into one file.
Parameters
-----------
value: any Python object
The object to store to disk.
filename: str or pathlib.Path
The path of the file in which it is to be stored. The compression
method corresponding to one of the supported filename extensions ('.z',
'.gz', '.bz2', '.xz' or '.lzma') will be used automatically.
compress: int from 0 to 9 or bool or 2-tuple, optional
Optional compression level for the data. 0 or False is no compression.
Higher value means more compression, but also slower read and
write times. Using a value of 3 is often a good compromise.
See the notes for more details.
If compress is True, the compression level used is 3.
        If compress is a 2-tuple, the first element must be one of the
        supported compressors (e.g. 'zlib', 'gzip', 'bz2', 'lzma', 'xz'), and
        the second element must be an integer from 0 to 9, corresponding
to the compression level.
protocol: positive int
Pickle protocol, see pickle.dump documentation for more details.
cache_size: positive int, optional
This option is deprecated in 0.10 and has no effect.
Returns
-------
filenames: list of strings
The list of file names in which the data is stored. If
compress is false, each array is stored in a different file.
See Also
--------
joblib.load : corresponding loader
Notes
-----
Memmapping on load cannot be used for compressed files. Thus
using compression can significantly slow down loading. In
    addition, compressed files take extra memory during
dump and load.
"""
if Path is not None and isinstance(filename, Path):
filename = str(filename)
is_filename = isinstance(filename, _basestring)
is_fileobj = hasattr(filename, "write")
compress_method = 'zlib' # zlib is the default compression method.
if compress is True:
        # If compress is enabled, use compression level 3 by default
compress_level = 3
elif isinstance(compress, tuple):
# a 2-tuple was set in compress
if len(compress) != 2:
raise ValueError(
'Compress argument tuple should contain exactly 2 elements: '
'(compress method, compress level), you passed {0}'
.format(compress))
compress_method, compress_level = compress
else:
compress_level = compress
if compress_level is not False and compress_level not in range(10):
# Raising an error if a non valid compress level is given.
raise ValueError(
'Non valid compress level given: "{0}". Possible values are '
'{1}.'.format(compress_level, list(range(10))))
if compress_method not in _COMPRESSORS:
# Raising an error if an unsupported compression method is given.
raise ValueError(
'Non valid compression method given: "{0}". Possible values are '
'{1}.'.format(compress_method, _COMPRESSORS))
if not is_filename and not is_fileobj:
# People keep inverting arguments, and the resulting error is
# incomprehensible
raise ValueError(
'Second argument should be a filename or a file-like object, '
'%s (type %s) was given.'
% (filename, type(filename))
)
if is_filename and not isinstance(compress, tuple):
# In case no explicit compression was requested using both compression
# method and level in a tuple and the filename has an explicit
# extension, we select the corresponding compressor.
if filename.endswith('.z'):
compress_method = 'zlib'
elif filename.endswith('.gz'):
compress_method = 'gzip'
elif filename.endswith('.bz2'):
compress_method = 'bz2'
elif filename.endswith('.lzma'):
compress_method = 'lzma'
elif filename.endswith('.xz'):
compress_method = 'xz'
else:
# no matching compression method found, we unset the variable to
# be sure no compression level is set afterwards.
compress_method = None
if compress_method in _COMPRESSORS and compress_level == 0:
# we choose a default compress_level of 3 in case it was not given
# as an argument (using compress).
compress_level = 3
if not PY3_OR_LATER and compress_method in ('lzma', 'xz'):
raise NotImplementedError("{0} compression is only available for "
"python version >= 3.3. You are using "
"{1}.{2}".format(compress_method,
sys.version_info[0],
sys.version_info[1]))
if cache_size is not None:
# Cache size is deprecated starting from version 0.10
warnings.warn("Please do not set 'cache_size' in joblib.dump, "
"this parameter has no effect and will be removed. "
"You used 'cache_size={0}'".format(cache_size),
DeprecationWarning, stacklevel=2)
if compress_level != 0:
with _write_fileobject(filename, compress=(compress_method,
compress_level)) as f:
NumpyPickler(f, protocol=protocol).dump(value)
elif is_filename:
with open(filename, 'wb') as f:
NumpyPickler(f, protocol=protocol).dump(value)
else:
NumpyPickler(filename, protocol=protocol).dump(value)
# If the target container is a file object, nothing is returned.
if is_fileobj:
return
    # For compatibility, the list of created filenames (e.g. with one element
# after 0.10.0) is returned by default.
return [filename]
def _unpickle(fobj, filename="", mmap_mode=None):
"""Internal unpickling function."""
# We are careful to open the file handle early and keep it open to
# avoid race-conditions on renames.
# That said, if data is stored in companion files, which can be
# the case with the old persistence format, moving the directory
# will create a race when joblib tries to access the companion
# files.
unpickler = NumpyUnpickler(filename, fobj, mmap_mode=mmap_mode)
obj = None
try:
obj = unpickler.load()
if unpickler.compat_mode:
warnings.warn("The file '%s' has been generated with a "
"joblib version less than 0.10. "
"Please regenerate this pickle file."
% filename,
DeprecationWarning, stacklevel=3)
except UnicodeDecodeError as exc:
# More user-friendly error message
if PY3_OR_LATER:
new_exc = ValueError(
'You may be trying to read with '
'python 3 a joblib pickle generated with python 2. '
'This feature is not supported by joblib.')
new_exc.__cause__ = exc
raise new_exc
# Reraise exception with Python 2
raise
return obj
def load(filename, mmap_mode=None):
"""Reconstruct a Python object from a file persisted with joblib.dump.
Parameters
-----------
filename: str or pathlib.Path
The path of the file from which to load the object
mmap_mode: {None, 'r+', 'r', 'w+', 'c'}, optional
If not None, the arrays are memory-mapped from the disk. This
mode has no effect for compressed files. Note that in this
        case the reconstructed object might no longer match exactly
the originally pickled object.
Returns
-------
result: any Python object
The object stored in the file.
See Also
--------
joblib.dump : function to save an object
Notes
-----
This function can load numpy array files saved separately during the
dump. If the mmap_mode argument is given, it is passed to np.load and
arrays are loaded as memmaps. As a consequence, the reconstructed
object might not match the original pickled object. Note that if the
file was saved with compression, the arrays cannot be memmaped.
"""
if Path is not None and isinstance(filename, Path):
filename = str(filename)
if hasattr(filename, "read") and hasattr(filename, "seek"):
with _read_fileobject(filename, "", mmap_mode) as fobj:
obj = _unpickle(fobj)
else:
with open(filename, 'rb') as f:
with _read_fileobject(f, filename, mmap_mode) as fobj:
if isinstance(fobj, _basestring):
# if the returned file object is a string, this means we
                    # try to load a pickle file generated with an old version
                    # of joblib, so we load it with the joblib compatibility function.
return load_compatibility(fobj)
obj = _unpickle(fobj, filename, mmap_mode)
return obj
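# --- Hedged usage sketch (not part of the original module) ---
# A round trip through the public dump()/load() functions defined above,
# assuming numpy is installed. The path and payload are illustrative only.
def _dump_load_demo(path='/tmp/joblib_demo.pkl.z'):
    import numpy as np
    payload = {'weights': np.arange(10, dtype=np.float64)}
    # compress=('zlib', 3) exercises the (method, level) tuple branch of dump().
    saved = dump(payload, path, compress=('zlib', 3))
    restored = load(saved[0])
    assert np.array_equal(restored['weights'], payload['weights'])
    return restored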
|
mit
|
florentchandelier/keras
|
keras/regularizers.py
|
81
|
1907
|
from __future__ import absolute_import
import theano.tensor as T
class Regularizer(object):
def set_param(self, p):
self.p = p
def set_layer(self, layer):
self.layer = layer
def __call__(self, loss):
return loss
def get_config(self):
return {"name": self.__class__.__name__}
class WeightRegularizer(Regularizer):
def __init__(self, l1=0., l2=0.):
self.l1 = l1
self.l2 = l2
def set_param(self, p):
self.p = p
def __call__(self, loss):
loss += T.sum(abs(self.p)) * self.l1
loss += T.sum(self.p ** 2) * self.l2
return loss
def get_config(self):
return {"name": self.__class__.__name__,
"l1": self.l1,
"l2": self.l2}
class ActivityRegularizer(Regularizer):
def __init__(self, l1=0., l2=0.):
self.l1 = l1
self.l2 = l2
def set_layer(self, layer):
self.layer = layer
def __call__(self, loss):
loss += self.l1 * T.sum(T.mean(abs(self.layer.get_output(True)), axis=0))
loss += self.l2 * T.sum(T.mean(self.layer.get_output(True) ** 2, axis=0))
return loss
def get_config(self):
return {"name": self.__class__.__name__,
"l1": self.l1,
"l2": self.l2}
def l1(l=0.01):
return WeightRegularizer(l1=l)
def l2(l=0.01):
return WeightRegularizer(l2=l)
def l1l2(l1=0.01, l2=0.01):
return WeightRegularizer(l1=l1, l2=l2)
def activity_l1(l=0.01):
return ActivityRegularizer(l1=l)
def activity_l2(l=0.01):
return ActivityRegularizer(l2=l)
def activity_l1l2(l1=0.01, l2=0.01):
return ActivityRegularizer(l1=l1, l2=l2)
identity = Regularizer
from .utils.generic_utils import get_from_module
def get(identifier, kwargs=None):
return get_from_module(identifier, globals(), 'regularizer', instantiate=True, kwargs=kwargs)
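# --- Hedged usage sketch (not part of the original module) ---
# Shows how a WeightRegularizer adds L1/L2 penalty terms to a loss expression,
# assuming Theano is installed. The shared variable stands in for a layer's
# weight matrix; inside Keras, set_param() and __call__() are invoked by the
# layer machinery rather than by hand.
def _weight_regularizer_demo():
    import numpy as np
    import theano
    reg = l1l2(l1=0.01, l2=0.001)
    weights = theano.shared(np.ones((3, 3), dtype='float32'))
    reg.set_param(weights)
    # Resulting expression: 0 + 0.01 * sum(|W|) + 0.001 * sum(W ** 2)
    penalised_loss = reg(T.constant(0.0))
    return theano.function([], penalised_loss)()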
|
mit
|
ScreamingUdder/mantid
|
scripts/test/ISISPowderSampleDetailsTest.py
|
2
|
20166
|
from __future__ import (absolute_import, division, print_function)
import mantid
import io
import six
import sys
import unittest
from isis_powder.routines import sample_details
from six_shim import assertRaisesRegex, assertRegex
class ISISPowderSampleDetailsTest(unittest.TestCase):
def test_constructor(self):
expected_height = 1.1
expected_radius = 2.2
expected_center = [3.3, 4.4, 5.5]
# Check easiest case
sample_details_obj = sample_details.SampleDetails(height=expected_height, radius=expected_radius,
center=expected_center)
self.assertEqual(sample_details_obj.height(), expected_height)
self.assertEqual(sample_details_obj.radius(), expected_radius)
self.assertEqual(sample_details_obj.center(), expected_center)
        # Check shape type defaults to cylinder
self.assertEqual(sample_details_obj.shape_type(), "cylinder")
# Does it handle ints correctly
height_radius_int = 1
center_int = [2, 3, 4]
sample_details_obj_int = sample_details.SampleDetails(height=height_radius_int, radius=height_radius_int,
center=center_int, shape="cylinder")
self.assertTrue(isinstance(sample_details_obj.height(), float))
self.assertTrue(isinstance(sample_details_obj.radius(), float))
self.assertEqual(sample_details_obj_int.height(), float(height_radius_int))
self.assertEqual(sample_details_obj_int.radius(), float(height_radius_int))
self.assertEqual(sample_details_obj_int.center(), [2.0, 3.0, 4.0])
# Does it handle strings correctly
height_radius_string = "5"
center_string = ["2.0", "3.0", "5.0"]
sample_details_obj_str = sample_details.SampleDetails(height=height_radius_string, radius=height_radius_string,
center=center_string, shape="cylinder")
self.assertTrue(isinstance(sample_details_obj.height(), float))
self.assertTrue(isinstance(sample_details_obj.radius(), float))
self.assertEqual(sample_details_obj_str.height(), float(height_radius_string))
self.assertEqual(sample_details_obj_str.radius(), float(height_radius_string))
self.assertEqual(sample_details_obj_str.center(), [2.0, 3.0, 5.0])
def test_constructor_non_number_input(self):
good_input = 1.0
good_center_input = [1.0, 2.0, 3.0]
empty_input_value = ''
char_input_value = 'a'
# Check it handles empty input
with assertRaisesRegex(self, ValueError, "Could not convert the height to a number"):
sample_details.SampleDetails(height=empty_input_value, radius=good_input,
center=good_center_input, shape="cylinder")
# Does it handle bad input and tell us what we put in
with assertRaisesRegex(self, ValueError, ".*to a number. The input was: '" + char_input_value + "'"):
sample_details.SampleDetails(height=char_input_value, radius=good_input,
center=good_center_input, shape="cylinder")
# Does it indicate which field was incorrect
with assertRaisesRegex(self, ValueError, "radius"):
sample_details.SampleDetails(height=good_input, radius=char_input_value,
center=good_center_input, shape="cylinder")
# Can it handle bad center values
with assertRaisesRegex(self, ValueError, "center"):
sample_details.SampleDetails(height=good_input, radius=good_input, center=["", 2, 3], shape="cylinder")
        # Does it throw if we're not using a list for the input
with assertRaisesRegex(self, ValueError, "must be specified as a list of X, Y, Z"):
sample_details.SampleDetails(height=good_input, radius=good_input, center=1, shape="cylinder")
# Does it throw if we are using a list of incorrect length (e.g. not 3D)
with assertRaisesRegex(self, ValueError, "must have three values corresponding to"):
sample_details.SampleDetails(height=good_input, radius=good_input, center=[], shape="cylinder")
with assertRaisesRegex(self, ValueError, "must have three values corresponding to"):
sample_details.SampleDetails(height=good_input, radius=good_input, center=[1, 2], shape="cylinder")
with assertRaisesRegex(self, ValueError, "must have three values corresponding to"):
sample_details.SampleDetails(height=good_input, radius=good_input, center=[1, 2, 3, 4], shape="cylinder")
def test_constructor_with_impossible_val(self):
good_input = 1
good_center_input = [1, 2, 3]
zero_value = 0
negative_value = -0.0000001
negative_int = -1
negative_string = "-1"
# Check it handles zero
with assertRaisesRegex(self, ValueError, "The value set for height was: 0"):
sample_details.SampleDetails(height=zero_value, radius=good_input,
center=good_center_input, shape="cylinder")
# Very small negative
with assertRaisesRegex(self, ValueError, "which is impossible for a physical object"):
sample_details.SampleDetails(height=good_input, radius=negative_value,
center=good_center_input, shape="cylinder")
# Integer negative
with assertRaisesRegex(self, ValueError, "The value set for height was: -1"):
sample_details.SampleDetails(height=negative_int, radius=good_input,
center=good_center_input, shape="cylinder")
# String negative
with assertRaisesRegex(self, ValueError, "The value set for radius was: -1"):
sample_details.SampleDetails(height=good_input, radius=negative_string,
center=good_center_input, shape="cylinder")
def test_set_material(self):
sample_details_obj = sample_details.SampleDetails(height=1.0, radius=1.0, center=[2, 3, 4], shape="cylinder")
# Check that we can only set a material once. We will test the underlying class elsewhere
sample_details_obj.set_material(chemical_formula='V')
self.assertIsNotNone(sample_details_obj.material_object)
# Check that the material is now immutable
with assertRaisesRegex(self, RuntimeError, "The material has already been set to the above details"):
sample_details_obj.set_material(chemical_formula='V')
# Check resetting it works
sample_details_obj.reset_sample_material()
self.assertIsNone(sample_details_obj.material_object)
# And ensure setting it for a second time works
sample_details_obj.set_material(chemical_formula='V')
self.assertIsNotNone(sample_details_obj.material_object)
def test_set_material_properties(self):
sample_details_obj = sample_details.SampleDetails(height=1.0, radius=1.0, center=[2, 3, 5], shape="cylinder")
self.assertIsNone(sample_details_obj.material_object)
# Check we cannot set a material property without setting the underlying material
with assertRaisesRegex(self, RuntimeError, "The material has not been set"):
sample_details_obj.set_material_properties(absorption_cross_section=1.0, scattering_cross_section=2.0)
# Check that with a material object we are allowed to set material properties
sample_details_obj.set_material(chemical_formula='V')
# We will test the immutability of the underlying object elsewhere
sample_details_obj.set_material_properties(scattering_cross_section=2.0, absorption_cross_section=3.0)
def test_material_constructor(self):
chemical_formula_one_char_element = 'V'
chemical_formula_two_char_element = 'Si'
chemical_formula_complex = 'V Si' # Yes, this isn't a sensible input but for our tests it will do
number_density_sample = 1.234
material_obj_one_char = sample_details._Material(chemical_formula=chemical_formula_one_char_element)
self.assertIsNotNone(material_obj_one_char)
self.assertEqual(material_obj_one_char.chemical_formula, chemical_formula_one_char_element)
self.assertIsNone(material_obj_one_char.number_density)
# Also check that the absorption and scattering X sections have not been set
self.assertIsNone(material_obj_one_char.absorption_cross_section)
self.assertIsNone(material_obj_one_char.scattering_cross_section)
self.assertFalse(material_obj_one_char._is_material_props_set)
# Check if it accepts two character elements without number density
material_obj_two_char = sample_details._Material(chemical_formula=chemical_formula_two_char_element)
self.assertIsNotNone(material_obj_two_char)
self.assertEqual(material_obj_two_char.chemical_formula, chemical_formula_two_char_element)
self.assertIsNone(material_obj_two_char.number_density)
# Check it stores number density if passed
material_obj_number_density = sample_details._Material(chemical_formula=chemical_formula_two_char_element,
number_density=number_density_sample)
self.assertEqual(material_obj_number_density.number_density, number_density_sample)
# Check that it raises an error if we have a non-elemental formula without number density
with assertRaisesRegex(self, ValueError, "A number density formula must be set on a chemical formula"):
sample_details._Material(chemical_formula=chemical_formula_complex)
# Check it constructs if it is given the number density too
material_obj_num_complex_formula = sample_details._Material(chemical_formula=chemical_formula_complex,
number_density=number_density_sample)
self.assertEqual(material_obj_num_complex_formula.chemical_formula, chemical_formula_complex)
self.assertEqual(material_obj_num_complex_formula.number_density, number_density_sample)
def test_material_set_properties(self):
bad_absorb = '-1'
bad_scattering = 0
good_absorb = '1'
good_scattering = 2.0
material_obj = sample_details._Material(chemical_formula='V')
with assertRaisesRegex(self, ValueError, "absorption_cross_section was: -1 which is impossible for a physical "
"object"):
material_obj.set_material_properties(abs_cross_sect=bad_absorb, scattering_cross_sect=good_scattering)
# Check the immutability flag has not been set on a failure
self.assertFalse(material_obj._is_material_props_set)
with assertRaisesRegex(self, ValueError, "scattering_cross_section was: 0"):
material_obj.set_material_properties(abs_cross_sect=good_absorb, scattering_cross_sect=bad_scattering)
# Check nothing has been set yet
self.assertIsNone(material_obj.absorption_cross_section)
self.assertIsNone(material_obj.scattering_cross_section)
# Set the object this time
material_obj.set_material_properties(abs_cross_sect=good_absorb, scattering_cross_sect=good_scattering)
self.assertTrue(material_obj._is_material_props_set)
self.assertEqual(material_obj.absorption_cross_section, float(good_absorb))
self.assertEqual(material_obj.scattering_cross_section, float(good_scattering))
# Check we cannot set it twice and fields do not change
with assertRaisesRegex(self, RuntimeError, "The material properties have already been set"):
material_obj.set_material_properties(abs_cross_sect=999, scattering_cross_sect=999)
self.assertEqual(material_obj.absorption_cross_section, float(good_absorb))
self.assertEqual(material_obj.scattering_cross_section, float(good_scattering))
def test_print_sample_details(self):
expected_height = 1
expected_radius = 2
expected_center = [3, 4, 5]
chemical_formula = 'Si'
chemical_formula_two = 'V'
expected_number_density = 1.2345
old_std_out = sys.stdout
# Wrap in try finally so we always restore std out if any exception is thrown
try:
# Redirect std out to a capture object
std_out_buffer = get_std_out_buffer_obj()
sys.stdout = std_out_buffer
sample_details_obj = sample_details.SampleDetails(height=expected_height, radius=expected_radius,
center=expected_center, shape="cylinder")
# Test with most defaults set
sample_details_obj.print_sample_details()
captured_std_out_default = std_out_buffer.getvalue()
assertRegex(self, captured_std_out_default, "Height: " + str(float(expected_height)))
assertRegex(self, captured_std_out_default, "Radius: " + str(float(expected_radius)))
assertRegex(self, captured_std_out_default, "Center X:" + str(float(expected_center[0])))
assertRegex(self, captured_std_out_default, "Material has not been set")
# Test with material set but not number density
sys.stdout = std_out_buffer = get_std_out_buffer_obj()
sample_details_obj.set_material(chemical_formula=chemical_formula)
sample_details_obj.print_sample_details()
captured_std_out_material_default = std_out_buffer.getvalue()
assertRegex(self, captured_std_out_material_default, "Material properties:")
assertRegex(self, captured_std_out_material_default, "Chemical formula: " + chemical_formula)
assertRegex(self, captured_std_out_material_default, "Number Density: Set from elemental properties")
# Test with material and number density
sys.stdout = std_out_buffer = get_std_out_buffer_obj()
sample_details_obj.reset_sample_material()
sample_details_obj.set_material(chemical_formula=chemical_formula_two,
number_density=expected_number_density)
sample_details_obj.print_sample_details()
captured_std_out_material_set = std_out_buffer.getvalue()
assertRegex(self, captured_std_out_material_set, "Chemical formula: " + chemical_formula_two)
assertRegex(self, captured_std_out_material_set, "Number Density: " + str(expected_number_density))
# Test with no material properties set - we can reuse buffer from previous test
assertRegex(self, captured_std_out_material_default, "Absorption cross section: Calculated by Mantid")
assertRegex(self, captured_std_out_material_default, "Scattering cross section: Calculated by Mantid")
assertRegex(self, captured_std_out_material_default, "Note to manually override these call")
expected_abs_x_section = 2.13
expected_scattering_x_section = 5.32
# Test with material set
sys.stdout = std_out_buffer = get_std_out_buffer_obj()
sample_details_obj.set_material_properties(absorption_cross_section=expected_abs_x_section,
scattering_cross_section=expected_scattering_x_section)
sample_details_obj.print_sample_details()
captured_std_out_material_props = std_out_buffer.getvalue()
assertRegex(self, captured_std_out_material_props, "Absorption cross section: " +
str(expected_abs_x_section))
assertRegex(self, captured_std_out_material_props, "Scattering cross section: " +
str(expected_scattering_x_section))
finally:
# Ensure std IO is restored. Do NOT remove this line as all std out will pipe into our buffer otherwise
sys.stdout = old_std_out
def test_construct_slab(self):
expected_thickness = 2.2
expected_width = 1.0
expected_height = 2.0
expected_center = [1.0, 2.0, 3.0]
expected_angle = 3.0
# Check easiest case
sample_details_obj = sample_details.SampleDetails(thickness=expected_thickness, shape="slab",
height=expected_height, width=expected_width,
center=expected_center, angle=expected_angle)
self.assertEqual(sample_details_obj.thickness(), expected_thickness)
self.assertEqual(sample_details_obj.width(), expected_width)
self.assertEqual(sample_details_obj.height(), expected_height)
self.assertEqual(sample_details_obj.center(), expected_center)
self.assertEqual(sample_details_obj.angle(), expected_angle)
# Does it handle ints correctly
thickness_int = 1
width_int = 2
height_int = 3
center_int = [1, 2, 3]
angle_int = 4
sample_details_obj_int = sample_details.SampleDetails(thickness=thickness_int, shape="slab",
height=height_int, width=width_int, center=center_int,
angle=angle_int)
self.assertTrue(isinstance(sample_details_obj_int.thickness(), float))
self.assertTrue(isinstance(sample_details_obj_int.width(), float))
self.assertTrue(isinstance(sample_details_obj_int.height(), float))
self.assertTrue(isinstance(sample_details_obj_int.center(), list))
self.assertTrue(all(isinstance(p, float) for p in sample_details_obj_int.center()))
self.assertTrue(isinstance(sample_details_obj_int.angle(), float))
self.assertEqual(sample_details_obj_int.thickness(), float(thickness_int))
self.assertEqual(sample_details_obj_int.width(), float(width_int))
self.assertEqual(sample_details_obj_int.height(), float(height_int))
self.assertEqual(sample_details_obj_int.center(), [float(p) for p in center_int])
self.assertEqual(sample_details_obj_int.angle(), float(angle_int))
# Does it handle strings correctly
thickness_string = "5"
width_string = "1"
height_string = "2"
center_string = ["1", "2", "3"]
angle_string = "3"
sample_details_obj_str = sample_details.SampleDetails(thickness=thickness_string, shape="slab",
height=height_string, width=width_string,
center=center_string, angle=angle_string)
self.assertTrue(isinstance(sample_details_obj_str.thickness(), float))
self.assertTrue(isinstance(sample_details_obj_str.width(), float))
self.assertTrue(isinstance(sample_details_obj_str.height(), float))
self.assertTrue(isinstance(sample_details_obj_str.center(), list))
self.assertTrue(all(isinstance(p, float) for p in sample_details_obj_str.center()))
self.assertTrue(isinstance(sample_details_obj_str.angle(), float))
self.assertEqual(sample_details_obj_str.thickness(), float(thickness_string))
self.assertEqual(sample_details_obj_str.width(), float(width_string))
self.assertEqual(sample_details_obj_str.height(), float(height_string))
self.assertEqual(sample_details_obj_str.center(), [float(p) for p in center_string])
self.assertEqual(sample_details_obj_str.angle(), float(angle_string))
def get_std_out_buffer_obj():
# Because of the way that strings and bytes
# have changed between Python 2/3 we need to
# return a buffer which is appropriate to the current version
if six.PY2:
return io.BytesIO()
elif six.PY3:
return io.StringIO()
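# --- Hedged usage sketch (not part of the original test module) ---
# Minimal construction of a cylindrical SampleDetails with a material and
# explicit cross sections, mirroring the behaviour exercised by the tests
# above. All numeric values are illustrative only.
def _sample_details_example():
    details = sample_details.SampleDetails(height=4.0, radius=0.4,
                                           center=[0.0, 0.0, 0.0], shape="cylinder")
    details.set_material(chemical_formula='V', number_density=0.072)
    details.set_material_properties(absorption_cross_section=5.08,
                                    scattering_cross_section=5.1)
    return details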
if __name__ == "__main__":
unittest.main()
|
gpl-3.0
|
nitzmahone/ansible
|
lib/ansible/modules/cloud/digital_ocean/digital_ocean_region_facts.py
|
35
|
2897
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: digital_ocean_region_facts
short_description: Gather facts about DigitalOcean regions
description:
    - This module can be used to gather facts about DigitalOcean regions.
author: "Abhijeet Kasurde (@Akasurde)"
version_added: "2.6"
extends_documentation_fragment: digital_ocean.documentation
requirements:
- "python >= 2.6"
'''
EXAMPLES = '''
- name: Gather facts about all regions
digital_ocean_region_facts:
oauth_token: "{{ oauth_token }}"
- name: Get Name of region where slug is known
digital_ocean_region_facts:
oauth_token: "{{ oauth_token }}"
register: resp_out
- debug: var=resp_out
- set_fact:
region_slug: "{{ item.name }}"
with_items: "{{ resp_out.data|json_query(name) }}"
vars:
name: "[?slug==`nyc1`]"
- debug: var=region_slug
'''
RETURN = '''
data:
description: DigitalOcean regions facts
returned: success
type: list
sample: [
{
"available": true,
"features": [
"private_networking",
"backups",
"ipv6",
"metadata",
"install_agent",
"storage"
],
"name": "New York 1",
"sizes": [
"512mb",
"s-1vcpu-1gb",
"1gb",
"s-3vcpu-1gb",
"s-1vcpu-2gb",
"s-2vcpu-2gb",
"2gb",
"s-1vcpu-3gb",
"s-2vcpu-4gb",
"4gb",
"c-2",
"m-1vcpu-8gb",
"8gb",
"s-4vcpu-8gb",
"s-6vcpu-16gb",
"16gb"
],
"slug": "nyc1"
},
]
'''
from traceback import format_exc
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.digital_ocean import DigitalOceanHelper
from ansible.module_utils._text import to_native
def core(module):
rest = DigitalOceanHelper(module)
base_url = 'regions?'
regions = rest.get_paginated_data(base_url=base_url, data_key_name='regions')
module.exit_json(changed=False, data=regions)
def main():
argument_spec = DigitalOceanHelper.digital_ocean_argument_spec()
module = AnsibleModule(argument_spec=argument_spec)
try:
core(module)
except Exception as e:
module.fail_json(msg=to_native(e), exception=format_exc())
if __name__ == '__main__':
main()
|
gpl-3.0
|
proxysh/Safejumper-for-Desktop
|
buildmac/Resources/env/lib/python2.7/site-packages/twisted/news/database.py
|
10
|
33683
|
# -*- test-case-name: twisted.news.test -*-
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
News server backend implementations.
"""
import getpass, pickle, time, socket
import os
import StringIO
from hashlib import md5
from email.Message import Message
from email.Generator import Generator
from zope.interface import implementer, Interface
from twisted.news.nntp import NNTPError
from twisted.mail import smtp
from twisted.internet import defer
from twisted.enterprise import adbapi
from twisted.persisted import dirdbm
ERR_NOGROUP, ERR_NOARTICLE = range(2, 4) # XXX - put NNTP values here (I guess?)
OVERVIEW_FMT = [
'Subject', 'From', 'Date', 'Message-ID', 'References',
'Bytes', 'Lines', 'Xref'
]
def hexdigest(md5): #XXX: argh. 1.5.2 doesn't have this.
return ''.join(map(lambda x: hex(ord(x))[2:], md5.digest()))
class Article:
def __init__(self, head, body):
self.body = body
self.headers = {}
header = None
for line in head.split('\r\n'):
if line[0] in ' \t':
i = list(self.headers[header])
i[1] += '\r\n' + line
else:
i = line.split(': ', 1)
header = i[0].lower()
self.headers[header] = tuple(i)
if not self.getHeader('Message-ID'):
s = str(time.time()) + self.body
id = hexdigest(md5(s)) + '@' + socket.gethostname()
self.putHeader('Message-ID', '<%s>' % id)
if not self.getHeader('Bytes'):
self.putHeader('Bytes', str(len(self.body)))
if not self.getHeader('Lines'):
self.putHeader('Lines', str(self.body.count('\n')))
if not self.getHeader('Date'):
self.putHeader('Date', time.ctime(time.time()))
def getHeader(self, header):
h = header.lower()
if h in self.headers:
return self.headers[h][1]
else:
return ''
def putHeader(self, header, value):
self.headers[header.lower()] = (header, value)
def textHeaders(self):
headers = []
for i in self.headers.values():
headers.append('%s: %s' % i)
return '\r\n'.join(headers) + '\r\n'
def overview(self):
xover = []
for i in OVERVIEW_FMT:
xover.append(self.getHeader(i))
return xover
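# --- Hedged usage sketch (not part of the original module) ---
# Building an Article from a raw head/body pair and reading back the headers
# it synthesises (Message-ID, Bytes, Lines, Date). Values are illustrative.
def _article_demo():
    head = ('From: [email protected]\r\n'
            'Newsgroups: alt.test\r\n'
            'Subject: demo')
    body = 'Hello, world.\n'
    article = Article(head, body)
    # getHeader() is case-insensitive; overview() returns OVERVIEW_FMT order.
    return article.getHeader('subject'), article.overview()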
class NewsServerError(Exception):
pass
class INewsStorage(Interface):
"""
An interface for storing and requesting news articles
"""
def listRequest():
"""
Returns a deferred whose callback will be passed a list of 4-tuples
containing (name, max index, min index, flags) for each news group
"""
def subscriptionRequest():
"""
Returns a deferred whose callback will be passed the list of
recommended subscription groups for new server users
"""
def postRequest(message):
"""
Returns a deferred whose callback will be invoked if 'message'
is successfully posted to one or more specified groups and
whose errback will be invoked otherwise.
"""
def overviewRequest():
"""
        Returns a deferred whose callback will be passed a list of
headers describing this server's overview format.
"""
def xoverRequest(group, low, high):
"""
Returns a deferred whose callback will be passed a list of xover
headers for the given group over the given range. If low is None,
the range starts at the first article. If high is None, the range
ends at the last article.
"""
def xhdrRequest(group, low, high, header):
"""
Returns a deferred whose callback will be passed a list of XHDR data
for the given group over the given range. If low is None,
the range starts at the first article. If high is None, the range
ends at the last article.
"""
def listGroupRequest(group):
"""
Returns a deferred whose callback will be passed a two-tuple of
(group name, [article indices])
"""
def groupRequest(group):
"""
Returns a deferred whose callback will be passed a five-tuple of
(group name, article count, highest index, lowest index, group flags)
"""
def articleExistsRequest(id):
"""
Returns a deferred whose callback will be passed with a true value
if a message with the specified Message-ID exists in the database
and with a false value otherwise.
"""
def articleRequest(group, index, id = None):
"""
Returns a deferred whose callback will be passed a file-like object
containing the full article text (headers and body) for the article
of the specified index in the specified group, and whose errback
will be invoked if the article or group does not exist. If id is
not None, index is ignored and the article with the given Message-ID
will be returned instead, along with its index in the specified
group.
"""
def headRequest(group, index):
"""
Returns a deferred whose callback will be passed the header for
the article of the specified index in the specified group, and
whose errback will be invoked if the article or group does not
exist.
"""
def bodyRequest(group, index):
"""
Returns a deferred whose callback will be passed the body for
the article of the specified index in the specified group, and
whose errback will be invoked if the article or group does not
exist.
"""
class NewsStorage:
"""
Backwards compatibility class -- There is no reason to inherit from this,
just implement INewsStorage instead.
"""
def listRequest(self):
raise NotImplementedError()
def subscriptionRequest(self):
raise NotImplementedError()
def postRequest(self, message):
raise NotImplementedError()
def overviewRequest(self):
return defer.succeed(OVERVIEW_FMT)
def xoverRequest(self, group, low, high):
raise NotImplementedError()
def xhdrRequest(self, group, low, high, header):
raise NotImplementedError()
def listGroupRequest(self, group):
raise NotImplementedError()
def groupRequest(self, group):
raise NotImplementedError()
def articleExistsRequest(self, id):
raise NotImplementedError()
def articleRequest(self, group, index, id = None):
raise NotImplementedError()
def headRequest(self, group, index):
raise NotImplementedError()
def bodyRequest(self, group, index):
raise NotImplementedError()
class _ModerationMixin:
"""
Storage implementations can inherit from this class to get the easy-to-use
C{notifyModerators} method which will take care of sending messages which
require moderation to a list of moderators.
"""
sendmail = staticmethod(smtp.sendmail)
def notifyModerators(self, moderators, article):
"""
Send an article to a list of group moderators to be moderated.
@param moderators: A C{list} of C{str} giving RFC 2821 addresses of
group moderators to notify.
@param article: The article requiring moderation.
@type article: L{Article}
@return: A L{Deferred} which fires with the result of sending the email.
"""
# Moderated postings go through as long as they have an Approved
# header, regardless of what the value is
group = article.getHeader('Newsgroups')
subject = article.getHeader('Subject')
if self._sender is None:
# This case should really go away. This isn't a good default.
sender = 'twisted-news@' + socket.gethostname()
else:
sender = self._sender
msg = Message()
msg['Message-ID'] = smtp.messageid()
msg['From'] = sender
msg['To'] = ', '.join(moderators)
msg['Subject'] = 'Moderate new %s message: %s' % (group, subject)
msg['Content-Type'] = 'message/rfc822'
payload = Message()
for header, value in article.headers.values():
payload.add_header(header, value)
payload.set_payload(article.body)
msg.attach(payload)
out = StringIO.StringIO()
gen = Generator(out, False)
gen.flatten(msg)
msg = out.getvalue()
return self.sendmail(self._mailhost, sender, moderators, msg)
@implementer(INewsStorage)
class PickleStorage(_ModerationMixin):
"""
A trivial NewsStorage implementation using pickles
Contains numerous flaws and is generally unsuitable for any
real applications. Consider yourself warned!
"""
sharedDBs = {}
def __init__(self, filename, groups=None, moderators=(),
mailhost=None, sender=None):
"""
@param mailhost: A C{str} giving the mail exchange host which will
accept moderation emails from this server. Must accept emails
destined for any address specified as a moderator.
@param sender: A C{str} giving the address which will be used as the
sender of any moderation email generated by this server.
"""
self.datafile = filename
self.load(filename, groups, moderators)
self._mailhost = mailhost
self._sender = sender
def getModerators(self, groups):
# first see if any groups are moderated. if so, nothing gets posted,
        # but the whole message gets forwarded to the moderator address
moderators = []
for group in groups:
moderators.extend(self.db['moderators'].get(group, None))
return filter(None, moderators)
def listRequest(self):
"Returns a list of 4-tuples: (name, max index, min index, flags)"
l = self.db['groups']
r = []
for i in l:
if len(self.db[i].keys()):
low = min(self.db[i].keys())
high = max(self.db[i].keys()) + 1
else:
low = high = 0
if i in self.db['moderators']:
flags = 'm'
else:
flags = 'y'
r.append((i, high, low, flags))
return defer.succeed(r)
def subscriptionRequest(self):
return defer.succeed(['alt.test'])
def postRequest(self, message):
cleave = message.find('\r\n\r\n')
headers, article = message[:cleave], message[cleave + 4:]
a = Article(headers, article)
groups = a.getHeader('Newsgroups').split()
xref = []
# Check moderated status
moderators = self.getModerators(groups)
if moderators and not a.getHeader('Approved'):
return self.notifyModerators(moderators, a)
for group in groups:
if group in self.db:
if len(self.db[group].keys()):
index = max(self.db[group].keys()) + 1
else:
index = 1
xref.append((group, str(index)))
self.db[group][index] = a
if len(xref) == 0:
return defer.fail(None)
a.putHeader('Xref', '%s %s' % (
socket.gethostname().split()[0],
''.join(map(lambda x: ':'.join(x), xref))
))
self.flush()
return defer.succeed(None)
def overviewRequest(self):
return defer.succeed(OVERVIEW_FMT)
def xoverRequest(self, group, low, high):
if group not in self.db:
return defer.succeed([])
r = []
for i in self.db[group].keys():
if (low is None or i >= low) and (high is None or i <= high):
r.append([str(i)] + self.db[group][i].overview())
return defer.succeed(r)
def xhdrRequest(self, group, low, high, header):
if group not in self.db:
return defer.succeed([])
r = []
for i in self.db[group].keys():
if low is None or i >= low and high is None or i <= high:
r.append((i, self.db[group][i].getHeader(header)))
return defer.succeed(r)
def listGroupRequest(self, group):
if group in self.db:
return defer.succeed((group, self.db[group].keys()))
else:
return defer.fail(None)
def groupRequest(self, group):
if group in self.db:
if len(self.db[group].keys()):
num = len(self.db[group].keys())
low = min(self.db[group].keys())
high = max(self.db[group].keys())
else:
num = low = high = 0
flags = 'y'
return defer.succeed((group, num, high, low, flags))
else:
return defer.fail(ERR_NOGROUP)
def articleExistsRequest(self, id):
for group in self.db['groups']:
for a in self.db[group].values():
if a.getHeader('Message-ID') == id:
return defer.succeed(1)
return defer.succeed(0)
def articleRequest(self, group, index, id = None):
if id is not None:
raise NotImplementedError
if group in self.db:
if index in self.db[group]:
a = self.db[group][index]
return defer.succeed((
index,
a.getHeader('Message-ID'),
StringIO.StringIO(a.textHeaders() + '\r\n' + a.body)
))
else:
return defer.fail(ERR_NOARTICLE)
else:
return defer.fail(ERR_NOGROUP)
def headRequest(self, group, index):
if group in self.db:
if index in self.db[group]:
a = self.db[group][index]
return defer.succeed((index, a.getHeader('Message-ID'), a.textHeaders()))
else:
return defer.fail(ERR_NOARTICLE)
else:
return defer.fail(ERR_NOGROUP)
def bodyRequest(self, group, index):
if group in self.db:
if index in self.db[group]:
a = self.db[group][index]
return defer.succeed((index, a.getHeader('Message-ID'), StringIO.StringIO(a.body)))
else:
return defer.fail(ERR_NOARTICLE)
else:
return defer.fail(ERR_NOGROUP)
def flush(self):
with open(self.datafile, 'w') as f:
pickle.dump(self.db, f)
def load(self, filename, groups = None, moderators = ()):
if filename in PickleStorage.sharedDBs:
self.db = PickleStorage.sharedDBs[filename]
else:
try:
with open(filename) as f:
self.db = pickle.load(f)
PickleStorage.sharedDBs[filename] = self.db
except IOError:
self.db = PickleStorage.sharedDBs[filename] = {}
self.db['groups'] = groups
if groups is not None:
for i in groups:
self.db[i] = {}
self.db['moderators'] = dict(moderators)
self.flush()
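# --- Hedged usage sketch (not part of the original module) ---
# Creating a PickleStorage backed by a throwaway file and posting one article
# to an unmoderated group. The path, group and message are illustrative only;
# the empty moderator list keeps getModerators() from consulting missing keys.
def _pickle_storage_demo():
    storage = PickleStorage('/tmp/news-demo.pickle', groups=['alt.test'],
                            moderators=[('alt.test', [])])
    message = ('From: [email protected]\r\n'
               'Newsgroups: alt.test\r\n'
               'Subject: demo\r\n'
               '\r\n'
               'Hello, world.\r\n')
    # Fires with None once the article is stored and the pickle is flushed.
    return storage.postRequest(message)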
class Group:
name = None
flags = ''
minArticle = 1
maxArticle = 0
articles = None
def __init__(self, name, flags = 'y'):
self.name = name
self.flags = flags
self.articles = {}
@implementer(INewsStorage)
class NewsShelf(_ModerationMixin):
"""
    A NewsStorage implementation using Twisted's dirdbm persistence module.
"""
def __init__(self, mailhost, path, sender=None):
"""
@param mailhost: A C{str} giving the mail exchange host which will
accept moderation emails from this server. Must accept emails
destined for any address specified as a moderator.
@param sender: A C{str} giving the address which will be used as the
sender of any moderation email generated by this server.
"""
self.path = path
self._mailhost = self.mailhost = mailhost
self._sender = sender
if not os.path.exists(path):
os.mkdir(path)
self.dbm = dirdbm.Shelf(os.path.join(path, "newsshelf"))
if not len(self.dbm.keys()):
self.initialize()
def initialize(self):
# A dictionary of group name/Group instance items
self.dbm['groups'] = dirdbm.Shelf(os.path.join(self.path, 'groups'))
# A dictionary of group name/email address
self.dbm['moderators'] = dirdbm.Shelf(os.path.join(self.path, 'moderators'))
# A list of group names
self.dbm['subscriptions'] = []
# A dictionary of MessageID strings/xref lists
self.dbm['Message-IDs'] = dirdbm.Shelf(os.path.join(self.path, 'Message-IDs'))
def addGroup(self, name, flags):
self.dbm['groups'][name] = Group(name, flags)
def addSubscription(self, name):
self.dbm['subscriptions'] = self.dbm['subscriptions'] + [name]
def addModerator(self, group, email):
self.dbm['moderators'][group] = email
def listRequest(self):
result = []
for g in self.dbm['groups'].values():
result.append((g.name, g.maxArticle, g.minArticle, g.flags))
return defer.succeed(result)
def subscriptionRequest(self):
return defer.succeed(self.dbm['subscriptions'])
def getModerator(self, groups):
# first see if any groups are moderated. if so, nothing gets posted,
        # but the whole message gets forwarded to the moderator address
for group in groups:
try:
return self.dbm['moderators'][group]
except KeyError:
pass
return None
def notifyModerator(self, moderator, article):
"""
Notify a single moderator about an article requiring moderation.
C{notifyModerators} should be preferred.
"""
return self.notifyModerators([moderator], article)
def postRequest(self, message):
cleave = message.find('\r\n\r\n')
headers, article = message[:cleave], message[cleave + 4:]
article = Article(headers, article)
groups = article.getHeader('Newsgroups').split()
xref = []
# Check for moderated status
moderator = self.getModerator(groups)
if moderator and not article.getHeader('Approved'):
return self.notifyModerators([moderator], article)
for group in groups:
try:
g = self.dbm['groups'][group]
except KeyError:
pass
else:
index = g.maxArticle + 1
g.maxArticle += 1
g.articles[index] = article
xref.append((group, str(index)))
self.dbm['groups'][group] = g
if not xref:
return defer.fail(NewsServerError("No groups carried: " + ' '.join(groups)))
article.putHeader('Xref', '%s %s' % (socket.gethostname().split()[0], ' '.join(map(lambda x: ':'.join(x), xref))))
self.dbm['Message-IDs'][article.getHeader('Message-ID')] = xref
return defer.succeed(None)
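# Editorial sketch (hypothetical values): for a message crossposted to two
# carried groups on a host named "newshost", the header added above would be:
#   Xref: newshost alt.test:12 misc.test:5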
def overviewRequest(self):
return defer.succeed(OVERVIEW_FMT)
def xoverRequest(self, group, low, high):
if group not in self.dbm['groups']:
return defer.succeed([])
if low is None:
low = 0
if high is None:
high = self.dbm['groups'][group].maxArticle
r = []
for i in range(low, high + 1):
if i in self.dbm['groups'][group].articles:
r.append([str(i)] + self.dbm['groups'][group].articles[i].overview())
return defer.succeed(r)
def xhdrRequest(self, group, low, high, header):
if group not in self.dbm['groups']:
return defer.succeed([])
if low is None:
low = 0
if high is None:
high = self.dbm['groups'][group].maxArticle
r = []
for i in range(low, high + 1):
if i in self.dbm['groups'][group].articles:
r.append((i, self.dbm['groups'][group].articles[i].getHeader(header)))
return defer.succeed(r)
def listGroupRequest(self, group):
if group in self.dbm['groups']:
return defer.succeed((group, self.dbm['groups'][group].articles.keys()))
return defer.fail(NewsServerError("No such group: " + group))
def groupRequest(self, group):
try:
g = self.dbm['groups'][group]
except KeyError:
return defer.fail(NewsServerError("No such group: " + group))
else:
flags = g.flags
low = g.minArticle
high = g.maxArticle
num = high - low + 1
return defer.succeed((group, num, high, low, flags))
def articleExistsRequest(self, id):
return defer.succeed(id in self.dbm['Message-IDs'])
def articleRequest(self, group, index, id = None):
if id is not None:
try:
xref = self.dbm['Message-IDs'][id]
except KeyError:
return defer.fail(NewsServerError("No such article: " + id))
else:
group, index = xref[0]
index = int(index)
try:
a = self.dbm['groups'][group].articles[index]
except KeyError:
return defer.fail(NewsServerError("No such group: " + group))
else:
return defer.succeed((
index,
a.getHeader('Message-ID'),
StringIO.StringIO(a.textHeaders() + '\r\n' + a.body)
))
def headRequest(self, group, index, id = None):
if id is not None:
try:
xref = self.dbm['Message-IDs'][id]
except KeyError:
return defer.fail(NewsServerError("No such article: " + id))
else:
group, index = xref[0]
index = int(index)
try:
a = self.dbm['groups'][group].articles[index]
except KeyError:
return defer.fail(NewsServerError("No such group: " + group))
else:
return defer.succeed((index, a.getHeader('Message-ID'), a.textHeaders()))
def bodyRequest(self, group, index, id = None):
if id is not None:
try:
xref = self.dbm['Message-IDs'][id]
except KeyError:
return defer.fail(NewsServerError("No such article: " + id))
else:
group, index = xref[0]
index = int(index)
try:
a = self.dbm['groups'][group].articles[index]
except KeyError:
return defer.fail(NewsServerError("No such group: " + group))
else:
return defer.succeed((index, a.getHeader('Message-ID'), StringIO.StringIO(a.body)))
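# Editorial sketch of typical NewsShelf usage (host, path and addresses are
# assumptions, not taken from this module):
#   shelf = NewsShelf('mail.example.com', '/tmp/newsshelf', sender='news@example.com')
#   shelf.addGroup('comp.lang.python', 'y')
#   shelf.addModerator('comp.lang.python', 'moderator@example.com')
#   shelf.listRequest().addCallback(printGroups)  # hypothetical callback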
@implementer(INewsStorage)
class NewsStorageAugmentation:
"""
A NewsStorage implementation using Twisted's asynchronous DB-API
"""
schema = """
CREATE TABLE groups (
group_id SERIAL,
name VARCHAR(80) NOT NULL,
flags INTEGER DEFAULT 0 NOT NULL
);
CREATE UNIQUE INDEX group_id_index ON groups (group_id);
CREATE UNIQUE INDEX name_id_index ON groups (name);
CREATE TABLE articles (
article_id SERIAL,
message_id TEXT,
header TEXT,
body TEXT
);
CREATE UNIQUE INDEX article_id_index ON articles (article_id);
CREATE UNIQUE INDEX article_message_index ON articles (message_id);
CREATE TABLE postings (
group_id INTEGER,
article_id INTEGER,
article_index INTEGER NOT NULL
);
CREATE UNIQUE INDEX posting_article_index ON postings (article_id);
CREATE TABLE subscriptions (
group_id INTEGER
);
CREATE TABLE overview (
header TEXT
);
"""
def __init__(self, info):
self.info = info
self.dbpool = adbapi.ConnectionPool(**self.info)
def __setstate__(self, state):
self.__dict__ = state
self.info['password'] = getpass.getpass('Database password for %s: ' % (self.info['user'],))
self.dbpool = adbapi.ConnectionPool(**self.info)
del self.info['password']
def listRequest(self):
# COALESCE may not be totally portable
# it is shorthand for
# CASE WHEN (first parameter) IS NOT NULL then (first parameter) ELSE (second parameter) END
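# For example, COALESCE(MAX(postings.article_index), 0) behaves like:
#   CASE WHEN MAX(postings.article_index) IS NOT NULL
#        THEN MAX(postings.article_index) ELSE 0 END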
sql = """
SELECT groups.name,
COALESCE(MAX(postings.article_index), 0),
COALESCE(MIN(postings.article_index), 0),
groups.flags
FROM groups LEFT OUTER JOIN postings
ON postings.group_id = groups.group_id
GROUP BY groups.name, groups.flags
ORDER BY groups.name
"""
return self.dbpool.runQuery(sql)
def subscriptionRequest(self):
sql = """
SELECT groups.name FROM groups,subscriptions WHERE groups.group_id = subscriptions.group_id
"""
return self.dbpool.runQuery(sql)
def postRequest(self, message):
cleave = message.find('\r\n\r\n')
headers, article = message[:cleave], message[cleave + 4:]
article = Article(headers, article)
return self.dbpool.runInteraction(self._doPost, article)
def _doPost(self, transaction, article):
# Get the group ids
groups = article.getHeader('Newsgroups').split()
if not len(groups):
raise NNTPError('Missing Newsgroups header')
sql = """
SELECT name, group_id FROM groups
WHERE name IN (%s)
""" % (', '.join([("'%s'" % (adbapi.safe(group),)) for group in groups]),)
transaction.execute(sql)
result = transaction.fetchall()
# No relevant groups, bye bye!
if not len(result):
raise NNTPError('None of the groups in the Newsgroups header are carried')
# Got some groups, now find the indices this article will have in each
sql = """
SELECT groups.group_id, COALESCE(MAX(postings.article_index), 0) + 1
FROM groups LEFT OUTER JOIN postings
ON postings.group_id = groups.group_id
WHERE groups.group_id IN (%s)
GROUP BY groups.group_id
""" % (', '.join([("%d" % (id,)) for (group, id) in result]),)
transaction.execute(sql)
indices = transaction.fetchall()
if not len(indices):
raise NNTPError('Internal server error - no indices found')
# Associate indices with group names
gidToName = dict([(b, a) for (a, b) in result])
gidToIndex = dict(indices)
nameIndex = []
for i in gidToName:
nameIndex.append((gidToName[i], gidToIndex[i]))
# Build xrefs
xrefs = socket.gethostname().split()[0]
xrefs = xrefs + ' ' + ' '.join([('%s:%d' % (group, id)) for (group, id) in nameIndex])
article.putHeader('Xref', xrefs)
# The article is finally ready to be posted.
sql = """
INSERT INTO articles (message_id, header, body)
VALUES ('%s', '%s', '%s')
""" % (
adbapi.safe(article.getHeader('Message-ID')),
adbapi.safe(article.textHeaders()),
adbapi.safe(article.body)
)
transaction.execute(sql)
# Now update the posting to reflect the groups to which this belongs
for gid in gidToName:
sql = """
INSERT INTO postings (group_id, article_id, article_index)
VALUES (%d, (SELECT last_value FROM articles_article_id_seq), %d)
""" % (gid, gidToIndex[gid])
transaction.execute(sql)
return len(nameIndex)
def overviewRequest(self):
sql = """
SELECT header FROM overview
"""
return self.dbpool.runQuery(sql).addCallback(lambda result: [header[0] for header in result])
def xoverRequest(self, group, low, high):
sql = """
SELECT postings.article_index, articles.header
FROM articles,postings,groups
WHERE postings.group_id = groups.group_id
AND groups.name = '%s'
AND postings.article_id = articles.article_id
%s
%s
""" % (
adbapi.safe(group),
low is not None and "AND postings.article_index >= %d" % (low,) or "",
high is not None and "AND postings.article_index <= %d" % (high,) or ""
)
return self.dbpool.runQuery(sql).addCallback(
lambda results: [
[id] + Article(header, None).overview() for (id, header) in results
]
)
def xhdrRequest(self, group, low, high, header):
sql = """
SELECT articles.header
FROM groups,postings,articles
WHERE groups.name = '%s' AND postings.group_id = groups.group_id
AND postings.article_index >= %d
AND postings.article_index <= %d
""" % (adbapi.safe(group), low, high)
return self.dbpool.runQuery(sql).addCallback(
lambda results: [
(i, Article(h, None).getHeader(h)) for (i, h) in results
]
)
def listGroupRequest(self, group):
sql = """
SELECT postings.article_index FROM postings,groups
WHERE postings.group_id = groups.group_id
AND groups.name = '%s'
""" % (adbapi.safe(group),)
return self.dbpool.runQuery(sql).addCallback(
lambda results, group = group: (group, [res[0] for res in results])
)
def groupRequest(self, group):
sql = """
SELECT groups.name,
COUNT(postings.article_index),
COALESCE(MAX(postings.article_index), 0),
COALESCE(MIN(postings.article_index), 0),
groups.flags
FROM groups LEFT OUTER JOIN postings
ON postings.group_id = groups.group_id
WHERE groups.name = '%s'
GROUP BY groups.name, groups.flags
""" % (adbapi.safe(group),)
return self.dbpool.runQuery(sql).addCallback(
lambda results: tuple(results[0])
)
def articleExistsRequest(self, id):
sql = """
SELECT COUNT(message_id) FROM articles
WHERE message_id = '%s'
""" % (adbapi.safe(id),)
return self.dbpool.runQuery(sql).addCallback(
lambda result: bool(result[0][0])
)
def articleRequest(self, group, index, id = None):
if id is not None:
sql = """
SELECT postings.article_index, articles.message_id, articles.header, articles.body
FROM groups,postings LEFT OUTER JOIN articles
ON articles.message_id = '%s'
WHERE groups.name = '%s'
AND groups.group_id = postings.group_id
""" % (adbapi.safe(id), adbapi.safe(group))
else:
sql = """
SELECT postings.article_index, articles.message_id, articles.header, articles.body
FROM groups,articles LEFT OUTER JOIN postings
ON postings.article_id = articles.article_id
WHERE postings.article_index = %d
AND postings.group_id = groups.group_id
AND groups.name = '%s'
""" % (index, adbapi.safe(group))
return self.dbpool.runQuery(sql).addCallback(
lambda result: (
result[0][0],
result[0][1],
StringIO.StringIO(result[0][2] + '\r\n' + result[0][3])
)
)
def headRequest(self, group, index):
sql = """
SELECT postings.article_index, articles.message_id, articles.header
FROM groups,articles LEFT OUTER JOIN postings
ON postings.article_id = articles.article_id
WHERE postings.article_index = %d
AND postings.group_id = groups.group_id
AND groups.name = '%s'
""" % (index, adbapi.safe(group))
return self.dbpool.runQuery(sql).addCallback(lambda result: result[0])
def bodyRequest(self, group, index):
sql = """
SELECT postings.article_index, articles.message_id, articles.body
FROM groups,articles LEFT OUTER JOIN postings
ON postings.article_id = articles.article_id
WHERE postings.article_index = %d
AND postings.group_id = groups.group_id
AND groups.name = '%s'
""" % (index, adbapi.safe(group))
return self.dbpool.runQuery(sql).addCallback(
lambda result: result[0]
).addCallback(
# result is a tuple of (index, id, body)
lambda result: (result[0], result[1], StringIO.StringIO(result[2]))
)
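# Editorial sketch of constructing this storage (the connection parameters are
# assumptions; only 'user' and 'password' are referenced by __setstate__ above):
#   storage = NewsStorageAugmentation({
#       'dbapiName': 'psycopg2',   # hypothetical DB-API module name
#       'user': 'news', 'password': 'secret', 'database': 'news',
#   })
#   storage.groupRequest('comp.lang.python').addCallback(handleGroupTuple)  # hypothetical callback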
####
#### XXX - make these static methods some day
####
def makeGroupSQL(groups):
res = ''
for g in groups:
res = res + """\n INSERT INTO groups (name) VALUES ('%s');\n""" % (adbapi.safe(g),)
return res
def makeOverviewSQL():
res = ''
for o in OVERVIEW_FMT:
res = res + """\n INSERT INTO overview (header) VALUES ('%s');\n""" % (adbapi.safe(o),)
return res
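# Editorial sketch: makeGroupSQL(['alt.test']) returns a string containing
#   INSERT INTO groups (name) VALUES ('alt.test');
# and makeOverviewSQL() emits one such INSERT into 'overview' per OVERVIEW_FMT header.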
|
gpl-2.0
|
mvesper/invenio
|
modules/websearch/lib/websearch_external_collections_unit_tests.py
|
3
|
4111
|
# -*- coding: utf-8 -*-
# This file is part of Invenio.
# Copyright (C) 2006, 2007, 2008, 2010, 2011, 2013 CERN.
#
# Invenio is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of the
# License, or (at your option) any later version.
#
# Invenio is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Invenio; if not, write to the Free Software Foundation, Inc.,
# 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""Testing functions for the external collections search.
More tests of the page getter module can be done with
websearch_external_collections_getter_tests.py
"""
__revision__ = "$Id$"
from invenio.testutils import InvenioTestCase
from invenio.websearch_external_collections_searcher import external_collections_dictionary
from invenio.websearch_external_collections_getter import HTTPAsyncPageGetter, async_download
from invenio.testutils import make_test_suite, run_test_suite, nottest
def download_and_parse():
"""Try to make a query that always return results on all search engines.
Check that a page is well returned and that the result can be parsed.
This test is not included in the general test suite.
This test give false positive if any of the external server is non working or too slow.
"""
test = [['+', 'ieee', '', 'w']]
errors = []
external_collections = external_collections_dictionary.values()
urls = [engine.build_search_url(test) for engine in external_collections]
pagegetters = [HTTPAsyncPageGetter(url) for url in urls]
dummy = async_download(pagegetters, None, None, 30)
for (page, engine, url) in zip(pagegetters, external_collections, urls):
if not url:
errors.append("Unable to build url for : " + engine.name)
continue
if len(page.data) == 0:
errors.append("Zero sized page with : " + engine.name)
continue
if engine.parser:
results = engine.parser.parse_and_get_results(page.data)
num_results = engine.parser.parse_num_results()
if len(results) == 0:
errors.append("Unable to parse results for : " + engine.name)
continue
if not num_results:
errors.append("Unable to parse (None returned) number of results for : " + engine.name)
try:
num_results = int(num_results)
except (TypeError, ValueError):
errors.append("Unable to parse (not a number) number of results for : " + engine.name)
return errors
@nottest
def build_search_urls_test():
"""Build some classical urls from basic_search_units."""
print "Testing external_search_engines build_search_url functions."
tests = [ [['+', 'ellis', 'author', 'w'], ['+', 'unification', 'title', 'w'],
['-', 'Ross', 'author', 'w'], ['+', 'large', '', 'w'], ['-', 'helloworld', '', 'w']],
[['+', 'ellis', 'author', 'w'], ['+', 'unification', 'title', 'w']],
[['+', 'ellis', 'author', 'w']],
[['-', 'Ross', 'author', 'w']] ]
for engine in external_collections_dictionary.values():
print engine.name
for test in tests:
url = engine.build_search_url(test)
print " Url: " + str(url)
class ExtCollTests(InvenioTestCase):
"""Test cases for websearch_external_collections_*"""
@nottest
def test_download_and_parse(self):
"""websearch_external_collections - download_and_parse (not reliable, see docstring)"""
self.assertEqual([], download_and_parse())
# FIXME: the above tests are not plugged into the global unit test suite
TEST_SUITE = make_test_suite() #ExtCollTests,)
if __name__ == "__main__":
build_search_urls_test()
run_test_suite(TEST_SUITE)
|
gpl-2.0
|
flyher/pymo
|
symbian/PythonForS60_1.9.6/module-repo/standard-modules/encodings/iso2022_kr.py
|
816
|
1053
|
#
# iso2022_kr.py: Python Unicode Codec for ISO2022_KR
#
# Written by Hye-Shik Chang <[email protected]>
#
import _codecs_iso2022, codecs
import _multibytecodec as mbc
codec = _codecs_iso2022.getcodec('iso2022_kr')
class Codec(codecs.Codec):
encode = codec.encode
decode = codec.decode
class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
codecs.IncrementalEncoder):
codec = codec
class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
codecs.IncrementalDecoder):
codec = codec
class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
codec = codec
class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
codec = codec
def getregentry():
return codecs.CodecInfo(
name='iso2022_kr',
encode=Codec().encode,
decode=Codec().decode,
incrementalencoder=IncrementalEncoder,
incrementaldecoder=IncrementalDecoder,
streamreader=StreamReader,
streamwriter=StreamWriter,
)
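# Editorial sketch: once this module is found via the standard 'encodings'
# package lookup, the codec is used implicitly (Python 2 syntax, since this
# file targets Python for S60):
#   u'\uc548\ub155'.encode('iso2022_kr')
#   'plain ascii bytes'.decode('iso2022_kr')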
|
mit
|
Rogentos/rogentos-anaconda
|
booty/ia64.py
|
3
|
1456
|
from booty import BootyNoKernelWarning
from bootloaderInfo import *
class ia64BootloaderInfo(efiBootloaderInfo):
def getBootloaderConfig(self, instRoot, bl, kernelList,
chainList, defaultDev):
config = bootloaderInfo.getBootloaderConfig(self, instRoot,
bl, kernelList, chainList,
defaultDev)
# altix boxes need relocatable (#120851)
config.addEntry("relocatable")
return config
def writeLilo(self, instRoot, bl, kernelList,
chainList, defaultDev):
config = self.getBootloaderConfig(instRoot, bl,
kernelList, chainList, defaultDev)
return config.write(instRoot + self.configfile, perms = 0755)
def write(self, instRoot, bl, kernelList, chainList, defaultDev):
if len(kernelList) >= 1:
rc = self.writeLilo(instRoot, bl, kernelList,
chainList, defaultDev)
if rc:
return rc
else:
raise BootyNoKernelWarning
rc = self.removeOldEfiEntries(instRoot)
if rc:
return rc
return self.addNewEfiEntry(instRoot)
def __init__(self, anaconda):
efiBootloaderInfo.__init__(self, anaconda)
self._configname = "elilo.conf"
self._bootloader = "elilo.efi"
|
gpl-2.0
|
ZhenxingWu/luigi
|
luigi/server.py
|
24
|
10444
|
# -*- coding: utf-8 -*-
#
# Copyright 2012-2015 Spotify AB
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Simple REST server that takes commands in a JSON payload
Interface to the :py:class:`~luigi.scheduler.CentralPlannerScheduler` class.
See :doc:`/central_scheduler` for more info.
"""
#
# Description: Added code to visualize how long each task runs
# before it reaches its next status (failed or done).
# At "{base_url}/tasklist", all completed (failed or done) tasks are shown,
# and a user can select one specific task to see how its running time
# has changed over time.
# At "{base_url}/tasklist/{task_name}", a multi-bar graph visualizes
# how the running time of the selected task, up to the next status
# (failed or done), has changed over time.
#
# Copyright 2015 Naver Corp.
# Author Yeseul Park ([email protected])
#
import atexit
import json
import logging
import mimetypes
import os
import posixpath
import signal
import sys
import datetime
import time
import pkg_resources
import tornado.httpclient
import tornado.httpserver
import tornado.ioloop
import tornado.netutil
import tornado.web
from luigi import configuration
from luigi.scheduler import CentralPlannerScheduler
logger = logging.getLogger("luigi.server")
class RPCHandler(tornado.web.RequestHandler):
"""
Handle remote scheduling calls using rpc.RemoteSchedulerResponder.
"""
def initialize(self, scheduler):
self._scheduler = scheduler
def get(self, method):
payload = self.get_argument('data', default="{}")
arguments = json.loads(payload)
# TODO: we should probably denote all methods on the scheduler that are "API-level"
# versus internal methods. Right now you can do a REST method call to any method
# defined on the scheduler, which is pretty bad from a security point of view.
if hasattr(self._scheduler, method):
result = getattr(self._scheduler, method)(**arguments)
self.write({"response": result}) # wrap all json response in a dictionary
else:
self.send_error(404)
post = get
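# Editorial sketch: a request such as
#   GET /api/task_list?data={"status":"PENDING","upstream_status":""}
# (with the JSON URL-encoded) would be dispatched above as
#   scheduler.task_list(status='PENDING', upstream_status='')
# and the result returned wrapped as {"response": ...}. ('task_list' is an
# assumed scheduler method name, used only for illustration.)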
class BaseTaskHistoryHandler(tornado.web.RequestHandler):
def initialize(self, scheduler):
self._scheduler = scheduler
def get_template_path(self):
return pkg_resources.resource_filename(__name__, 'templates')
class AllRunHandler(BaseTaskHistoryHandler):
def get(self):
all_tasks = self._scheduler.task_history.find_all_runs()
tasknames = []
for task in all_tasks:
tasknames.append(task.name)
# show the names of all tasks so that one can be selected
# why all tasks? the duration of the event history of a selected task
# can be more than 24 hours.
self.render("menu.html", tasknames=tasknames)
class SelectedRunHandler(BaseTaskHistoryHandler):
def get(self, name):
tasks = {}
statusResults = {}
taskResults = []
# get all tasks that have been updated
all_tasks = self._scheduler.task_history.find_all_runs()
# get events history for all tasks
all_tasks_event_history = self._scheduler.task_history.find_all_events()
for task in all_tasks:
task_seq = task.id
task_name = task.name
# build the dictionary, tasks with index: id, value: task_name
tasks[task_seq] = str(task_name)
for task in all_tasks_event_history:
# if the name of user-selected task is in tasks, get its task_id
if tasks.get(task.task_id) == str(name):
status = str(task.event_name)
if status not in statusResults:
statusResults[status] = []
# append the id, task_id, ts, y with 0, next_process with null
# for the status(running/failed/done) of the selected task
statusResults[status].append(({
'id': str(task.id), 'task_id': str(task.task_id),
'x': from_utc(str(task.ts)), 'y': 0, 'next_process': ''}))
# append the id, task_name, task_id, status, datetime, timestamp
# for the selected task
taskResults.append({
'id': str(task.id), 'taskName': str(name), 'task_id': str(task.task_id),
'status': str(task.event_name), 'datetime': str(task.ts),
'timestamp': from_utc(str(task.ts))})
statusResults = json.dumps(statusResults)
taskResults = json.dumps(taskResults)
statusResults = tornado.escape.xhtml_unescape(str(statusResults))
taskResults = tornado.escape.xhtml_unescape(str(taskResults))
self.render('history.html', name=name, statusResults=statusResults, taskResults=taskResults)
def from_utc(utcTime, fmt=None):
"""convert UTC time string to time.struct_time: change datetime.datetime to time, return time.struct_time type"""
if fmt is None:
try_formats = ["%Y-%m-%d %H:%M:%S.%f", "%Y-%m-%d %H:%M:%S"]
else:
try_formats = [fmt]
for fmt in try_formats:
try:
time_struct = datetime.datetime.strptime(utcTime, fmt)
except ValueError:
pass
else:
date = int(time.mktime(time_struct.timetuple()))
return date
else:
raise ValueError("No UTC format matches {}".format(utcTime))
class RecentRunHandler(BaseTaskHistoryHandler):
def get(self):
tasks = self._scheduler.task_history.find_latest_runs()
self.render("recent.html", tasks=tasks)
class ByNameHandler(BaseTaskHistoryHandler):
def get(self, name):
tasks = self._scheduler.task_history.find_all_by_name(name)
self.render("recent.html", tasks=tasks)
class ByIdHandler(BaseTaskHistoryHandler):
def get(self, id):
task = self._scheduler.task_history.find_task_by_id(id)
self.render("show.html", task=task)
class ByParamsHandler(BaseTaskHistoryHandler):
def get(self, name):
payload = self.get_argument('data', default="{}")
arguments = json.loads(payload)
tasks = self._scheduler.task_history.find_all_by_parameters(name, session=None, **arguments)
self.render("recent.html", tasks=tasks)
class StaticFileHandler(tornado.web.RequestHandler):
def get(self, path):
# Path checking taken from Flask's safe_join function:
# https://github.com/mitsuhiko/flask/blob/1d55b8983/flask/helpers.py#L563-L587
path = posixpath.normpath(path)
if os.path.isabs(path) or path.startswith(".."):
return self.send_error(404)
extension = os.path.splitext(path)[1]
if extension in mimetypes.types_map:
self.set_header("Content-Type", mimetypes.types_map[extension])
data = pkg_resources.resource_string(__name__, os.path.join("static", path))
self.write(data)
class RootPathHandler(BaseTaskHistoryHandler):
def get(self):
visualization_graph = self._scheduler._config.visualization_graph
if visualization_graph == "d3":
self.redirect("/static/visualiser/index.d3.html")
elif visualization_graph == "svg":
self.redirect("/static/visualiser/index.html")
else:
self.redirect("/static/visualiser/index.html")
def app(scheduler):
settings = {"static_path": os.path.join(os.path.dirname(__file__), "static"), "unescape": tornado.escape.xhtml_unescape}
handlers = [
(r'/api/(.*)', RPCHandler, {"scheduler": scheduler}),
(r'/static/(.*)', StaticFileHandler),
(r'/', RootPathHandler, {'scheduler': scheduler}),
(r'/tasklist', AllRunHandler, {'scheduler': scheduler}),
(r'/tasklist/(.*?)', SelectedRunHandler, {'scheduler': scheduler}),
(r'/history', RecentRunHandler, {'scheduler': scheduler}),
(r'/history/by_name/(.*?)', ByNameHandler, {'scheduler': scheduler}),
(r'/history/by_id/(.*?)', ByIdHandler, {'scheduler': scheduler}),
(r'/history/by_params/(.*?)', ByParamsHandler, {'scheduler': scheduler})
]
api_app = tornado.web.Application(handlers, **settings)
return api_app
def _init_api(scheduler, responder=None, api_port=None, address=None):
if responder:
raise Exception('The "responder" argument is no longer supported')
api_app = app(scheduler)
api_sockets = tornado.netutil.bind_sockets(api_port, address=address)
server = tornado.httpserver.HTTPServer(api_app)
server.add_sockets(api_sockets)
# Return the bound socket names. Useful for connecting client in test scenarios.
return [s.getsockname() for s in api_sockets]
def run(api_port=8082, address=None, scheduler=None, responder=None):
"""
Runs one instance of the API server.
"""
if scheduler is None:
scheduler = CentralPlannerScheduler()
# load scheduler state
scheduler.load()
_init_api(scheduler, responder, api_port, address)
# prune work DAG every 60 seconds
pruner = tornado.ioloop.PeriodicCallback(scheduler.prune, 60000)
pruner.start()
def shutdown_handler(signum, frame):
exit_handler()
sys.exit(0)
@atexit.register
def exit_handler():
logger.info("Scheduler instance shutting down")
scheduler.dump()
stop()
signal.signal(signal.SIGINT, shutdown_handler)
signal.signal(signal.SIGTERM, shutdown_handler)
if os.name == 'nt':
signal.signal(signal.SIGBREAK, shutdown_handler)
else:
signal.signal(signal.SIGQUIT, shutdown_handler)
logger.info("Scheduler starting up")
tornado.ioloop.IOLoop.instance().start()
def stop():
tornado.ioloop.IOLoop.instance().stop()
if __name__ == "__main__":
run()
|
apache-2.0
|
doughgle/pomodoro_evolved
|
pomodoro_evolved/native_ui.py
|
1
|
4010
|
import Tkinter as tk
import tkFont
import tkMessageBox
from pomodoro import Pomodoro
from datetime import timedelta, datetime
from Queue import Queue, Empty
from rest_break import Break as ShortBreak
from rest_break import Break as LongBreak
from timer_log import TimerLog
class NativeUI(tk.Tk):
def __init__(self, pomodoroDurationInMins=25, shortBreakDurationInMins=5, longBreakDurationInMins=15):
tk.Tk.__init__(self)
self.clockFont = tkFont.Font(family="Helvetica", size=18)
self.clock = tk.Label(self, width=15, font=self.clockFont)
self.startStopButton = tk.Button(self)
self.analyseButton = tk.Button(self, text="Analyse", command=self.onAnalyse)
self.clock.pack()
self.startStopButton.pack()
self.analyseButton.pack()
self.uiQueue = Queue()
self._handleUiRequest()
self._completedPomodoros = 0
self._pomodoroDurationInMins = pomodoroDurationInMins
self._shortBreakDurationInMins = shortBreakDurationInMins
self._longBreakDurationInMins = longBreakDurationInMins
self._timerLog = TimerLog()
self.newTimer()
def isLongBreakTime(self):
return self._completedPomodoros % 4 == 0
def newTimer(self, prevTimer=None):
'''
Sets up the next timer, whether it's a Pomodoro or a Break.
'''
self.timer = Pomodoro(self.whenTimeup, durationInMins=self._pomodoroDurationInMins, name="Pomodoro")
if prevTimer is not None:
# addToLog status of prevTimer before creating a new one.
prevTimer.addToLog(self._timerLog)
if isinstance(prevTimer, Pomodoro):
self._completedPomodoros += 1
if self.isLongBreakTime():
self.timer = LongBreak(self.whenTimeup,
durationInMins=self._longBreakDurationInMins,
name="Long Break")
else:
self.timer = ShortBreak(self.whenTimeup,
durationInMins=self._shortBreakDurationInMins,
name="Short Break")
self.title(self.timer.name)
self.clock.configure(text=str(timedelta(seconds=self.timer.timeRemaining)))
self.startStopButton.configure(text="Start", command=self.onStart)
def onStart(self):
self.timer.start()
self.drawClock()
self.startStopButton.configure(text="Stop", command=self.onStop)
print "started %s!" % self.timer.name
def onStop(self):
if tkMessageBox.askyesno("", "Void this %s?" % self.timer.name):
if self.timer.isRunning():
self.timer.stop()
print "stopped!"
self.newTimer(self.timer)
def onAnalyse(self):
print "showing log..."
print str(self._timerLog)
def whenTimeup(self):
'''
Called by the timer in a separate thread when time's up.
'''
print "timeup!"
uiFunction = (tkMessageBox.showinfo, ("time's up", "%s Complete!" % self.timer.name), {})
self.uiQueue.put(uiFunction)
newTimerFunction = (self.newTimer, (self.timer,), {})
self.uiQueue.put(newTimerFunction)
def drawClock(self):
if self.timer.isRunning():
self.clock.configure(text=str(timedelta(seconds=self.timer.timeRemaining)))
self.after(1000, self.drawClock)
def _handleUiRequest(self):
'''
Services the UI queue to handle UI requests in the main thread.
'''
try:
while True:
f, a, k = self.uiQueue.get_nowait()
f(*a, **k)
except Empty:
pass
self.after(200, self._handleUiRequest)
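# Editorial sketch: any background thread can request a UI update by queueing a
# (callable, args, kwargs) triple, which _handleUiRequest later runs in the main
# thread, e.g. (hypothetical call):
#   self.uiQueue.put((self.clock.configure, (), {'text': '00:00'}))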
if __name__ == "__main__":
app = NativeUI()
app.mainloop()
|
mit
|
openscriptures/Biblelator
|
Biblelator/Windows/TSVEditWindow.py
|
1
|
137680
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# TSVEditWindow.py
#
# The edit windows for Biblelator TSV table editing
#
# Copyright (C) 2020 Robert Hunt
# Author: Robert Hunt <[email protected]>
# License: See gpl-3.0.txt
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
A general window with one text box that has full editing functions,
i.e., load, save, save-as, font resizing, etc.
The add-on can be used to build other editing windows.
"""
from gettext import gettext as _
from typing import List, Tuple, Optional
import os.path
import logging
import shutil
from datetime import datetime
import random
import tkinter as tk
from tkinter import font
from tkinter.filedialog import asksaveasfilename
from tkinter.ttk import Frame, Scrollbar, Button, Label, Entry, Style
# BibleOrgSys imports
from BibleOrgSys import BibleOrgSysGlobals
from BibleOrgSys.BibleOrgSysGlobals import fnPrint, vPrint, dPrint
from BibleOrgSys.Reference.VerseReferences import SimpleVerseKey
from BibleOrgSys.Reference.BibleOrganisationalSystems import BibleOrganisationalSystem
# Biblelator imports
if __name__ == '__main__':
import sys
aboveAboveFolderpath = os.path.dirname( os.path.dirname( os.path.dirname( os.path.abspath( __file__ ) ) ) )
if aboveAboveFolderpath not in sys.path:
sys.path.insert( 0, aboveAboveFolderpath )
from Biblelator import BiblelatorGlobals
from Biblelator.BiblelatorGlobals import APP_NAME, tkSTART, tkBREAK, DEFAULT, \
DATA_SUBFOLDER_NAME, BIBLE_GROUP_CODES
from Biblelator.Dialogs.BiblelatorSimpleDialogs import showError, showInfo
from Biblelator.Dialogs.BiblelatorDialogs import YesNoDialog, OkCancelDialog
from Biblelator.Windows.TextBoxes import CustomText, TRAILING_SPACE_SUBSTITUTE, MULTIPLE_SPACE_SUBSTITUTE, \
DOUBLE_SPACE_SUBSTITUTE, ALL_POSSIBLE_SPACE_CHARS
from Biblelator.Windows.ChildWindows import ChildWindow, BibleWindowAddon
from Biblelator.Helpers.AutocorrectFunctions import setDefaultAutocorrectEntries # setAutocorrectEntries
from Biblelator.Helpers.AutocompleteFunctions import getCharactersBeforeCursor, \
getWordCharactersBeforeCursor, getCharactersAndWordBeforeCursor, \
getWordBeforeSpace, addNewAutocompleteWord, acceptAutocompleteSelection
LAST_MODIFIED_DATE = '2020-05-01' # by RJH
SHORT_PROGRAM_NAME = "BiblelatorTSVEditWindow"
PROGRAM_NAME = "Biblelator TSV Edit Window"
PROGRAM_VERSION = '0.46'
programNameVersion = f'{PROGRAM_NAME} v{PROGRAM_VERSION}'
debuggingThisModule = False
REFRESH_TITLE_TIME = 500 # msecs
CHECK_DISK_CHANGES_TIME = 33333 # msecs
NO_TYPE_TIME = 6000 # msecs
NUM_AUTOCOMPLETE_POPUP_LINES = 6
MAX_PSEUDOVERSES = 200 # What should this really be?
class RowFrame( Frame ):
"""
Class to display the TSV rows before or after the current row.
It only displays a vital subset of the row data.
"""
def __init__( self, parent ) -> None:
"""
Create the widgets.
"""
fnPrint( debuggingThisModule, "RowFrame.__init__()" )
self.parent = parent
super().__init__( parent )
padX, padY = 2, 2
self.rowNumberLabel = Label( self, relief=tk.SUNKEN )
self.verseLabel = Label( self, relief=tk.SUNKEN )
self.supportReferenceLabel = Label( self, relief=tk.SUNKEN, width=5 )
self.origQuoteLabel = Label( self, relief=tk.SUNKEN, width=10 )
self.GLQuoteLabel = Label( self, relief=tk.SUNKEN, width=10 )
self.occurrenceNoteLabel = Label( self, relief=tk.SUNKEN, width=20 )
self.rowNumberLabel.pack( side=tk.LEFT, padx=padX, pady=padY )
self.verseLabel.pack( side=tk.LEFT, padx=padX, pady=padY )
self.supportReferenceLabel.pack( side=tk.LEFT, fill=tk.X, padx=padX, pady=padY )
self.origQuoteLabel.pack( side=tk.LEFT, fill=tk.X, padx=padX, pady=padY )
self.GLQuoteLabel.pack( side=tk.LEFT, fill=tk.X, padx=padX, pady=padY )
self.occurrenceNoteLabel.pack( side=tk.LEFT, fill=tk.X, expand=tk.YES, padx=padX, pady=padY )
# end of RowFrame.__init__ function
def fill( self, rowNumber:int, rowData:Optional[List[str]] ) -> None:
"""
"""
fnPrint( debuggingThisModule, f"RowFrame.fill( {rowNumber}, {rowData} )" )
if rowData is None:
self.rowNumberLabel['text'] = ''
self.verseLabel['text'] = ''
self.supportReferenceLabel['text'] = ''
self.origQuoteLabel['text'] = ''
self.GLQuoteLabel['text'] = ''
self.occurrenceNoteLabel['text'] = ''
else:
self.rowNumberLabel['text'] = rowNumber
self.verseLabel['text'] = rowData[self.parent.verseColumn]
self.supportReferenceLabel['text'] = rowData[self.parent.supportReferenceColumn]
self.origQuoteLabel['text'] = rowData[self.parent.origQuoteColumn]
self.GLQuoteLabel['text'] = rowData[self.parent.GLQuoteColumn]
self.occurrenceNoteLabel['text'] = rowData[self.parent.occurrenceNoteColumn]
# end of RowFrame.fill function
# end of RowFrame class
class TSVEditWindowAddon:
"""
"""
def __init__( self, windowType:str, folderpath:str ):
"""
"""
fnPrint( debuggingThisModule, f"TSVEditWindowAddon.__init__( {windowType}, {folderpath} )" )
self.windowType, self.folderpath = windowType, folderpath
BiblelatorGlobals.theApp.logUsage( PROGRAM_NAME, debuggingThisModule, f"TSVEditWindowAddon __init__ {windowType} {folderpath}" )
self.loading = True
self.protocol( 'WM_DELETE_WINDOW', self.doClose ) # Catch when window is closed
# Set-up our Bible system and our callables
self.BibleOrganisationalSystem = BibleOrganisationalSystem( 'GENERIC-KJV-66-ENG' )
self.getNumChapters = self.BibleOrganisationalSystem.getNumChapters
self.getNumVerses = lambda BBB,C: MAX_PSEUDOVERSES if BBB=='UNK' or C=='-1' or C==-1 \
else self.BibleOrganisationalSystem.getNumVerses( BBB, C )
self.lastBBB = None
self.current_row = None
self.numDataRows = 0
self.onTextNoChangeID = None
self.editStatus = 'Editable'
# # Make our own custom textBox which allows a callback function
# # Delete these lines and the callback line if you don't need either autocorrect or autocomplete
# self.textBox.destroy() # from the ChildWindow default
# self.myKeyboardBindingsList = []
# if BibleOrgSysGlobals.debugFlag: self.myKeyboardShortcutsList = []
# self.customFont = tk.font.Font( family="sans-serif", size=12 )
# self.customFontBold = tk.font.Font( family="sans-serif", size=12, weight='bold' )
# self.textBox = CustomText( self, yscrollcommand=self.vScrollbar.set, wrap='word', font=self.customFont )
self.defaultBackgroundColour = 'gold2'
# self.textBox.configure( background=self.defaultBackgroundColour )
# self.textBox.configure( selectbackground='blue' )
# self.textBox.configure( highlightbackground='orange' )
# self.textBox.configure( inactiveselectbackground='green' )
# self.textBox.configure( wrap='word', undo=True, autoseparators=True )
# # self.textBox.pack( side=tk.TOP, fill=tk.BOTH, expand=tk.YES )
# self.vScrollbar.configure( command=self.textBox.yview ) # link the scrollbar to the text box
# self.textBox.setTextChangeCallback( self.onTextChange )
# self.createEditorKeyboardBindings()
# #self.createMenuBar()
# self.createContextMenu() # Enable right-click menu
self.lastFiletime = self.lastFilesize = None
# self.clearText()
self.markMultipleSpacesFlag = True
self.markTrailingSpacesFlag = True
self.autocorrectEntries = []
# Temporarily include some default autocorrect values
setDefaultAutocorrectEntries( self )
#setAutocorrectEntries( self, ourAutocorrectEntries )
self.autocompleteBox, self.autocompleteWords, self.existingAutocompleteWordText = None, {}, ''
self.autocompleteWordChars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-_'
# Note: I guess we could have used non-word chars instead (to stop the backwards word search)
self.autocompleteMinLength = 3 # Show the normal window after this many characters have been typed
self.autocompleteMaxLength = 15 # Remove window after this many characters have been typed
self.autocompleteMode = None # None or Dictionary1 or Dictionary2 (or Bible or BibleBook)
self.addAllNewWords = False
self.invalidCombinations = [] # characters or character combinations that shouldn't occur
# Temporarily include some default invalid values
self.invalidCombinations = [',,',' ,',] # characters or character combinations that shouldn't occur
self.patternsToHighlight = []
# Temporarily include some default values -- simplistic demonstration examples
self.patternsToHighlight.append( (True, r'rc://en/ta/[^ \]\)]+','blue',{'foreground':'blue'}) )
self.patternsToHighlight.append( (True, r'rc://en/tw/[^ \]\)]+','orange',{'foreground':'orange', 'background':'grey'}) )
self.patternsToHighlight.append( (True, r'rc://[^ \]\)]+','blue',{'foreground':'blue'}) )
self.patternsToHighlight.append( (True,'#.*?\\n','grey',{'foreground':'grey'}) )
self.patternsToHighlight.append( (False,'...','error',{'background':'red'}) )
self.patternsToHighlight.append( (False,'Introduction','intro',{'foreground':'green'}) )
self.patternsToHighlight.append( (False,'Who ','who',{'foreground':'green'}) )
self.patternsToHighlight.append( (False,'What ','what',{'foreground':'green'}) )
self.patternsToHighlight.append( (False,'How ','how',{'foreground':'green'}) )
self.patternsToHighlight.append( (True, r'\d','digits',{'foreground':'blue'}) )
# boldDict = {'font':self.customFontBold } #, 'background':'green'}
# for pythonKeyword in ( 'from','import', 'class','def', 'if','and','or','else','elif',
# 'for','while', 'return', 'try','accept','finally', 'assert', ):
# self.patternsToHighlight.append( (True,'\\y'+pythonKeyword+'\\y','bold',boldDict) )
self.saveChangesAutomatically = False # different from AutoSave (which is in different files)
self.autosaveTime = 2*60*1000 # msecs (zero is no autosaves)
self.autosaveScheduled = False
# self.thisBookUSFMCode = None
# self.loadBookDataFromDisk()
self.tsvHeaders = []
self.buildWidgets()
# self._gotoRow()
# self.validateTSVTable() # _gotoRow() sets self.thisBookUSFMCode needed by validateTSVTable()
self.after( CHECK_DISK_CHANGES_TIME, self.checkForDiskChanges )
#self.after( REFRESH_TITLE_TIME, self.refreshTitle )
self.loading = self.hadTextWarning = False
#self.lastTextChangeTime = time()
vPrint( 'Never', debuggingThisModule, "TSVEditWindowAddon.__init__ finished." )
# end of TSVEditWindowAddon.__init__
def updateShownBCV( self, newReferenceVerseKey:SimpleVerseKey, originator:Optional[str]=None ) -> None:
"""
Updates self.textBox in various ways depending on the contextViewMode held by the enclosing window.
If newReferenceVerseKey is None: clears the window
Otherwise, basically does the following steps (depending on the contextViewMode):
1/ Saves any changes in the editor to self.bookText
2/ If we've changed book:
if changes to self.bookText, save them to disk
load the new book text
3/ Load the appropriate verses into the editor according to the contextViewMode.
"""
fnPrint( debuggingThisModule, f"TSVEditWindowAddon.updateShownBCV( {newReferenceVerseKey.getShortText()}, originator={originator} ) from {self.currentVerseKey.getShortText()} for {self.moduleID}" )
# dPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon.updateShownBCV( {}, {} ) from {} for".format( newReferenceVerseKey.getShortText(), originator, self.currentVerseKey.getShortText() ), self.moduleID )
#dPrint( 'Quiet', debuggingThisModule, "contextViewMode", self._contextViewMode )
#assert self._formatViewMode == 'Unformatted' # Only option done so far
# Check the book
if not newReferenceVerseKey or newReferenceVerseKey==self.currentVerseKey: return
if self.current_row: # The last row might have changed
self.retrieveCurrentRowData( updateTable=True ) # in case current row was edited
oldVerseKey = self.currentVerseKey
oldBBB, oldC, oldV = (None,None,None) if oldVerseKey is None else oldVerseKey.getBCV()
BBB, C, V = newReferenceVerseKey.getBCV()
if BBB != oldBBB:
vPrint( 'Quiet', debuggingThisModule, f" updateShownBCV switching from {oldBBB} to {BBB}" )
if oldBBB and oldBBB != 'UNK':
# self.retrieveCurrentRowData( updateTable=True ) # in case current row was edited
self.doSave() # Save existing file if necessary
self.rowNumberVar.set( 1 ) # In case we're loading a shorter book
self.current_row = None
self.loadBookDataFromDisk( BBB )
self.buildWidgets() # again coz columns could be different now???
self.validateTSVTable() # _gotoRow() sets self.thisBookUSFMCode needed by validateTSVTable()
# Go through all our rows to find if this verse occurs in the table
# NOTE: The present code doesn't change the row if there's no entry for that BCV ref
# What would the user want here?
for j, row in enumerate( self.tsvTable ):
if row[self.chapterColumn] == newReferenceVerseKey.C \
and (row[self.verseColumn] == newReferenceVerseKey.V
or (row[self.verseColumn]=='intro' and newReferenceVerseKey.V=='0')):
self.rowNumberVar.set( j )
self._gotoRow( notifyMain=False ) # Don't notify up or it gets recursive
break
# end of TSVEditWindowAddon.updateShownBCV function
# def updateShownBCV( self, newReferenceVerseKey, originator=None ):
# """
# Updates self.textBox in various ways depending on the contextViewMode held by the enclosing window.
# If newReferenceVerseKey is None: clears the window
# Otherwise, basically does the following steps (depending on the contextViewMode):
# 1/ Saves any changes in the editor to self.bookText
# 2/ If we've changed book:
# if changes to self.bookText, save them to disk
# load the new book text
# 3/ Load the appropriate verses into the editor according to the contextViewMode.
# """
# logging.debug( "USFMEditWindow.updateShownBCV( {}, {} ) from {} for".format( newReferenceVerseKey, originator, self.currentVerseKey ), self.moduleID )
# if BibleOrgSysGlobals.debugFlag and debuggingThisModule:
# vPrint( 'Quiet', debuggingThisModule, "USFMEditWindow.updateShownBCV( {}, {} ) from {} for".format( newReferenceVerseKey, originator, self.currentVerseKey ), self.moduleID )
# #dPrint( 'Quiet', debuggingThisModule, "contextViewMode", self._contextViewMode )
# #assert self._formatViewMode == 'Unformatted' # Only option done so far
# if self.autocompleteBox is not None: self.removeAutocompleteBox()
# self.textBox.configure( background=self.defaultBackgroundColour ) # Go back to default background
# if self._formatViewMode != 'Unformatted': # Only option done so far
# if BibleOrgSysGlobals.debugFlag and debuggingThisModule:
# vPrint( 'Quiet', debuggingThisModule, "Ignoring {!r} mode for USFMEditWindow".format( self._formatViewMode ) )
# return
# oldVerseKey = self.currentVerseKey
# oldBBB, oldC, oldV = (None,None,None) if oldVerseKey is None else oldVerseKey.getBCV()
# if newReferenceVerseKey is None:
# newBBB = None
# self.setCurrentVerseKey( None )
# else: # it must be a real verse key
# assert isinstance( newReferenceVerseKey, SimpleVerseKey )
# refBBB, refC, refV, refS = newReferenceVerseKey.getBCVS()
# newBBB, C, V, S = self.BibleOrganisationalSystem.convertFromReferenceVersification( refBBB, refC, refV, refS )
# newVerseKey = SimpleVerseKey( newBBB, C, V, S )
# self.setCurrentVerseKey( newVerseKey )
# #if newBBB == 'PSA': halt
# if newBBB != oldBBB: self.numTotalVerses = calculateTotalVersesForBook( newBBB, self.getNumChapters, self.getNumVerses )
# if C != oldC and self.saveChangesAutomatically and self.modified(): self.doSave( 'Auto from chapter change' )
# if originator is self: # We initiated this by clicking in our own edit window
# # Don't do everything below because that makes the window contents move around annoyingly when clicked
# vPrint( 'Never', debuggingThisModule, "Seems to be called from self--not much to do here" )
# self.refreshTitle()
# return
# if self.textBox.edit_modified(): # we need to extract the changes into self.bookText
# assert self.bookTextModified
# self.bookText = self.getEntireText()
# if newBBB == oldBBB: # We haven't changed books -- update our book cache
# self.cacheBook( newBBB )
# if newReferenceVerseKey is None:
# if oldVerseKey is not None:
# if self.bookTextModified: self.doSave() # resets bookTextModified flag
# self.clearText() # Leaves the text box enabled
# self.textBox.configure( state=tk.DISABLED ) # Don't allow editing
# self.textBox.edit_modified( False ) # clear modified flag (otherwise we could empty the book file)
# self.refreshTitle()
# return
# savedCursorPosition = self.textBox.index( tk.INSERT ) # Something like 55.6 for line 55, before column 6
# #dPrint( 'Quiet', debuggingThisModule, "savedCursorPosition", savedCursorPosition ) # Beginning of file is 1.0
# # Now check if the book they're viewing has changed since last time
# # If so, save the old book if necessary
# # then either load or create the new book
# #markAsUnmodified = True
# if newBBB != oldBBB: # we've switched books
# if self.bookTextModified: self.doSave() # resets bookTextModified flag
# self.editStatus = 'Editable'
# self.bookText = self.getBookDataFromDisk( newBBB )
# if self.bookText is None:
# uNumber, uAbbrev = BibleOrgSysGlobals.loadedBibleBooksCodes.getUSFMNumber(newBBB), BibleOrgSysGlobals.loadedBibleBooksCodes.getUSFMAbbreviation(newBBB)
# if uNumber is None or uAbbrev is None: # no use asking about creating the book
# # NOTE: I think we've already shown this error in getBookDataFromDisk()
# #showError( self, APP_NAME, _("Couldn't determine USFM filename for {!r} book").format( newBBB ) )
# self.clearText() # Leaves the text box enabled
# self.textBox.edit_modified( tk.FALSE ) # clear Tkinter modified flag
# self.bookTextModified = False
# self.textBox.configure( state=tk.DISABLED ) # Don't allow editing
# self.editStatus = 'DISABLED'
# else:
# #showError( self, _("USFM Editor"), _("We need to create the book: {} in {}").format( newBBB, self.internalBible.sourceFolder ) )
# ocd = OkCancelDialog( self, _("We need to create the book: {} in {}".format( newBBB, self.internalBible.sourceFolder ) ), title=_('Create?') )
# #dPrint( 'Quiet', debuggingThisModule, "Need to create USFM book ocdResult", repr(ocd.result) )
# if ocd.result == True: # Ok was chosen
# self.setFilename( '{}-{}.USFM'.format( uNumber, uAbbrev ), createFile=True )
# self.bookText = createEmptyUSFMBookText( newBBB, self.getNumChapters, self.getNumVerses )
# #markAsUnmodified = False
# self.bookTextModified = True
# #self.doSave() # Save the chapter/verse markers (blank book outline) ## Doesn't work -- saves a blank file
# else: self.cacheBook( newBBB )
# # Now load the desired part of the book into the edit window
# # while at the same time, setting self.bookTextBefore and self.bookTextAfter
# # (so that combining these three components, would reconstitute the entire file).
# if self.bookText is not None:
# self.loading = True # Turns off USFMEditWindow onTextChange notifications for now
# self.clearText() # Leaves the text box enabled
# startingFlag = True
# elif self._contextViewMode == 'ByBook':
# vPrint( 'Never', debuggingThisModule, 'USFMEditWindow.updateShownBCV', 'ByBook2' )
# self.bookTextBefore = self.bookTextAfter = ''
# BBB, intC, intV = newVerseKey.getBBB(), newVerseKey.getChapterNumberInt(), newVerseKey.getVerseNumberInt()
# for thisC in range( -1, self.getNumChapters( BBB ) + 1 ):
# try: numVerses = self.getNumVerses( BBB, thisC )
# except KeyError: numVerses = 0
# for thisV in range( numVerses+1 ):
# thisVerseKey = SimpleVerseKey( BBB, thisC, thisV )
# thisVerseData = self.getCachedVerseData( thisVerseKey )
# #dPrint( 'Quiet', debuggingThisModule, 'tVD', repr(thisVerseData) )
# self.displayAppendVerse( startingFlag, thisVerseKey, thisVerseData,
# currentVerseFlag=thisC==intC and thisV==intV )
# startingFlag = False
# self.textBox.highlightAllPatterns( self.patternsToHighlight )
# self.textBox.edit_reset() # clear undo/redo stks
# self.textBox.edit_modified( tk.FALSE ) # clear modified flag
# self.loading = False # Turns onTextChange notifications back on
# self.lastCVMark = None
# # Make sure we can see what we're supposed to be looking at
# desiredMark = 'C{}V{}'.format( newVerseKey.getChapterNumber(), newVerseKey.getVerseNumber() )
# try: self.textBox.see( desiredMark )
# except tk.TclError: vPrint( 'Quiet', debuggingThisModule, "USFMEditWindow.updateShownBCV couldn't find {} mark {!r} for {}".format( newVerseKey.getBBB(), desiredMark, self.moduleID ) )
# self.lastCVMark = desiredMark
# # Put the cursor back where it was (if necessary)
# self.loading = True # Turns off USFMEditWindow onTextChange notifications for now
# self.textBox.mark_set( tk.INSERT, savedCursorPosition )
# self.loading = False # Turns onTextChange notifications back on
# self.refreshTitle()
# if self._showStatusBarVar.get(): self.setReadyStatus()
# # end of USFMEditWindow.updateShownBCV
def loadBookDataFromDisk( self, BBB ) -> bool:
"""
Fetches the table for the given book by reading the TSV source file
completely and saving the rows and columns.
Returns True if the book data was loaded successfully, False otherwise.
"""
logging.debug( f"USFMEditWindow.loadBookDataFromDisk( {BBB} ) was {self.lastBBB}…" )
vPrint( 'Never', debuggingThisModule, "USFMEditWindow.loadBookDataFromDisk( {} ) was {}".format( BBB, self.lastBBB ) )
# if BBB != self.lastBBB:
# #self.bookText = None
# #self.bookTextModified = False
# self.lastBBB = BBB
self.BBB = BBB
# Read the entire file contents at the beginning (assumes lots of RAM)
self.thisBookUSFMCode = BibleOrgSysGlobals.loadedBibleBooksCodes.getUSFMAbbreviation( BBB ).upper()
USFMnn = BibleOrgSysGlobals.loadedBibleBooksCodes.getUSFMNumber( BBB )
foldername = os.path.split( self.folderpath )[1]
# dPrint( 'Info', debuggingThisModule, f"Got foldername='{foldername}'" ) # Something like en_tn or en_tw perhaps
self.filename = f'{foldername}_{USFMnn}-{self.thisBookUSFMCode}.tsv' # Temp hard-coding XXXXX
# dPrint( 'Info', debuggingThisModule, f"Got filename '{filename}'")
self.filepath = os.path.join( self.folderpath, self.filename )
try:
with open( self.filepath, 'rt', encoding='utf-8' ) as input_file:
self.originalText = input_file.read()
except FileNotFoundError:
showError( self, _('TSV Window'), _("Could not open and read '{}'").format( self.filepath ) )
return False
if not self.originalText:
showError( self, _('TSV Window'), _("Could not read {}").format( self.filepath ) )
return False
fileLines = self.originalText.split( '\n' )
if fileLines and fileLines[-1] == '':
# dPrint( 'Info', debuggingThisModule, "Deleting final blank line" )
fileLines = fileLines[:-1]
self.hadTrailingNL = True
else:
self.hadTrailingNL = False
self.numOriginalLines = len( fileLines )
vPrint( 'Info', debuggingThisModule, f" {len(self.originalText):,} bytes ({self.numOriginalLines:,} lines) read from {self.filepath}" )
if self.numOriginalLines < 2:
showError( self, APP_NAME, f'Not enough ({self.numOriginalLines}) preexisting lines in file ' + self.filepath )
# We keep self.originalText and self.numOriginalLines to determine later if we have any changes
vPrint( 'Verbose', debuggingThisModule, "Checking loaded TSV table…" )
self.num_columns = None
self.tsvTable:List[List] = []
for j, line in enumerate( fileLines, start=1 ):
if line and line[-1]=='\n': line = line[:-1] # Remove trailing nl
columns = line.split( '\t' )
if self.num_columns is None:
self.num_columns = len( columns )
# dPrint( 'Info', debuggingThisModule, f" Have {self.num_columns} columns")
elif len(columns) != self.num_columns:
logging.critical( f"Expected {self.num_columns} columns but found {len(columns)} in row {j} of {self.filepath}" )
self.tsvTable.append( columns )
self.tsvHeaders = self.tsvTable[0]
vPrint( 'Normal', debuggingThisModule, f" Have table headers ({self.num_columns}): {self.tsvHeaders}" )
self.numDataRows = len(self.tsvTable) - 1
vPrint( 'Info', debuggingThisModule, f" Have {self.numDataRows:,} rows" )
return True
# end of TSVEditWindowAddon.loadBookDataFromDisk
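# Editorial sketch: with the hard-coded pattern above, a folder named 'en_tn'
# and the book GEN would give self.filename == 'en_tn_01-GEN.tsv' (example
# values assumed, not taken from this module).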
def buildWidgets( self ):
"""
"""
fnPrint( debuggingThisModule, f"TSVEditWindowAddon.buildWidgets() for {self.BBB}" )
# Delete old widgets so we can rebuild without problems (for each new book)
for widget in self.pack_slaves():
widget.destroy()
Style().configure( 'good.TLabel', background='white' )
Style().configure( 'bad.TLabel', background='red' )
# self.createStatusBar()
Style().configure( '{}.ChildStatusBar.TLabel'.format( self ), background='purple' )
#Style().map("Halt.TButton", foreground=[('pressed', 'red'), ('active', 'yellow')],
#background=[('pressed', '!disabled', 'black'), ('active', 'pink')] )
#self.statusBar = Frame( self, cursor='hand2', relief=tk.RAISED, style='ChildWindowStatusBar.TFrame' )
# self.textBox.pack_forget() # Make sure the status bar gets the priority at the bottom of the window
# self.vScrollbar.pack_forget()
self.statusTextLabel = Label( self, relief=tk.SUNKEN,
textvariable=self._statusTextVar, style='{}.ChildStatusBar.TLabel'.format( self ) )
#, font=('arial',16,tk.NORMAL) )
self.statusTextLabel.pack( side=tk.BOTTOM, fill=tk.X )
# self.vScrollbar.pack( side=tk.RIGHT, fill=tk.Y )
# self.textBox.pack( side=tk.TOP, fill=tk.BOTH, expand=tk.YES )
self.prevFrame1 = RowFrame( self )
self.prevFrame2 = RowFrame( self )
self.prevFrame3 = RowFrame( self )
self.prevFrame1.pack( side=tk.TOP, fill=tk.X, expand=tk.YES )
self.prevFrame2.pack( side=tk.TOP, fill=tk.X, expand=tk.YES )
self.prevFrame3.pack( side=tk.TOP, fill=tk.X, expand=tk.YES )
self.nextFrame1 = RowFrame( self )
self.nextFrame2 = RowFrame( self )
self.nextFrame3 = RowFrame( self )
self.nextFrame3.pack( side=tk.BOTTOM, fill=tk.X, expand=tk.YES )
self.nextFrame2.pack( side=tk.BOTTOM, fill=tk.X, expand=tk.YES )
self.nextFrame1.pack( side=tk.BOTTOM, fill=tk.X, expand=tk.YES )
ButtonFrame = Frame( self )
ButtonFrame.pack( side=tk.TOP, fill=tk.X, expand=True )
self.moveUpButton = Button( ButtonFrame, text=_('Move row up'), command=self.doMoveUp )
self.moveUpButton.pack( side=tk.LEFT, padx=4, pady=2 )
self.moveDownButton = Button( ButtonFrame, text=_('Move row down'), command=self.doMoveDown )
self.moveDownButton.pack( side=tk.LEFT, padx=4, pady=2 )
self.addBeforeButton = Button( ButtonFrame, text=_('Add row before'), command=self.doAddBefore )
self.addBeforeButton.pack( side=tk.LEFT, padx=4, pady=2 )
self.addAfterButton = Button( ButtonFrame, text=_('Add row after'), command=self.doAddAfter )
self.addAfterButton.pack( side=tk.LEFT, padx=4, pady=2 )
self.deleteRowButton = Button( ButtonFrame, text=_('Delete row'), command=self.doDeleteRow )
self.deleteRowButton.pack( side=tk.LEFT, padx=12, pady=2 )
idFrame = Frame( self ) # Row number, Book, C, V, ID
secondFrame, thirdFrame = Frame( self ), Frame( self )
idFrame.pack( side=tk.TOP, fill=tk.X )
secondFrame.pack( side=tk.TOP, fill=tk.X, expand=True )
thirdFrame.pack( side=tk.TOP, fill=tk.X, expand=True )
Label( idFrame, text=_('Row') ).pack( side=tk.LEFT, padx=(4,1), pady=2 )
self.topButton = Button( idFrame, text='◄', width=1, command=self.gotoTop )
self.topButton.pack( side=tk.LEFT, padx=(2,0), pady=2 )
self.rowNumberVar = tk.IntVar()
self.rowNumberVar.set( 1 )
self.rowSpinbox = tk.Spinbox( idFrame, from_=1.0, to=max(2, self.numDataRows),
textvariable=self.rowNumberVar, width=4, command=self._spinToNewRow )
self.rowSpinbox.bind( '<Return>', self._spinToNewRow )
self.rowSpinbox.pack( side=tk.LEFT, padx=0, pady=2 )
self.bottomButton = Button( idFrame, text='►', width=1, command=self.gotoBottom )
self.bottomButton.pack( side=tk.LEFT, padx=(0,4), pady=2 )
self.widgets = []
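# Build one (var,dataWidget) pair per TSV column and remember the 0-based index of each known column:
#   var=None -> read-only Label (Book, ID, and any unrecognised columns)
#   var=StringVar -> editable Entry (Chapter, Verse, SupportReference, OrigQuote, Occurrence, GLQuote)
#   var='TB' -> the OccurrenceNote CustomText box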
for j, headerText in enumerate( self.tsvHeaders, start=1 ):
# dPrint( 'Info', debuggingThisModule, f"{j}/ {headerText}")
if headerText == 'Book':
self.bookColumn = j-1
widgetFrame = Frame( idFrame )
headerWidget = Label( widgetFrame, text=headerText )
dataWidget = Label( widgetFrame )
headerWidget.pack()
dataWidget.pack()
widgetFrame.pack( side=tk.LEFT, padx=6, pady=2 )
var = None
elif headerText == 'Chapter':
self.chapterColumn = j-1
widgetFrame = Frame( idFrame )
headerWidget = Label( widgetFrame, width=8, text=headerText )
var = tk.StringVar()
dataWidget = Entry( widgetFrame, width=8, textvariable=var )
headerWidget.pack()
dataWidget.pack()
widgetFrame.pack( side=tk.LEFT, padx=(4,1), pady=2 )
elif headerText == 'Verse':
self.verseColumn = j-1
widgetFrame = Frame( idFrame )
headerWidget = Label( widgetFrame, width=6, text=headerText )
var = tk.StringVar()
dataWidget = Entry( widgetFrame, width=6, textvariable=var )
headerWidget.pack()
dataWidget.pack()
widgetFrame.pack( side=tk.LEFT, padx=(1,4), pady=2 )
elif headerText == 'ID':
self.idColumn = j-1
widgetFrame = Frame( idFrame )
headerWidget = Label( widgetFrame, text=headerText )
dataWidget = Label( widgetFrame )
headerWidget.pack()
dataWidget.pack()
widgetFrame.pack( side=tk.LEFT, padx=6, pady=2 )
var = None
elif headerText == 'SupportReference':
self.supportReferenceColumn = j-1
widgetFrame = Frame( secondFrame )
headerWidget = Label( widgetFrame, width=30, text=headerText )
var = tk.StringVar()
dataWidget = Entry( widgetFrame, width=30, textvariable=var )
headerWidget.pack()
dataWidget.pack( fill=tk.X, expand=tk.YES )
widgetFrame.pack( side=tk.LEFT, fill=tk.X, expand=tk.YES, padx=4, pady=2 )
elif headerText == 'OrigQuote':
self.origQuoteColumn = j-1
widgetFrame = Frame( secondFrame )
headerWidget = Label( widgetFrame, width=35, text=headerText )
var = tk.StringVar()
dataWidget = Entry( widgetFrame, width=35, textvariable=var )
headerWidget.pack()
dataWidget.pack( fill=tk.X, expand=tk.YES )
widgetFrame.pack( side=tk.LEFT, fill=tk.X, expand=tk.YES, padx=(4,2), pady=2 )
elif headerText == 'Occurrence':
self.occurenceColumn = j-1
widgetFrame = Frame( secondFrame )
headerWidget = Label( widgetFrame, width=2, text='#' )
var = tk.StringVar()
dataWidget = Entry( widgetFrame, width=2, textvariable=var )
headerWidget.pack()
dataWidget.pack()
widgetFrame.pack( side=tk.LEFT, padx=(2,4), pady=2 )
elif headerText == 'GLQuote':
self.GLQuoteColumn = j-1
widgetFrame = Frame( thirdFrame )
headerWidget = Label( widgetFrame, width=50, text=headerText )
var = tk.StringVar()
dataWidget = Entry( widgetFrame, width=50, textvariable=var )
headerWidget.pack()
dataWidget.pack( fill=tk.X, expand=tk.YES )
widgetFrame.pack( side=tk.LEFT, fill=tk.X, expand=tk.YES, padx=4, pady=2 )
elif headerText == 'OccurrenceNote':
self.occurrenceNoteColumn = j-1
widgetFrame = Frame( self )
headerWidget = Label( widgetFrame, text=headerText )
# Make our own custom textBox which allows a callback function
# Delete these lines and the callback line if you don't need either autocorrect or autocomplete
self.vScrollbar.destroy() # from the ChildWindow default
self.textBox.destroy() # from the ChildWindow default
self.myKeyboardBindingsList = []
if BibleOrgSysGlobals.debugFlag: self.myKeyboardShortcutsList = []
self.vScrollbar = Scrollbar( widgetFrame )
self.vScrollbar.pack( side=tk.RIGHT, fill=tk.Y )
self.customFont = tk.font.Font( family="sans-serif", size=12 )
self.customFontBold = tk.font.Font( family="sans-serif", size=12, weight='bold' )
self.textBox = CustomText( widgetFrame, yscrollcommand=self.vScrollbar.set, wrap='word', font=self.customFont )
self.textBox.configure( background=self.defaultBackgroundColour )
self.textBox.configure( selectbackground='blue' )
self.textBox.configure( highlightbackground='orange' )
self.textBox.configure( inactiveselectbackground='green' )
self.textBox.configure( wrap='word', undo=True, autoseparators=True )
# self.textBox.pack( side=tk.TOP, fill=tk.BOTH, expand=tk.YES )
self.vScrollbar.configure( command=self.textBox.yview ) # link the scrollbar to the text box
self.textBox.setTextChangeCallback( self.onTextChange )
self.createEditorKeyboardBindings()
#self.createMenuBar()
self.createContextMenu() # Enable right-click menu
dataWidget = self.textBox
headerWidget.pack()
dataWidget.pack( side=tk.TOP, fill=tk.BOTH, expand=tk.YES )
widgetFrame.pack( side=tk.TOP, fill=tk.BOTH, expand=tk.YES, padx=4, pady=2 )
var = 'TB'
else: # it's not one we recognise / usually expect
widgetFrame = Frame( self )
headerWidget = Label( widgetFrame, text=headerText )
dataWidget = Label( widgetFrame )
headerWidget.pack()
dataWidget.pack( side=tk.TOP, fill=tk.BOTH, expand=tk.YES )
widgetFrame.pack( side=tk.LEFT, padx=4, pady=2 )
var = None
# Doesn't work to bind KeyPress coz the text hasn't changed yet!
dataWidget.bind( '<KeyRelease>', self.checkCurrentDisplayedRowData )
dataWidget.bind( '<FocusIn>', self.checkCurrentDisplayedRowData )
dataWidget.bind( '<FocusOut>', self.checkCurrentDisplayedRowData )
self.widgets.append( (var,dataWidget) )
self.numLabel = Label( idFrame )
self.numLabel.pack( side=tk.RIGHT, padx=8, pady=2 ) # Goes to right of some data widgets
self.setStatus() # Clear it
BiblelatorGlobals.theApp.setReadyStatus() # So it doesn't get left with an error message on it
# end of TSVEditWindowAddon.buildWidgets function
def _spinToNewRow( self, event=None ) -> None:
"""
Handle a new row number from the row spinbox.
"""
fnPrint( debuggingThisModule, f"_spinToNewRow( {event} ) from {self.current_row}" )
if self.current_row: # The last row might have changed
self.retrieveCurrentRowData( updateTable=True ) # in case current row was edited
self._gotoRow()
# end of TSVEditWindowAddon._spinToNewRow function
def _gotoRow( self, event=None, force:bool=False, notifyMain:bool=True ) -> None:
"""
Handle a new row number.
"""
fnPrint( debuggingThisModule, f"_gotoRow( {event}, f={force}, nM={notifyMain} ) from {self.current_row}" )
#dPrint( 'Never', debuggingThisModule, dir(event) )
row = self.rowNumberVar.get()
vPrint( 'Normal', debuggingThisModule, f" _gotoRow got {row} (was {self.current_row})" )
# Check for bad numbers (they must have manually entered them as spinner doesn't allow this)
if row < 1:
self.rowNumberVar.set( 1 ); return
if row > self.numDataRows:
self.rowNumberVar.set( self.numDataRows ); return
assert 1 <= row <= self.numDataRows
if row==self.current_row and not force: return # Nothing to do here
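# Remember that self.tsvTable[0] is the header row, so displayed (1-based) row n is self.tsvTable[n]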
currentRowData = self.tsvTable[row]
self.numLabel.configure( text=f'Have {self.numDataRows} rows (plus header)' )
if self.thisBookUSFMCode is None:
self.thisBookUSFMCode = currentRowData[self.bookColumn]
self.BBB = BibleOrgSysGlobals.loadedBibleBooksCodes.getBBBFromUSFMAbbreviation( self.thisBookUSFMCode )
elif currentRowData[self.bookColumn] != self.thisBookUSFMCode:
logging.critical( f"Row {row} seems to have a different book code '{currentRowData[self.bookColumn]}' from expected '{self.thisBookUSFMCode}'" )
C, V, = currentRowData[self.chapterColumn], currentRowData[self.verseColumn]
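# The TSV data uses 'front'/'intro' for introductory material -- map these to the numeric C/V values expected by SimpleVerseKey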
if C == 'front': C = '-1'
if V == 'intro': V = '0'
newVerseKey = SimpleVerseKey( self.BBB,C,V )
if newVerseKey != self.currentVerseKey: # we've changed
self.currentVerseKey = SimpleVerseKey( self.BBB,C,V )
if notifyMain and not self.loading:
BiblelatorGlobals.theApp.gotoBCV( self.BBB, C,V, 'TSVEditWindowAddon._gotoRow' ) # Update main window (and from there, other child windows)
assert len(currentRowData) == self.num_columns
for j, (var,dataWidget) in enumerate( self.widgets ):
# dPrint( 'Info', debuggingThisModule, f"{j}/ {dir(dataWidget) if j==0 else dataWidget.winfo_name}")
if var=='TB': self.setAllText( currentRowData[j].replace( '<br>', '\n' ) ) # For TextBox
elif var: var.set( currentRowData[j] ) # For Entries
else: dataWidget.configure( text=currentRowData[j] ) # For Labels
self.prevFrame1.fill( row-3, self.tsvTable[row-3] if row>3 else None )
self.prevFrame2.fill( row-2, self.tsvTable[row-2] if row>2 else None )
self.prevFrame3.fill( row-1, self.tsvTable[row-1] if row>1 else None )
self.nextFrame1.fill( row+1, self.tsvTable[row+1] if row<self.numDataRows else None )
self.nextFrame2.fill( row+2, self.tsvTable[row+2] if row<self.numDataRows-1 else None )
self.nextFrame3.fill( row+3, self.tsvTable[row+3] if row<self.numDataRows-2 else None )
self.current_row = row
# Update button states
self.setStatus() # Clear it
self.topButton.configure( state=tk.NORMAL if row>1 else tk.DISABLED )
self.bottomButton.configure( state=tk.NORMAL if row<self.numDataRows else tk.DISABLED )
self.moveUpButton.configure( state=tk.NORMAL if row>1 else tk.DISABLED )
self.moveDownButton.configure( state=tk.NORMAL if row<self.numDataRows else tk.DISABLED )
self.deleteRowButton.configure( state=tk.NORMAL if self.numDataRows else tk.DISABLED )
self.checkCurrentDisplayedRowData() # Update field states
# end of TSVEditWindowAddon._gotoRow function
def setCurrentVerseKey( self, newVerseKey:SimpleVerseKey ) -> None:
"""
Called to set the current verse key.
Note that newVerseKey can be None.
"""
fnPrint( debuggingThisModule, f"setCurrentVerseKey( {newVerseKey.getShortText()} )" )
if debuggingThisModule or BibleOrgSysGlobals.debugFlag:
BiblelatorGlobals.theApp.setDebugText( "BRW setCurrentVerseKey…" )
if newVerseKey is None:
self.currentVerseKey = None
self.maxChaptersThisBook = self.maxVersesThisChapter = 0
return
# If we get this far, it must be a real verse key
assert isinstance( newVerseKey, SimpleVerseKey )
self.currentVerseKey = newVerseKey
BBB = self.currentVerseKey.getBBB()
self.maxChaptersThisBook = self.getNumChapters( BBB )
self.maxVersesThisChapter = self.getNumVerses( BBB, self.currentVerseKey.getChapterNumber() )
# end of BibleResourceWindowAddon.setCurrentVerseKey
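# The navigation and row-manipulation handlers below mostly follow the same pattern:
#   save any edits in the current row back into the table (retrieveCurrentRowData),
#   adjust rowNumberVar if the row number changes, then call _gotoRow() to refresh the display.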
def gotoTop( self, event=None ) -> None:
"""
"""
fnPrint( debuggingThisModule, f"gotoTop( {event} )" )
assert self.current_row > 1
self.retrieveCurrentRowData( updateTable=True ) # in case current row was edited
self.rowNumberVar.set( 1 )
self._gotoRow() # Refresh
# end of TSVEditWindowAddon.gotoTop function
def gotoBottom( self, event=None ) -> None:
"""
"""
fnPrint( debuggingThisModule, f"gotoBottom( {event} )" )
assert self.current_row < self.numDataRows
self.retrieveCurrentRowData( updateTable=True ) # in case current row was edited
self.rowNumberVar.set( self.numDataRows )
self._gotoRow() # Refresh
# end of TSVEditWindowAddon.gotoBottom function
def doMoveUp( self, event=None ) -> None:
"""
"""
fnPrint( debuggingThisModule, f"doMoveUp( {event} )" )
assert self.current_row > 1
currentRowData = self.retrieveCurrentRowData( updateTable=False ) # in case current row was edited
self.tsvTable[self.current_row-1], self.tsvTable[self.current_row] = currentRowData, self.tsvTable[self.current_row-1]
self.rowNumberVar.set( self.current_row - 1 ) # Stay on the same (moved-up) row
self._gotoRow() # Refresh
# end of TSVEditWindowAddon.doMoveUp function
def doMoveDown( self, event=None ) -> None:
"""
"""
fnPrint( debuggingThisModule, f"doMoveDown( {event} )" )
assert self.current_row < self.numDataRows
currentRowData = self.retrieveCurrentRowData( updateTable=False ) # in case current row was edited
self.tsvTable[self.current_row], self.tsvTable[self.current_row+1] = self.tsvTable[self.current_row+1], currentRowData
self.rowNumberVar.set( self.current_row + 1 ) # Stay on the same (moved-down) row
self._gotoRow() # Refresh
# end of TSVEditWindowAddon.doMoveDown function
def doAddBefore( self, event=None ) -> None:
"""
"""
fnPrint( debuggingThisModule, f"doAddBefore( {event} )" )
currentRowData = self.retrieveCurrentRowData( updateTable=True ) # in case current row was edited
newRowData = currentRowData.copy()
newRowData[self.idColumn] = self.generateID()
newRowData[self.occurenceColumn] = '1'
newRowData[self.supportReferenceColumn] = newRowData[self.origQuoteColumn] = ''
newRowData[self.GLQuoteColumn] = newRowData[self.occurrenceNoteColumn] = ''
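# Inserting at index current_row pushes the existing row down, so the new row takes over the current row number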
self.tsvTable.insert( self.current_row, newRowData )
self.numDataRows += 1
assert len(self.tsvTable) == self.numDataRows + 1
self._gotoRow( force=True ) # Stay on the same row number (which is the new row), but cause refresh
# end of TSVEditWindowAddon.doAddBefore function
def doAddAfter( self, event=None ) -> None:
"""
"""
fnPrint( debuggingThisModule, f"doAddAfter( {event} )" )
currentRowData = self.retrieveCurrentRowData( updateTable=True ) # in case current row was edited
newRowData = currentRowData.copy()
newRowData[self.idColumn] = self.generateID()
newRowData[self.occurenceColumn] = '1'
newRowData[self.supportReferenceColumn] = newRowData[self.origQuoteColumn] = ''
newRowData[self.GLQuoteColumn] = newRowData[self.occurrenceNoteColumn] = ''
self.tsvTable.insert( self.current_row+1, newRowData )
self.numDataRows += 1
assert len(self.tsvTable) == self.numDataRows + 1
self.rowNumberVar.set( self.current_row + 1 ) # Go to the new (=next) row
self._gotoRow() # Refresh
# end of TSVEditWindowAddon.doAddAfter function
def doDeleteRow( self, event=None ) -> None:
"""
"""
fnPrint( debuggingThisModule, f"doDeleteRow( {event} )" )
assert self.numDataRows
self.deletedRow = self.tsvTable.pop( self.current_row )
self.numDataRows -= 1
assert len(self.tsvTable) == self.numDataRows + 1
self._gotoRow( force=True ) # Stay on the same row, but cause refresh
# end of TSVEditWindowAddon.doDeleteRow function
def retrieveCurrentRowData( self, updateTable:bool ) -> List[str]:
"""
Get the data out of the displayed boxes and return it.
If updateTable is True, also put any changed data back into the table.
"""
fnPrint( debuggingThisModule, f"retrieveCurrentRowData( uT={updateTable}) for {self.current_row}" )
if not self.current_row: return # Still setting up -- nothing to do here yet
# Extract the data out of the various Label, Entry, and TextBox widgets
retrievedRowData = [''] * self.num_columns # Create blank row first
for j, (var,dataWidget) in enumerate( self.widgets ):
# dPrint( 'Info', debuggingThisModule, f"{j}/ {dir(dataWidget) if j==0 else dataWidget.winfo_name}")
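# The OccurrenceNote column stores line breaks as '<br>' in the file, so convert back from the '\n' used in the text box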
if var=='TB': retrievedRowData[j] = self.getAllText().replace( '\n', '<br>' ) # For TextBox
elif var: retrievedRowData[j] = var.get() # For Entries
else: retrievedRowData[j] = dataWidget['text'] # For Labels
assert len(retrievedRowData) == self.num_columns
# dPrint( 'Quiet', debuggingThisModule, f" got retrieveCurrentRowData: {retrievedRowData}" )
if retrievedRowData == [''] * self.num_columns: # i.e., if every field is empty
logging.critical( f"WHAT WENT WRONG HERE: {self.current_row}" )
return retrievedRowData
# Now we can replace that row in the table (if requested)
vPrint( 'Never', debuggingThisModule, f" Row {self.current_row} has changed: {retrievedRowData != self.tsvTable[self.current_row]}" )
if updateTable and retrievedRowData != self.tsvTable[self.current_row]:
vPrint( 'Quiet', debuggingThisModule, f"\nRow {self.current_row}: replace {self.tsvTable[self.current_row]}\n with {retrievedRowData}" )
self.tsvTable[self.current_row] = retrievedRowData
return retrievedRowData
# end of TSVEditWindowAddon.retrieveCurrentRowData
def generateID( self ) -> str:
"""
Generate 4-character random ID (starting with a lowercase letter)
Theoretically they only have to be unique within a verse,
but we make them unique within the whole table/file.
"""
fnPrint( debuggingThisModule, "generateID()" )
while True:
newID = random.choice( 'abcdefghijklmnopqrstuvwxyz') \
+ random.choice( 'abcdefghijklmnopqrstuvwxyz0123456789' ) \
+ random.choice( 'abcdefghijklmnopqrstuvwxyz0123456789' ) \
+ random.choice( 'abcdefghijklmnopqrstuvwxyz0123456789' )
if newID not in self.allExistingIDs:
self.allExistingIDs.add( newID )
return newID
# end of TSVEditWindowAddon.generateID function
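# Illustrative note (not from the original code): generated IDs look like 'a3x9' -- one lowercase letter
# followed by three characters from [a-z0-9], giving 26 * 36**3 = 1,213,056 possibilities,
# so the retry loop above almost always succeeds on its first attempt.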
def checkCurrentDisplayedRowData( self, event=None ) -> bool:
"""
Checks the data in the display window for basic errors and warnings,
e.g., missing book code, verse numbers out of sequence, etc., etc.
Called when keys are pressed and when focus changes between widgets.
Updates widget colours and the status bar to signal to the user.
"""
fnPrint( debuggingThisModule, "checkCurrentDisplayedRowData()" )
# if self.loading: return
currentRowData = self.retrieveCurrentRowData( updateTable=False )
self.maxChaptersThisBook = self.getNumChapters( self.BBB )
currentC = self.currentVerseKey.getChapterNumber()
self.maxVersesThisChapter = self.getNumVerses( self.BBB, currentC )
errorList = []
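# Check each displayed field in turn; the first error found (if any) is shown in the status bar below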
for j, (fieldData,(var,widget)) in enumerate( zip( currentRowData, self.widgets ) ):
haveError = False
if j == self.bookColumn: # Label
if not fieldData:
errorList.append( f"Missing book (name) field: '{fieldData}'" ); haveError = True
elif fieldData != self.thisBookUSFMCode:
errorList.append( f"Wrong '{fieldData}' book (name) field -- expected '{self.thisBookUSFMCode}'" ); haveError = True
elif j == self.chapterColumn: # Entry
if not fieldData:
errorList.append( f"Missing chapter (number) field" ); haveError = True
elif fieldData not in ('front',) and not fieldData.isdigit():
errorList.append( f"Invalid chapter (number) field: '{fieldData}'" ); haveError = True
elif fieldData.isdigit(): # We expect chapter numbers to increment by 1
intC = int( fieldData )
if intC < -1:
errorList.append( f"Invalid chapter (number) field: '{fieldData}'" ); haveError = True
elif intC > self.maxChaptersThisBook:
errorList.append( f"Invalid chapter (number) field: '{fieldData}' in {self.BBB} with {self.maxChaptersThisBook} (max) chapters" ); haveError = True
else:
lastC = nextC = None
if self.current_row > 1: lastC = self.tsvTable[self.current_row-1][self.chapterColumn]
if self.current_row < self.numDataRows: nextC = self.tsvTable[self.current_row+1][self.chapterColumn]
if lastC and lastC.isdigit() and intC not in (int(lastC), int(lastC)+1):
errorList.append( f"Unexpected chapter (number) field: '{fieldData}' after {lastC}" ); haveError = True
elif nextC and nextC.isdigit() and intC not in (int(nextC), int(nextC)-1):
errorList.append( f"Unexpected chapter (number) field: '{fieldData}' before {nextC}" ); haveError = True
elif j == self.verseColumn: # Entry
if not fieldData:
errorList.append( f"Missing verse (number) field" ); haveError = True
elif fieldData not in ('intro',) and not fieldData.isdigit():
errorList.append( f"Invalid verse (number) field: '{fieldData}'" ); haveError = True
# if fieldData == 'intro' and currentRowData[self.chapterColumn] != 'front':
# errorList.append( f"Unexpected verse (number) field: '{fieldData}' when Chapter is '{currentRowData[self.chapterColumn]}'" ); haveError = True
if fieldData.isdigit(): # We expect verse numbers to increment
intV = int( fieldData )
if intV < 0:
errorList.append( f"Invalid verse (number) field: '{fieldData}'" ); haveError = True
elif intV > self.maxVersesThisChapter:
errorList.append( f"Invalid verse (number) field: '{fieldData}' in {self.BBB} {currentC} with {self.maxVersesThisChapter} (max) verses" ); haveError = True
else:
lastV = nextV = None
if self.current_row > 1: lastV = self.tsvTable[self.current_row-1][self.verseColumn]
if self.current_row < self.numDataRows: nextV = self.tsvTable[self.current_row+1][self.verseColumn]
if lastV and lastV.isdigit() and intV < int(lastV):
errorList.append( f"Unexpected verse (number) field: '{fieldData}' after {lastV}" ); haveError = True
elif nextV and nextV.isdigit() and intV > int(nextV):
errorList.append( f"Unexpected verse (number) field: '{fieldData}' before {nextV}" ); haveError = True
elif j == self.idColumn:
if len(fieldData) < 4:
errorList.append( f"ID field '{fieldData}' is too short" )
anyErrors = True
elif j == self.supportReferenceColumn: # Label
for badText in ('...',' … ',' '):
if badText in fieldData:
errorList.append( f"Unallowed '{badText}' in '{fieldData}'" ); haveError = True
if fieldData:
if fieldData[0] == ' ':
errorList.append( f"Unexpected leading space(s) in '{fieldData}'" ); haveError = True
if fieldData[-1] == ' ':
errorList.append( f"Unexpected trailing space(s) in '{fieldData}'" ); haveError = True
elif j == self.origQuoteColumn: # Entry
for badText in ('...',' … ',' …','… ', ' '):
if badText in fieldData:
errorList.append( f"Unallowed '{badText}' in '{fieldData}'" ); haveError = True
break
if fieldData:
if fieldData[0] == ' ':
errorList.append( f"Unexpected leading space(s) in '{fieldData}'" ); haveError = True
if fieldData[-1] == ' ':
errorList.append( f"Unexpected trailing space(s) in '{fieldData}'" ); haveError = True
# for appWin in BiblelatorGlobals.theApp.childWindows:
# if appWin.windowType == 'InternalBibleResourceWindow' \
# and '_ugnt' in appWin.moduleID:
# print( f"GREAT!!!! Found {appWin.windowType} {appWin.moduleID} ")
for iB,controllingWindowList in BiblelatorGlobals.theApp.internalBibles:
if iB.abbreviation == 'UGNT':
# dPrint( 'Info', debuggingThisModule, f"Found {iB.abbreviation} {iB.getAName()} ")
UGNTtext = iB.getVerseText( self.currentVerseKey )
print( f"Got UGNT {UGNTtext} for {self.currentVerseKey.getShortText()}" )
if '…' in fieldData:
quoteBits = fieldData.split( '…' )
for quoteBit in quoteBits:
if quoteBit not in UGNTtext:
errorList.append( f"Can't find OrigQuote component in UGNT: '{quoteBit}'" ); haveError = True
elif fieldData not in UGNTtext:
errorList.append( f"Can't find OrigQuote in UGNT: '{fieldData}'" ); haveError = True
break
elif j == self.occurenceColumn: # Entry
if not fieldData:
errorList.append( f"Missing occurrence (number) field" ); haveError = True
elif currentRowData[self.origQuoteColumn] and fieldData not in '123456789':
errorList.append( f"Unexpected occurrence (number) field: '{fieldData}'" ); haveError = True
elif not currentRowData[self.origQuoteColumn] and fieldData != '0':
errorList.append( f"Unexpected occurrence (number) field: '{fieldData}'" ); haveError = True
elif j == self.GLQuoteColumn: # Entry
for badText in ('...',' … ',' …','… ', ' '):
if badText in fieldData:
errorList.append( f"Unallowed '{badText}' in '{fieldData}'" ); haveError = True
break
if fieldData:
if fieldData[0] == ' ':
errorList.append( f"Unexpected leading space(s) in '{fieldData}'" ); haveError = True
if fieldData[-1] == ' ':
errorList.append( f"Unexpected trailing space(s) in '{fieldData}'" ); haveError = True
if fieldData not in ('Connecting Statement:', 'General Information:'):
for iB,controllingWindowList in BiblelatorGlobals.theApp.internalBibles:
if iB.abbreviation == 'ULT':
# dPrint( 'Info', debuggingThisModule, f"Found {iB.abbreviation} {iB.getAName()} ")
ULTtext = iB.getVerseText( self.currentVerseKey )
print( f"Got {ULTtext} for {self.currentVerseKey.getShortText()}" )
if '…' in fieldData:
quoteBits = fieldData.split( '…' )
for quoteBit in quoteBits:
if quoteBit not in ULTtext:
errorList.append( f"Can't find GLQuote component '{quoteBit}' in ULT: {ULTtext}" ); haveError = True
elif fieldData not in ULTtext: # Show non-break space
errorList.append( f"Can't find GLQuote '{fieldData.replace(' ','~')}' in ULT: {ULTtext}" ); haveError = True
break
elif j == self.occurrenceNoteColumn: # TextBox
for badText in ('...',' ',' … '):
if badText in fieldData:
if badText == ' ': # These can be hard to see
ix = fieldData.index( ' ' )
snippet = fieldData[max(0,ix-10):ix+12]
if ix>10: snippet = f'…{snippet}'
if ix<len(fieldData)-12: snippet = f'{snippet}…'
else:
snippet = fieldData
errorList.append( f"Unexpected '{badText}' in '{snippet}'" ); haveError = True
for lChar,rChar in (('(',')'),('[',']'),('{','}')):
lCount, rCount = fieldData.count(lChar), fieldData.count(rChar)
if lCount != rCount:
errorList.append( f"Unmatched {lCount} '{lChar}' chars vs {rCount} '{rChar}' chars in '{fieldData}'" ); haveError = True
if fieldData:
if fieldData[0] == ' ':
errorList.append( f"Unexpected leading space(s) in '{fieldData}'" ); haveError = True
if fieldData[-1] == ' ':
errorList.append( f"Unexpected trailing space(s) in '{fieldData}'" ); haveError = True
if haveError:
if var == 'TB':
widget.configure( bg='orange' )
else: # ttk Label or Entry
widget.configure( style='bad.TLabel' )
anyErrors = True
else: # no error
if var == 'TB':
widget.configure( bg=self.defaultBackgroundColour )
else: # ttk Label or Entry
widget.configure( style='good.TLabel' )
currentRowData = self.retrieveCurrentRowData( updateTable=False )
if errorList:
vPrint( 'Normal', debuggingThisModule, f"Found {len(errorList)} errors: {errorList[0]}" )
self.setErrorStatus( errorList[0] )
return True
self.setStatus() # Clear it
return False
# end of TSVEditWindowAddon.checkCurrentDisplayedRowData function
def validateTSVTable( self ) -> int:
"""
Checks the entire table (other than headers)
Returns the number of errors
"""
fnPrint( debuggingThisModule, f"TSVEditWindowAddon.validateTSVTable() for {self.BBB}" )
if 'tsvTable' not in self.__dict__ or not self.tsvTable:
return 0
num_errors = 0
self.allExistingIDs = set()
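# Start enumerating at 2 so reported row numbers match the 1-based file line numbers (line 1 is the header)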
for j, row in enumerate( self.tsvTable[1:], start=2 ):
bkCode, C, V, thisID = row[self.bookColumn], row[self.chapterColumn], row[self.verseColumn], row[self.idColumn]
if not bkCode:
print( f" Missing USFM book id (expected '{self.thisBookUSFMCode}') in row {j}: {row}" )
num_errors += 1
elif bkCode != self.thisBookUSFMCode:
print( f" Bad USFM book id '{bkCode}' (expected '{self.thisBookUSFMCode}') in row {j}: {row}" )
num_errors += 1
if not C:
print( f" Missing chapter field in row {j}: {row}" )
num_errors += 1
if not V:
print( f" Missing verse field in row {j}: {row}" )
num_errors += 1
if thisID in self.allExistingIDs:
print( f" Already had ID='{thisID}'" )
num_errors += 1
else:
self.allExistingIDs.add( thisID )
vPrint( 'Normal', debuggingThisModule, f" validateTSV returning {num_errors} errors" )
return num_errors
# end of TSVEditWindowAddon.validateTSVTable()
def createEditorKeyboardBindings( self ):
"""
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.createEditorKeyboardBindings()" )
for name,commandFunction in ( #('Paste',self.doPaste), ('Cut',self.doCut),
#('Undo',self.doUndo), ('Redo',self.doRedo),
('Find',self.doBoxFind), ('Refind',self.doBoxRefind),
('Save',self.doSave),
('ShowMain',self.doShowMainWindow),
):
#dPrint( 'Quiet', debuggingThisModule, "TEW CheckLoop", (name,BiblelatorGlobals.theApp.keyBindingDict[name][0],BiblelatorGlobals.theApp.keyBindingDict[name][1],) )
if name in BiblelatorGlobals.theApp.keyBindingDict:
assert (name,BiblelatorGlobals.theApp.keyBindingDict[name][0],) not in self.myKeyboardBindingsList # Checked inside the if so a missing binding is logged below rather than raising KeyError
for keyCode in BiblelatorGlobals.theApp.keyBindingDict[name][1:]:
#dPrint( 'Quiet', debuggingThisModule, " TEW Bind {} for {}".format( repr(keyCode), repr(name) ) )
self.textBox.bind( keyCode, commandFunction )
if BibleOrgSysGlobals.debugFlag:
assert keyCode not in self.myKeyboardShortcutsList
self.myKeyboardShortcutsList.append( keyCode )
self.myKeyboardBindingsList.append( (name,BiblelatorGlobals.theApp.keyBindingDict[name][0],) )
else: logging.critical( 'No key binding available for {}'.format( repr(name) ) )
# end of TSVEditWindowAddon.createEditorKeyboardBindings()
def createMenuBar( self ):
"""
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.createMenuBar()" )
self.menubar = tk.Menu( self )
#self['menu'] = self.menubar
self.configure( menu=self.menubar ) # alternative
fileMenu = tk.Menu( self.menubar, tearoff=False )
self.menubar.add_cascade( menu=fileMenu, label=_('File'), underline=0 )
fileMenu.add_command( label=_('Save'), underline=0, command=self.doSave, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Save')][0] )
fileMenu.add_command( label=_('Save as…'), underline=5, command=self.doSaveAs )
#fileMenu.add_separator()
#subfileMenuImport = tk.Menu( fileMenu, tearoff=False )
#subfileMenuImport.add_command( label=_('USX'), underline=0, command=self.notWrittenYet )
#fileMenu.add_cascade( label=_('Import'), underline=0, menu=subfileMenuImport )
#subfileMenuExport = tk.Menu( fileMenu, tearoff=False )
#subfileMenuExport.add_command( label=_('USX'), underline=0, command=self.notWrittenYet )
#subfileMenuExport.add_command( label=_('HTML'), underline=0, command=self.notWrittenYet )
#fileMenu.add_cascade( label=_('Export'), underline=0, menu=subfileMenuExport )
fileMenu.add_separator()
fileMenu.add_command( label=_('Info…'), underline=0, command=self.doShowInfo, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Info')][0] )
fileMenu.add_separator()
fileMenu.add_command( label=_('Close'), underline=0, command=self.doClose, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Close')][0] )
editMenu = tk.Menu( self.menubar )
self.menubar.add_cascade( menu=editMenu, label=_('Edit'), underline=0 )
editMenu.add_command( label=_('Undo'), underline=0, command=self.doUndo, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Undo')][0] )
editMenu.add_command( label=_('Redo'), underline=0, command=self.doRedo, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Redo')][0] )
editMenu.add_separator()
editMenu.add_command( label=_('Cut'), underline=2, command=self.doCut, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Cut')][0] )
editMenu.add_command( label=_('Copy'), underline=0, command=self.doCopy, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Copy')][0] )
editMenu.add_command( label=_('Paste'), underline=0, command=self.doPaste, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Paste')][0] )
editMenu.add_separator()
editMenu.add_command( label=_('Delete'), underline=0, command=self.doDelete )
editMenu.add_command( label=_('Select all'), underline=0, command=self.doSelectAll, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('SelectAll')][0] )
searchMenu = tk.Menu( self.menubar )
self.menubar.add_cascade( menu=searchMenu, label=_('Search'), underline=0 )
searchMenu.add_command( label=_('Goto line…'), underline=0, command=self.doGotoWindowLine, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Line')][0] )
searchMenu.add_separator()
searchMenu.add_command( label=_('Find…'), underline=0, command=self.doBoxFind, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Find')][0] )
searchMenu.add_command( label=_('Find again'), underline=5, command=self.doBoxRefind, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Refind')][0] )
searchMenu.add_command( label=_('Replace…'), underline=0, command=self.doBoxFindReplace )
#searchMenu.add_separator()
#searchMenu.add_command( label=_('Grep…'), underline=0, command=self.onGrep )
## gotoMenu = tk.Menu( self.menubar )
## self.menubar.add_cascade( menu=gotoMenu, label=_('Goto'), underline=0 )
## gotoMenu.add_command( label=_('Previous book'), underline=0, command=self.notWrittenYet )
## gotoMenu.add_command( label=_('Next book'), underline=0, command=self.notWrittenYet )
## gotoMenu.add_command( label=_('Previous chapter'), underline=0, command=self.notWrittenYet )
## gotoMenu.add_command( label=_('Next chapter'), underline=0, command=self.notWrittenYet )
## gotoMenu.add_command( label=_('Previous verse'), underline=0, command=self.notWrittenYet )
## gotoMenu.add_command( label=_('Next verse'), underline=0, command=self.notWrittenYet )
## gotoMenu.add_separator()
## gotoMenu.add_command( label=_('Forward'), underline=0, command=self.notWrittenYet )
## gotoMenu.add_command( label=_('Backward'), underline=0, command=self.notWrittenYet )
## gotoMenu.add_separator()
## gotoMenu.add_command( label=_('Previous list item'), underline=0, command=self.notWrittenYet )
## gotoMenu.add_command( label=_('Next list item'), underline=0, command=self.notWrittenYet )
## gotoMenu.add_separator()
## gotoMenu.add_command( label=_('Book'), underline=0, command=self.notWrittenYet )
viewMenu = tk.Menu( self.menubar, tearoff=False )
self.menubar.add_cascade( menu=viewMenu, label=_('View'), underline=0 )
viewMenu.add_command( label=_('Larger text'), underline=0, command=self.OnFontBigger )
viewMenu.add_command( label=_('Smaller text'), underline=1, command=self.OnFontSmaller )
viewMenu.add_separator()
viewMenu.add_checkbutton( label=_('Status bar'), underline=9, variable=self._showStatusBarVar, command=self.doToggleStatusBar )
toolsMenu = tk.Menu( self.menubar, tearoff=False )
self.menubar.add_cascade( menu=toolsMenu, label=_('Tools'), underline=0 )
toolsMenu.add_command( label=_('Options…'), underline=0, command=self.notWrittenYet )
windowMenu = tk.Menu( self.menubar, tearoff=False )
self.menubar.add_cascade( menu=windowMenu, label=_('Window'), underline=0 )
windowMenu.add_command( label=_('Bring in'), underline=0, command=self.notWrittenYet )
windowMenu.add_separator()
windowMenu.add_command( label=_('Show main window'), underline=0, command=self.doShowMainWindow, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('ShowMain')][0] )
if BibleOrgSysGlobals.debugFlag:
debugMenu = tk.Menu( self.menubar, tearoff=False )
self.menubar.add_cascade( menu=debugMenu, label=_('Debug'), underline=0 )
#debugMenu.add_command( label=_('View settings…'), underline=5, command=self.doViewSettings )
#debugMenu.add_separator()
debugMenu.add_command( label=_('View log…'), underline=5, command=self.doViewLog )
helpMenu = tk.Menu( self.menubar, name='help', tearoff=False )
self.menubar.add_cascade( menu=helpMenu, label=_('Help'), underline=0 )
helpMenu.add_command( label=_('Help…'), underline=0, command=self.doHelp, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Help')][0] )
helpMenu.add_separator()
helpMenu.add_command( label=_('About…'), underline=0, command=self.doAbout, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('About')][0] )
# end of TSVEditWindowAddon.createMenuBar
def setWindowGroup( self, newGroup:str ) -> None:
"""
Set the Bible group for the window.
Ideally we wouldn't need this info to be stored in both of these class variables.
"""
fnPrint( debuggingThisModule, _("BibleWindowAddon.setWindowGroup( {} ) for {}").format( newGroup, self.genericWindowType ) )
if debuggingThisModule or BibleOrgSysGlobals.debugFlag or BibleOrgSysGlobals.strictCheckingFlag:
assert newGroup==DEFAULT or newGroup in BIBLE_GROUP_CODES
self._groupCode = BIBLE_GROUP_CODES[0] if newGroup==DEFAULT else newGroup
# self._groupRadioVar.set( BIBLE_GROUP_CODES.index( self._groupCode ) + 1 )
# end of BibleWindowAddon.setWindowGroup
def createContextMenu( self ) -> None:
"""
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.createContextMenu()" )
self.contextMenu = tk.Menu( self, tearoff=False )
self.contextMenu.add_command( label=_('Cut'), underline=2, command=self.doCut, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Cut')][0] )
self.contextMenu.add_command( label=_('Copy'), underline=0, command=self.doCopy, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Copy')][0] )
self.contextMenu.add_command( label=_('Paste'), underline=0, command=self.doPaste, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Paste')][0] )
self.contextMenu.add_separator()
self.contextMenu.add_command( label=_('Select all'), underline=7, command=self.doSelectAll, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('SelectAll')][0] )
#self.contextMenu.add_separator()
#self.contextMenu.add_command( label=_('Close'), underline=1, command=self.doClose, accelerator=BiblelatorGlobals.theApp.keyBindingDict[_('Close')][0] )
self.bind( '<Button-3>', self.showContextMenu ) # right-click
# end of TSVEditWindowAddon.createContextMenu
#def showContextMenu( self, e):
#self.contextMenu.post( e.x_root, e.y_root )
## end of TSVEditWindowAddon.showContextMenu
#def createToolBar( self ):
#toolbar = Frame( self, cursor='hand2', relief=tk.SUNKEN ) # bd=2
#toolbar.pack( side=tk.BOTTOM, fill=tk.X )
#Button( toolbar, text='Halt', command=self.quit ).pack( side=tk.RIGHT )
#Button( toolbar, text='Hide Resources', command=self.hideResources ).pack(side=tk.LEFT )
#Button( toolbar, text='Hide All', command=self.hideAll ).pack( side=tk.LEFT )
#Button( toolbar, text='Show All', command=self.showAll ).pack( side=tk.LEFT )
#Button( toolbar, text='Bring All', command=self.bringAll ).pack( side=tk.LEFT )
## end of TSVEditWindowAddon.createToolBar
def refreshTitle( self ):
"""
Refresh the title of the text edit window,
put an asterisk if it's modified.
"""
#if BibleOrgSysGlobals.debugFlag and debuggingThisModule:
#dPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon.refreshTitle()" )
self.title( "{}[{}] {} ({}) {}".format( '*' if self.modified() else '',
_("Text"), self.filename, self.folderpath, self.editStatus ) )
self.refreshTitleContinue()
# end of TSVEditWindowAddon.refreshTitle
def refreshTitleContinue( self ):
"""
Check if an autosave is needed,
and schedule the next refresh.
"""
#if BibleOrgSysGlobals.debugFlag and debuggingThisModule:
#dPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon.refreshTitleContinue()" )
self.after( REFRESH_TITLE_TIME, self.refreshTitle ) # Redo it so we can put up the asterisk if the text is changed
try:
if self.autosaveTime and self.modified() and not self.autosaveScheduled:
self.after( self.autosaveTime, self.doAutosave ) # Schedule an autosave since the text has been modified
self.autosaveScheduled = True
except AttributeError:
vPrint( 'Quiet', debuggingThisModule, "Autosave not set-up properly yet" )
# end of TSVEditWindowAddon.refreshTitleContinue
def OnFontBigger( self ):
"""
Make the font one point bigger
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.OnFontBigger()" )
size = self.customFont['size']
self.customFont.configure( size=size+1 )
# end of TSVEditWindowAddon.OnFontBigger
def OnFontSmaller( self ):
"""
Make the font one point smaller
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.OnFontSmaller()" )
size = self.customFont['size']
self.customFont.configure( size=size-1 )
# end of TSVEditWindowAddon.OnFontSmaller
def getAllText( self ) -> str:
"""
Returns all the TextBox text as a string.
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.getAllText()" )
allText = self.textBox.get( tkSTART, tk.END+'-1c' )
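# Convert the display-only space-substitute characters back to ordinary spaces before returning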
#if self.markMultipleSpacesFlag:
allText = allText.replace( MULTIPLE_SPACE_SUBSTITUTE, ' ' )
#if self.markTrailingSpacesFlag:
allText = allText.replace( TRAILING_SPACE_SUBSTITUTE, ' ' )
vPrint( 'Never', debuggingThisModule, f" TSVEditWindowAddon.getAllText returning ({len(allText)}) {allText!r}" )
return allText
# end of TSVEditWindowAddon.getAllText
def makeAutocompleteBox( self ) -> None:
"""
Create a pop-up listbox in order to be able to display possible autocomplete words.
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.makeAutocompleteBox()" )
if debuggingThisModule or BibleOrgSysGlobals.debugFlag or BibleOrgSysGlobals.strictCheckingFlag:
assert self.autocompleteBox is None
# Create the pop-up listbox
x, y, cx, cy = self.textBox.bbox( tk.INSERT ) # Get canvas coordinates
topLevel = tk.Toplevel( self.textBox.master )
topLevel.wm_overrideredirect(1) # Don't display window decorations (close button, etc.)
topLevel.wm_geometry( '+{}+{}' \
.format( x + self.textBox.winfo_rootx() + 2, y + cy + self.textBox.winfo_rooty() ) )
frame = tk.Frame( topLevel, highlightthickness=1, highlightcolor='darkgreen' )
frame.pack( fill=tk.BOTH, expand=tk.YES )
autocompleteScrollbar = tk.Scrollbar( frame, highlightthickness=0 )
autocompleteScrollbar.pack( side=tk.RIGHT, fill=tk.Y )
self.autocompleteBox = tk.Listbox( frame, highlightthickness=0,
relief='flat',
yscrollcommand=autocompleteScrollbar.set,
width=20, height=NUM_AUTOCOMPLETE_POPUP_LINES )
autocompleteScrollbar.configure( command=self.autocompleteBox.yview )
self.autocompleteBox.pack( side=tk.LEFT, fill=tk.BOTH )
#self.autocompleteBox.select_set( '0' )
#self.autocompleteBox.focus()
self.autocompleteBox.bind( '<KeyPress>', self.OnAutocompleteChar )
self.autocompleteBox.bind( '<Double-Button-1>', self.doAcceptAutocompleteSelection )
self.autocompleteBox.bind( '<FocusOut>', self.removeAutocompleteBox )
# end of TSVEditWindowAddon.makeAutocompleteBox
def OnAutocompleteChar( self, event ):
"""
Used by autocomplete routines in onTextChange.
Handles key presses entered into the pop-up word selection (list) box.
"""
if BibleOrgSysGlobals.debugFlag and debuggingThisModule:
#dPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon.OnAutocompleteChar( {!r}, {!r} )".format( event.char, event.keysym ) )
assert self.autocompleteBox is not None
#if event.keysym == 'ESC':
#if event.char==' ' or event.char in self.autocompleteWordChars:
#self.textBox.insert( tk.INSERT, event.char ) # Causes onTextChange which reassesses
if event.keysym == 'BackSpace':
row, column = self.textBox.index(tk.INSERT).split('.')
column = str( int(column) - 1 )
self.textBox.delete( row + '.' + column, tk.INSERT ) # parameters are fromPoint, toPoint
elif event.keysym == 'Delete':
row, column = self.textBox.index(tk.INSERT).split('.')
column = str( int(column) + 1 ) # Only works as far as the end of the line (won't delete a \n)
# Change the call below to a single parameter if you want it to work across lines
self.textBox.delete( tk.INSERT, row + '.' + column ) # parameters are fromPoint, toPoint
elif event.keysym == 'Return':
acceptAutocompleteSelection( self, includeTrailingSpace=False )
#elif event.keysym in ( 'Up', 'Down', 'Shift_R', 'Shift_L',
#'Control_L', 'Control_R', 'Alt_L',
#'Alt_R', 'parenleft', 'parenright'):
#pass
elif event.keysym == 'Escape':
self.removeAutocompleteBox()
#elif event.keysym in ( 'Delete', ): pass # Just ignore these keypresses
elif event.char:
#if event.char in '.,': acceptAutocompleteSelection( self, includeTrailingSpace=False )
self.textBox.insert( tk.INSERT, event.char ) # Causes onTextChange which reassesses
#+ (' ' if event.char in ',' else '') )
# end of TSVEditWindowAddon.OnAutocompleteChar
def doAcceptAutocompleteSelection( self, event=None ):
"""
Used by autocomplete routines in onTextChange.
Gets the chosen word and inserts the end of it into the text.
"""
if BibleOrgSysGlobals.debugFlag and debuggingThisModule:
#dPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon.doAcceptAutocompleteSelection({} )".format( event ) )
assert self.autocompleteBox is not None
acceptAutocompleteSelection( self, includeTrailingSpace=False )
# end of TSVEditWindowAddon.doAcceptAutocompleteSelection
def removeAutocompleteBox( self, event=None ):
"""
Remove the pop-up Listbox (in a Frame in a Toplevel) when it's no longer required.
Used by autocomplete routines in onTextChange.
"""
if BibleOrgSysGlobals.debugFlag and debuggingThisModule:
#dPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon.removeAutocompleteBox( {} )".format( event ) )
assert self.autocompleteBox is not None
self.textBox.focus()
self.autocompleteBox.master.master.destroy() # master is Frame, master.master is Toplevel
self.autocompleteBox = None
# end of TSVEditWindowAddon.removeAutocompleteBox
def onTextChange( self, result, *args ):
"""
Called (set-up as a call-back function) whenever the text box cursor changes
either with a mouse click or arrow keys.
Checks to see if they have moved to a new chapter/verse,
and if so, informs the parent app.
"""
if self.onTextNoChangeID:
self.after_cancel( self.onTextNoChangeID ) # Cancel any delayed checks which are scheduled
self.onTextNoChangeID = None
if self.loading: return # So we don't get called a million times for nothing
return # temp XXXX ...........................
# Prevents extra double spaces being inserted!!! WHY WHY
vPrint( 'Quiet', debuggingThisModule, f"TSVEditWindowAddon.onTextChange( {result!r}, {args} )…" )
#if 0: # Get line and column info
#lineColumn = self.textBox.index( tk.INSERT )
#dPrint( 'Quiet', debuggingThisModule, "lc", repr(lineColumn) )
#line, column = lineColumn.split( '.', 1 )
#dPrint( 'Quiet', debuggingThisModule, "l,c", repr(line), repr(column) )
#if 0: # get formatting tag info
#tagNames = self.textBox.tag_names( tk.INSERT )
#tagNames2 = self.textBox.tag_names( lineColumn )
#tagNames3 = self.textBox.tag_names( tk.INSERT + ' linestart' )
#tagNames4 = self.textBox.tag_names( lineColumn + ' linestart' )
#tagNames5 = self.textBox.tag_names( tk.INSERT + ' linestart+1c' )
#tagNames6 = self.textBox.tag_names( lineColumn + ' linestart+1c' )
#dPrint( 'Quiet', debuggingThisModule, "tN", tagNames )
#if tagNames2!=tagNames or tagNames3!=tagNames or tagNames4!=tagNames or tagNames5!=tagNames or tagNames6!=tagNames:
#dPrint( 'Quiet', debuggingThisModule, "tN2", tagNames2 )
#dPrint( 'Quiet', debuggingThisModule, "tN3", tagNames3 )
#dPrint( 'Quiet', debuggingThisModule, "tN4", tagNames4 )
#dPrint( 'Quiet', debuggingThisModule, "tN5", tagNames5 )
#dPrint( 'Quiet', debuggingThisModule, "tN6", tagNames6 )
#halt
#if 0: # show various mark strategies
#mark1 = self.textBox.mark_previous( tk.INSERT )
#mark2 = self.textBox.mark_previous( lineColumn )
#mark3 = self.textBox.mark_previous( tk.INSERT + ' linestart' )
#mark4 = self.textBox.mark_previous( lineColumn + ' linestart' )
#mark5 = self.textBox.mark_previous( tk.INSERT + ' linestart+1c' )
#mark6 = self.textBox.mark_previous( lineColumn + ' linestart+1c' )
#dPrint( 'Quiet', debuggingThisModule, "mark1", mark1 )
#if mark2!=mark1:
#dPrint( 'Quiet', debuggingThisModule, "mark2", mark1 )
#if mark3!=mark1 or mark4!=mark1 or mark5!=mark1 or mark6!=mark1:
#dPrint( 'Quiet', debuggingThisModule, "mark3", mark3 )
#if mark4!=mark3:
#dPrint( 'Quiet', debuggingThisModule, "mark4", mark4 )
#dPrint( 'Quiet', debuggingThisModule, "mark5", mark5 )
#if mark6!=mark5:
#dPrint( 'Quiet', debuggingThisModule, "mark6", mark6 )
if self.textBox.edit_modified():
#if 1:
#dPrint( 'Quiet', debuggingThisModule, 'args[0]', repr(args[0]) )
#dPrint( 'Quiet', debuggingThisModule, 'args[1]', repr(args[1]) )
#try: vPrint( 'Quiet', debuggingThisModule, 'args[2]', repr(args[2]) ) # Can be multiple characters (after autocomplete)
#except IndexError: vPrint( 'Quiet', debuggingThisModule, "No args[2]" ) # when deleting
# Handle substituted space characters
saveIndex = self.textBox.index( tk.INSERT ) # Remember where the cursor was
if args[0]=='insert' and args[1]=='insert':
before1After1 = self.textBox.get( tk.INSERT+'-2c', tk.INSERT+'+1c' ) # Get the characters before and after
if len(before1After1) == 3: before1, newChar, after1 = before1After1
else: before1 = newChar = after1 = '' # this can happen sometimes
#dPrint( 'Quiet', debuggingThisModule, '3', repr(before1), repr(newChar), repr(after1) )
# FALSE AFTER AUTOCOMPLETE assert newChar == args[2] # Char before cursor should be char just typed
if self.markMultipleSpacesFlag and newChar == ' ': # Check if we've typed multiple spaces
# NOTE: We DON'T make this into a TRAILING_SPACE_SUBSTITUTE -- too disruptive during regular typing
#elf.textBox.get( tk.INSERT+'-{}c'.format( maxCount ), tk.INSERT )
if before1 in ALL_POSSIBLE_SPACE_CHARS:
self.textBox.delete( tk.INSERT+'-2c', tk.INSERT ) # Delete previous space/substitute plus new space
self.textBox.insert( tk.INSERT, DOUBLE_SPACE_SUBSTITUTE ) # Replace with substitute
else: # check after the cursor also
nextChar = self.textBox.get( tk.INSERT, tk.INSERT+'+1c' ) # Get the following character
if nextChar in ALL_POSSIBLE_SPACE_CHARS:
self.textBox.delete( tk.INSERT+'-1c', tk.INSERT+'+1c' ) # Delete chars around cursor
self.textBox.insert( tk.INSERT, DOUBLE_SPACE_SUBSTITUTE ) # Replace with substitute
self.textBox.mark_set( tk.INSERT, saveIndex ) # Put the cursor back
elif newChar not in ' \n\r': # Check if we followed a trailing space substitute
if before1 == TRAILING_SPACE_SUBSTITUTE:
self.textBox.delete( tk.INSERT+'-2c', tk.INSERT ) # Delete trailing space substitute plus new char
self.textBox.insert( tk.INSERT, ' '+newChar ) # Replace with proper space and new char
before3After2 = self.textBox.get( tk.INSERT+'-3c', tk.INSERT+'+2c' ) # Get the pairs of characters before and after
if before1 == MULTIPLE_SPACE_SUBSTITUTE and before3After2[0] not in ALL_POSSIBLE_SPACE_CHARS:
self.textBox.delete( tk.INSERT+'-2c', tk.INSERT ) # Delete previous space substitute plus new char
self.textBox.insert( tk.INSERT, ' '+newChar ) # Replace with normal space plus new char
try:
if before3After2[3] == MULTIPLE_SPACE_SUBSTITUTE and before3After2[4] not in ALL_POSSIBLE_SPACE_CHARS:
self.textBox.delete( tk.INSERT, tk.INSERT+'+1c' ) # Delete following space substitute
self.textBox.insert( tk.INSERT, ' ' ) # Replace with normal space
self.textBox.mark_set( tk.INSERT, saveIndex ) # Put the cursor back
except IndexError: pass # Could be working at end of file
#previousText = self.getSubstitutedChararactersBeforeCursor()
elif args[0] == 'delete':
#if args[1] == 'insert': # we used the delete key
#dPrint( 'Quiet', debuggingThisModule, "Deleted" )
#elif args[1] == 'insert-1c': # we used the backspace key
#dPrint( 'Quiet', debuggingThisModule, "Backspaced" )
#else: vPrint( 'Quiet', debuggingThisModule, "What's this!", repr(args[1]) )
chars4 = self.textBox.get( tk.INSERT+'-2c', tk.INSERT+'+2c' ) # Get the characters (now forced together) around the cursor
if len(chars4) == 4: before2, before1, after1, after2 = chars4
else: before2 = before1 = after1 = after2 = '' # not sure about this
if before1 == ' ' and after1 == '\n': # Put trailing substitute
if self.markTrailingSpacesFlag:
self.textBox.delete( tk.INSERT+'-1c', tk.INSERT ) # Delete the space
self.textBox.insert( tk.INSERT, TRAILING_SPACE_SUBSTITUTE ) # Replace with trailing substitute
elif before1 in ALL_POSSIBLE_SPACE_CHARS and after1 in ALL_POSSIBLE_SPACE_CHARS: # Put multiple substitute
if self.markMultipleSpacesFlag:
self.textBox.delete( tk.INSERT+'-1c', tk.INSERT+'+1c' ) # Delete chars around cursor
self.textBox.insert( tk.INSERT, DOUBLE_SPACE_SUBSTITUTE ) # Replace with substitute
self.textBox.mark_set( tk.INSERT, saveIndex ) # Put the cursor back
if before1 == MULTIPLE_SPACE_SUBSTITUTE and after1 not in ALL_POSSIBLE_SPACE_CHARS and before2 not in ALL_POSSIBLE_SPACE_CHARS:
self.textBox.delete( tk.INSERT+'-1c', tk.INSERT ) # Delete the space substitute
self.textBox.insert( tk.INSERT, ' ' ) # Replace with normal space
if after1 == MULTIPLE_SPACE_SUBSTITUTE and before1 not in ALL_POSSIBLE_SPACE_CHARS and after2 not in ALL_POSSIBLE_SPACE_CHARS:
self.textBox.delete( tk.INSERT, tk.INSERT+'+1c' ) # Delete the space substitute
self.textBox.insert( tk.INSERT, ' ' ) # Replace with normal space
self.textBox.mark_set( tk.INSERT, saveIndex ) # Put the cursor back
# Handle auto-correct
if self.autocorrectEntries and args[0]=='insert' and args[1]=='insert':
#dPrint( 'Quiet', debuggingThisModule, "Handle autocorrect" )
previousText = getCharactersBeforeCursor( self, self.maxAutocorrectLength )
#dPrint( 'Quiet', debuggingThisModule, "previousText", repr(previousText) )
for inChars,outChars in self.autocorrectEntries:
if previousText.endswith( inChars ):
#dPrint( 'Quiet', debuggingThisModule, "Going to replace {!r} with {!r}".format( inChars, outChars ) )
# Delete the typed character(s) and replace with the new one(s)
self.textBox.delete( tk.INSERT+'-{}c'.format( len(inChars) ), tk.INSERT )
self.textBox.insert( tk.INSERT, outChars )
break
# end of auto-correct section
# Handle auto-complete
if self.autocompleteMode is not None and self.autocompleteWords and args[0] in ('insert','delete',):
#dPrint( 'Quiet', debuggingThisModule, "Handle autocomplete1" )
lastAutocompleteWordText = self.existingAutocompleteWordText
self.existingAutocompleteWordText = getWordCharactersBeforeCursor( self, self.autocompleteMaxLength )
#dPrint( 'Quiet', debuggingThisModule, "existingAutocompleteWordText: {!r}".format( self.existingAutocompleteWordText ) )
if self.existingAutocompleteWordText != lastAutocompleteWordText:
# We've had an actual change in the entered text
possibleWords = None
if len(self.existingAutocompleteWordText) >= self.autocompleteMinLength:
# See if we have any words that start with the already typed letters
#dPrint( 'Quiet', debuggingThisModule, "Handle autocomplete1A with {!r}".format( self.existingAutocompleteWordText ) )
firstLetter, remainder = self.existingAutocompleteWordText[0], self.existingAutocompleteWordText[1:]
#dPrint( 'Quiet', debuggingThisModule, "firstletter={!r} remainder={!r}".format( firstLetter, remainder ) )
try: possibleWords = [firstLetter+thisBit for thisBit in self.autocompleteWords[firstLetter] \
if thisBit.startswith(remainder) and thisBit != remainder]
except KeyError: pass
self.autocompleteOverlap = self.existingAutocompleteWordText
#dPrint( 'Quiet', debuggingThisModule, 'possibleWordsA', possibleWords )
# Maybe we haven't typed enough yet to pop-up the standard box so we look ahead using the previous word
if not possibleWords:
previousStuff = getCharactersAndWordBeforeCursor( self, self.autocompleteMaxLength )
#dPrint( 'Quiet', debuggingThisModule, "Handle autocomplete1B with {!r}".format( previousStuff ) )
firstLetter, remainder = previousStuff[0], previousStuff[1:]
#dPrint( 'Quiet', debuggingThisModule, "firstletter={!r} remainder={!r}".format( firstLetter, remainder ) )
self.autocompleteOverlap = previousStuff
#try: possibleWords = [thisBit[remainderLength:] for thisBit in self.autocompleteWords[firstLetter] \
try: possibleWords = [firstLetter+thisBit for thisBit in self.autocompleteWords[firstLetter] \
if thisBit.startswith(remainder) and thisBit != remainder]
except KeyError: pass
self.autocompleteOverlap = previousStuff
#dPrint( 'Quiet', debuggingThisModule, 'possibleWordsB', possibleWords )
if possibleWords: # we have some word(s) to pop-up for possible selection
#dPrint( 'Quiet', debuggingThisModule, "Handle autocomplete2" )
if self.autocompleteBox is None:
self.makeAutocompleteBox()
else: # the Listbox is already made -- just empty it
#dPrint( 'Quiet', debuggingThisModule, 'empty listbox' )
self.autocompleteBox.delete( 0, tk.END ) # clear the listbox completely
# Now fill the Listbox
#dPrint( 'Quiet', debuggingThisModule, 'fill listbox' )
for word in possibleWords:
if BibleOrgSysGlobals.debugFlag: assert possibleWords.count( word ) == 1
self.autocompleteBox.insert( tk.END, word )
# Do a bit more set-up
#self.autocompleteBox.pack( side=tk.LEFT, fill=tk.BOTH )
self.autocompleteBox.select_set( '0' )
self.autocompleteBox.focus()
elif self.autocompleteBox is not None:
#dPrint( 'Quiet', debuggingThisModule, 'destroy1 autocomplete listbox -- no possible words' )
self.removeAutocompleteBox()
if self.addAllNewWords \
and args[0]=='insert' and args[1]=='insert' \
and args[2] in BibleOrgSysGlobals.TRAILING_WORD_END_CHARS:
# Just finished typing a word (by typing a space or something)
word = getWordBeforeSpace( self )
if word: # in the Bible modes, we also add new words as they're typed
#dPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon: Adding/Updating autocomplete word", repr(word) )
addNewAutocompleteWord( self, word )
# NOTE: edited/deleted words aren't removed until the program restarts
elif self.autocompleteBox is not None:
#dPrint( 'Quiet', debuggingThisModule, 'destroy3 autocomplete listbox -- autocomplete is not enabled/appropriate' )
self.removeAutocompleteBox()
# end of auto-complete section
#self.lastTextChangeTime = time()
try: self.onTextNoChangeID = self.after( NO_TYPE_TIME, self.onTextNoChange ) # Reschedule no change function so we keep checking
except KeyboardInterrupt:
vPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon: Got keyboard interrupt in onTextChange (A) -- saving my file" )
self.doSave() # Sometimes the above seems to lock up
if self.onTextNoChangeID:
self.after_cancel( self.onTextNoChangeID ) # Cancel any delayed no change checks which are scheduled
self.onTextNoChangeID = None
# end of TSVEditWindowAddon.onTextChange
def onTextNoChange( self ):
"""
Called whenever the text box HASN'T CHANGED for NO_TYPE_TIME msecs.
Checks for some types of formatting errors.
"""
#dPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon.onTextNoChange" )
try: pass
except KeyboardInterrupt:
vPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon: Got keyboard interrupt in onTextNoChange (B) -- saving my file" )
self.doSave() # Sometimes the above seems to lock up
#self.after_cancel( self.onTextNoChangeID ) # Cancel any delayed no change checks which are scheduled
#self.onTextNoChangeID = None
# end of TSVEditWindowAddon.onTextNoChange
def doShowInfo( self, event=None ):
"""
Pop-up dialog giving text statistics and cursor location;
caveat (2.1): Tk insert position column counts a tab as one
character: translate to next multiple of 8 to match visual?
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.doShowInfo( {} )".format( event ) )
text = self.getEntireText()
numChars = len( text )
numLines = len( text.split( '\n' ) )
numWords = len( text.split() )
index = self.textBox.index( tk.INSERT )
atLine, atColumn = index.split('.')
grandtotal = 0
for firstLetter in self.autocompleteWords:
vPrint( 'Quiet', debuggingThisModule, "fL", firstLetter )
grandtotal += len( self.autocompleteWords[firstLetter] )
infoString = 'Current location:\n' \
+ ' Row: {}\n'.format( self.current_row ) \
+ '\nFile text statistics:\n' \
+ ' Rows: {:,}\n Columns: {:,}\n Headers: {}\n'.format( self.numDataRows, self.num_columns, self.tsvHeaders ) \
+ '\nFile info:\n' \
+ ' Name: {}\n'.format( self.filename ) \
+ ' Folder: {}\n'.format( self.folderpath ) \
+ '\nChecking status:\n' \
+ ' References & IDs: {}\n'.format( 'unknown' ) \
+ ' Order: {}\n'.format( 'unknown' ) \
+ ' Quotes: {}\n'.format( 'unknown' ) \
+ ' Links: {}\n'.format( 'unknown' ) \
+ ' Markdown: {}\n'.format( 'unknown' ) \
+ '\nSettings:\n' \
+ ' Autocorrect entries: {:,}\n Autocomplete mode: {}\n Autocomplete entries: {:,}\n Autosave time: {} secs\n Save changes automatically: {}' \
.format( len(self.autocorrectEntries), self.autocompleteMode, grandtotal, round(self.autosaveTime/1000), self.saveChangesAutomatically )
showInfo( self, _("Window Information"), infoString )
# end of TSVEditWindowAddon.doShowInfo
def doUndo( self, event=None ):
vPrint( 'Never', debuggingThisModule, "TSVEditWindowAddon.doUndo( {} )".format( event ) )
try: self.textBox.edit_undo()
except tk.TclError: showInfo( self, APP_NAME, _("Nothing to undo") )
self.textBox.update() # force refresh
# end of TSVEditWindowAddon.doUndo
def doRedo( self, event=None ):
vPrint( 'Never', debuggingThisModule, "TSVEditWindowAddon.doRedo( {} )".format( event ) )
try: self.textBox.edit_redo()
except tk.TclError: showInfo( self, APP_NAME, _("Nothing to redo") )
self.textBox.update() # force refresh
# end of TSVEditWindowAddon.doRedo
def doDelete( self, event=None ): # delete selected text, no save
vPrint( 'Never', debuggingThisModule, "TSVEditWindowAddon.doDelete( {} )".format( event ) )
if not self.textBox.tag_ranges( tk.SEL ):
showError( self, APP_NAME, _("No text selected") )
else:
self.textBox.delete( tk.SEL_FIRST, tk.SEL_LAST )
# end of TSVEditWindowAddon.doDelete
def doCut( self, event=None ) -> None:
"""
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.doCut( {} )".format( event ) )
if not self.textBox.tag_ranges( tk.SEL ):
showError( self, APP_NAME, _("No text selected") )
else:
self.doCopy() # In ChildBox class
self.doDelete()
# end of TSVEditWindowAddon.doCut
def doPaste( self, event=None ) -> None:
"""
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.doPaste( {} )".format( event ) )
dPrint( 'Never', debuggingThisModule, " doPaste: {!r} {!r}".format( event.char, event.keysym ) )
try:
text = self.selection_get( selection='CLIPBOARD')
except tk.TclError:
showError( self, APP_NAME, _("Nothing to paste") )
return
self.textBox.insert( tk.INSERT, text) # add at current insert cursor
self.textBox.tag_remove( tk.SEL, tkSTART, tk.END )
self.textBox.tag_add( tk.SEL, tk.INSERT+'-{}c'.format( len(text) ), tk.INSERT )
self.textBox.see( tk.INSERT ) # select it, so it can be cut
# end of TSVEditWindowAddon.doPaste
############################################################################
# Search menu commands
############################################################################
#def xxxdoGotoWindowLine( self, forceline=None):
#line = forceline or askinteger( APP_NAME, _("Enter line number") )
#self.textBox.update()
#self.textBox.focus()
#if line is not None:
#maxindex = self.textBox.index( tk.END+'-1c' )
#maxline = int( maxindex.split('.')[0] )
#if line > 0 and line <= maxline:
#self.textBox.mark_set( tk.INSERT, '{}.0'.format(line) ) # goto line
#self.textBox.tag_remove( tk.SEL, tkSTART, tk.END ) # delete selects
#self.textBox.tag_add( tk.SEL, tk.INSERT, 'insert + 1l' ) # select line
#self.textBox.see( tk.INSERT ) # scroll to line
#else:
#showError( self, APP_NAME, _("No such line number") )
## end of TSVEditWindowAddon.doGotoWindowLine
#def xxxdoBoxFind( self, lastkey=None):
#key = lastkey or askstring( APP_NAME, _("Enter search string") )
#self.textBox.update()
#self.textBox.focus()
#self.lastfind = key
#if key:
#nocase = self.optionsDict['caseinsens']
#where = self.textBox.search( key, tk.INSERT, tk.END, nocase=nocase )
#if not where: # don't wrap
#showError( self, APP_NAME, _("String not found") )
#else:
#pastkey = where + '+%dc' % len(key) # index past key
#self.textBox.tag_remove( tk.SEL, tkSTART, tk.END ) # remove any sel
#self.textBox.tag_add( tk.SEL, where, pastkey ) # select key
#self.textBox.mark_set( tk.INSERT, pastkey ) # for next find
#self.textBox.see( where ) # scroll display
## end of TSVEditWindowAddon.doBoxFind
#def xxxdoBoxRefind( self ):
#self.doBoxFind( self.lastfind)
## end of TSVEditWindowAddon.doBoxRefind
def doBoxFindReplace( self ):
"""
Non-modal find/change dialog
2.1: pass per-dialog inputs to callbacks, may be > 1 change dialog open
"""
newPopupWindow = tk.Toplevel( self )
newPopupWindow.title( '{} - change'.format( APP_NAME ) )
Label( newPopupWindow, text='Find text?', relief=tk.RIDGE, width=15).grid( row=0, column=0 )
Label( newPopupWindow, text='Change to?', relief=tk.RIDGE, width=15).grid( row=1, column=0 )
entry1 = BEntry( newPopupWindow )
entry2 = BEntry( newPopupWindow )
entry1.grid( row=0, column=1, sticky=tk.EW )
entry2.grid( row=1, column=1, sticky=tk.EW )
def doBoxFind(): # use my entry in enclosing scope
self.doBoxFind( entry1.get() ) # runs normal find dialog callback
def onApply():
self.onDoChange( entry1.get(), entry2.get() )
Button( newPopupWindow, text='Find', command=doBoxFind ).grid(row=0, column=2, sticky=tk.EW )
Button( newPopupWindow, text='Apply', command=onApply).grid(row=1, column=2, sticky=tk.EW )
newPopupWindow.columnconfigure( 1, weight=1 ) # expandable entries
# end of TSVEditWindowAddon.doBoxFindReplace
def onDoChange( self, findtext, changeto):
"""
on Apply in change dialog: change and refind
"""
if self.textBox.tag_ranges( tk.SEL ): # must find first
self.textBox.delete( tk.SEL_FIRST, tk.SEL_LAST)
self.textBox.insert( tk.INSERT, changeto) # deletes if empty
self.textBox.see( tk.INSERT )
self.doBoxFind( findtext ) # goto next appear
self.textBox.update() # force refresh
# end of TSVEditWindowAddon.onDoChange
############################################################################
# Utilities, useful outside this class
############################################################################
# def setFolderpath( self, newFolderpath ):
# """
# Store the folder path for where our files will be.
# We're still waiting for the filename.
# """
# if BibleOrgSysGlobals.debugFlag and debuggingThisModule:
# vPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon.setFolderpath( {} )".format( repr(newFolderpath) ) )
# assert self.filename is None
# assert self.filepath is None
# self.folderpath = newFolderpath
# # end of TSVEditWindowAddon.setFolderpath
# def setFilename( self, filename, createFile=False ):
# """
# Store the filepath to our file.
# A complement to the above function.
# Also gets the file size and last edit time so we can detect if it's changed later.
# Returns True/False success flag.
# """
# if BibleOrgSysGlobals.debugFlag and debuggingThisModule:
# vPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon.setFilename( {} )".format( repr(filename) ) )
# assert self.folderpath
# self.filename = filename
# self.filepath = os.path.join( self.folderpath, self.filename )
# if createFile: # Create a blank file
# with open( self.filepath, mode='wt', encoding='utf-8' ) as theBlankFile: pass # write nothing
# return self._checkFilepath()
# # end of TSVEditWindowAddon.setFilename
# def setPathAndFile( self, folderpath, filename ):
# """
# Store the filepath to our file.
# A more specific alternative to the above two functions. (The other alternative function is below.)
# Also gets the file size and last edit time so we can detect if it's changed later.
# Returns True/False success flag.
# """
# if BibleOrgSysGlobals.debugFlag and debuggingThisModule:
# vPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon.setPathAndFile( {}, {} )".format( repr(folderpath), repr(filename) ) )
# self.folderpath, self.filename = folderpath, filename
# self.filepath = os.path.join( self.folderpath, self.filename )
# return self._checkFilepath()
# # end of TSVEditWindowAddon.setPathAndFile
# def setFilepath( self, newFilePath ):
# """
# Store the filepath to our file. (An alternative to the above function.)
# Also gets the file size and last edit time so we can detect if it's changed later.
# Returns True/False success flag.
# """
# if BibleOrgSysGlobals.debugFlag and debuggingThisModule:
# vPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon.setFilepath( {!r} )".format( newFilePath ) )
# self.filepath = newFilePath
# self.folderpath, self.filename = os.path.split( newFilePath )
# return self._checkFilepath()
# # end of TSVEditWindowAddon.setFilepath
def _checkFilepath( self ):
"""
Checks to make sure that the file can be found and opened.
Also gets the file size and last edit time so we can detect if it's changed later.
Returns True/False success flag.
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon._checkFilepath()" )
if not os.path.isfile( self.filepath ):
showError( self, APP_NAME, _("No such filepath: {!r}").format( self.filepath ) )
return False
if not os.access( self.filepath, os.R_OK ):
showError( self, APP_NAME, _("No permission to read {!r} in {!r}").format( self.filename, self.folderpath ) )
return False
if not os.access( self.filepath, os.W_OK ):
showError( self, APP_NAME, _("No permission to write {!r} in {!r}").format( self.filename, self.folderpath ) )
return False
self.rememberFileTimeAndSize()
self.refreshTitle()
return True
# end of TSVEditWindowAddon._checkFilepath
def rememberFileTimeAndSize( self ) -> None:
"""
Just record the file modification time and size in bytes
so that we can check later if it's changed on-disk.
"""
self.lastFiletime = os.stat( self.filepath ).st_mtime
self.lastFilesize = os.stat( self.filepath ).st_size
vPrint( 'Never', debuggingThisModule, " rememberFileTimeAndSize: {} {}".format( self.lastFiletime, self.lastFilesize ) )
# end of TSVEditWindowAddon.rememberFileTimeAndSize
def setAllText( self, newText ):
"""
Sets the textBox (assumed to be enabled) to the given text
then positions the insert cursor at the BEGINNING of the text.
caller: call self.update() first if just packed, else the
initial position may be at line 2, not line 1 (2.1; Tk bug?)
"""
fnPrint( debuggingThisModule, f"TextEditWindowAddon.setAllText( ({len(newText)}) {newText!r} )" )
self.textBox.configure( state=tk.NORMAL ) # In case it was disabled
self.textBox.delete( tkSTART, tk.END ) # Delete everything that's existing
self.textBox.insert( tk.END, newText )
self.textBox.highlightAllPatterns( self.patternsToHighlight )
self.textBox.mark_set( tk.INSERT, tkSTART ) # move insert point to top
self.textBox.see( tk.INSERT ) # scroll to top, insert is set
self.textBox.edit_reset() # clear undo/redo stks
self.textBox.edit_modified( tk.FALSE ) # clear modified flag
# end of TextEditWindowAddon.setAllText
# def loadText( self ):
# """
# Opens the file, reads all the data, and sets it into the text box.
# Can also be used to RELOAD the text (e.g., if it has changed on the disk).
# Returns True/False success flag.
# """
# if BibleOrgSysGlobals.debugFlag and debuggingThisModule:
# vPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon.loadText()" )
# self.loading = True
# text = open( self.filepath, 'rt', encoding='utf-8' ).read()
# if text is None:
# showError( self, APP_NAME, 'Could not decode and open file ' + self.filepath )
# return False
# else:
# self.setAllText( text )
# self.loading = False
# return True
# # end of TSVEditWindowAddon.loadText
def getEntireText( self ) -> str:
"""
This function can be overloaded in super classes
(where the edit window might not display the entire text).
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.getEntireText()" )
self.doReassembleFile()
return self.newText
# end of TSVEditWindowAddon.getEntireText
def checkForDiskChanges( self, autoloadText:bool=False ) -> None:
"""
Check if the file has changed on disk.
If it has, and the user hasn't yet made any changes, offer to reload.
"""
#if BibleOrgSysGlobals.debugFlag and debuggingThisModule:
#dPrint( 'Quiet', debuggingThisModule, "TSVEditWindowAddon.checkForDiskChanges()" )
if self.filepath and os.path.isfile( self.filepath ) \
and ( ( self.lastFiletime and os.stat( self.filepath ).st_mtime != self.lastFiletime ) \
or ( self.lastFilesize and os.stat( self.filepath ).st_size != self.lastFilesize ) ):
if self.modified():
showError( self, APP_NAME, _("File {} has also changed on disk").format( repr(self.filename) ) )
else: # We haven't modified the file since loading it
yndResult = False
if autoloadText: yndResult = True
else: # ask the user
ynd = YesNoDialog( self, _("File {} has changed on disk. Reload?").format( repr(self.filename) ), title=_('Reload?') )
#dPrint( 'Quiet', debuggingThisModule, "yndResult", repr(ynd.result) )
if ynd.result == True: yndResult = True # Yes was chosen
if yndResult:
self.loadText() # reload
self.rememberFileTimeAndSize()
self.after( CHECK_DISK_CHANGES_TIME, self.checkForDiskChanges ) # Redo it so we keep checking
# end if TSVEditWindowAddon.checkForDiskChanges
def doReassembleFile( self ) -> None:
"""
Undoes this:
fileLines = self.originalText.split( '\n' )
if fileLines and fileLines[-1] == '':
print( "Deleting final blank line" )
fileLines = fileLines[:-1]
self.hadTrailingNL = True
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.doReassembleFile()" )
if 'tsvTable' not in self.__dict__ or not self.tsvTable:
return
self.retrieveCurrentRowData( updateTable=True ) # in case current row was edited
fileLines = ['\t'.join( rowData ) for rowData in self.tsvTable]
vPrint( 'Info', debuggingThisModule, f" Reassembled {len(fileLines)} table lines (incl. header) cf. {self.numOriginalLines} lines read" )
self.newText = '\n'.join( fileLines )
if self.hadTrailingNL: self.newText = f'{self.newText}\n'
vPrint( 'Never', debuggingThisModule, f" New text is {len(self.newText):,} characters cf. {len(self.originalText):,} characters read" )
# end of TSVEditWindowAddon.doReassembleFile
def modified( self ) -> bool:
"""
Overrides the ChildWindows one, which only works from the one TextBox
"""
self.doReassembleFile()
return self.newText != self.originalText
# end of TSVEditWindowAddon.modified
def doSaveAs( self, event=None ) -> None:
"""
Called if the user requests a saveAs from the GUI.
"""
fnPrint( debuggingThisModule, f"TSVEditWindowAddon.doSaveAs( {event} ) with {self.modified()}" )
if self.modified():
saveAsFilepath = asksaveasfilename( parent=self )
#dPrint( 'Quiet', debuggingThisModule, "saveAsFilepath", repr(saveAsFilepath) )
if saveAsFilepath:
if self.setFilepath( saveAsFilepath ):
self.doSave()
# end of TSVEditWindowAddon.doSaveAs
def doSave( self, event=None ) -> None:
"""
Called if the user requests a save from the GUI.
"""
fnPrint( debuggingThisModule, f"TSVEditWindowAddon.doSave( {event} ) with {self.modified()}" )
if self.modified():
if self.folderpath and self.filename:
filepath = os.path.join( self.folderpath, self.filename )
BibleOrgSysGlobals.backupAnyExistingFile( filepath, numBackups=4 )
allText = self.getEntireText() # from the displayed edit window
vPrint( 'Quiet', debuggingThisModule, f"Writing {len(allText):,} characters to {filepath}" )
with open( filepath, mode='wt', encoding='utf-8' ) as theFile:
theFile.write( allText )
self.rememberFileTimeAndSize()
# self.textBox.edit_modified( tk.FALSE ) # clear Tkinter modified flag
#self.bookTextModified = False
self.refreshTitle()
else: self.doSaveAs()
# end of TSVEditWindowAddon.doSave
def doAutosave( self ):
"""
Called on a timer to save a copy of the file in a separate location
if it's been modified.
Also saves a daily copy of the file into a sub-folder.
Schedules another call.
Doesn't use a hidden folder for the autosave files so the user can find them:
If a save has been done, an AutoSave folder is created in the save folder,
if not, the AutoSave folder is created in the home folder.
(Yes, this can result in old AutoSave files in the home folder.)
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.doAutosave()" )
if self.modified():
partialAutosaveFolderpath = self.folderpath if self.folderpath else BiblelatorGlobals.theApp.homeFolderpath
# NOTE: Don't use a hidden folder coz user might not be able to find it
autosaveFolderpath = os.path.join( partialAutosaveFolderpath, 'AutoSave/' ) \
if APP_NAME in partialAutosaveFolderpath \
else os.path.join( partialAutosaveFolderpath, DATA_SUBFOLDER_NAME, 'AutoSave/' )
if not os.path.exists( autosaveFolderpath ): os.makedirs( autosaveFolderpath )
lastDayFolderpath = os.path.join( autosaveFolderpath, 'LastDay/' )
if not os.path.exists( lastDayFolderpath ): os.mkdir( lastDayFolderpath )
autosaveFilename = self.filename if self.filename else 'Autosave.txt'
#dPrint( 'Quiet', debuggingThisModule, 'autosaveFolderpath', repr(autosaveFolderpath), 'autosaveFilename', repr(autosaveFilename) )
autosaveFilepath = os.path.join( autosaveFolderpath, autosaveFilename )
lastDayFilepath = os.path.join( lastDayFolderpath, autosaveFilename )
# Check if we need a daily save
if os.path.isfile( autosaveFilepath ) \
and ( not os.path.isfile( lastDayFilepath ) \
or datetime.fromtimestamp( os.stat( lastDayFilepath ).st_mtime ).date() != datetime.today().date() ):
#or not self.filepath \
vPrint( 'Quiet', debuggingThisModule, "doAutosave: saving daily file", lastDayFilepath )
shutil.copyfile( autosaveFilepath, lastDayFilepath ) # We save a copy of the PREVIOUS autosaved file
# Now save this updated file
allText = self.getEntireText() # from the displayed edit window and/or elsewhere
with open( autosaveFilepath, mode='wt', encoding='utf-8' ) as theFile:
theFile.write( allText )
self.after( self.autosaveTime, self.doAutosave )
else:
self.autosaveScheduled = False # Will be set again by refreshTitle
# end of TSVEditWindowAddon.doAutosave
def doViewSettings( self ):
"""
Open a pop-up text window with the current settings displayed.
"""
if BibleOrgSysGlobals.debugFlag:
vPrint( 'Quiet', debuggingThisModule, "doViewSettings()" )
BiblelatorGlobals.theApp.setDebugText( "doViewSettings…" )
tEW = TSVEditWindow( BiblelatorGlobals.theApp )
#if windowGeometry: tEW.geometry( windowGeometry )
if not tEW.setFilepath( self.settings.settingsFilepath ) \
or not tEW.loadText():
tEW.doClose()
showError( self, APP_NAME, _("Sorry, unable to open settings file") )
if BibleOrgSysGlobals.debugFlag: BiblelatorGlobals.theApp.setDebugText( "Failed doViewSettings" )
else:
BiblelatorGlobals.theApp.childWindows.append( tEW )
if BibleOrgSysGlobals.debugFlag: BiblelatorGlobals.theApp.setDebugText( "Finished doViewSettings" )
BiblelatorGlobals.theApp.setReadyStatus()
# end of TSVEditWindowAddon.doViewSettings
def doViewLog( self ):
"""
Open a pop-up text window with the current log displayed.
"""
fnPrint( debuggingThisModule, "doViewLog()" )
if debuggingThisModule: BiblelatorGlobals.theApp.setDebugText( "doViewLog…" )
filename = PROGRAM_NAME.replace('/','-').replace(':','_').replace('\\','_') + '_log.txt'
tEW = TSVEditWindow( BiblelatorGlobals.theApp )
#if windowGeometry: tEW.geometry( windowGeometry )
if not tEW.setPathAndFile( BiblelatorGlobals.theApp.loggingFolderpath, filename ) \
or not tEW.loadText():
tEW.doClose()
showError( self, APP_NAME, _("Sorry, unable to open log file") )
if BibleOrgSysGlobals.debugFlag: BiblelatorGlobals.theApp.setDebugText( "Failed doViewLog" )
else:
BiblelatorGlobals.theApp.childWindows.append( tEW )
#if BibleOrgSysGlobals.debugFlag: self.setDebugText( "Finished doViewLog" ) # Don't do this -- adds to the log immediately
BiblelatorGlobals.theApp.setReadyStatus()
# end of TSVEditWindowAddon.doViewLog
def doHelp( self, event=None ):
"""
Display a help box.
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.doHelp( {} )".format( event ) )
from Biblelator.Dialogs.Help import HelpBox
helpInfo = programNameVersion
helpInfo += '\n' + _("Help for {}").format( self.windowType )
helpInfo += '\n ' + _("Keyboard shortcuts:")
for name,shortcut in self.myKeyboardBindingsList:
helpInfo += "\n {}\t{}".format( name, shortcut )
hb = HelpBox( self, self.genericWindowType, helpInfo )
return tkBREAK # so we don't do the main window help also
# end of TSVEditWindowAddon.doHelp
def doAbout( self, event=None ):
"""
Display an about box.
"""
fnPrint( debuggingThisModule, "TSVEditWindowAddon.doAbout( {} )".format( event ) )
from Biblelator.Dialogs.About import AboutBox
aboutInfo = programNameVersion
aboutInfo += "\nInformation about {}".format( self.windowType )
ab = AboutBox( self, self.genericWindowType, aboutInfo )
return tkBREAK # so we don't do the main window about also
# end of TSVEditWindowAddon.doAbout
def doClose( self, event=None ):
"""
Called if the window is about to be destroyed.
Determines if we want/need to save any changes.
"""
fnPrint( debuggingThisModule, f"TSVEditWindowAddon.doClose( {event} )" )
if self.modified():
saveWork = False
if self.saveChangesAutomatically and self.folderpath and self.filename:
#self.doSave( 'Auto from win close' )
#self.doClose()
saveWork = True
else:
#if self.folderpath and self.filename:
#self.doSave()
#self.doClose()
#else: # we need to ask where to save it
place = ' in {}'.format( self.filename) if self.folderpath and self.filename else ''
ocd = OkCancelDialog( self, _('Do you want to save your work{}?').format( place ), title=_('Save work?') )
#dPrint( 'Quiet', debuggingThisModule, "ocdResult", repr(ocd.result) )
if ocd.result == True: # Yes was chosen
saveWork = True
else:
# place = 'to {}'.format( self.filename) if self.folderpath and self.filename else ''
ynd = YesNoDialog( self, _('Are you sure you want to lose your changes?'), title=_('Lose changes?') )
#dPrint( 'Quiet', debuggingThisModule, "yndResult", repr(ynd.result) )
if ynd.result == True: # Yes was chosen
self.textBox.edit_modified( tk.FALSE ) # clear Tkinter modified flag
self.bookTextModified = False
#else: saveWork = True
if saveWork:
self.doSave()
if self.folderpath and self.filename: # assume we saved it
ChildWindow.doClose( self )
return
if 1 or not self.modified():
#dPrint( 'Quiet', debuggingThisModule, "HEREEEEEEEEE" )
ChildWindow.doClose( self )
# end of TSVEditWindowAddon.doClose
# end of TSVEditWindowAddon class
class TSVEditWindow( TSVEditWindowAddon, ChildWindow ):
"""
"""
def __init__( self, parentWindow, folderpath:str ):
"""
"""
if 1 or BibleOrgSysGlobals.debugFlag and debuggingThisModule:
vPrint( 'Quiet', debuggingThisModule, f"TSVEditWindow.__init__( pW={parentWindow}, fp={folderpath} )…" )
self.folderpath = folderpath
BiblelatorGlobals.theApp.logUsage( PROGRAM_NAME, debuggingThisModule, f"TSVEditWindow __init__ {folderpath}" )
# NOTE: Bible is included in the names so we get BCV alerts
windowType, genericWindowType = 'TSVBibleEditWindow', 'TSVBibleEditor'
self.moduleID = 'TSV'
self.BCVUpdateType = DEFAULT # For BCV updates
self._groupCode = 'A' # Fixed
self.BBB = 'UNK' # Unknown book
self.currentVerseKey = SimpleVerseKey( self.BBB,'1','1' )
ChildWindow.__init__( self, parentWindow, genericWindowType )
# BibleWindowAddon.__init__( self, genericWindowType )
TSVEditWindowAddon.__init__( self, windowType, folderpath )
self.updateShownBCV( BiblelatorGlobals.theApp.getVerseKey( self._groupCode),
originator='TSVEditWindow.__init__')
#self.filepath = os.path.join( folderpath, filename ) if folderpath and filename else None
#self.moduleID = None
##self.windowType = 'TSVBibleEditWindow'
#self.protocol( 'WM_DELETE_WINDOW', self.doClose ) # Catch when window is closed
#self.loading = True
#self.onTextNoChangeID = None
#self.editStatus = 'Editable'
## Make our own custom textBox which allows a callback function
## Delete these lines and the callback line if you don't need either autocorrect or autocomplete
#self.textBox.destroy() # from the ChildWindow default
#self.myKeyboardBindingsList = []
#if BibleOrgSysGlobals.debugFlag: self.myKeyboardShortcutsList = []
#self.customFont = tk.font.Font( family="sans-serif", size=12 )
#self.customFontBold = tk.font.Font( family="sans-serif", size=12, weight='bold' )
#self.textBox = CustomText( self, yscrollcommand=self.vScrollbar.set, wrap='word', font=self.customFont )
#self.defaultBackgroundColour = 'gold2'
#self.textBox.configure( background=self.defaultBackgroundColour )
#self.textBox.configure( selectbackground='blue' )
#self.textBox.configure( highlightbackground='orange' )
#self.textBox.configure( inactiveselectbackground='green' )
#self.textBox.configure( wrap='word', undo=True, autoseparators=True )
#self.textBox.pack( side=tk.TOP, fill=tk.BOTH, expand=tk.YES )
#self.vScrollbar.configure( command=self.textBox.yview ) # link the scrollbar to the text box
#self.textBox.setTextChangeCallback( self.onTextChange )
#self.createEditorKeyboardBindings()
self.createMenuBar()
#self.createContextMenu() # Enable right-click menu
#self.lastFiletime = self.lastFilesize = None
#self.clearText()
#self.markMultipleSpacesFlag = True
#self.markTrailingSpacesFlag = True
#self.autocorrectEntries = []
## Temporarily include some default autocorrect values
#setDefaultAutocorrectEntries( self )
##setAutocorrectEntries( self, ourAutocorrectEntries )
#self.autocompleteBox, self.autocompleteWords, self.existingAutocompleteWordText = None, {}, ''
#self.autocompleteWordChars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-_'
## Note: I guess we could have used non-word chars instead (to stop the backwards word search)
#self.autocompleteMinLength = 3 # Show the normal window after this many characters have been typed
#self.autocompleteMaxLength = 15 # Remove window after this many characters have been typed
#self.autocompleteMode = None # None or Dictionary1 or Dictionary2 (or Bible or BibleBook)
#self.addAllNewWords = False
#self.invalidCombinations = [] # characters or character combinations that shouldn't occur
## Temporarily include some default invalid values
#self.invalidCombinations = [',,',' ,',] # characters or character combinations that shouldn't occur
#self.patternsToHighlight = []
## Temporarily include some default values -- simplistic demonstration examples
#self.patternsToHighlight.append( (False,'import','red',{'background':'red'}) )
#self.patternsToHighlight.append( (False,'self','green',{'foreground':'green'}) )
#self.patternsToHighlight.append( (True,'\\d','blue',{'foreground':'blue'}) )
#self.patternsToHighlight.append( (True,'#.*?\\n','grey',{'foreground':'grey'}) )
#boldDict = {'font':self.customFontBold } #, 'background':'green'}
#for pythonKeyword in ( 'from','import', 'class','def', 'if','and','or','else','elif',
#'for','while', 'return', 'try','except','finally', 'assert', ):
#self.patternsToHighlight.append( (True,'\\y'+pythonKeyword+'\\y','bold',boldDict) )
#self.saveChangesAutomatically = False # different from AutoSave (which is in different files)
#self.autosaveTime = 2*60*1000 # msecs (zero is no autosaves)
#self.autosaveScheduled = False
#self.after( CHECK_DISK_CHANGES_TIME, self.checkForDiskChanges )
##self.after( REFRESH_TITLE_TIME, self.refreshTitle )
#self.loading = self.hadTextWarning = False
##self.lastTextChangeTime = time()
vPrint( 'Never', debuggingThisModule, "TSVEditWindow.__init__ finished." )
# end of TSVEditWindow.__init__
# end of TSVEditWindow class
def briefDemo() -> None:
"""
Brief demo to check that the class is working.
"""
BibleOrgSysGlobals.introduceProgram( __name__, programNameVersion, LAST_MODIFIED_DATE )
vPrint( 'Quiet', debuggingThisModule, "Running demo…" )
tkRootWindow = tk.Tk()
tkRootWindow.title( programNameVersion )
tkRootWindow.textBox = tk.Text( tkRootWindow )
tEW = TSVEditWindow( tkRootWindow, os.getcwd() ) # folderpath is a required argument; the current directory is only a placeholder here
# Program a shutdown
tkRootWindow.after( 2_000, tkRootWindow.destroy ) # Destroy the widget after 2 seconds
# Start the program running
tkRootWindow.mainloop()
# end of TSVEditWindow.briefDemo
def fullDemo() -> None:
"""
Full demo to check that the class is working.
"""
BibleOrgSysGlobals.introduceProgram( __name__, programNameVersion, LAST_MODIFIED_DATE )
vPrint( 'Quiet', debuggingThisModule, "Running demo…" )
tkRootWindow = tk.Tk()
tkRootWindow.title( programNameVersion )
tkRootWindow.textBox = tk.Text( tkRootWindow )
tEW = TSVEditWindow( tkRootWindow, os.getcwd() ) # folderpath is a required argument; the current directory is only a placeholder here
# Program a shutdown
tkRootWindow.after( 30_000, tkRootWindow.destroy ) # Destroy the widget after 30 seconds
# Start the program running
tkRootWindow.mainloop()
# end of TSVEditWindow.fullDemo
if __name__ == '__main__':
from multiprocessing import freeze_support
freeze_support() # Multiprocessing support for frozen Windows executables
# Configure basic set-up
parser = BibleOrgSysGlobals.setup( SHORT_PROGRAM_NAME, PROGRAM_VERSION, LAST_MODIFIED_DATE )
BibleOrgSysGlobals.addStandardOptionsAndProcess( parser )
fullDemo()
BibleOrgSysGlobals.closedown( PROGRAM_NAME, PROGRAM_VERSION )
# end of TSVEditWindow.py
|
gpl-3.0
|
nicholedwight/nichole-theme
|
node_modules/grunt-docker/node_modules/docker/node_modules/pygmentize-bundled/vendor/pygments/build-3.3/pygments/lexers/parsers.py
|
363
|
25835
|
# -*- coding: utf-8 -*-
"""
pygments.lexers.parsers
~~~~~~~~~~~~~~~~~~~~~~~
Lexers for parser generators.
:copyright: Copyright 2006-2013 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
from pygments.lexer import RegexLexer, DelegatingLexer, \
include, bygroups, using
from pygments.token import Punctuation, Other, Text, Comment, Operator, \
Keyword, Name, String, Number, Whitespace
from pygments.lexers.compiled import JavaLexer, CLexer, CppLexer, \
ObjectiveCLexer, DLexer
from pygments.lexers.dotnet import CSharpLexer
from pygments.lexers.agile import RubyLexer, PythonLexer, PerlLexer
from pygments.lexers.web import ActionScriptLexer
__all__ = ['RagelLexer', 'RagelEmbeddedLexer', 'RagelCLexer', 'RagelDLexer',
'RagelCppLexer', 'RagelObjectiveCLexer', 'RagelRubyLexer',
'RagelJavaLexer', 'AntlrLexer', 'AntlrPythonLexer',
'AntlrPerlLexer', 'AntlrRubyLexer', 'AntlrCppLexer',
#'AntlrCLexer',
'AntlrCSharpLexer', 'AntlrObjectiveCLexer',
'AntlrJavaLexer', "AntlrActionScriptLexer",
'TreetopLexer']
class RagelLexer(RegexLexer):
"""
A pure `Ragel <http://www.complang.org/ragel/>`_ lexer. Use this for
fragments of Ragel. For ``.rl`` files, use RagelEmbeddedLexer instead
(or one of the language-specific subclasses).
*New in Pygments 1.1.*
"""
name = 'Ragel'
aliases = ['ragel']
filenames = []
tokens = {
'whitespace': [
(r'\s+', Whitespace)
],
'comments': [
(r'\#.*$', Comment),
],
'keywords': [
(r'(access|action|alphtype)\b', Keyword),
(r'(getkey|write|machine|include)\b', Keyword),
(r'(any|ascii|extend|alpha|digit|alnum|lower|upper)\b', Keyword),
(r'(xdigit|cntrl|graph|print|punct|space|zlen|empty)\b', Keyword)
],
'numbers': [
(r'0x[0-9A-Fa-f]+', Number.Hex),
(r'[+-]?[0-9]+', Number.Integer),
],
'literals': [
(r'"(\\\\|\\"|[^"])*"', String), # double quote string
(r"'(\\\\|\\'|[^'])*'", String), # single quote string
(r'\[(\\\\|\\\]|[^\]])*\]', String), # square bracket literals
(r'/(?!\*)(\\\\|\\/|[^/])*/', String.Regex), # regular expressions
],
'identifiers': [
(r'[a-zA-Z_][a-zA-Z_0-9]*', Name.Variable),
],
'operators': [
(r',', Operator), # Join
(r'\||&|--?', Operator), # Union, Intersection and Subtraction
(r'\.|<:|:>>?', Operator), # Concatenation
(r':', Operator), # Label
(r'->', Operator), # Epsilon Transition
(r'(>|\$|%|<|@|<>)(/|eof\b)', Operator), # EOF Actions
(r'(>|\$|%|<|@|<>)(!|err\b)', Operator), # Global Error Actions
(r'(>|\$|%|<|@|<>)(\^|lerr\b)', Operator), # Local Error Actions
(r'(>|\$|%|<|@|<>)(~|to\b)', Operator), # To-State Actions
(r'(>|\$|%|<|@|<>)(\*|from\b)', Operator), # From-State Actions
(r'>|@|\$|%', Operator), # Transition Actions and Priorities
(r'\*|\?|\+|{[0-9]*,[0-9]*}', Operator), # Repetition
(r'!|\^', Operator), # Negation
(r'\(|\)', Operator), # Grouping
],
'root': [
include('literals'),
include('whitespace'),
include('comments'),
include('keywords'),
include('numbers'),
include('identifiers'),
include('operators'),
(r'{', Punctuation, 'host'),
(r'=', Operator),
(r';', Punctuation),
],
'host': [
(r'(' + r'|'.join(( # keep host code in largest possible chunks
r'[^{}\'"/#]+', # exclude unsafe characters
r'[^\\][\\][{}]', # allow escaped { or }
# strings and comments may safely contain unsafe characters
r'"(\\\\|\\"|[^"])*"', # double quote string
r"'(\\\\|\\'|[^'])*'", # single quote string
r'//.*$\n?', # single line comment
r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment
r'\#.*$\n?', # ruby comment
# regular expression: There's no reason for it to start
# with a * and this stops confusion with comments.
r'/(?!\*)(\\\\|\\/|[^/])*/',
# / is safe now that we've handled regex and javadoc comments
r'/',
)) + r')+', Other),
(r'{', Punctuation, '#push'),
(r'}', Punctuation, '#pop'),
],
}
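# Illustrative sketch (not part of the original module): how a small Ragel
# fragment might be highlighted with the pure RagelLexer defined above.
# The fragment text is a made-up example.
def _demo_ragel_fragment_highlight():
    from pygments import highlight
    from pygments.formatters import TerminalFormatter
    fragment = 'action say_hello { puts("hello"); }\nmain := "go" @say_hello;'
    return highlight(fragment, RagelLexer(), TerminalFormatter())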
class RagelEmbeddedLexer(RegexLexer):
"""
A lexer for `Ragel`_ embedded in a host language file.
This will only highlight Ragel statements. If you want host language
highlighting then call the language-specific Ragel lexer.
*New in Pygments 1.1.*
"""
name = 'Embedded Ragel'
aliases = ['ragel-em']
filenames = ['*.rl']
tokens = {
'root': [
(r'(' + r'|'.join(( # keep host code in largest possible chunks
r'[^%\'"/#]+', # exclude unsafe characters
r'%(?=[^%]|$)', # a single % sign is okay, just not 2 of them
# strings and comments may safely contain unsafe characters
r'"(\\\\|\\"|[^"])*"', # double quote string
r"'(\\\\|\\'|[^'])*'", # single quote string
r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment
r'//.*$\n?', # single line comment
r'\#.*$\n?', # ruby/ragel comment
r'/(?!\*)(\\\\|\\/|[^/])*/', # regular expression
# / is safe now that we've handled regex and javadoc comments
r'/',
)) + r')+', Other),
# Single Line FSM.
# Please don't put a quoted newline in a single line FSM.
# That's just mean. It will break this.
(r'(%%)(?![{%])(.*)($|;)(\n?)', bygroups(Punctuation,
using(RagelLexer),
Punctuation, Text)),
# Multi Line FSM.
(r'(%%%%|%%){', Punctuation, 'multi-line-fsm'),
],
'multi-line-fsm': [
(r'(' + r'|'.join(( # keep ragel code in largest possible chunks.
r'(' + r'|'.join((
r'[^}\'"\[/#]', # exclude unsafe characters
r'}(?=[^%]|$)', # } is okay as long as it's not followed by %
r'}%(?=[^%]|$)', # ...well, one %'s okay, just not two...
r'[^\\][\\][{}]', # ...and } is okay if it's escaped
# allow / if it's preceded with one of these symbols
# (ragel EOF actions)
r'(>|\$|%|<|@|<>)/',
# specifically allow regex followed immediately by *
# so it doesn't get mistaken for a comment
r'/(?!\*)(\\\\|\\/|[^/])*/\*',
# allow / as long as it's not followed by another / or by a *
r'/(?=[^/\*]|$)',
# We want to match as many of these as we can in one block.
# Not sure if we need the + sign here,
# does it help performance?
)) + r')+',
# strings and comments may safely contain unsafe characters
r'"(\\\\|\\"|[^"])*"', # double quote string
r"'(\\\\|\\'|[^'])*'", # single quote string
r"\[(\\\\|\\\]|[^\]])*\]", # square bracket literal
r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment
r'//.*$\n?', # single line comment
r'\#.*$\n?', # ruby/ragel comment
)) + r')+', using(RagelLexer)),
(r'}%%', Punctuation, '#pop'),
]
}
def analyse_text(text):
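# Returns True (taken as full confidence) when the '@LANG: indep' marker is
# present, otherwise a low 0.1 score so that other lexers can still win.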
return '@LANG: indep' in text or 0.1
class RagelRubyLexer(DelegatingLexer):
"""
A lexer for `Ragel`_ in a Ruby host file.
*New in Pygments 1.1.*
"""
name = 'Ragel in Ruby Host'
aliases = ['ragel-ruby', 'ragel-rb']
filenames = ['*.rl']
def __init__(self, **options):
super(RagelRubyLexer, self).__init__(RubyLexer, RagelEmbeddedLexer,
**options)
def analyse_text(text):
return '@LANG: ruby' in text
class RagelCLexer(DelegatingLexer):
"""
A lexer for `Ragel`_ in a C host file.
*New in Pygments 1.1.*
"""
name = 'Ragel in C Host'
aliases = ['ragel-c']
filenames = ['*.rl']
def __init__(self, **options):
super(RagelCLexer, self).__init__(CLexer, RagelEmbeddedLexer,
**options)
def analyse_text(text):
return '@LANG: c' in text
class RagelDLexer(DelegatingLexer):
"""
A lexer for `Ragel`_ in a D host file.
*New in Pygments 1.1.*
"""
name = 'Ragel in D Host'
aliases = ['ragel-d']
filenames = ['*.rl']
def __init__(self, **options):
super(RagelDLexer, self).__init__(DLexer, RagelEmbeddedLexer, **options)
def analyse_text(text):
return '@LANG: d' in text
class RagelCppLexer(DelegatingLexer):
"""
A lexer for `Ragel`_ in a CPP host file.
*New in Pygments 1.1.*
"""
name = 'Ragel in CPP Host'
aliases = ['ragel-cpp']
filenames = ['*.rl']
def __init__(self, **options):
super(RagelCppLexer, self).__init__(CppLexer, RagelEmbeddedLexer, **options)
def analyse_text(text):
return '@LANG: c++' in text
class RagelObjectiveCLexer(DelegatingLexer):
"""
A lexer for `Ragel`_ in an Objective C host file.
*New in Pygments 1.1.*
"""
name = 'Ragel in Objective C Host'
aliases = ['ragel-objc']
filenames = ['*.rl']
def __init__(self, **options):
super(RagelObjectiveCLexer, self).__init__(ObjectiveCLexer,
RagelEmbeddedLexer,
**options)
def analyse_text(text):
return '@LANG: objc' in text
class RagelJavaLexer(DelegatingLexer):
"""
A lexer for `Ragel`_ in a Java host file.
*New in Pygments 1.1.*
"""
name = 'Ragel in Java Host'
aliases = ['ragel-java']
filenames = ['*.rl']
def __init__(self, **options):
super(RagelJavaLexer, self).__init__(JavaLexer, RagelEmbeddedLexer,
**options)
def analyse_text(text):
return '@LANG: java' in text
class AntlrLexer(RegexLexer):
"""
Generic `ANTLR`_ Lexer.
Should not be called directly, instead
use DelegatingLexer for your target language.
*New in Pygments 1.1.*
.. _ANTLR: http://www.antlr.org/
"""
name = 'ANTLR'
aliases = ['antlr']
filenames = []
_id = r'[A-Za-z][A-Za-z_0-9]*'
_TOKEN_REF = r'[A-Z][A-Za-z_0-9]*'
_RULE_REF = r'[a-z][A-Za-z_0-9]*'
_STRING_LITERAL = r'\'(?:\\\\|\\\'|[^\']*)\''
_INT = r'[0-9]+'
tokens = {
'whitespace': [
(r'\s+', Whitespace),
],
'comments': [
(r'//.*$', Comment),
(r'/\*(.|\n)*?\*/', Comment),
],
'root': [
include('whitespace'),
include('comments'),
(r'(lexer|parser|tree)?(\s*)(grammar\b)(\s*)(' + _id + ')(;)',
bygroups(Keyword, Whitespace, Keyword, Whitespace, Name.Class,
Punctuation)),
# optionsSpec
(r'options\b', Keyword, 'options'),
# tokensSpec
(r'tokens\b', Keyword, 'tokens'),
# attrScope
(r'(scope)(\s*)(' + _id + ')(\s*)({)',
bygroups(Keyword, Whitespace, Name.Variable, Whitespace,
Punctuation), 'action'),
# exception
(r'(catch|finally)\b', Keyword, 'exception'),
# action
(r'(@' + _id + ')(\s*)(::)?(\s*)(' + _id + ')(\s*)({)',
bygroups(Name.Label, Whitespace, Punctuation, Whitespace,
Name.Label, Whitespace, Punctuation), 'action'),
# rule
(r'((?:protected|private|public|fragment)\b)?(\s*)(' + _id + ')(!)?', \
bygroups(Keyword, Whitespace, Name.Label, Punctuation),
('rule-alts', 'rule-prelims')),
],
'exception': [
(r'\n', Whitespace, '#pop'),
(r'\s', Whitespace),
include('comments'),
(r'\[', Punctuation, 'nested-arg-action'),
(r'\{', Punctuation, 'action'),
],
'rule-prelims': [
include('whitespace'),
include('comments'),
(r'returns\b', Keyword),
(r'\[', Punctuation, 'nested-arg-action'),
(r'\{', Punctuation, 'action'),
# throwsSpec
(r'(throws)(\s+)(' + _id + ')',
bygroups(Keyword, Whitespace, Name.Label)),
(r'(,)(\s*)(' + _id + ')',
bygroups(Punctuation, Whitespace, Name.Label)), # Additional throws
# optionsSpec
(r'options\b', Keyword, 'options'),
# ruleScopeSpec - scope followed by target language code or name of action
# TODO finish implementing other possibilities for scope
# L173 ANTLRv3.g from ANTLR book
(r'(scope)(\s+)({)', bygroups(Keyword, Whitespace, Punctuation),
'action'),
(r'(scope)(\s+)(' + _id + ')(\s*)(;)',
bygroups(Keyword, Whitespace, Name.Label, Whitespace, Punctuation)),
# ruleAction
(r'(@' + _id + ')(\s*)({)',
bygroups(Name.Label, Whitespace, Punctuation), 'action'),
# finished prelims, go to rule alts!
(r':', Punctuation, '#pop')
],
'rule-alts': [
include('whitespace'),
include('comments'),
# These might need to go in a separate 'block' state triggered by (
(r'options\b', Keyword, 'options'),
(r':', Punctuation),
# literals
(r"'(\\\\|\\'|[^'])*'", String),
(r'"(\\\\|\\"|[^"])*"', String),
(r'<<([^>]|>[^>])>>', String),
# identifiers
# Tokens start with capital letter.
(r'\$?[A-Z_][A-Za-z_0-9]*', Name.Constant),
# Rules start with small letter.
(r'\$?[a-z_][A-Za-z_0-9]*', Name.Variable),
# operators
(r'(\+|\||->|=>|=|\(|\)|\.\.|\.|\?|\*|\^|!|\#|~)', Operator),
(r',', Punctuation),
(r'\[', Punctuation, 'nested-arg-action'),
(r'\{', Punctuation, 'action'),
(r';', Punctuation, '#pop')
],
'tokens': [
include('whitespace'),
include('comments'),
(r'{', Punctuation),
(r'(' + _TOKEN_REF + r')(\s*)(=)?(\s*)(' + _STRING_LITERAL
+ ')?(\s*)(;)',
bygroups(Name.Label, Whitespace, Punctuation, Whitespace,
String, Whitespace, Punctuation)),
(r'}', Punctuation, '#pop'),
],
'options': [
include('whitespace'),
include('comments'),
(r'{', Punctuation),
(r'(' + _id + r')(\s*)(=)(\s*)(' +
'|'.join((_id, _STRING_LITERAL, _INT, '\*'))+ ')(\s*)(;)',
bygroups(Name.Variable, Whitespace, Punctuation, Whitespace,
Text, Whitespace, Punctuation)),
(r'}', Punctuation, '#pop'),
],
'action': [
(r'(' + r'|'.join(( # keep host code in largest possible chunks
r'[^\${}\'"/\\]+', # exclude unsafe characters
# strings and comments may safely contain unsafe characters
r'"(\\\\|\\"|[^"])*"', # double quote string
r"'(\\\\|\\'|[^'])*'", # single quote string
r'//.*$\n?', # single line comment
r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment
# regular expression: There's no reason for it to start
# with a * and this stops confusion with comments.
r'/(?!\*)(\\\\|\\/|[^/])*/',
# backslashes are okay, as long as we are not backslashing a %
r'\\(?!%)',
# Now that we've handled regex and javadoc comments
# it's safe to let / through.
r'/',
)) + r')+', Other),
(r'(\\)(%)', bygroups(Punctuation, Other)),
(r'(\$[a-zA-Z]+)(\.?)(text|value)?',
bygroups(Name.Variable, Punctuation, Name.Property)),
(r'{', Punctuation, '#push'),
(r'}', Punctuation, '#pop'),
],
'nested-arg-action': [
(r'(' + r'|'.join(( # keep host code in largest possible chunks.
r'[^\$\[\]\'"/]+', # exclude unsafe characters
# strings and comments may safely contain unsafe characters
r'"(\\\\|\\"|[^"])*"', # double quote string
r"'(\\\\|\\'|[^'])*'", # single quote string
r'//.*$\n?', # single line comment
r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment
# regular expression: There's no reason for it to start
# with a * and this stops confusion with comments.
r'/(?!\*)(\\\\|\\/|[^/])*/',
# Now that we've handled regex and javadoc comments
# it's safe to let / through.
r'/',
)) + r')+', Other),
(r'\[', Punctuation, '#push'),
(r'\]', Punctuation, '#pop'),
(r'(\$[a-zA-Z]+)(\.?)(text|value)?',
bygroups(Name.Variable, Punctuation, Name.Property)),
(r'(\\\\|\\\]|\\\[|[^\[\]])+', Other),
]
}
def analyse_text(text):
return re.search(r'^\s*grammar\s+[a-zA-Z0-9]+\s*;', text, re.M)
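# Illustrative sketch (not part of the original module): as the class docstring
# above says, AntlrLexer should not be used directly -- pick the delegating lexer
# for the grammar's target language instead, e.g. by its registered alias.
def _demo_pick_antlr_lexer():
    from pygments.lexers import get_lexer_by_name
    # 'antlr-java' resolves to AntlrJavaLexer below, which wraps JavaLexer around AntlrLexer.
    return get_lexer_by_name('antlr-java')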
# http://www.antlr.org/wiki/display/ANTLR3/Code+Generation+Targets
# TH: I'm not aware of any language features of C++ that will cause
# incorrect lexing of C files. Antlr doesn't appear to make a distinction,
# so just assume they're C++. No idea how to make Objective C work in the
# future.
#class AntlrCLexer(DelegatingLexer):
# """
# ANTLR with C Target
#
# *New in Pygments 1.1*
# """
#
# name = 'ANTLR With C Target'
# aliases = ['antlr-c']
# filenames = ['*.G', '*.g']
#
# def __init__(self, **options):
# super(AntlrCLexer, self).__init__(CLexer, AntlrLexer, **options)
#
# def analyse_text(text):
# return re.match(r'^\s*language\s*=\s*C\s*;', text)
class AntlrCppLexer(DelegatingLexer):
"""
`ANTLR`_ with CPP Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With CPP Target'
aliases = ['antlr-cpp']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrCppLexer, self).__init__(CppLexer, AntlrLexer, **options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*C\s*;', text, re.M)
class AntlrObjectiveCLexer(DelegatingLexer):
"""
`ANTLR`_ with Objective-C Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With ObjectiveC Target'
aliases = ['antlr-objc']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrObjectiveCLexer, self).__init__(ObjectiveCLexer,
AntlrLexer, **options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*ObjC\s*;', text)
class AntlrCSharpLexer(DelegatingLexer):
"""
`ANTLR`_ with C# Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With C# Target'
aliases = ['antlr-csharp', 'antlr-c#']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrCSharpLexer, self).__init__(CSharpLexer, AntlrLexer,
**options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*CSharp2\s*;', text, re.M)
class AntlrPythonLexer(DelegatingLexer):
"""
`ANTLR`_ with Python Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With Python Target'
aliases = ['antlr-python']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrPythonLexer, self).__init__(PythonLexer, AntlrLexer,
**options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*Python\s*;', text, re.M)
class AntlrJavaLexer(DelegatingLexer):
"""
`ANTLR`_ with Java Target
*New in Pygments 1.1*
"""
name = 'ANTLR With Java Target'
aliases = ['antlr-java']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrJavaLexer, self).__init__(JavaLexer, AntlrLexer,
**options)
def analyse_text(text):
# Antlr language is Java by default
return AntlrLexer.analyse_text(text) and 0.9
class AntlrRubyLexer(DelegatingLexer):
"""
`ANTLR`_ with Ruby Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With Ruby Target'
aliases = ['antlr-ruby', 'antlr-rb']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrRubyLexer, self).__init__(RubyLexer, AntlrLexer,
**options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*Ruby\s*;', text, re.M)
class AntlrPerlLexer(DelegatingLexer):
"""
`ANTLR`_ with Perl Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With Perl Target'
aliases = ['antlr-perl']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrPerlLexer, self).__init__(PerlLexer, AntlrLexer,
**options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*Perl5\s*;', text, re.M)
class AntlrActionScriptLexer(DelegatingLexer):
"""
`ANTLR`_ with ActionScript Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With ActionScript Target'
aliases = ['antlr-as', 'antlr-actionscript']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrActionScriptLexer, self).__init__(ActionScriptLexer,
AntlrLexer, **options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*ActionScript\s*;', text, re.M)
class TreetopBaseLexer(RegexLexer):
"""
A base lexer for `Treetop <http://treetop.rubyforge.org/>`_ grammars.
Not for direct use; use TreetopLexer instead.
*New in Pygments 1.6.*
"""
tokens = {
'root': [
include('space'),
(r'require[ \t]+[^\n\r]+[\n\r]', Other),
(r'module\b', Keyword.Namespace, 'module'),
(r'grammar\b', Keyword, 'grammar'),
],
'module': [
include('space'),
include('end'),
(r'module\b', Keyword, '#push'),
(r'grammar\b', Keyword, 'grammar'),
(r'[A-Z][A-Za-z_0-9]*(?:::[A-Z][A-Za-z_0-9]*)*', Name.Namespace),
],
'grammar': [
include('space'),
include('end'),
(r'rule\b', Keyword, 'rule'),
(r'include\b', Keyword, 'include'),
(r'[A-Z][A-Za-z_0-9]*', Name),
],
'include': [
include('space'),
(r'[A-Z][A-Za-z_0-9]*(?:::[A-Z][A-Za-z_0-9]*)*', Name.Class, '#pop'),
],
'rule': [
include('space'),
include('end'),
(r'"(\\\\|\\"|[^"])*"', String.Double),
(r"'(\\\\|\\'|[^'])*'", String.Single),
(r'([A-Za-z_][A-Za-z_0-9]*)(:)', bygroups(Name.Label, Punctuation)),
(r'[A-Za-z_][A-Za-z_0-9]*', Name),
(r'[()]', Punctuation),
(r'[?+*/&!~]', Operator),
(r'\[(?:\\.|\[:\^?[a-z]+:\]|[^\\\]])+\]', String.Regex),
(r'([0-9]*)(\.\.)([0-9]*)',
bygroups(Number.Integer, Operator, Number.Integer)),
(r'(<)([^>]+)(>)', bygroups(Punctuation, Name.Class, Punctuation)),
(r'{', Punctuation, 'inline_module'),
(r'\.', String.Regex),
],
'inline_module': [
(r'{', Other, 'ruby'),
(r'}', Punctuation, '#pop'),
(r'[^{}]+', Other),
],
'ruby': [
(r'{', Other, '#push'),
(r'}', Other, '#pop'),
(r'[^{}]+', Other),
],
'space': [
(r'[ \t\n\r]+', Whitespace),
(r'#[^\n]*', Comment.Single),
],
'end': [
(r'end\b', Keyword, '#pop'),
],
}
class TreetopLexer(DelegatingLexer):
"""
A lexer for `Treetop <http://treetop.rubyforge.org/>`_ grammars.
*New in Pygments 1.6.*
"""
name = 'Treetop'
aliases = ['treetop']
filenames = ['*.treetop', '*.tt']
def __init__(self, **options):
super(TreetopLexer, self).__init__(RubyLexer, TreetopBaseLexer, **options)
|
mit
|
Varun-Teja/Projects
|
Encryption/Security.py
|
1
|
8006
|
'''Copyright (c) 2015 HG,DL,UTA
This program runs on the local host and uploads, downloads, and encrypts local files to Google Cloud Storage.
Please use Python 2.7.x, pycrypto 2.6.1 and the Google Cloud Python module.'''
#import statements.
import argparse
import httplib2
import os
import sys
import json
import time
import datetime
import io
import hashlib
#Google apiclient (Google API client) libraries.
from apiclient import discovery
from oauth2client import file
from oauth2client import client
from oauth2client import tools
from apiclient.http import MediaIoBaseDownload
#pycrypto libraries.
from Crypto import Random
from Crypto.Cipher import AES
# Encryption using AES
#http://stackoverflow.com/questions/20852664/
#You can read more about this in the following link
#http://eli.thegreenplace.net/2010/06/25/aes-encryption-of-files-in-python-with-pycrypto
#this implementation of AES works on blocks of "text", put "0"s at the end if too small.
def pad(s):
return s + b"\0" * (AES.block_size - len(s) % AES.block_size)
#Function to encrypt the message
def encrypt(message, key, key_size=256):
message = pad(message)
#iv is the initialization vector
iv = Random.new().read(AES.block_size)
#encrypt entire message
cipher = AES.new(key, AES.MODE_CBC, iv)
return iv + cipher.encrypt(message)
#Function to decrypt the message
def decrypt(ciphertext, key):
iv = ciphertext[:AES.block_size]
cipher = AES.new(key, AES.MODE_CBC, iv)
plaintext = cipher.decrypt(ciphertext[AES.block_size:])
return plaintext.rstrip(b"\0")
#Function to encrypt a given file
def encrypt_file(file_name, key):
#Open file to read content in the file, encrypt the file data and
#create a new file and then write the encrypted data to it
with open(file_name, 'rb') as f:
data = f.read()
e = encrypt(data, key)
with open("e" + file_name, 'wb') as f:
f.write(e)
return "e" + file_name
#Function to decrypt a given file.
def decrypt_file(file_name, key):
#open file read the data of the file, decrypt the file data and
#create a new file and then write the decrypted data to the file.
with open(file_name, 'rb') as f:
str1 = f.read()
d = decrypt(str1, key)
with open(file_name, 'wb') as f:
f.write(d) # the 'with' statement closes the file automatically
#End of encryption.
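#Illustrative round-trip sketch (not part of the original script): shows how the helpers
#above fit together. The key is derived from a passphrase with SHA-256, exactly as the
#get/put functions below do; the passphrase and message here are made-up examples.
def _demo_encrypt_roundtrip():
    key = hashlib.sha256(b"my secret passphrase").digest() # 32 bytes -> AES-256
    ciphertext = encrypt(b"attack at dawn", key) # random IV + CBC ciphertext of the zero-padded message
    assert decrypt(ciphertext, key) == b"attack at dawn"
    return ciphertext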
_BUCKET_NAME = 'xack' #name of your google bucket.
_API_VERSION = 'v1'
# Parser for command-line arguments.
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
parents=[tools.argparser])
# client_secret.json is the JSON file that contains the client ID and Secret.
#You can download the json file from your google cloud console.
CLIENT_SECRETS = os.path.join(os.path.dirname(__file__), 'client_secret.json')
# Set up a Flow object to be used for authentication.
# Add one or more of the following scopes.
# These scopes are used to restrict the user to only specified permissions (in this case only to devstorage)
FLOW = client.flow_from_clientsecrets(CLIENT_SECRETS,
scope=[
'https://www.googleapis.com/auth/devstorage.full_control',
'https://www.googleapis.com/auth/devstorage.read_only',
'https://www.googleapis.com/auth/devstorage.read_write',
],
message=tools.message_if_missing(CLIENT_SECRETS))
#Downloads the specified object from the given bucket and decrypts it locally.
def get(service):
file_get = raw_input("Enter the file name to be downloaded, along with it's extension \n ")
password = raw_input("Enter the key for the file \n")
key = hashlib.sha256(password).digest()
try:
# Get Metadata
req = service.objects().get(
bucket=_BUCKET_NAME,
object=file_get,
fields='bucket,name,metadata(my-key)',
)
resp = req.execute()
print json.dumps(resp, indent=2)
# Get Payload Data
req = service.objects().get_media(
bucket=_BUCKET_NAME ,
object=file_get,
)
# The BytesIO object may be replaced with any io.Base instance.
fh = io.BytesIO()
downloader = MediaIoBaseDownload(fh, req, chunksize=1024*1024) #show progress at download
done = False
while not done:
status, done = downloader.next_chunk()
if status:
print 'Download %d%%.' % int(status.progress() * 100)
print 'Download Complete!'
dec = decrypt(fh.getvalue(),key)
with open(file_get, 'wb') as fo:
fo.write(dec)
print json.dumps(resp, indent=2)
except client.AccessTokenRefreshError:
print ("Error in the credentials")
#Encrypts a local file, uploads the encrypted object to the bucket, and deletes the local copies.
def put(service):
try:
file_put = raw_input("Enter the file name to be uploaded, along with it's extension \n")
password = raw_input("Enter the key for the file \n")
key = hashlib.sha256(password).digest()
start_time = time.time()
e_file = encrypt_file(file_put, key)
req = service.objects().insert(
bucket=_BUCKET_NAME,
name=file_put,
media_body=e_file)
resp = req.execute()
end_time = time.time()
elapsed_time = end_time - start_time
os.remove(file_put) #to remove the local copies
os.remove(e_file)
print json.dumps(resp, indent=2)
print "the time taken is " + str(elapsed_time)
except client.AccessTokenRefreshError:
print ("Error in the credentials")
#Lists all the objects from the given bucket name
def listobj(service):
fields_to_return = 'nextPageToken,items(name,size,contentType,metadata(my-key))'
req = service.objects().list(bucket=_BUCKET_NAME, fields=fields_to_return)
# If you have too many items to list in one request, list_next() will
# automatically handle paging with the pageToken.
while req is not None:
resp = req.execute()
print json.dumps(resp, indent=2)
req = service.objects().list_next(req, resp)
#This deletes the object from the bucket
def deleteobj(service):
file_delete = raw_input("Enter the file name to be deleted, along with it's extension \n")
try:
service.objects().delete(
bucket=_BUCKET_NAME,
object=file_delete).execute()
print file_delete+" deleted"
except client.AccessTokenRefreshError:
print ("Error in the credentials")
def main(argv):
# Parse the command-line flags.
flags = parser.parse_args(argv[1:])
#The sample.dat file stores the short-lived access tokens with which your application requests user data, attaching the access token to each request,
#so that the user need not re-authenticate through the browser every time. This is optional. If the credentials don't exist
#or are invalid, run through the native client flow. The Storage object will ensure that, if successful, the good
# credentials will get written back to the file (sample.dat in this case).
storage = file.Storage('sample.dat')
credentials = storage.get()
if credentials is None or credentials.invalid:
credentials = tools.run_flow(FLOW, storage, flags)
# Create an httplib2.Http object to handle our HTTP requests and authorize it
# with our good Credentials.
http = httplib2.Http()
http = credentials.authorize(http)
# Construct the service object for interacting with the Cloud Storage API.
service = discovery.build('storage', _API_VERSION, http=http)
#This is the equivalent of a switch statement in C or Java.
#Store each option number and its handler function as a key-value pair in the dictionary.
options = {1: put, 2: get, 3:listobj, 4:deleteobj}
option = input("Enter your choice \n 1 - put \n 2 - get \n 3 - list the objects \n 4 - delete the objects \n ")
print "The chosen option is ", option
#For example, if the user chooses option 1, the line below executes put(service), calling the put function defined above.
options[option](service)
if __name__ == '__main__':
main(sys.argv)
# [END all]
|
mit
|
toumorokoshi/yelo
|
yelo/views.py
|
1
|
3859
|
import json
from django.contrib.auth.models import User, Group
from django.http import JsonResponse
from django.shortcuts import render
from django.views.decorators.csrf import csrf_exempt
from rest_framework import viewsets
from yelo.lib.elo_utils import play_match
from yelo.lib.http import api_error
from yelo.models import Elo, Match
from yelo.serializers import (
EloSerializer,
MatchSerializer,
GroupSerializer,
UserSerializer
)
import collections
# Create your views here.
def index(request):
return render(request, "index.html", {
'title': 'yelo'
})
def profile(request, player):
return render(request, "profile.html", {
'title': 'yelo',
'player': player,
'rating_history': get_matches_by_player(player)
})
@csrf_exempt
def record_match(request):
if request.method != 'POST':
return api_error('record_match must be called as a POST')
form = json.loads(request.body.decode('utf-8'))
if form['winner'] == form['loser']:
return api_error(form['winner'] + ' cannot be both the winner and the loser of a match.')
winner = User.objects.get(username=form['winner'])
winner_elo = winner.elo.elo
loser = User.objects.get(username=form['loser'])
loser_elo = loser.elo.elo
new_winner_elo, new_loser_elo = play_match(winner_elo, loser_elo)
match = Match(
winner=winner,
winner_before_elo=winner_elo,
winner_after_elo=new_winner_elo,
loser=loser,
loser_before_elo=loser_elo,
loser_after_elo=new_loser_elo
)
match.save()
winner.elo.elo = new_winner_elo
winner.elo.save()
loser.elo.elo = new_loser_elo
loser.elo.save()
return JsonResponse({
'success': True
})
@csrf_exempt
def add_player(request):
if request.method != 'POST':
return api_error('add_player must be called as a POST')
form = json.loads(request.body.decode('utf-8'))
elo = Elo(
player=User.objects.create_user(form['name'])
)
elo.save()
return JsonResponse({
'success': True
})
@csrf_exempt
def get_matches_by_player(player):
user = User.objects.get(username=player)
wins = Match.objects.filter(winner=user)
losses = Match.objects.filter(loser=user)
rating_history = dict()
for win in wins:
rating_history[win.match_date] = win.winner_after_elo
for loss in losses:
rating_history[loss.match_date] = loss.loser_after_elo
sorted_rating_history = collections.OrderedDict()
for k in sorted(rating_history.keys()):
sorted_rating_history[k] = rating_history[k]
    # JavaScript Date months are zero-based, so subtract 1 from the month
resp = [{'x': 'new Date(' + ','.join(str(a) for a in [date.year, date.month - 1, date.day, date.hour, date.minute, date.second]) + ')', 'y': elo} for date, elo in sorted_rating_history.items()]
resp = JsonResponse(
resp,
safe=False
)
#Unfortunate formatting adjustments to give canvasjs what it wants
resp = str(resp.content.decode('utf-8')).replace("\"", '')
resp = resp.replace("Content-Type: application/json", "")
return resp
class EloViewSet(viewsets.ModelViewSet):
queryset = Elo.objects.all()
serializer_class = EloSerializer
ordering = ('-elo',)
class UserViewSet(viewsets.ModelViewSet):
"""
API endpoint that allows users to be viewed or edited.
"""
queryset = User.objects.all()
serializer_class = UserSerializer
class GroupViewSet(viewsets.ModelViewSet):
"""
API endpoint that allows groups to be viewed or edited.
"""
queryset = Group.objects.all()
serializer_class = GroupSerializer
class MatchViewSet(viewsets.ModelViewSet):
queryset = Match.objects.all()
serializer_class = MatchSerializer
ordering = ('-match_date',)
|
mit
|
munnerz/CouchPotatoServer
|
libs/caper/matcher.py
|
81
|
4952
|
# Copyright 2013 Dean Gardiner <[email protected]>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from caper.helpers import is_list_type, update_dict, delta_seconds
from datetime import datetime
from logr import Logr
import re
class FragmentMatcher(object):
def __init__(self, pattern_groups):
self.regex = {}
self.construct_patterns(pattern_groups)
def construct_patterns(self, pattern_groups):
compile_start = datetime.now()
compile_count = 0
for group_name, patterns in pattern_groups:
if group_name not in self.regex:
self.regex[group_name] = []
# Transform into weight groups
if type(patterns[0]) is str or type(patterns[0][0]) not in [int, float]:
patterns = [(1.0, patterns)]
for weight, patterns in patterns:
weight_patterns = []
for pattern in patterns:
# Transform into multi-fragment patterns
if type(pattern) is str:
pattern = (pattern,)
if type(pattern) is tuple and len(pattern) == 2:
if type(pattern[0]) is str and is_list_type(pattern[1], str):
pattern = (pattern,)
result = []
for value in pattern:
if type(value) is tuple:
if len(value) == 2:
# Construct OR-list pattern
value = value[0] % '|'.join(value[1])
elif len(value) == 1:
value = value[0]
result.append(re.compile(value, re.IGNORECASE))
compile_count += 1
weight_patterns.append(tuple(result))
self.regex[group_name].append((weight, weight_patterns))
Logr.info("Compiled %s patterns in %ss", compile_count, delta_seconds(datetime.now() - compile_start))
def find_group(self, name):
for group_name, weight_groups in self.regex.items():
if group_name and group_name == name:
return group_name, weight_groups
return None, None
def value_match(self, value, group_name=None, single=True):
result = None
for group, weight_groups in self.regex.items():
if group_name and group != group_name:
continue
# TODO handle multiple weights
weight, patterns = weight_groups[0]
for pattern in patterns:
match = pattern[0].match(value)
if not match:
continue
if result is None:
result = {}
if group not in result:
result[group] = {}
result[group].update(match.groupdict())
if single:
return result
return result
def fragment_match(self, fragment, group_name=None):
"""Follow a fragment chain to try find a match
:type fragment: caper.objects.CaperFragment
:type group_name: str or None
:return: The weight of the match found between 0.0 and 1.0,
where 1.0 means perfect match and 0.0 means no match
:rtype: (float, dict, int)
"""
group_name, weight_groups = self.find_group(group_name)
for weight, patterns in weight_groups:
for pattern in patterns:
cur_fragment = fragment
success = True
result = {}
# Ignore empty patterns
if len(pattern) < 1:
break
for fragment_pattern in pattern:
if not cur_fragment:
success = False
break
match = fragment_pattern.match(cur_fragment.value)
if match:
update_dict(result, match.groupdict())
else:
success = False
break
cur_fragment = cur_fragment.right if cur_fragment else None
if success:
Logr.debug("Found match with weight %s" % weight)
return float(weight), result, len(pattern)
return 0.0, None, 1
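# --- Illustrative usage sketch (not part of the original module) ---
# A minimal example of how the matcher might be driven; the 'identifier' group
# name and the regex below are invented for illustration only.
#
#   matcher = FragmentMatcher([
#       ('identifier', [r'(?P<season>\d+)x(?P<episode>\d+)'])
#   ])
#   # returns {'identifier': {'season': '2', 'episode': '03'}}
#   matcher.value_match('2x03', group_name='identifier')
#
# fragment_match() works the same way but walks a chain of
# caper.objects.CaperFragment nodes and reports (weight, captures, length).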
|
gpl-3.0
|
Talkdesk/graphite-web
|
webapp/graphite/whitelist/views.py
|
5
|
1960
|
"""Copyright 2008 Orbitz WorldWide
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License."""
import os
import pickle
from random import randint
from django.http import HttpResponse
from django.conf import settings
from graphite.util import unpickle
def add(request):
metrics = set( request.POST['metrics'].split() )
whitelist = load_whitelist()
new_whitelist = whitelist | metrics
save_whitelist(new_whitelist)
return HttpResponse(mimetype="text/plain", content="OK")
def remove(request):
metrics = set( request.POST['metrics'].split() )
whitelist = load_whitelist()
new_whitelist = whitelist - metrics
save_whitelist(new_whitelist)
return HttpResponse(mimetype="text/plain", content="OK")
def show(request):
whitelist = load_whitelist()
members = '\n'.join( sorted(whitelist) )
return HttpResponse(mimetype="text/plain", content=members)
def load_whitelist():
fh = open(settings.WHITELIST_FILE, 'rb')
whitelist = unpickle.load(fh)
fh.close()
return whitelist
def save_whitelist(whitelist):
serialized = pickle.dumps(whitelist, protocol=-1) #do this instead of dump() to raise potential exceptions before open()
tmpfile = '%s-%d' % (settings.WHITELIST_FILE, randint(0, 100000))
try:
fh = open(tmpfile, 'wb')
fh.write(serialized)
fh.close()
if os.path.exists(settings.WHITELIST_FILE):
os.unlink(settings.WHITELIST_FILE)
os.rename(tmpfile, settings.WHITELIST_FILE)
finally:
if os.path.exists(tmpfile):
os.unlink(tmpfile)
|
apache-2.0
|
petemounce/ansible
|
lib/ansible/modules/cloud/openstack/os_volume.py
|
11
|
5663
|
#!/usr/bin/python
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# This module is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this software. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: os_volume
short_description: Create/Delete Cinder Volumes
extends_documentation_fragment: openstack
version_added: "2.0"
author: "Monty Taylor (@emonty)"
description:
- Create or Remove cinder block storage volumes
options:
size:
description:
- Size of volume in GB. This parameter is required when the
I(state) parameter is 'present'.
required: false
default: None
display_name:
description:
- Name of volume
required: true
display_description:
description:
- String describing the volume
required: false
default: None
volume_type:
description:
- Volume type for volume
required: false
default: None
image:
description:
- Image name or id for boot from volume
required: false
default: None
snapshot_id:
description:
- Volume snapshot id to create from
required: false
default: None
volume:
description:
- Volume name or id to create from
required: false
default: None
version_added: "2.3"
state:
description:
- Should the resource be present or absent.
choices: [present, absent]
default: present
availability_zone:
description:
      - Ignored. Present for backward compatibility
required: false
requirements:
- "python >= 2.6"
- "shade"
'''
EXAMPLES = '''
# Creates a new volume
- name: create a volume
hosts: localhost
tasks:
- name: create 40g test volume
os_volume:
state: present
cloud: mordred
availability_zone: az2
size: 40
display_name: test_volume
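
# A second, illustrative example (not from the original module docs): creating a
# volume from an existing source volume via the 'volume' option added in 2.3.
# The source volume name 'test_volume' below is assumed for illustration.
- name: create a volume from another volume
  hosts: localhost
  tasks:
    - name: clone test volume
      os_volume:
        state: present
        cloud: mordred
        size: 40
        display_name: test_volume_copy
        volume: test_volume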
'''
try:
import shade
HAS_SHADE = True
except ImportError:
HAS_SHADE = False
def _present_volume(module, cloud):
if cloud.volume_exists(module.params['display_name']):
v = cloud.get_volume(module.params['display_name'])
module.exit_json(changed=False, id=v['id'], volume=v)
volume_args = dict(
size=module.params['size'],
volume_type=module.params['volume_type'],
display_name=module.params['display_name'],
display_description=module.params['display_description'],
snapshot_id=module.params['snapshot_id'],
availability_zone=module.params['availability_zone'],
)
if module.params['image']:
image_id = cloud.get_image_id(module.params['image'])
volume_args['imageRef'] = image_id
if module.params['volume']:
volume_id = cloud.get_volume_id(module.params['volume'])
if not volume_id:
module.fail_json(msg="Failed to find volume '%s'" % module.params['volume'])
volume_args['source_volid'] = volume_id
volume = cloud.create_volume(
wait=module.params['wait'], timeout=module.params['timeout'],
**volume_args)
module.exit_json(changed=True, id=volume['id'], volume=volume)
def _absent_volume(module, cloud):
changed = False
if cloud.volume_exists(module.params['display_name']):
try:
changed = cloud.delete_volume(name_or_id=module.params['display_name'],
wait=module.params['wait'],
timeout=module.params['timeout'])
except shade.OpenStackCloudTimeout:
module.exit_json(changed=changed)
module.exit_json(changed=changed)
def main():
argument_spec = openstack_full_argument_spec(
size=dict(default=None),
volume_type=dict(default=None),
display_name=dict(required=True, aliases=['name']),
display_description=dict(default=None, aliases=['description']),
image=dict(default=None),
snapshot_id=dict(default=None),
volume=dict(default=None),
state=dict(default='present', choices=['absent', 'present']),
)
module_kwargs = openstack_module_kwargs(
mutually_exclusive=[
['image', 'snapshot_id', 'volume'],
],
)
module = AnsibleModule(argument_spec=argument_spec, **module_kwargs)
if not HAS_SHADE:
module.fail_json(msg='shade is required for this module')
state = module.params['state']
if state == 'present' and not module.params['size']:
module.fail_json(msg="Size is required when state is 'present'")
try:
cloud = shade.openstack_cloud(**module.params)
if state == 'present':
_present_volume(module, cloud)
if state == 'absent':
_absent_volume(module, cloud)
except shade.OpenStackCloudException as e:
module.fail_json(msg=str(e))
# this is magic, see lib/ansible/module_common.py
from ansible.module_utils.basic import *
from ansible.module_utils.openstack import *
if __name__ == '__main__':
main()
|
gpl-3.0
|
igemsoftware/SYSU-Software2013
|
project/Python27_32/Lib/lib2to3/fixes/fix_exitfunc.py
|
291
|
2505
|
"""
Convert use of sys.exitfunc to use the atexit module.
"""
# Author: Benjamin Peterson
from lib2to3 import pytree, fixer_base
from lib2to3.fixer_util import Name, Attr, Call, Comma, Newline, syms
class FixExitfunc(fixer_base.BaseFix):
keep_line_order = True
BM_compatible = True
PATTERN = """
(
sys_import=import_name<'import'
('sys'
|
dotted_as_names< (any ',')* 'sys' (',' any)* >
)
>
|
expr_stmt<
power< 'sys' trailer< '.' 'exitfunc' > >
'=' func=any >
)
"""
def __init__(self, *args):
super(FixExitfunc, self).__init__(*args)
def start_tree(self, tree, filename):
super(FixExitfunc, self).start_tree(tree, filename)
self.sys_import = None
def transform(self, node, results):
        # First, find the sys import. We'll just hope it's in global scope.
if "sys_import" in results:
if self.sys_import is None:
self.sys_import = results["sys_import"]
return
func = results["func"].clone()
func.prefix = u""
register = pytree.Node(syms.power,
Attr(Name(u"atexit"), Name(u"register"))
)
call = Call(register, [func], node.prefix)
node.replace(call)
if self.sys_import is None:
# That's interesting.
self.warning(node, "Can't find sys import; Please add an atexit "
"import at the top of your file.")
return
# Now add an atexit import after the sys import.
names = self.sys_import.children[1]
if names.type == syms.dotted_as_names:
names.append_child(Comma())
names.append_child(Name(u"atexit", u" "))
else:
containing_stmt = self.sys_import.parent
position = containing_stmt.children.index(self.sys_import)
stmt_container = containing_stmt.parent
new_import = pytree.Node(syms.import_name,
[Name(u"import"), Name(u"atexit", u" ")]
)
new = pytree.Node(syms.simple_stmt, [new_import])
containing_stmt.insert_child(position + 1, Newline())
containing_stmt.insert_child(position + 2, new)
|
mit
|
andim/scipy
|
scipy/fftpack/tests/test_helper.py
|
63
|
1934
|
#!/usr/bin/env python
# Created by Pearu Peterson, September 2002
from __future__ import division, print_function, absolute_import
__usage__ = """
Build fftpack:
python setup_fftpack.py build
Run tests if scipy is installed:
python -c 'import scipy;scipy.fftpack.test(<level>)'
Run tests if fftpack is not installed:
python tests/test_helper.py [<level>]
"""
from numpy.testing import (TestCase, assert_array_almost_equal, rand,
run_module_suite)
from scipy.fftpack import fftshift,ifftshift,fftfreq,rfftfreq
from numpy import pi
def random(size):
return rand(*size)
class TestFFTShift(TestCase):
def test_definition(self):
x = [0,1,2,3,4,-4,-3,-2,-1]
y = [-4,-3,-2,-1,0,1,2,3,4]
assert_array_almost_equal(fftshift(x),y)
assert_array_almost_equal(ifftshift(y),x)
x = [0,1,2,3,4,-5,-4,-3,-2,-1]
y = [-5,-4,-3,-2,-1,0,1,2,3,4]
assert_array_almost_equal(fftshift(x),y)
assert_array_almost_equal(ifftshift(y),x)
def test_inverse(self):
for n in [1,4,9,100,211]:
x = random((n,))
assert_array_almost_equal(ifftshift(fftshift(x)),x)
class TestFFTFreq(TestCase):
def test_definition(self):
x = [0,1,2,3,4,-4,-3,-2,-1]
assert_array_almost_equal(9*fftfreq(9),x)
assert_array_almost_equal(9*pi*fftfreq(9,pi),x)
x = [0,1,2,3,4,-5,-4,-3,-2,-1]
assert_array_almost_equal(10*fftfreq(10),x)
assert_array_almost_equal(10*pi*fftfreq(10,pi),x)
class TestRFFTFreq(TestCase):
def test_definition(self):
x = [0,1,1,2,2,3,3,4,4]
assert_array_almost_equal(9*rfftfreq(9),x)
assert_array_almost_equal(9*pi*rfftfreq(9,pi),x)
x = [0,1,1,2,2,3,3,4,4,5]
assert_array_almost_equal(10*rfftfreq(10),x)
assert_array_almost_equal(10*pi*rfftfreq(10,pi),x)
if __name__ == "__main__":
run_module_suite()
|
bsd-3-clause
|
Julioocz/SIMNAV
|
simnav/termodinamica/ideal.py
|
1
|
2516
|
"""Ecuaciones termodinamicas ideales para el calculo de propiedades termodinamicas
del Manual del ingeniero quimico Perry"""
from math import sinh, cosh
from scipy import integrate
import numpy as np
def antoine(T, C1, C2, C3, C4, C5):
"""Ecuación para calculo de presión de vapor (Pa)"""
return np.exp(C1 + C2/T + C3*np.log(T) + C4*T**C5)
def calor_vaporizacion(T, Tc, C1, C2, C3, C4):
"""Calor de vaporización de liquidos organicos e inorganicos [J/(mol.K)]"""
Tr = T / Tc
return C1*(1 - Tr)**(C2 + C3*Tr + C4*Tr**2) / 1000
def cp_polinomial(T, C1, C2, C3, C4, C5):
"""Capacidad calorifica a presion constante de compuestos inorganicos y organicos en el
estado ideal como una ecuación polinomial [J/(mol.K)]"""
return (C1 + C2*T + C3*T**2 + C4*T**3 + C5*T**4) / 1000
def cp_hiperbolico(T, C1, C2, C3, C4, C5):
"""Capacidad calorifica a presión constante para compuestos organicos e inorganicos
en el estado ideal como una ecuación hiperbolica [J/(mol.K)]"""
return (C1 + C2*((C3/T) / sinh(C3/T))**2 + C4*((C5/T) / cosh(C5/T))**2) / 1000
def cp_liquido1(T, C1, C2, C3, C4, C5):
"""Ecuación 1 para el calculo de capacidad calorifica de liquidos inorganicos y
organicos [J/(mol.K)]"""
return (C1 + C2*T +C3*T**2 + C4*T**3 + C5*T**4) / 1000
def cp_liquido2(T, Tc, C1, C2, C3, C4):
"""Ecuación 2 para el calculo de capacidad calorifica de liquidos inorganicos y
organicos [J/(mol.K)]"""
t = 1 - T/Tc # t = 1 - Tr
return (((C1**2)/t + C2 - 2*C1*C3*t - C1*C4*t**2 - (C3**2*t**3)/3 - (C3*C4*t**4)/2
- (C4**2*t**5)/5) / 1000)
def entalpia_cp(T1, T2, ecuacion_cp):
"""Calcula la entalpia para una sustancia pura usando la capacidad calorifica a presión
constante con la siguiente ecuación: dH = ∫cp dT
:param T1: temperatura inicial para la integración
:param T2: temperatura final para la integración
:param ecuacion_cp: ecuación para calculo de cp en función de solo la temperatura
:return: delta de entalpia entre las temperaturas provistas
"""
return integrate.quad(ecuacion_cp, T1, T2)[0]
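# --- Illustrative usage sketch (not part of the original module) ---
# Example of combining a cp equation with entalpia_cp via functools.partial;
# the C1..C5 coefficients below are placeholders, not real Perry data.
#
#   from functools import partial
#   cp = partial(cp_polinomial, C1=33e3, C2=0.0, C3=0.0, C4=0.0, C5=0.0)
#   dH = entalpia_cp(298.15, 350.0, cp)   # enthalpy change in J/mol for these coefficients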
def raoult_vapor(fraccion_liquido, presion_vapor, presion):
"""Calculo de fracción molar de vapor mediante la ecuación de Raoult"""
return fraccion_liquido * presion_vapor / presion
def raoult_liquido(fraccion_vapor, presion_vapor, presion):
"""Calcula la fraccion molar de liquido mediante la ec de Raoult"""
return fraccion_vapor * presion / presion_vapor
|
mit
|
andref/Unnatural-Sublime-Package
|
events.py
|
1
|
1798
|
# encoding: utf-8
import sublime, sublime_plugin
try:
from . import util
except ValueError:
import util
class PerformEventListener(sublime_plugin.EventListener):
"""Suggest subroutine completions for the perform statement."""
def on_query_completions(self, view, prefix, points):
if not util.is_natural_file(view):
return None
texts = util.text_preceding_points(view, points)
if all([text.strip().lower().endswith(u'perform') for text in texts]):
subroutines = util.find_text_by_selector(view,
'entity.name.function.natural')
if not subroutines:
return None
subroutines.sort()
completions = [(sub, sub) for sub in subroutines]
return (completions, sublime.INHIBIT_WORD_COMPLETIONS)
class AddRulerToColumn72Listener(sublime_plugin.EventListener):
"""Add a ruler to column 72 when a Natural file is opened. If the user has
other rulers, they're not messed with."""
def on_load(self, view):
if not util.is_natural_file(view):
return
rulers = view.settings().get(u'rulers')
if 72 not in rulers:
rulers.append(72)
rulers.sort() # why? to be neat.
view.settings().set(u'rulers', rulers)
class FixWordSeparatorsListener(sublime_plugin.EventListener):
"""Fix the `word_separators` setting for a Natural view so that the hyphen
and hash sign can be considered parts of words."""
def on_load(self, view):
if not util.is_natural_file(view):
return
separators = view.settings().get(u'word_separators')
separators = separators.replace(u'-', u'').replace(u'#', u'')
view.settings().set(u'word_separators', separators)
|
mit
|
vetalypp/e2openplugin-CrossEPG
|
scripts/lib/scriptlib.py
|
2
|
10585
|
#!/usr/bin/python
# scriptlib.py by Ambrosa http://www.ambrosa.net
# derived from E2_LOADEPG
# 22-Dec-2011
__author__ = "ambrosa http://www.ambrosa.net"
__copyright__ = "Copyright (C) 2008-2011 Alessandro Ambrosini"
__license__ = "CreativeCommons by-nc-sa http://creativecommons.org/licenses/by-nc-sa/3.0/"
import os
import sys
import time
import codecs
import crossepg
# escape some incorrect chars from filename
def fn_escape(s):
if type(s).__name__ == 'str':
s = s.decode('utf-8')
s = s.replace(' ','_')
s = s.replace('/','_')
s = s.replace(':','_')
s = s.replace('.','_')
s = s.replace('|','_')
s = s.replace('!','_')
return(s.encode('utf-8'))
# logging class
class logging_class:
FDlog = None
def __init__(self, fname=''):
        # get where CrossEPG saves its data (dbroot) and use it for opening crossepg.log
dbroot = crossepg.epgdb_get_dbroot()
if dbroot != False:
if fname != '' :
self.FDlog = open(dbroot+'/'+fname,'w')
else :
crossepg.log_open(dbroot)
else:
print "[scriptlib] WARNING: cannot open crossepg dbroot. Log not initialized !!"
def log(self,s):
if self.FDlog != None :
self.FDlog.write("%s %s\n" % (time.strftime("%d/%m/%Y %H:%M:%S"), s) )
else:
crossepg.log_add(str(s))
def log2video_status(self,s):
print("LOGTEXT %s" % s)
sys.stdout.flush()
def log2video_pbar_on(self):
print("PROGRESS ON")
sys.stdout.flush()
def log2video_pbar_off(self):
print("PROGRESS OFF")
sys.stdout.flush()
def log2video_pbar(self,i):
if i > 100:
i = 100
if i < 0:
i = 0
print("PROGRESS %d" % i)
sys.stdout.flush()
def log2video_scriptname(self,s):
print("TYPE RUNNING CSCRIPT %s" % s)
sys.stdout.flush()
# decompress gzipped data
class zlib_class:
GZTMP_FILE = "gunzip_temp.gz"
UNGZTMP_FILE = "gunzip_temp"
BIN_GZUNZIP = "gunzip -c " + GZTMP_FILE
def gzuncompress(self,data):
fd = open(self.GZTMP_FILE,'w')
fd.write(data)
fd.close()
fd = os.popen(self.BIN_GZUNZIP)
data_ungz = fd.read()
fd.close()
os.unlink(self.GZTMP_FILE)
return(data_ungz)
# remove old cached EPG files
def cleanup_oldcachedfiles(cachedir, field_separator):
TODAY = time.strftime("%Y%m%d")
for cachedfile in os.listdir(cachedir):
# extract date from filename
if cachedfile.split(field_separator)[-1] < TODAY :
os.unlink(os.path.join(cachedir,cachedfile))
# return GMTIME - LOCALTIME in seconds (DST-aware)
# returns a negative number if the timezone is east of GMT (like Italy)
# returns a positive number if the timezone is west of GMT (like USA)
def delta_utc():
if time.localtime().tm_isdst == 0 :
        # return gmtime - localtime (in seconds)
return time.timezone
else:
        # return gmtime - localtime with DST in effect (time.altzone)
return time.altzone
# return DST time difference (in seconds)
def delta_dst():
if time.localtime().tm_isdst == 0 :
return 0
else:
# return DST difference
return abs(time.altzone - time.timezone)
# manage channel list from lamedb
class lamedb_class:
LAMEDB='/etc/enigma2/lamedb'
# initialize an empty dictionary (Python array) indexed by channel name
# format: { channel_name : [ (sid , provider) , (sid , provider) , .... ] }
INDEXBYCHNAME = True
lamedb_dict = {}
# lamedb indexed by provider name
# format: { provider_name : [ (sid , channel_name) , (sid , channel_name) , .... ] }
    INDEXBYPROVID = False # if True, also build lamedb_provid_dict (indexed by provider name); usually False to save memory
lamedb_provid_dict = {}
def __init__(self, index_by_chname = True, index_by_provid = False):
self.INDEXBYCHNAME = index_by_chname
self.INDEXBYPROVID = index_by_provid
self.read_lamedb()
    # try to decode a string using UTF-8 first; if that fails, fall back to the ISO-8859 charsets
    # always returns a Unicode string
def decode_charset(self,s):
u = None
charset_list = ('utf-8','iso-8859-1','iso-8859-2','iso-8859-15')
for charset in charset_list:
try:
u = unicode(s,charset,"strict")
except:
pass
else:
break
if u == None:
print("CHARSET ERROR while decoding lamedb")
sys.exit(1)
else:
return(u)
def read_lamedb(self):
if not os.path.exists(self.LAMEDB):
print("ERROR ! \'%s\' NOT FOUND" % self.LAMEDB)
sys.exit(1)
# lamedb mix UTF-8 + iso-8859-* inside it
# need charset decoding line by line
fd = open(self.LAMEDB,"r")
# skip transponder section
# read lamedb until are found "end" and "services" lines
while True:
temp = self.decode_charset(fd.readline())
if temp == '' :
print("ERROR parsing lamedb, transponder section: end of file")
sys.exit(1)
temp = temp.strip(' \n\r')
if temp == u"end":
# next line should be "services"
temp = self.decode_charset(fd.readline())
temp = temp.strip(' \n\r')
if temp == u'services':
# reached end of transponder section, end loop and continue with parsing channel section
break
else:
print("ERROR parsing lamedb, transponder section: not found \"end + services\" lines")
sys.exit(1)
# parsing lamedb channel section
while True:
sid = self.decode_charset(fd.readline()) # read SID , it's the first line
if sid == '' :
print("ERROR parsing lamedb, channel_name section: end of file")
sys.exit(1)
sid = sid.strip(' \n\r')
if sid == u'end':
# reached end of channel section, end loop
break;
channel_name = self.decode_charset(fd.readline()) # read channel name, this is the second line
channel_name = channel_name.strip(' \n\r').lower() # force channel name lowercase
temp = self.decode_charset(fd.readline()) # read provider , this is the third line
temp = temp.strip(' \n\r').lower()
temp_P = temp.find('p:')
if temp_P == -1 :
print("ERROR parsing lamedb, channel_name section: provider name \'p:\' not present")
sys.exit(1)
else:
temp = temp[(temp_P + 2):]
temp = temp.split(',')[0]
temp = temp.strip(' \n\r')
if temp == '':
provider_name = u'noprovider'
else:
provider_name = temp.lower()
#channel_name=channel_name.encode('utf-8')
#provider_name=provider_name.encode('utf-8')
if self.INDEXBYCHNAME == True:
sp = (sid,provider_name)
if channel_name != '':
if self.lamedb_dict.has_key(channel_name):
self.lamedb_dict[channel_name].append(sp)
else:
self.lamedb_dict[channel_name]=[sp]
if self.INDEXBYPROVID == True:
sp = (sid,channel_name)
if self.lamedb_provid_dict.has_key(provider_name):
self.lamedb_provid_dict[provider_name].append(sp)
else:
self.lamedb_provid_dict[provider_name]=[sp]
fd.close()
if len(self.lamedb_dict) == 0 :
print("ERROR lamedb empty ?")
sys.exit(1)
def get_sid_byname(self,channel_name):
sid_list = []
if self.lamedb_dict.has_key(channel_name) :
for v in self.lamedb_dict[channel_name]:
# (sid,provider_name)
sid_list.append(v[0])
return(sid_list)
def get_provid_byname(self,channel_name):
provid_list = []
if self.lamedb_dict.has_key(channel_name) :
for v in self.lamedb_dict[channel_name]:
# (sid,provider_name)
provid_list.append(v[1])
return(provid_list)
def get_sidprovid_byname(self,channel_name):
sidprov_list = []
if self.lamedb_dict.has_key(channel_name) :
# (sid,provider_name)
sidprov_list = self.lamedb_dict[channel_name]
return(sidprov_list)
def get_chnames_byprov(self,provider_name):
if self.INDEXBYPROVID == True:
if self.lamedb_provid_dict.has_key(provider_name) :
return self.lamedb_provid_dict[provider_name]
else:
return None
return None
def convert_sid(self,sid):
s=[]
# SID:ns:TSID:ONID:stype:unused
try:
tmp = sid.split(":")
s.append(int(tmp[0],0x10)) # SID
s.append(int(tmp[2],0X10)) # TSID
s.append(int(tmp[3],0X10)) # ONID
except:
pass
return(s)
class crossepg_db_class:
db_channel_ref = ''
event_id = 1
title_ref = ''
def __init__(self):
pass
def open_db(self):
        # get where CrossEPG saves its data (dbroot)
dbroot = crossepg.epgdb_get_dbroot()
# open CrossEPG database
if not crossepg.epgdb_open(dbroot):
print("ERROR opening CrossEPG database")
sys.exit(1)
# load database structures (index, ....)
crossepg.epgdb_load()
def close_db(self):
# save data
if crossepg.epgdb_save(None):
print("CrossEPG data saved")
else:
print("CrossEPG Error saving data")
# close epgdb and clean memory
crossepg.epgdb_close()
crossepg.epgdb_clean()
# add channel into db and get a reference to the structure
# doesn't matter if the channel already exist... epgdb do all the work
def add_channel(self,ch_sid):
# epgdb_channels_add(onid, tsid, sid)
self.db_channel_ref = crossepg.epgdb_channels_add(ch_sid[2], ch_sid[1], ch_sid[0])
self.event_id = 1
# add an EPG event
def add_event(self, start_time, duration, title=' ', summarie=' ', language='eng', utf8=False):
start_time = int(start_time)
duration = int(duration)
if (duration < 0) or (duration > 65535) :
            # duration must be >= 0 and <= 65535; skip this event otherwise (it's an error)
print("DEBUG: length error %d" % duration)
return
event_ref = crossepg.epgdb_title_alloc() # alloc title structure in memory
event_ref.event_id = self.event_id # event_id is unique inside a channel
self.event_id += 1
event_ref.start_time = start_time # Unix timestamp, always referred to gmt+0 without daylight saving
        event_ref.mjd = crossepg.epgdb_calculate_mjd(event_ref.start_time) # Modified Julian Date. If you don't know it you can calculate it with epgdb_calculate_mjd()
# print(" title %s , starttime %s , duration %f" % (title, start_time, duration))
event_ref.length = duration # event duration in seconds
# ISO 639 language code. http://en.wikipedia.org/wiki/ISO_639
event_ref.iso_639_1 = ord(language[0:1])
event_ref.iso_639_2 = ord(language[1:2])
event_ref.iso_639_3 = ord(language[2:3])
        # add the event to epgdb and get back a reference to the structure
        # always use the returned structure reference from here on
        # if the event already exists, epgdb updates it and automatically destroys the new structure
event_ref = crossepg.epgdb_titles_add(self.db_channel_ref, event_ref)
#print("DEBUG , title DATA TYPE: \'%s\'" % type(title).__name__ )
#print("DEBUG , summarie DATA TYPE: \'%s\'" % type(summarie).__name__ )
if utf8 == False :
crossepg.epgdb_titles_set_description(event_ref, title);
crossepg.epgdb_titles_set_long_description(event_ref, summarie);
else:
crossepg.epgdb_titles_set_description_utf8(event_ref, title);
crossepg.epgdb_titles_set_long_description_utf8(event_ref, summarie);
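# --- Illustrative usage sketch (not part of the original module) ---
# Rough outline of how a grabber script might combine the classes above; the
# channel name and event values are invented for illustration.
#
#   db = crossepg_db_class()
#   db.open_db()
#   lamedb = lamedb_class()
#   for sid in lamedb.get_sid_byname('rai 1'):
#       db.add_channel(lamedb.convert_sid(sid))
#       db.add_event(start_time=1325376000, duration=3600,
#                    title='Example show', summarie='Example summary')
#   db.close_db()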
|
lgpl-2.1
|
jerem/Whoosh
|
src/whoosh/filedb/fileindex.py
|
1
|
19208
|
#===============================================================================
# Copyright 2009 Matt Chaput
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#===============================================================================
import cPickle, re
from bisect import bisect_right
from time import time
from threading import Lock
from whoosh import __version__
from whoosh.fields import Schema
from whoosh.index import Index
from whoosh.index import EmptyIndexError, OutOfDateError, IndexVersionError
from whoosh.index import _DEF_INDEX_NAME
from whoosh.store import LockError
from whoosh.support.bitvector import BitVector
from whoosh.system import _INT_SIZE, _FLOAT_SIZE
_INDEX_VERSION = -105
_EXTENSIONS = "dci|dcz|tiz|fvz|pst|vps"
# A mix-in that adds methods for deleting
# documents from self.segments. These methods are on IndexWriter as
# well as Index for convenience, so they're broken out here.
class SegmentDeletionMixin(object):
"""Mix-in for classes that support deleting documents from self.segments."""
def delete_document(self, docnum, delete = True):
"""Deletes a document by number."""
self.segments.delete_document(docnum, delete = delete)
def deleted_count(self):
"""Returns the total number of deleted documents in this index.
"""
return self.segments.deleted_count()
def is_deleted(self, docnum):
"""Returns True if a given document number is deleted but
not yet optimized out of the index.
"""
return self.segments.is_deleted(docnum)
def has_deletions(self):
"""Returns True if this index has documents that are marked
deleted but haven't been optimized out of the index yet.
"""
return self.segments.has_deletions()
class FileIndex(SegmentDeletionMixin, Index):
def __init__(self, storage, schema, create=False, indexname=_DEF_INDEX_NAME):
self.storage = storage
self.indexname = indexname
if schema is not None and not isinstance(schema, Schema):
raise ValueError("%r is not a Schema object" % schema)
self.generation = self.latest_generation()
if create:
if schema is None:
raise IndexError("To create an index you must specify a schema")
self.schema = schema
self.generation = 0
self.segment_counter = 0
self.segments = SegmentSet()
# Clear existing files
self.unlock()
prefix = "_%s_" % self.indexname
for filename in self.storage:
if filename.startswith(prefix):
storage.delete_file(filename)
self._write()
elif self.generation >= 0:
self._read(schema)
else:
raise EmptyIndexError
# Open a reader for this index. This is used by the
# deletion methods, but mostly it's to keep the underlying
# files open so they don't get deleted from underneath us.
self._searcher = self.searcher()
self.segment_num_lock = Lock()
def __repr__(self):
return "%s(%r, %r)" % (self.__class__.__name__, self.storage, self.indexname)
def __del__(self):
if hasattr(self, "_searcher") and self._searcher and not self._searcher.is_closed:
self._searcher.close()
def close(self):
self._searcher.close()
def latest_generation(self):
pattern = _toc_pattern(self.indexname)
max = -1
for filename in self.storage:
m = pattern.match(filename)
if m:
num = int(m.group(1))
if num > max: max = num
return max
def refresh(self):
if not self.up_to_date():
return self.__class__(self.storage, self.schema, indexname = self.indexname)
else:
return self
def up_to_date(self):
return self.generation == self.latest_generation()
def _write(self):
# Writes the content of this index to the .toc file.
for field in self.schema:
field.clean()
#stream = self.storage.create_file(self._toc_filename())
# Use a temporary file for atomic write.
tocfilename = self._toc_filename()
tempfilename = '%s.%s' % (tocfilename, time())
stream = self.storage.create_file(tempfilename)
stream.write_varint(_INT_SIZE)
stream.write_varint(_FLOAT_SIZE)
stream.write_int(-12345)
stream.write_int(_INDEX_VERSION)
for num in __version__[:3]:
stream.write_varint(num)
stream.write_string(cPickle.dumps(self.schema, -1))
stream.write_int(self.generation)
stream.write_int(self.segment_counter)
stream.write_pickle(self.segments)
stream.close()
# Rename temporary file to the proper filename
self.storage.rename_file(tempfilename, self._toc_filename(), safe=True)
def _read(self, schema):
# Reads the content of this index from the .toc file.
stream = self.storage.open_file(self._toc_filename())
if stream.read_varint() != _INT_SIZE or \
stream.read_varint() != _FLOAT_SIZE:
raise IndexError("Index was created on an architecture with different data sizes")
if not stream.read_int() == -12345:
raise IndexError("Number misread: byte order problem")
version = stream.read_int()
if version != _INDEX_VERSION:
raise IndexVersionError("Can't read format %s" % version, version)
self.version = version
self.release = (stream.read_varint(),
stream.read_varint(),
stream.read_varint())
# If the user supplied a schema object with the constructor,
# don't load the pickled schema from the saved index.
if schema:
self.schema = schema
stream.skip_string()
else:
self.schema = cPickle.loads(stream.read_string())
generation = stream.read_int()
assert generation == self.generation
self.segment_counter = stream.read_int()
self.segments = stream.read_pickle()
stream.close()
def _next_segment_name(self):
#Returns the name of the next segment in sequence.
if self.segment_num_lock.acquire():
try:
self.segment_counter += 1
return "_%s_%s" % (self.indexname, self.segment_counter)
finally:
self.segment_num_lock.release()
else:
raise LockError
def _toc_filename(self):
# Returns the computed filename of the TOC for this
# index name and generation.
return "_%s_%s.toc" % (self.indexname, self.generation)
def last_modified(self):
return self.storage.file_modified(self._toc_filename())
def lock(self):
return self.storage.lock("_%s_LOCK" % self.indexname)
def unlock(self):
self.storage.unlock("_%s_LOCK" % self.indexname)
def is_empty(self):
return len(self.segments) == 0
def optimize(self):
if len(self.segments) < 2 and not self.segments.has_deletions():
return
from whoosh.filedb.filewriting import OPTIMIZE
w = self.writer()
w.commit(OPTIMIZE)
def commit(self, new_segments = None):
self._searcher.close()
if not self.up_to_date():
raise OutOfDateError
if new_segments:
self.segments = new_segments
self.generation += 1
self._write()
self._clean_files()
self._searcher = self.searcher()
def _clean_files(self):
# Attempts to remove unused index files (called when a new generation
# is created). If existing Index and/or reader objects have the files
        # open, they may not get deleted immediately (e.g. on Windows)
# but will probably be deleted eventually by a later call to clean_files.
storage = self.storage
current_segment_names = set([s.name for s in self.segments])
tocpattern = _toc_pattern(self.indexname)
segpattern = _segment_pattern(self.indexname)
for filename in storage:
m = tocpattern.match(filename)
if m:
num = int(m.group(1))
if num != self.generation:
try:
storage.delete_file(filename)
except OSError:
# Another process still has this file open
pass
else:
m = segpattern.match(filename)
if m:
name = m.group(1)
if name not in current_segment_names:
try:
storage.delete_file(filename)
except OSError:
# Another process still has this file open
pass
def doc_count_all(self):
return self.segments.doc_count_all()
def doc_count(self):
return self.segments.doc_count()
def field_length(self, fieldid):
fieldnum = self.schema.to_number(fieldid)
return sum(s.field_length(fieldnum) for s in self.segments)
def reader(self):
return self.segments.reader(self.storage, self.schema)
def writer(self, **kwargs):
from whoosh.filedb.filewriting import FileIndexWriter
return FileIndexWriter(self, **kwargs)
# SegmentSet object
class SegmentSet(object):
"""This class is never instantiated by the user. It is used by the Index
object to keep track of the segments in the index.
"""
def __init__(self, segments = None):
if segments is None:
self.segments = []
else:
self.segments = segments
self._doc_offsets = self.doc_offsets()
def __repr__(self):
return repr(self.segments)
def __len__(self):
""":returns: the number of segments in this set."""
return len(self.segments)
def __iter__(self):
return iter(self.segments)
def __getitem__(self, n):
return self.segments.__getitem__(n)
def append(self, segment):
"""Adds a segment to this set."""
self.segments.append(segment)
self._doc_offsets = self.doc_offsets()
def _document_segment(self, docnum):
"""Returns the index.Segment object containing the given document
number.
"""
offsets = self._doc_offsets
if len(offsets) == 1: return 0
return bisect_right(offsets, docnum) - 1
def _segment_and_docnum(self, docnum):
"""Returns an (index.Segment, segment_docnum) pair for the
segment containing the given document number.
"""
segmentnum = self._document_segment(docnum)
offset = self._doc_offsets[segmentnum]
segment = self.segments[segmentnum]
return segment, docnum - offset
def copy(self):
""":returns: a deep copy of this set."""
return self.__class__([s.copy() for s in self.segments])
def doc_offsets(self):
# Recomputes the document offset list. This must be called if you
# change self.segments.
offsets = []
base = 0
for s in self.segments:
offsets.append(base)
base += s.doc_count_all()
return offsets
def doc_count_all(self):
"""
:returns: the total number of documents, DELETED or
UNDELETED, in this set.
"""
return sum(s.doc_count_all() for s in self.segments)
def doc_count(self):
"""
:returns: the number of undeleted documents in this set.
"""
return sum(s.doc_count() for s in self.segments)
def has_deletions(self):
"""
:returns: True if this index has documents that are marked
deleted but haven't been optimized out of the index yet.
This includes deletions that haven't been written to disk
with Index.commit() yet.
"""
return any(s.has_deletions() for s in self.segments)
def delete_document(self, docnum, delete = True):
"""Deletes a document by number.
You must call Index.commit() for the deletion to be written to disk.
"""
segment, segdocnum = self._segment_and_docnum(docnum)
segment.delete_document(segdocnum, delete = delete)
def deleted_count(self):
"""
:returns: the total number of deleted documents in this index.
"""
return sum(s.deleted_count() for s in self.segments)
def is_deleted(self, docnum):
"""
:returns: True if a given document number is deleted but not yet
optimized out of the index.
"""
segment, segdocnum = self._segment_and_docnum(docnum)
return segment.is_deleted(segdocnum)
def reader(self, storage, schema):
from whoosh.filedb.filereading import SegmentReader
segments = self.segments
if len(segments) == 1:
return SegmentReader(storage, segments[0], schema)
else:
from whoosh.reading import MultiReader
readers = [SegmentReader(storage, segment, schema)
for segment in segments]
return MultiReader(readers, self._doc_offsets, schema)
class Segment(object):
"""Do not instantiate this object directly. It is used by the Index
object to hold information about a segment. A list of objects of this
class are pickled as part of the TOC file.
The TOC file stores a minimal amount of information -- mostly a list of
Segment objects. Segments are the real reverse indexes. Having multiple
segments allows quick incremental indexing: just create a new segment for
the new documents, and have the index overlay the new segment over previous
ones for purposes of reading/search. "Optimizing" the index combines the
contents of existing segments into one (removing any deleted documents
along the way).
"""
def __init__(self, name, max_doc, field_length_totals, deleted = None):
"""
:param name: The name of the segment (the Index object computes this from its
name and the generation).
:param max_doc: The maximum document number in the segment.
:param field_length_totals: A dictionary mapping field numbers to the total
number of terms in that field across all documents in the segment.
:param deleted: A set of deleted document numbers, or None if no deleted
documents exist in this segment.
"""
self.name = name
self.max_doc = max_doc
self.field_length_totals = field_length_totals
self.deleted = deleted
self.doclen_filename = self.name + ".dci"
self.docs_filename = self.name + ".dcz"
self.term_filename = self.name + ".tiz"
self.vector_filename = self.name + ".fvz"
self.posts_filename = self.name + ".pst"
self.vectorposts_filename = self.name + ".vps"
def __repr__(self):
return "%s(%r)" % (self.__class__.__name__, self.name)
def copy(self):
if self.deleted:
deleted = set(self.deleted)
else:
deleted = None
return Segment(self.name, self.max_doc,
self.field_length_totals,
deleted)
def doc_count_all(self):
"""
:returns: the total number of documents, DELETED OR UNDELETED,
in this segment.
"""
return self.max_doc
def doc_count(self):
""":returns: the number of (undeleted) documents in this segment."""
return self.max_doc - self.deleted_count()
def has_deletions(self):
""":returns: True if any documents in this segment are deleted."""
return self.deleted_count() > 0
def deleted_count(self):
""":returns: the total number of deleted documents in this segment."""
if self.deleted is None: return 0
return len(self.deleted)
def field_length(self, fieldnum):
"""
:param fieldnum: the internal number of the field.
:returns: the total number of terms in the given field across all
documents in this segment.
"""
return self.field_length_totals.get(fieldnum, 0)
def delete_document(self, docnum, delete = True):
"""Deletes the given document number. The document is not actually
removed from the index until it is optimized.
:param docnum: The document number to delete.
:param delete: If False, this undeletes a deleted document.
"""
if delete:
if self.deleted is None:
self.deleted = set()
elif docnum in self.deleted:
raise KeyError("Document %s in segment %r is already deleted"
% (docnum, self.name))
self.deleted.add(docnum)
else:
if self.deleted is None or docnum not in self.deleted:
raise KeyError("Document %s is not deleted" % docnum)
            self.deleted.remove(docnum)
def is_deleted(self, docnum):
""":returns: True if the given document number is deleted."""
if self.deleted is None: return False
return docnum in self.deleted
# Utility functions
def _toc_pattern(indexname):
"""Returns a regular expression object that matches TOC filenames.
name is the name of the index.
"""
return re.compile("_%s_([0-9]+).toc" % indexname)
def _segment_pattern(indexname):
"""Returns a regular expression object that matches segment filenames.
name is the name of the index.
"""
return re.compile("(_%s_[0-9]+).(%s)" % (indexname, _EXTENSIONS))
|
apache-2.0
|
riveridea/gnuradio
|
gr-qtgui/apps/plot_spectrogram_form.py
|
11
|
7041
|
#!/usr/bin/env python
#
# Copyright 2013 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
import sys
from gnuradio import filter
try:
from PyQt4 import QtGui, QtCore
import sip
except ImportError:
print "Error: Program requires PyQt4."
sys.exit(1)
try:
from gnuradio.qtgui.plot_from import plot_form
except ImportError:
from plot_form import plot_form
class plot_spectrogram_form(plot_form):
def __init__(self, top_block, title=''):
plot_form.__init__(self, top_block, title)
self.right_col_layout = QtGui.QVBoxLayout()
self.right_col_form = QtGui.QFormLayout()
self.right_col_layout.addLayout(self.right_col_form)
self.layout.addLayout(self.right_col_layout, 1,4,1,1)
self.psd_size_val = QtGui.QIntValidator(0, 2**18, self)
self.psd_size_edit = QtGui.QLineEdit(self)
self.psd_size_edit.setMinimumWidth(50)
self.psd_size_edit.setMaximumWidth(100)
self.psd_size_edit.setText(QtCore.QString("%1").arg(top_block._psd_size))
self.psd_size_edit.setValidator(self.psd_size_val)
self.right_col_form.addRow("FFT Size:", self.psd_size_edit)
self.connect(self.psd_size_edit, QtCore.SIGNAL("returnPressed()"),
self.update_psd_size)
self.psd_win_combo = QtGui.QComboBox(self)
self.psd_win_combo.addItems(["None", "Hamming", "Hann", "Blackman",
"Rectangular", "Kaiser", "Blackman-harris"])
self.psd_win_combo.setCurrentIndex(self.top_block.gui_snk.fft_window()+1)
self.right_col_form.addRow("Window:", self.psd_win_combo)
self.connect(self.psd_win_combo,
QtCore.SIGNAL("currentIndexChanged(int)"),
self.update_psd_win)
self.psd_avg_val = QtGui.QDoubleValidator(0, 1.0, 4, self)
self.psd_avg_edit = QtGui.QLineEdit(self)
self.psd_avg_edit.setMinimumWidth(50)
self.psd_avg_edit.setMaximumWidth(100)
self.psd_avg_edit.setText(QtCore.QString("%1").arg(top_block._avg))
self.psd_avg_edit.setValidator(self.psd_avg_val)
self.right_col_form.addRow("Average:", self.psd_avg_edit)
self.connect(self.psd_avg_edit, QtCore.SIGNAL("returnPressed()"),
self.update_psd_avg)
self.autoscale_button = QtGui.QPushButton("Auto Scale", self)
self.autoscale_button.setMaximumWidth(100)
self.right_col_layout.addWidget(self.autoscale_button)
self.connect(self.autoscale_button, QtCore.SIGNAL("clicked()"),
self.spectrogram_auto_scale)
self.add_spectrogram_control(self.right_col_layout)
def update_psd_size(self):
newpsdsize = self.psd_size_edit.text().toInt()[0]
if(newpsdsize != self.top_block._psd_size):
self.top_block.gui_snk.set_fft_size(newpsdsize)
self.top_block._psd_size = newpsdsize
self.top_block.reset(self.top_block._start,
self.top_block._nsamps)
def update_psd_win(self, index):
self.top_block.gui_snk.set_fft_window(index-1)
self.top_block.reset(self.top_block._start,
self.top_block._nsamps)
def update_psd_avg(self):
newpsdavg = self.psd_avg_edit.text().toDouble()[0]
if(newpsdavg != self.top_block._avg):
self.top_block.gui_snk.set_fft_average(newpsdavg)
self.top_block._avg = newpsdavg
self.top_block.reset(self.top_block._start,
self.top_block._nsamps)
def add_spectrogram_control(self, layout):
self._line_tabs = QtGui.QTabWidget()
self._line_pages = []
self._line_forms = []
self._label_edit = []
self._size_edit = []
self._color_edit = []
self._style_edit = []
self._marker_edit = []
self._alpha_edit = []
for n in xrange(self.top_block._nsigs):
self._line_pages.append(QtGui.QDialog())
self._line_forms.append(QtGui.QFormLayout(self._line_pages[-1]))
label = self.top_block.gui_snk.line_label(n)
self._label_edit.append(QtGui.QLineEdit(self))
self._label_edit[-1].setMinimumWidth(125)
self._label_edit[-1].setMaximumWidth(125)
self._label_edit[-1].setText(QtCore.QString("%1").arg(label))
self._line_forms[-1].addRow("Label:", self._label_edit[-1])
self.connect(self._label_edit[-1], QtCore.SIGNAL("returnPressed()"),
self.update_line_label)
self._qtcolormaps = ["Multi Color", "White Hot",
"Black Hot", "Incandescent"]
self._color_edit.append(QtGui.QComboBox(self))
self._color_edit[-1].addItems(self._qtcolormaps)
self._line_forms[-1].addRow("Color Map:", self._color_edit[-1])
self.connect(self._color_edit[-1],
QtCore.SIGNAL("currentIndexChanged(int)"),
self.update_color_map)
alpha_val = QtGui.QDoubleValidator(0, 1.0, 2, self)
alpha_val.setTop(1.0)
alpha = self.top_block.gui_snk.line_alpha(n)
self._alpha_edit.append(QtGui.QLineEdit(self))
self._alpha_edit[-1].setMinimumWidth(50)
self._alpha_edit[-1].setMaximumWidth(100)
self._alpha_edit[-1].setText(QtCore.QString("%1").arg(alpha))
self._alpha_edit[-1].setValidator(alpha_val)
self._line_forms[-1].addRow("Alpha:", self._alpha_edit[-1])
self.connect(self._alpha_edit[-1], QtCore.SIGNAL("returnPressed()"),
self.update_line_alpha)
self._line_tabs.addTab(self._line_pages[-1], "{0}".format(label))
layout.addWidget(self._line_tabs)
def update_color_map(self, c_index):
index = self._line_tabs.currentIndex()
self.top_block.gui_snk.set_color_map(index, c_index)
self.update_line_alpha()
def spectrogram_auto_scale(self):
self.top_block.gui_snk.auto_scale()
_min = self.top_block.gui_snk.min_intensity(0)
_max = self.top_block.gui_snk.max_intensity(0)
if(self.gui_y_axis):
self.gui_y_axis(_min, _max)
|
gpl-3.0
|
iap-mutant/godot
|
doc/tools/makedocs.py
|
25
|
14752
|
#!/usr/bin/python3
# -*- coding: utf-8 -*-
#
# makedocs.py: Generate documentation for Open Project Wiki
# Copyright (c) 2007-2016 Juan Linietsky, Ariel Manzur.
# Contributor: Jorge Araya Navarro <[email protected]>
#
# IMPORTANT NOTICE:
# If you are going to modify anything from this file, please be sure to follow
# the Style Guide for Python Code or often called "PEP8". To do this
# automagically just install autopep8:
#
# $ sudo pip3 install autopep8
#
# and run:
#
# $ autopep8 makedocs.py
#
# before committing your changes. Also be sure to delete any trailing
# whitespace you may have left.
#
# TODO:
# * Refactor code.
# * Adapt this script for generating content in other markup formats like
# reStructuredText, Markdown, DokuWiki, etc.
#
# Also check other TODO entries in this script for more information on what is
# left to do.
import argparse
import gettext
import logging
import re
from itertools import zip_longest
from os import path, listdir
from xml.etree import ElementTree
# add an option to change the verbosity
logging.basicConfig(level=logging.INFO)
def getxmlfloc():
""" Returns the supposed location of the XML file
"""
filepath = path.dirname(path.abspath(__file__))
return path.join(filepath, "class_list.xml")
def langavailable():
""" Return a list of languages available for translation
"""
filepath = path.join(
path.dirname(path.abspath(__file__)), "locales")
files = listdir(filepath)
choices = [x for x in files]
choices.insert(0, "none")
return choices
desc = "Generates documentation from a XML file to different markup languages"
parser = argparse.ArgumentParser(description=desc)
parser.add_argument("--input", dest="xmlfp", default=getxmlfloc(),
help="Input XML file, default: {}".format(getxmlfloc()))
parser.add_argument("--output-dir", dest="outputdir", required=True,
help="Output directory for generated files")
parser.add_argument("--language", choices=langavailable(), default="none",
help=("Choose the language of translation"
" for the output files. Default is English (none). "
"Note: This is NOT for the documentation itself!"))
# TODO: add an option for outputting different markup formats
args = parser.parse_args()
# Let's check if the file and output directory exists
if not path.isfile(args.xmlfp):
logging.critical("File not found: {}".format(args.xmlfp))
exit(1)
elif not path.isdir(args.outputdir):
logging.critical("Path does not exist: {}".format(args.outputdir))
exit(1)
_ = gettext.gettext
if args.language != "none":
lang = gettext.translation(domain="makedocs",
localedir="locales",
languages=[args.language])
lang.install()
_ = lang.gettext
# Strings
C_LINK = _("\"<code>{gclass}</code>(Go to page of class"
" {gclass})\":/class_{lkclass}")
MC_LINK = _("\"<code>{gclass}.{method}</code>(Go "
"to page {gclass}, section {method})\""
":/class_{lkclass}#{lkmethod}")
TM_JUMP = _("\"<code>{method}</code>(Jump to method"
" {method})\":#{lkmethod}")
GTC_LINK = _(" \"{rtype}(Go to page of class {rtype})\":/class_{link} ")
DFN_JUMP = _("\"*{funcname}*(Jump to description for"
" node {funcname})\":#{link} <b>(</b> ")
M_ARG_DEFAULT = C_LINK + " {name}={default}"
M_ARG = C_LINK + " {name}"
OPENPROJ_INH = _("h4. Inherits: ") + C_LINK + "\n\n"
def tb(string):
""" Return a byte representation of a string
"""
return bytes(string, "UTF-8")
def sortkey(c):
""" Symbols are first, letters second
"""
if "_" == c.attrib["name"][0]:
return "A"
else:
return c.attrib["name"]
def toOP(text):
""" Convert commands in text to Open Project commands
"""
# TODO: Make this capture content between [command] ... [/command]
groups = re.finditer((r'\[html (?P<command>/?\w+/?)(\]| |=)?(\]| |=)?(?P<a'
'rg>\w+)?(\]| |=)?(?P<value>"[^"]+")?/?\]'), text)
alignstr = ""
for group in groups:
gd = group.groupdict()
if gd["command"] == "br/":
text = text.replace(group.group(0), "\n\n", 1)
elif gd["command"] == "div":
if gd["value"] == '"center"':
alignstr = ("{display:block; margin-left:auto;"
" margin-right:auto;}")
elif gd["value"] == '"left"':
alignstr = "<"
elif gd["value"] == '"right"':
alignstr = ">"
text = text.replace(group.group(0), "\n\n", 1)
elif gd["command"] == "/div":
alignstr = ""
text = text.replace(group.group(0), "\n\n", 1)
elif gd["command"] == "img":
text = text.replace(group.group(0), "!{align}{src}!".format(
align=alignstr, src=gd["value"].strip('"')), 1)
elif gd["command"] == "b" or gd["command"] == "/b":
text = text.replace(group.group(0), "*", 1)
elif gd["command"] == "i" or gd["command"] == "/i":
text = text.replace(group.group(0), "_", 1)
elif gd["command"] == "u" or gd["command"] == "/u":
text = text.replace(group.group(0), "+", 1)
# Process other non-html commands
groups = re.finditer((r'\[method ((?P<class>[aA0-zZ9_]+)(?:\.))'
r'?(?P<method>[aA0-zZ9_]+)\]'), text)
for group in groups:
gd = group.groupdict()
if gd["class"]:
replacewith = (MC_LINK.format(gclass=gd["class"],
method=gd["method"],
lkclass=gd["class"].lower(),
lkmethod=gd["method"].lower()))
else:
# The method is located in the same wiki page
replacewith = (TM_JUMP.format(method=gd["method"],
lkmethod=gd["method"].lower()))
text = text.replace(group.group(0), replacewith, 1)
    # Finally, [Classes] are wrapped in brackets; make them direct links
    groups = re.finditer(r'\[(?P<class>[aA0-zZ9_]+)\]', text)
for group in groups:
gd = group.groupdict()
replacewith = (C_LINK.
format(gclass=gd["class"],
lkclass=gd["class"].lower()))
text = text.replace(group.group(0), replacewith, 1)
return text + "\n\n"
def mkfn(node, is_signal=False):
""" Return a string containing a unsorted item for a function
"""
finalstr = ""
name = node.attrib["name"]
rtype = node.find("return")
if rtype:
rtype = rtype.attrib["type"]
else:
rtype = "void"
# write the return type and the function name first
finalstr += "* "
# return type
if not is_signal:
if rtype != "void":
finalstr += GTC_LINK.format(
rtype=rtype,
link=rtype.lower())
else:
finalstr += " void "
# function name
if not is_signal:
finalstr += DFN_JUMP.format(
funcname=name,
link=name.lower())
else:
# Signals have no description
finalstr += "*{funcname}* <b>(</b>".format(funcname=name)
# loop for the arguments of the function, if any
args = []
for arg in sorted(
node.iter(tag="argument"),
key=lambda a: int(a.attrib["index"])):
ntype = arg.attrib["type"]
nname = arg.attrib["name"]
if "default" in arg.attrib:
            args.append(M_ARG_DEFAULT.format(
gclass=ntype,
lkclass=ntype.lower(),
name=nname,
default=arg.attrib["default"]))
else:
# No default value present
            args.append(M_ARG.format(gclass=ntype,
lkclass=ntype.lower(), name=nname))
# join the arguments together
finalstr += ", ".join(args)
# and, close the function with a )
finalstr += " <b>)</b>"
# write the qualifier, if any
if "qualifiers" in node.attrib:
qualifier = node.attrib["qualifiers"]
finalstr += " " + qualifier
finalstr += "\n"
return finalstr
# Let's begin
tree = ElementTree.parse(args.xmlfp)
root = tree.getroot()
# Check version attribute exists in <doc>
if "version" not in root.attrib:
logging.critical(_("<doc>'s version attribute missing"))
exit(1)
version = root.attrib["version"]
classes = sorted(root, key=sortkey)
# first column is always longer, second column of classes should be shorter
zclasses = zip_longest(classes[:int(len(classes) / 2 + 1)],
classes[int(len(classes) / 2 + 1):],
fillvalue="")
# We write the class_list file and also each class file at once
with open(path.join(args.outputdir, "class_list.txt"), "wb") as fcl:
# Write header of table
fcl.write(tb("|^.\n"))
fcl.write(tb(_("|_. Index symbol |_. Class name "
"|_. Index symbol |_. Class name |\n")))
fcl.write(tb("|-.\n"))
indexletterl = ""
indexletterr = ""
for gdclassl, gdclassr in zclasses:
# write a row #
# write the index symbol column, left
if indexletterl != gdclassl.attrib["name"][0]:
indexletterl = gdclassl.attrib["name"][0]
fcl.write(tb("| *{}* |".format(indexletterl.upper())))
else:
# empty cell
fcl.write(tb("| |"))
# write the class name column, left
fcl.write(tb(C_LINK.format(
gclass=gdclassl.attrib["name"],
lkclass=gdclassl.attrib["name"].lower())))
# write the index symbol column, right
if isinstance(gdclassr, ElementTree.Element):
if indexletterr != gdclassr.attrib["name"][0]:
indexletterr = gdclassr.attrib["name"][0]
fcl.write(tb("| *{}* |".format(indexletterr.upper())))
else:
# empty cell
fcl.write(tb("| |"))
# We are dealing with an empty string
else:
            # two empty cells
fcl.write(tb("| | |\n"))
# We won't get the name of the class since there is no ElementTree
# object for the right side of the tuple, so we iterate the next
# tuple instead
continue
# write the class name column (if any), right
        fcl.write(tb(C_LINK.format(
            gclass=gdclassr.attrib["name"],
            lkclass=gdclassr.attrib["name"].lower()) + "|\n"))
# row written #
# now, let's write each class page for each class
for gdclass in [gdclassl, gdclassr]:
if not isinstance(gdclass, ElementTree.Element):
continue
classname = gdclass.attrib["name"]
with open(path.join(args.outputdir, "{}.txt".format(
classname.lower())), "wb") as clsf:
# First level header with the name of the class
clsf.write(tb("h1. {}\n\n".format(classname)))
# lay the attributes
if "inherits" in gdclass.attrib:
inh = gdclass.attrib["inherits"].strip()
clsf.write(tb(OPENPROJ_INH.format(gclass=inh,
lkclass=inh.lower())))
if "category" in gdclass.attrib:
clsf.write(tb(_("h4. Category: {}\n\n").
format(gdclass.attrib["category"].strip())))
# lay child nodes
briefd = gdclass.find("brief_description")
if briefd.text.strip():
clsf.write(tb(_("h2. Brief Description\n\n")))
clsf.write(tb(toOP(briefd.text.strip()) +
_("\"read more\":#more\n\n")))
# Write the list of member functions of this class
methods = gdclass.find("methods")
                if methods is not None and len(methods) > 0:
clsf.write(tb(_("\nh3. Member Functions\n\n")))
for method in methods.iter(tag='method'):
clsf.write(tb(mkfn(method)))
signals = gdclass.find("signals")
                if signals is not None and len(signals) > 0:
clsf.write(tb(_("\nh3. Signals\n\n")))
for signal in signals.iter(tag='signal'):
clsf.write(tb(mkfn(signal, True)))
                # TODO: is the <members> tag necessary to process? It does not
                # exist in the class_list.xml file.
consts = gdclass.find("constants")
                if consts is not None and len(consts) > 0:
clsf.write(tb(_("\nh3. Numeric Constants\n\n")))
for const in sorted(consts, key=lambda k:
k.attrib["name"]):
if const.text.strip():
clsf.write(tb("* *{name}* = *{value}* - {desc}\n".
format(
name=const.attrib["name"],
value=const.attrib["value"],
desc=const.text.strip())))
else:
                            # Constant has no description
clsf.write(tb("* *{name}* = *{value}*\n".
format(
name=const.attrib["name"],
value=const.attrib["value"])))
descrip = gdclass.find("description")
clsf.write(tb(_("\nh3(#more). Description\n\n")))
if descrip.text:
clsf.write(tb(descrip.text.strip() + "\n"))
else:
clsf.write(tb(_("_Nothing here, yet..._\n")))
# and finally, the description for each method
                if methods is not None and len(methods) > 0:
clsf.write(tb(_("\nh3. Member Function Description\n\n")))
for method in methods.iter(tag='method'):
clsf.write(tb("h4(#{n}). {name}\n\n".format(
n=method.attrib["name"].lower(),
name=method.attrib["name"])))
clsf.write(tb(mkfn(method) + "\n"))
clsf.write(tb(toOP(method.find(
"description").text.strip())))
|
mit
|
Nitaco/ansible
|
lib/ansible/modules/storage/netapp/na_ontap_ucadapter.py
|
8
|
5936
|
#!/usr/bin/python
# (c) 2018, NetApp, Inc
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: na_ontap_ucadapter
short_description: ONTAP UC adapter configuration
extends_documentation_fragment:
- netapp.na_ontap
version_added: '2.6'
author: chhaya gunawat ([email protected])
description:
- modify the UC adapter mode and type taking pending type and mode into account.
options:
state:
description:
- Whether the specified adapter should exist.
required: false
choices: ['present']
default: 'present'
adapter_name:
description:
- Specifies the adapter name.
required: true
node_name:
description:
- Specifies the adapter home node.
required: true
mode:
description:
- Specifies the mode of the adapter.
type:
description:
- Specifies the fc4 type of the adapter.
'''
EXAMPLES = '''
- name: Modify adapter
na_ontap_adapter:
state: present
adapter_name: data2
node_name: laurentn-vsim1
mode: fc
type: target
hostname: "{{ netapp_hostname }}"
username: "{{ netapp_username }}"
password: "{{ netapp_password }}"
'''
RETURN = '''
'''
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
import ansible.module_utils.netapp as netapp_utils
HAS_NETAPP_LIB = netapp_utils.has_netapp_lib()
class NetAppOntapadapter(object):
''' object to describe adapter info '''
def __init__(self):
self.argument_spec = netapp_utils.na_ontap_host_argument_spec()
self.argument_spec.update(dict(
state=dict(required=False, choices=['present'], default='present'),
adapter_name=dict(required=True, type='str'),
node_name=dict(required=True, type='str'),
mode=dict(required=False, type='str'),
type=dict(required=False, type='str'),
))
self.module = AnsibleModule(
argument_spec=self.argument_spec,
supports_check_mode=True
)
params = self.module.params
# set up state variables
self.state = params['state']
self.adapter_name = params['adapter_name']
self.node_name = params['node_name']
self.mode = params['mode']
self.type = params['type']
if HAS_NETAPP_LIB is False:
self.module.fail_json(msg="the python NetApp-Lib module is required")
else:
self.server = netapp_utils.setup_na_ontap_zapi(module=self.module)
def get_adapter(self):
"""
Return details about the adapter
:param:
            name : Name of the adapter
:return: Details about the adapter. None if not found.
:rtype: dict
"""
adapter_info = netapp_utils.zapi.NaElement('ucm-adapter-get')
adapter_info.add_new_child('adapter-name', self.adapter_name)
adapter_info.add_new_child('node-name', self.node_name)
result = self.server.invoke_successfully(adapter_info, True)
return_value = None
adapter_attributes = result.get_child_by_name('attributes').\
get_child_by_name('uc-adapter-info')
return_value = {
'mode': adapter_attributes.get_child_content('mode'),
'pending-mode': adapter_attributes.get_child_content('pending-mode'),
'type': adapter_attributes.get_child_content('fc4-type'),
'pending-type': adapter_attributes.get_child_content('pending-fc4-type'),
'status': adapter_attributes.get_child_content('status'),
}
return return_value
def modify_adapter(self):
"""
Modify the adapter.
"""
params = {'adapter-name': self.adapter_name,
'node-name': self.node_name}
if self.type is not None:
params['fc4-type'] = self.type
if self.mode is not None:
params['mode'] = self.mode
adapter_modify = netapp_utils.zapi.NaElement.create_node_with_children(
'ucm-adapter-modify', ** params)
try:
self.server.invoke_successfully(adapter_modify,
enable_tunneling=True)
except netapp_utils.zapi.NaApiError as e:
self.module.fail_json(msg='Error modifying adapter %s: %s' % (self.adapter_name, to_native(e)),
exception=traceback.format_exc())
def apply(self):
''' calling all adapter features '''
changed = False
results = netapp_utils.get_cserver(self.server)
cserver = netapp_utils.setup_na_ontap_zapi(module=self.module, vserver=results)
netapp_utils.ems_log_event("na_ontap_ucadapter", cserver)
adapter_detail = self.get_adapter()
def need_to_change(expected, pending, current):
if expected is None:
return False
if pending is not None:
return pending != expected
if current is not None:
return current != expected
return False
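        # Illustrative walk-through (not in the original module): with a
        # desired type of 'target', a pending type of 'target' and a current
        # type of 'cna', no change is needed because the pending value already
        # matches; with the same desired type, no pending value and a current
        # type of 'cna', a change is needed; when nothing was requested
        # (expected is None) the attribute is always left as-is.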
if adapter_detail:
changed = need_to_change(self.type, adapter_detail['pending-type'], adapter_detail['type']) or \
need_to_change(self.mode, adapter_detail['pending-mode'], adapter_detail['mode'])
if changed:
if self.module.check_mode:
pass
else:
self.modify_adapter()
self.module.exit_json(changed=changed)
def main():
adapter = NetAppOntapadapter()
adapter.apply()
if __name__ == '__main__':
main()
|
gpl-3.0
|
tekapo/fabric
|
tests/utils.py
|
30
|
6045
|
from __future__ import with_statement
from contextlib import contextmanager
from copy import deepcopy
from fudge.patcher import with_patched_object
from functools import partial
from types import StringTypes
import copy
import getpass
import os
import re
import shutil
import sys
import tempfile
from fudge import Fake, patched_context, clear_expectations, with_patched_object
from nose.tools import raises
from nose import SkipTest
from fabric.context_managers import settings
from fabric.state import env, output
from fabric.sftp import SFTP
import fabric.network
from fabric.network import normalize, to_dict
from server import PORT, PASSWORDS, USER, HOST
from mock_streams import mock_streams
class FabricTest(object):
"""
Nose-oriented test runner which wipes state.env and provides file helpers.
"""
def setup(self):
# Clear Fudge mock expectations
clear_expectations()
# Copy env, output for restoration in teardown
self.previous_env = copy.deepcopy(env)
# Deepcopy doesn't work well on AliasDicts; but they're only one layer
# deep anyways, so...
self.previous_output = output.items()
# Allow hooks from subclasses here for setting env vars (so they get
# purged correctly in teardown())
self.env_setup()
# Temporary local file dir
self.tmpdir = tempfile.mkdtemp()
def set_network(self):
env.update(to_dict('%s@%s:%s' % (USER, HOST, PORT)))
def env_setup(self):
# Set up default networking for test server
env.disable_known_hosts = True
self.set_network()
env.password = PASSWORDS[USER]
# Command response mocking is easier without having to account for
# shell wrapping everywhere.
env.use_shell = False
def teardown(self):
env.clear() # In case tests set env vars that didn't exist previously
env.update(self.previous_env)
output.update(self.previous_output)
shutil.rmtree(self.tmpdir)
# Clear Fudge mock expectations...again
clear_expectations()
def path(self, *path_parts):
return os.path.join(self.tmpdir, *path_parts)
def mkfile(self, path, contents):
dest = self.path(path)
with open(dest, 'w') as fd:
fd.write(contents)
return dest
def exists_remotely(self, path):
return SFTP(env.host_string).exists(path)
def exists_locally(self, path):
return os.path.exists(path)
def password_response(password, times_called=None, silent=True):
"""
Context manager which patches ``getpass.getpass`` to return ``password``.
``password`` may be a single string or an iterable of strings:
* If single string, given password is returned every time ``getpass`` is
called.
* If iterable, iterated over for each call to ``getpass``, after which
``getpass`` will error.
If ``times_called`` is given, it is used to add a ``Fake.times_called``
clause to the mock object, e.g. ``.times_called(1)``. Specifying
``times_called`` alongside an iterable ``password`` list is unsupported
(see Fudge docs on ``Fake.next_call``).
If ``silent`` is True, no prompt will be printed to ``sys.stderr``.
"""
fake = Fake('getpass', callable=True)
# Assume stringtype or iterable, turn into mutable iterable
if isinstance(password, StringTypes):
passwords = [password]
else:
passwords = list(password)
# Optional echoing of prompt to mimic real behavior of getpass
# NOTE: also echo a newline if the prompt isn't a "passthrough" from the
# server (as it means the server won't be sending its own newline for us).
echo = lambda x, y: y.write(x + ("\n" if x != " " else ""))
# Always return first (only?) password right away
fake = fake.returns(passwords.pop(0))
if not silent:
fake = fake.calls(echo)
# If we had >1, return those afterwards
for pw in passwords:
fake = fake.next_call().returns(pw)
if not silent:
fake = fake.calls(echo)
# Passthrough times_called
if times_called:
fake = fake.times_called(times_called)
return patched_context(getpass, 'getpass', fake)
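# Illustrative usage sketch (the wrapped call below is hypothetical, not from
# this file): a test expecting two password prompts can hand the context
# manager an iterable, so the first getpass() call returns a wrong password
# and the second returns the real one:
#
#   with password_response(('wrongpass', PASSWORDS[USER])):
#       some_fabric_operation_that_reprompts()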
def _assert_contains(needle, haystack, invert):
matched = re.search(needle, haystack, re.M)
if (invert and matched) or (not invert and not matched):
raise AssertionError("r'%s' %sfound in '%s'" % (
needle,
"" if invert else "not ",
haystack
))
assert_contains = partial(_assert_contains, invert=False)
assert_not_contains = partial(_assert_contains, invert=True)
def line_prefix(prefix, string):
"""
Return ``string`` with all lines prefixed by ``prefix``.
"""
return "\n".join(prefix + x for x in string.splitlines())
def eq_(result, expected, msg=None):
"""
    Shadow of the Nose builtin which presents easier-to-read multiline output.
"""
params = {'expected': expected, 'result': result}
aka = """
--------------------------------- aka -----------------------------------------
Expected:
%(expected)r
Got:
%(result)r
""" % params
default_msg = """
Expected:
%(expected)s
Got:
%(result)s
""" % params
if (repr(result) != str(result)) or (repr(expected) != str(expected)):
default_msg += aka
assert result == expected, msg or default_msg
def eq_contents(path, text):
with open(path) as fd:
eq_(text, fd.read())
def support(path):
return os.path.join(os.path.dirname(__file__), 'support', path)
fabfile = support
@contextmanager
def path_prefix(module):
i = 0
sys.path.insert(i, os.path.dirname(module))
yield
sys.path.pop(i)
def aborts(func):
return raises(SystemExit)(mock_streams('stderr')(func))
def _patched_input(func, fake):
return func(sys.modules['__builtin__'], 'raw_input', fake)
patched_input = partial(_patched_input, patched_context)
with_patched_input = partial(_patched_input, with_patched_object)
|
bsd-2-clause
|
derblub/pixelpi
|
screen/virtualscreen.py
|
1
|
1561
|
import pygame
from abstractscreen import AbstractScreen
from settings import *
S = Settings()
instance = None
# Behaves like the actual LED screen, but shows the screen content on a computer screen
class VirtualScreen(AbstractScreen):
def __init__(self,
width=int(S.get('screen', 'matrix_width')),
height=int(S.get('screen', 'matrix_height'))):
super(VirtualScreen, self).__init__(width, height)
self.pixel_size = int(S.get('dev', 'pixel_size'))
self.update_brightness()
pygame.display.init()
self.screen = pygame.display.set_mode([width * self.pixel_size, height * self.pixel_size], 0)
self.surface = pygame.Surface(self.screen.get_size())
global instance
instance = self
def update(self):
for y in range(self.height):
for x in range(self.width):
pygame.draw.rect(self.surface, self.pixel[x][y], ((x * self.pixel_size, y * self.pixel_size), (((x + 1) * self.pixel_size), (y + 1) * self.pixel_size)))
self.screen.blit(self.surface, (0, 0))
pygame.display.flip()
pygame.display.update()
def update_brightness(self):
pass
# b = int(4 + 3.1 * (int(S.get('screen', 'brightness')) + 1) ** 2)
# @TODO maybe simulate brightness
def set_brightness(self, value):
value = min(max(value, 0), 8)
S.set('screen', 'brightness', value)
self.update_brightness()
def get_brightness(self):
return int(S.get('screen', 'brightness'))
|
mit
|
Valka7a/python-playground
|
python-the-hard-way/43-basic-object-oriented-analysis-and-design.py
|
1
|
13790
|
# Exercise 43: Basic Object-Oriented Analysis and Design
# Process to build something to solve problems
# 1. Write or draw about the problem.
# 2. Extract key concepts from 1 and research them.
# 3. Create a class hierarchy and object map for the concepts.
# 4. Code the classes and a test to run them.
# 5. Repeat and refine.
# The Analysis of a Simple Game Engine
# Write or Draw About the Problem
"""
Aliens have invaded a space ship and our hero has to go through a maze of rooms
defeating them so he can escape into an escape pod to the planet below. The game
will be more like a Zork or Adventure type game with text outputs and funny ways
to die. The game will involve an engine that runs a map full of rooms or scenes.
Each room will print its own description when the player enters it and then tell
the engine what room to run next out of the map.
"""
# At this point I have a good idea for the game and how it would run, so now I want
# to describe each scene:
"""
Death
This is when the player dies and should be something funny.
Central Corridor
This is the starting point and has a Gothon already standing there.
The player has to defeat it with a joke before continuing.
Laser Weapon Armory
This is where the hero gets a neutron bomb to blow up the ship before
getting to the escape pod. It has a keypad the hero has to guess the
number for.
The Bridge
Another battle scene with a Gothon where the hero places the bomb.
Escape Pod
Where the hero escapes but only after guessing the right escape pod.
"""
# Extract Key Concepts and Research Them
# First I make a list of all the nouns:
# Alien, Player, Ship, Maze, Room, Scene, Gothon, Escape Pod, Planet, Map, Engine, Death,
# Central Corridor, Laser Weapon Armory, The Bridge
# Create a Class Hierarchy and Object Map for the Concepts
"""
Right away I see that "Room" and "Scene" are basically the same thing depending on how
I want to do things. I'm going to pick "Scene" for this game. Then I see that all the
specific rooms like "Central Corridor" are basically just Scenes. I see also that Death
is basically a Scene, which confirms my choice of "Scene" over "Room" since you can have
a death scene, but a death room is kind of odd. "Maze" and "Map" are basically the same
so I'm going to go with "Map" since I used it more often. I don't want to do a battle
system so I'm going to ignore "Alien" and "Player" and save that for later. The "Planet"
could also just be another scene instead of something specific.
"""
# After all of that thought process I start to make a class hierarchy that looks
# like this in my text editor:
# * Map
# * Engine
# * Scene
# * Death
# * Central Corridor
# * Laser Weapon Armory
# * The Bridge
# * Escape Pod
"""
I would then go through and figure out what actions are needed on each thing based on
verbs in the description. For example, I know from the description I'm going to need a
way to "run" the engine, "get the next scene" from the map, get the "opening scene" and
"enter" a scene. I'll add those like this:
"""
# * Map
# - next_scene
# - opening_scene
# * Engine
# - play
# * Scene
# - enter
# * Death
# * Central Corridor
# * Laser Weapon Armory
# * The Bridge
# * Escape Pod
"""
Notice how I just put -enter under Scene since I know that all the scenes under it will
inherit it and have to override it later.
"""
# Code the Classes and a Test to Run Them
# The Code for "Gothons from Planet Percal #25"
from sys import exit
from random import randint
class Scene(object):
def enter(self):
print "This scene is not yet configured. Subclass it and implement enter()."
exit(1)
class Engine(object):
def __init__(self, scene_map):
self.scene_map = scene_map
def play(self):
current_scene = self.scene_map.opening_scene()
last_scene = self.scene_map.next_scene('finished')
while current_scene != last_scene:
next_scene_name = current_scene.enter()
current_scene = self.scene_map.next_scene(next_scene_name)
# be sure to print out the last scene
current_scene.enter()
class Death(Scene):
quips = [
"You died. You kinda suck at this.",
"Your mom would be proud...if she were smarter.",
"Such a luser.",
"I have a small puppy that's better at this."
]
def enter(self):
print Death.quips[randint(0, len(self.quips)-1)]
exit(1)
class CentralCorridor(Scene):
def enter(self):
print "The Gothons of Planet Percal #25 have invaded your ship and destroyed"
print "your entire crew. You are the last surviving member and your last"
print "mission is to get the neutron destruct bomb from the Weapons Armory,"
print "put it in the bridge, and blow the ship up after getting into an "
print "escape pod."
print "\n"
print "You're running down the central corridor to the Weapons Armory when"
print "a Gothon jumps out, red scaly skin, dark grimy teeth, and evil clown costume"
print "flowing around his hate filled body. He's blocking the door to the"
print "Armory and about to pull a weapon to blast you."
print "What will you do?"
print ">> shoot!"
print ">> dodge!"
print ">>tell a joke"
action = raw_input("> ")
if action == "shoot!":
print "Quick on the draw you yank out your blaster and fire it at the Gothon."
print "His clown costume is flowing and moving around his body, which throws"
print "off your aim. Your laser hits his costume but misses him entirely. This"
print "completely ruins his brand new costume his mother bought him, which"
print "makes him fly into an insane rage and blast you repeatedly in the face until"
print "you are dead. Then he eats you."
return 'death'
elif action == "dodge!":
print "Like a world class boxer you dodge, weave, slip and slide right"
print "as the Gothon's blaster cranks a laser past your head."
print "In the middle of your artful dodge your foot slips and you"
print "bang your head on the metal wall and pass out."
print "You wake up shortly after only to die as the Gothon stomps on"
print "your head and eats you."
return 'death'
elif action == "tell a joke":
print "Lucky for you they made you learn Gothon insults in the academy."
print "You tell the one Gothon joke you know: "
print "Lbhe zbgure vf fb sng, jura fur fvgf nebhaq gur ubhfr, fur fvgf nebhaq gur ubhfr."
print "The Gothon stops, tries not to laugh, then busts out laughing and can't move."
print "While he's laughing you run up and shoot him square in the head"
print "putting him down, then jump through the Weapon Armory door."
return 'laser_weapon_armory'
else:
print "DOES NOT COMPUTE!"
return 'central_corridor'
class LaserWeaponArmory(Scene):
def enter(self):
print "You do a dive roll into the Weapon Armory, crouch and scan the room"
print "for more Gothons that might be hiding. It's dead quiet, too quiet."
print "You stand up and run to the far side of the room and find the"
print "neutron bomb in its container. There's a keypad lock on the box"
print "and you need the code to get the bomb out. If you get the code"
print "wrong 10 times then the lock closes forever and you can't"
print "get the bomb. The code is 3 digits."
code = "%d%d%d" % (randint(1,9), randint(1,9), randint(1,9))
print "This is the code: %s." % code
guess = raw_input("[keypad]> ")
guesses = 0
while guess != code and guesses < 10:
print "BZZZZEDDD!"
guesses += 1
guess = raw_input("[keypad]> ")
if guess == code:
print "The container clicks open and the seal breaks, letting gas out."
print "You grab the neutron bomb and run as fast as you can to the"
print "bridge where you must place it in the right spot."
return 'the_bridge'
else:
print "The lock buzzes one last time and then you hear a sickening"
print "melting sound as the mechanism is fused together."
print "You decide to sit there, and finally the Gothons blow up the"
print "ship from their ship and you die."
return 'death'
class TheBridge(Scene):
def enter(self):
print "You burst onto the Bridge with the netron destruct bomb"
print "under your arm and surprise 5 Gothons who are trying to"
print "take control of the ship. Each of them has an even uglier"
print "clown costume than the last. They haven't pulled their"
print "weapons out yet, as they see the active bomb under your"
print "arm and don't want to set it off."
print "What will you do?"
print ">> throw the bomb"
print ">>slowly place the bomb"
action = raw_input("> ")
if action == "throw the bomb":
print "In a panic you throw the bomb at the group of Gothons"
print "and make a leap for the door. Right as you drop it a"
print "Gothon shoots you right in the back killing you."
print "As you die you see another Gothon frantically try to disarm"
print "the bomb. You die knowing they will probably blow up when"
print "it goes off."
return 'death'
elif action == "slowly place the bomb":
print "You point your blaster at the bomb under your arm"
print "and the Gothons put their hands up and start to sweat."
print "You inch backward to the door, open it, and then carefully"
print "place the bomb on the floor, pointing your blaster at it."
print "You then jump back through the door, punch the close button"
print "and blast the lock so the Gothons can't get out."
print "Now that the bomb is placed you run to the escape pod to"
print "get off this tin can."
return 'escape_pod'
else:
print "DOES NOT COMPUTE!"
return "the_bridge"
class EscapePod(Scene):
def enter(self):
print "You rush through the ship desperately trying to make it to"
print "the escape pod before the whole ship explodes. It seems like"
print "hardly any Gothons are on the ship, so your run is clear of"
print "interference. You get to the chamber with the escape pods, and"
print "now need to pick one to take. Some of them could be damaged"
print "but you don't have time to look. There's 5 pods, which one"
print "do you take?"
good_pod = randint(1,5)
print "Fast look tells you %s is good." % good_pod
guess = raw_input("[pod #]> ")
if int(guess) != good_pod:
print "You jump into pod %s and hit the eject button." % guess
print "The pod escapes out into the void of space, then"
print "implodes as the hull ruptures, crushing your body"
print "into jam jelly."
return 'death'
else:
print "You jump into pod %s and hit the eject button." % guess
print "The pod easily slides out into space heading to"
print "the planet below. As it flies to the planet, you look"
print "back and see your ship implode then explode like a"
print "bright star, taking out the Gothon ship at the same"
print "time. You won!"
return 'finished'
class Finished(Scene):
def enter(self):
print "You won! Good job."
return 'finished'
class Map(object):
scenes = {
'central_corridor': CentralCorridor(),
'laser_weapon_armory': LaserWeaponArmory(),
'the_bridge': TheBridge(),
'escape_pod': EscapePod(),
'death': Death(),
'finished': Finished(),
}
def __init__(self, start_scene):
self.start_scene = start_scene
def next_scene(self, scene_name):
val = Map.scenes.get(scene_name)
return val
def opening_scene(self):
return self.next_scene(self.start_scene)
a_map = Map('central_corridor')
a_game = Engine(a_map)
a_game.play()
# Top Down vs Bottom Up
# Steps to do Bottom Up:
# 1. Take a small piece of the problem; hack on some code and get it to run barely.
# 2. Refine the code into something more formal with classes and automated tests.
# 3. Extract the key concepts you're using and try to find research for them.
# 4. Write a description of what's really going on.
# 5. Go back and refine the code, possibly throwing it out and starting over.
# 6. Repeat, moving on to some other piece of the problem.
# Study Drills:
# 1. Change it! Maybe you hate this game. Could be too violent, or you aren't into sci-fi. Get the game
# working, then change it to what you like. This is your computer, you make it do what you want.
# 2. I have a bug in this code. Why is the door lock guessing 11 times?
# 3. Explain how returning the next room works.
# 4. Add cheat codes to the game so you can get past the more difficult rooms. I can do this with
# two words on one line.
# 5. Go back to my description and analysis, then try to build a small combat system for the hero
# and the various Gothons he encounters.
# 6. This is actually a small version of something called a "finite state machine". Read about them.
# They might not make sense but try anyway.
|
mit
|
hkariti/ansible
|
lib/ansible/utils/module_docs_fragments/fortios.py
|
85
|
2307
|
#
# (c) 2017, Benjamin Jolivot <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
class ModuleDocFragment(object):
# Standard files documentation fragment
DOCUMENTATION = """
options:
file_mode:
description:
      - Don't connect to any device, only use I(config_file) as input and output.
default: false
type: bool
version_added: "2.4"
config_file:
description:
- Path to configuration file. Required when I(file_mode) is True.
version_added: "2.4"
host:
description:
- Specifies the DNS hostname or IP address for connecting to the remote fortios device. Required when I(file_mode) is False.
username:
description:
      - Configures the username used to authenticate to the remote device. Required when I(file_mode) is False.
password:
description:
      - Specifies the password used to authenticate to the remote device. Required when I(file_mode) is False.
timeout:
description:
- Timeout in seconds for connecting to the remote device.
default: 60
vdom:
description:
- Specifies on which vdom to apply configuration
backup:
description:
- This argument will cause the module to create a backup of
the current C(running-config) from the remote device before any
        changes are made. The backup file is written to the I(backup_path)
        folder.
default: no
choices: ['yes', 'no']
backup_path:
description:
- Specifies where to store backup files. Required if I(backup=yes).
backup_filename:
description:
- Specifies the backup filename. If omitted filename will be
formatted like HOST_config.YYYY-MM-DD@HH:MM:SS
"""
|
gpl-3.0
|
longmen21/edx-platform
|
cms/djangoapps/contentstore/management/commands/tests/test_export_all_courses.py
|
187
|
2065
|
"""
Test for export all courses.
"""
import shutil
from tempfile import mkdtemp
from contentstore.management.commands.export_all_courses import export_courses_to_output_path
from xmodule.modulestore import ModuleStoreEnum
from xmodule.modulestore.django import modulestore
from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase
from xmodule.modulestore.tests.factories import CourseFactory
class ExportAllCourses(ModuleStoreTestCase):
"""
Tests exporting all courses.
"""
def setUp(self):
""" Common setup. """
super(ExportAllCourses, self).setUp()
self.store = modulestore()._get_modulestore_by_type(ModuleStoreEnum.Type.mongo)
self.temp_dir = mkdtemp()
self.addCleanup(shutil.rmtree, self.temp_dir)
self.first_course = CourseFactory.create(
org="test", course="course1", display_name="run1", default_store=ModuleStoreEnum.Type.mongo
)
self.second_course = CourseFactory.create(
org="test", course="course2", display_name="run2", default_store=ModuleStoreEnum.Type.mongo
)
def test_export_all_courses(self):
"""
Test exporting good and faulty courses
"""
# check that both courses exported successfully
courses, failed_export_courses = export_courses_to_output_path(self.temp_dir)
self.assertEqual(len(courses), 2)
self.assertEqual(len(failed_export_courses), 0)
# manually make second course faulty and check that it fails on export
second_course_id = self.second_course.id
self.store.collection.update(
{'_id.org': second_course_id.org, '_id.course': second_course_id.course, '_id.name': second_course_id.run},
{'$set': {'metadata.tags': 'crash'}}
)
courses, failed_export_courses = export_courses_to_output_path(self.temp_dir)
self.assertEqual(len(courses), 2)
self.assertEqual(len(failed_export_courses), 1)
self.assertEqual(failed_export_courses[0], unicode(second_course_id))
|
agpl-3.0
|
xifle/home-assistant
|
homeassistant/components/climate/mysensors.py
|
10
|
7483
|
"""
mysensors platform that offers a Climate(MySensors-HVAC) component.
For more details about this platform, please refer to the documentation
https://home-assistant.io/components/climate.mysensors
"""
import logging
from homeassistant.components import mysensors
from homeassistant.components.climate import (
STATE_COOL, STATE_HEAT, STATE_OFF, STATE_AUTO, ClimateDevice,
ATTR_TARGET_TEMP_HIGH, ATTR_TARGET_TEMP_LOW)
from homeassistant.const import TEMP_CELSIUS, TEMP_FAHRENHEIT, ATTR_TEMPERATURE
_LOGGER = logging.getLogger(__name__)
DICT_HA_TO_MYS = {STATE_COOL: "CoolOn", STATE_HEAT: "HeatOn",
STATE_AUTO: "AutoChangeOver", STATE_OFF: "Off"}
DICT_MYS_TO_HA = {"CoolOn": STATE_COOL, "HeatOn": STATE_HEAT,
"AutoChangeOver": STATE_AUTO, "Off": STATE_OFF}
def setup_platform(hass, config, add_devices, discovery_info=None):
"""Setup the mysensors climate."""
if discovery_info is None:
return
gateways = hass.data.get(mysensors.MYSENSORS_GATEWAYS)
if not gateways:
return
for gateway in gateways:
if float(gateway.protocol_version) < 1.5:
continue
pres = gateway.const.Presentation
set_req = gateway.const.SetReq
map_sv_types = {
pres.S_HVAC: [set_req.V_HVAC_FLOW_STATE],
}
devices = {}
gateway.platform_callbacks.append(mysensors.pf_callback_factory(
map_sv_types, devices, add_devices, MySensorsHVAC))
class MySensorsHVAC(mysensors.MySensorsDeviceEntity, ClimateDevice):
"""Representation of a MySensorsHVAC hvac."""
@property
def assumed_state(self):
"""Return True if unable to access real state of entity."""
return self.gateway.optimistic
@property
def temperature_unit(self):
"""Return the unit of measurement."""
return (TEMP_CELSIUS
if self.gateway.metric else TEMP_FAHRENHEIT)
@property
def current_temperature(self):
"""Return the current temperature."""
return self._values.get(self.gateway.const.SetReq.V_TEMP)
@property
def target_temperature(self):
"""Return the temperature we try to reach."""
set_req = self.gateway.const.SetReq
if set_req.V_HVAC_SETPOINT_COOL in self._values and \
set_req.V_HVAC_SETPOINT_HEAT in self._values:
return None
temp = self._values.get(set_req.V_HVAC_SETPOINT_COOL)
if temp is None:
temp = self._values.get(set_req.V_HVAC_SETPOINT_HEAT)
return temp
@property
def target_temperature_high(self):
"""Return the highbound target temperature we try to reach."""
set_req = self.gateway.const.SetReq
if set_req.V_HVAC_SETPOINT_HEAT in self._values:
return self._values.get(set_req.V_HVAC_SETPOINT_COOL)
@property
def target_temperature_low(self):
"""Return the lowbound target temperature we try to reach."""
set_req = self.gateway.const.SetReq
if set_req.V_HVAC_SETPOINT_COOL in self._values:
return self._values.get(set_req.V_HVAC_SETPOINT_HEAT)
@property
def current_operation(self):
"""Return current operation ie. heat, cool, idle."""
return self._values.get(self.gateway.const.SetReq.V_HVAC_FLOW_STATE)
@property
def operation_list(self):
"""List of available operation modes."""
return [STATE_OFF, STATE_AUTO, STATE_COOL, STATE_HEAT]
@property
def current_fan_mode(self):
"""Return the fan setting."""
return self._values.get(self.gateway.const.SetReq.V_HVAC_SPEED)
@property
def fan_list(self):
"""List of available fan modes."""
return ["Auto", "Min", "Normal", "Max"]
def set_temperature(self, **kwargs):
"""Set new target temperature."""
set_req = self.gateway.const.SetReq
temp = kwargs.get(ATTR_TEMPERATURE)
low = kwargs.get(ATTR_TARGET_TEMP_LOW)
high = kwargs.get(ATTR_TARGET_TEMP_HIGH)
heat = self._values.get(set_req.V_HVAC_SETPOINT_HEAT)
cool = self._values.get(set_req.V_HVAC_SETPOINT_COOL)
updates = ()
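        # Illustrative note (not in the original): a plain ATTR_TEMPERATURE
        # updates whichever single setpoint the device already reports (heat
        # is preferred over cool), while a device reporting both setpoints is
        # instead driven by the low/high pair below, updating heat and cool
        # together.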
if temp is not None:
if heat is not None:
# Set HEAT Target temperature
value_type = set_req.V_HVAC_SETPOINT_HEAT
elif cool is not None:
# Set COOL Target temperature
value_type = set_req.V_HVAC_SETPOINT_COOL
if heat is not None or cool is not None:
updates = [(value_type, temp)]
elif all(val is not None for val in (low, high, heat, cool)):
updates = [
(set_req.V_HVAC_SETPOINT_HEAT, low),
(set_req.V_HVAC_SETPOINT_COOL, high)]
for value_type, value in updates:
self.gateway.set_child_value(
self.node_id, self.child_id, value_type, value)
if self.gateway.optimistic:
# optimistically assume that switch has changed state
self._values[value_type] = value
self.update_ha_state()
def set_fan_mode(self, fan):
"""Set new target temperature."""
set_req = self.gateway.const.SetReq
self.gateway.set_child_value(self.node_id, self.child_id,
set_req.V_HVAC_SPEED, fan)
if self.gateway.optimistic:
# optimistically assume that switch has changed state
self._values[set_req.V_HVAC_SPEED] = fan
self.update_ha_state()
def set_operation_mode(self, operation_mode):
"""Set new target temperature."""
set_req = self.gateway.const.SetReq
self.gateway.set_child_value(self.node_id, self.child_id,
set_req.V_HVAC_FLOW_STATE,
DICT_HA_TO_MYS[operation_mode])
if self.gateway.optimistic:
# optimistically assume that switch has changed state
self._values[set_req.V_HVAC_FLOW_STATE] = operation_mode
self.update_ha_state()
def update(self):
"""Update the controller with the latest value from a sensor."""
set_req = self.gateway.const.SetReq
node = self.gateway.sensors[self.node_id]
child = node.children[self.child_id]
for value_type, value in child.values.items():
_LOGGER.debug(
'%s: value_type %s, value = %s', self._name, value_type, value)
if value_type == set_req.V_HVAC_FLOW_STATE:
self._values[value_type] = DICT_MYS_TO_HA[value]
else:
self._values[value_type] = value
def set_humidity(self, humidity):
"""Set new target humidity."""
_LOGGER.error("Service Not Implemented yet")
def set_swing_mode(self, swing_mode):
"""Set new target swing operation."""
_LOGGER.error("Service Not Implemented yet")
def turn_away_mode_on(self):
"""Turn away mode on."""
_LOGGER.error("Service Not Implemented yet")
def turn_away_mode_off(self):
"""Turn away mode off."""
_LOGGER.error("Service Not Implemented yet")
def turn_aux_heat_on(self):
"""Turn auxillary heater on."""
_LOGGER.error("Service Not Implemented yet")
def turn_aux_heat_off(self):
"""Turn auxillary heater off."""
_LOGGER.error("Service Not Implemented yet")
|
mit
|
deer-hope/zulip
|
zerver/lib/tornado_ioloop_logging.py
|
115
|
2820
|
from __future__ import absolute_import
import logging
import time
import select
from tornado import ioloop
from django.conf import settings
try:
# Tornado 2.4
orig_poll_impl = ioloop._poll
def instrument_tornado_ioloop():
ioloop._poll = InstrumentedPoll
except AttributeError:
# Tornado 3
from tornado.ioloop import IOLoop, PollIOLoop
# There isn't a good way to get at what the underlying poll implementation
# will be without actually constructing an IOLoop, so we just assume it will
# be epoll.
orig_poll_impl = select.epoll
class InstrumentedPollIOLoop(PollIOLoop):
def initialize(self, **kwargs):
super(InstrumentedPollIOLoop, self).initialize(impl=InstrumentedPoll(), **kwargs)
def instrument_tornado_ioloop():
IOLoop.configure(InstrumentedPollIOLoop)
# A hack to keep track of how much time we spend working, versus sleeping in
# the event loop.
#
# Creating a new event loop instance with a custom impl object fails (events
# don't get processed), so instead we modify the ioloop module variable holding
# the default poll implementation. We need to do this before any Tornado code
# runs that might instantiate the default event loop.
class InstrumentedPoll(object):
def __init__(self):
self._underlying = orig_poll_impl()
self._times = []
self._last_print = 0
# Python won't let us subclass e.g. select.epoll, so instead
# we proxy every method. __getattr__ handles anything we
# don't define elsewhere.
def __getattr__(self, name):
return getattr(self._underlying, name)
# Call the underlying poll method, and report timing data.
def poll(self, timeout):
# Avoid accumulating a bunch of insignificant data points
# from short timeouts.
if timeout < 1e-3:
return self._underlying.poll(timeout)
# Record start and end times for the underlying poll
t0 = time.time()
result = self._underlying.poll(timeout)
t1 = time.time()
# Log this datapoint and restrict our log to the past minute
self._times.append((t0, t1))
while self._times and self._times[0][0] < t1 - 60:
self._times.pop(0)
# Report (at most once every 5s) the percentage of time spent
# outside poll
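        # Worked example (illustrative numbers, not from the original): if the
        # retained window spans total = 10.0s with in_poll = 7.5s spent inside
        # poll(), then percent_busy = 100 * (1 - 7.5 / 10.0) = 25.0, which is
        # logged because it exceeds the 20% threshold (and is logged regardless
        # of the threshold in production).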
if self._times and t1 - self._last_print >= 5:
total = t1 - self._times[0][0]
in_poll = sum(b-a for a,b in self._times)
if total > 0:
percent_busy = 100 * (1 - in_poll/total)
if settings.PRODUCTION or percent_busy > 20:
logging.info('Tornado %5.1f%% busy over the past %4.1f seconds'
% (percent_busy, total))
self._last_print = t1
return result
|
apache-2.0
|
lekum/ansible
|
lib/ansible/plugins/action/set_fact.py
|
51
|
1392
|
# Copyright 2013 Dag Wieers <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.errors import AnsibleError
from ansible.plugins.action import ActionBase
from ansible.utils.boolean import boolean
class ActionModule(ActionBase):
TRANSFERS_FILES = False
def run(self, tmp=None, task_vars=dict()):
facts = dict()
if self._task.args:
for (k, v) in self._task.args.iteritems():
k = self._templar.template(k)
if isinstance(v, basestring) and v.lower() in ('true', 'false', 'yes', 'no'):
v = boolean(v)
facts[k] = v
return dict(changed=False, ansible_facts=facts)
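# Illustrative behaviour sketch (hypothetical task, not from this file): a task
# such as
#
#   - set_fact:
#       is_production: 'yes'
#       app_port: 8080
#
# yields ansible_facts of {'is_production': True, 'app_port': 8080}: quoted
# yes/no/true/false strings are coerced to booleans, everything else passes
# through unchanged.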
|
gpl-3.0
|
PierreFaniel/openerp-7.0
|
sale_order_line/__init__.py
|
4
|
1106
|
# -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (c) 2010-2013 Elico Corp. All Rights Reserved.
# Author: LIN Yu <[email protected]>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import sale
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
|
agpl-3.0
|
michalkurka/h2o-3
|
h2o-py/h2o/estimators/random_forest.py
|
2
|
83030
|
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
#
# This file is auto-generated by h2o-3/h2o-bindings/bin/gen_python.py
# Copyright 2016 H2O.ai; Apache License Version 2.0 (see LICENSE for details)
#
from __future__ import absolute_import, division, print_function, unicode_literals
from h2o.utils.metaclass import deprecated_params, deprecated_property
from h2o.estimators.estimator_base import H2OEstimator
from h2o.exceptions import H2OValueError
from h2o.frame import H2OFrame
from h2o.utils.typechecks import assert_is_type, Enum, numeric
class H2ORandomForestEstimator(H2OEstimator):
"""
Distributed Random Forest
Builds a Distributed Random Forest (DRF) on a parsed dataset, for regression or
classification.
"""
algo = "drf"
supervised_learning = True
_options_ = {'model_extensions': ['h2o.model.extensions.ScoringHistoryTrees',
'h2o.model.extensions.VariableImportance',
'h2o.model.extensions.Trees'],
'verbose': True}
@deprecated_params({'offset_column': None})
def __init__(self,
model_id=None, # type: Optional[Union[None, str, H2OEstimator]]
training_frame=None, # type: Optional[Union[None, str, H2OFrame]]
validation_frame=None, # type: Optional[Union[None, str, H2OFrame]]
nfolds=0, # type: int
keep_cross_validation_models=True, # type: bool
keep_cross_validation_predictions=False, # type: bool
keep_cross_validation_fold_assignment=False, # type: bool
score_each_iteration=False, # type: bool
score_tree_interval=0, # type: int
fold_assignment="auto", # type: Literal["auto", "random", "modulo", "stratified"]
fold_column=None, # type: Optional[str]
response_column=None, # type: Optional[str]
ignored_columns=None, # type: Optional[List[str]]
ignore_const_cols=True, # type: bool
weights_column=None, # type: Optional[str]
balance_classes=False, # type: bool
class_sampling_factors=None, # type: Optional[List[float]]
max_after_balance_size=5.0, # type: float
max_confusion_matrix_size=20, # type: int
ntrees=50, # type: int
max_depth=20, # type: int
min_rows=1.0, # type: float
nbins=20, # type: int
nbins_top_level=1024, # type: int
nbins_cats=1024, # type: int
r2_stopping=None, # type: Optional[float]
stopping_rounds=0, # type: int
stopping_metric="auto", # type: Literal["auto", "deviance", "logloss", "mse", "rmse", "mae", "rmsle", "auc", "aucpr", "lift_top_group", "misclassification", "mean_per_class_error", "custom", "custom_increasing"]
stopping_tolerance=0.001, # type: float
max_runtime_secs=0.0, # type: float
seed=-1, # type: int
build_tree_one_node=False, # type: bool
mtries=-1, # type: int
sample_rate=0.632, # type: float
sample_rate_per_class=None, # type: Optional[List[float]]
binomial_double_trees=False, # type: bool
checkpoint=None, # type: Optional[Union[None, str, H2OEstimator]]
col_sample_rate_change_per_level=1.0, # type: float
col_sample_rate_per_tree=1.0, # type: float
min_split_improvement=1e-05, # type: float
histogram_type="auto", # type: Literal["auto", "uniform_adaptive", "random", "quantiles_global", "round_robin"]
categorical_encoding="auto", # type: Literal["auto", "enum", "one_hot_internal", "one_hot_explicit", "binary", "eigen", "label_encoder", "sort_by_response", "enum_limited"]
calibrate_model=False, # type: bool
calibration_frame=None, # type: Optional[Union[None, str, H2OFrame]]
distribution="auto", # type: Literal["auto", "bernoulli", "multinomial", "gaussian", "poisson", "gamma", "tweedie", "laplace", "quantile", "huber"]
custom_metric_func=None, # type: Optional[str]
export_checkpoints_dir=None, # type: Optional[str]
check_constant_response=True, # type: bool
gainslift_bins=-1, # type: int
auc_type="auto", # type: Literal["auto", "none", "macro_ovr", "weighted_ovr", "macro_ovo", "weighted_ovo"]
):
"""
:param model_id: Destination id for this model; auto-generated if not specified.
Defaults to ``None``.
:type model_id: Union[None, str, H2OEstimator], optional
:param training_frame: Id of the training data frame.
Defaults to ``None``.
:type training_frame: Union[None, str, H2OFrame], optional
:param validation_frame: Id of the validation data frame.
Defaults to ``None``.
:type validation_frame: Union[None, str, H2OFrame], optional
:param nfolds: Number of folds for K-fold cross-validation (0 to disable or >= 2).
Defaults to ``0``.
:type nfolds: int
:param keep_cross_validation_models: Whether to keep the cross-validation models.
Defaults to ``True``.
:type keep_cross_validation_models: bool
:param keep_cross_validation_predictions: Whether to keep the predictions of the cross-validation models.
Defaults to ``False``.
:type keep_cross_validation_predictions: bool
:param keep_cross_validation_fold_assignment: Whether to keep the cross-validation fold assignment.
Defaults to ``False``.
:type keep_cross_validation_fold_assignment: bool
:param score_each_iteration: Whether to score during each iteration of model training.
Defaults to ``False``.
:type score_each_iteration: bool
:param score_tree_interval: Score the model after every so many trees. Disabled if set to 0.
Defaults to ``0``.
:type score_tree_interval: int
:param fold_assignment: Cross-validation fold assignment scheme, if fold_column is not specified. The
'Stratified' option will stratify the folds based on the response variable, for classification problems.
Defaults to ``"auto"``.
:type fold_assignment: Literal["auto", "random", "modulo", "stratified"]
:param fold_column: Column with cross-validation fold index assignment per observation.
Defaults to ``None``.
:type fold_column: str, optional
:param response_column: Response variable column.
Defaults to ``None``.
:type response_column: str, optional
:param ignored_columns: Names of columns to ignore for training.
Defaults to ``None``.
:type ignored_columns: List[str], optional
:param ignore_const_cols: Ignore constant columns.
Defaults to ``True``.
:type ignore_const_cols: bool
:param weights_column: Column with observation weights. Giving some observation a weight of zero is equivalent
to excluding it from the dataset; giving an observation a relative weight of 2 is equivalent to repeating
that row twice. Negative weights are not allowed. Note: Weights are per-row observation weights and do
not increase the size of the data frame. This is typically the number of times a row is repeated, but
non-integer values are supported as well. During training, rows with higher weights matter more, due to
the larger loss function pre-factor.
Defaults to ``None``.
:type weights_column: str, optional
:param balance_classes: Balance training data class counts via over/under-sampling (for imbalanced data).
Defaults to ``False``.
:type balance_classes: bool
:param class_sampling_factors: Desired over/under-sampling ratios per class (in lexicographic order). If not
specified, sampling factors will be automatically computed to obtain class balance during training.
Requires balance_classes.
Defaults to ``None``.
:type class_sampling_factors: List[float], optional
:param max_after_balance_size: Maximum relative size of the training data after balancing class counts (can be
less than 1.0). Requires balance_classes.
Defaults to ``5.0``.
:type max_after_balance_size: float
:param max_confusion_matrix_size: [Deprecated] Maximum size (# classes) for confusion matrices to be printed in
the Logs
Defaults to ``20``.
:type max_confusion_matrix_size: int
:param ntrees: Number of trees.
Defaults to ``50``.
:type ntrees: int
:param max_depth: Maximum tree depth (0 for unlimited).
Defaults to ``20``.
:type max_depth: int
:param min_rows: Fewest allowed (weighted) observations in a leaf.
Defaults to ``1.0``.
:type min_rows: float
:param nbins: For numerical columns (real/int), build a histogram of (at least) this many bins, then split at
the best point
Defaults to ``20``.
:type nbins: int
:param nbins_top_level: For numerical columns (real/int), build a histogram of (at most) this many bins at the
root level, then decrease by factor of two per level
Defaults to ``1024``.
:type nbins_top_level: int
:param nbins_cats: For categorical columns (factors), build a histogram of this many bins, then split at the
best point. Higher values can lead to more overfitting.
Defaults to ``1024``.
:type nbins_cats: int
:param r2_stopping: r2_stopping is no longer supported and will be ignored if set - please use stopping_rounds,
stopping_metric and stopping_tolerance instead. Previous version of H2O would stop making trees when the
R^2 metric equals or exceeds this
Defaults to ``∞``.
:type r2_stopping: float
:param stopping_rounds: Early stopping based on convergence of stopping_metric. Stop if simple moving average of
length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)
Defaults to ``0``.
:type stopping_rounds: int
:param stopping_metric: Metric to use for early stopping (AUTO: logloss for classification, deviance for
            regression and anomaly_score for Isolation Forest). Note that custom and custom_increasing can only be
used in GBM and DRF with the Python client.
Defaults to ``"auto"``.
:type stopping_metric: Literal["auto", "deviance", "logloss", "mse", "rmse", "mae", "rmsle", "auc", "aucpr", "lift_top_group",
"misclassification", "mean_per_class_error", "custom", "custom_increasing"]
:param stopping_tolerance: Relative tolerance for metric-based stopping criterion (stop if relative improvement
is not at least this much)
Defaults to ``0.001``.
:type stopping_tolerance: float
:param max_runtime_secs: Maximum allowed runtime in seconds for model training. Use 0 to disable.
Defaults to ``0.0``.
:type max_runtime_secs: float
:param seed: Seed for pseudo random number generator (if applicable)
Defaults to ``-1``.
:type seed: int
:param build_tree_one_node: Run on one node only; no network overhead but fewer cpus used. Suitable for small
datasets.
Defaults to ``False``.
:type build_tree_one_node: bool
:param mtries: Number of variables randomly sampled as candidates at each split. If set to -1, defaults to
            sqrt{p} for classification and p/3 for regression (where p is the # of predictors)
Defaults to ``-1``.
:type mtries: int
:param sample_rate: Row sample rate per tree (from 0.0 to 1.0)
Defaults to ``0.632``.
:type sample_rate: float
:param sample_rate_per_class: A list of row sample rates per class (relative fraction for each class, from 0.0
to 1.0), for each tree
Defaults to ``None``.
:type sample_rate_per_class: List[float], optional
:param binomial_double_trees: For binary classification: Build 2x as many trees (one per class) - can lead to
higher accuracy.
Defaults to ``False``.
:type binomial_double_trees: bool
:param checkpoint: Model checkpoint to resume training with.
Defaults to ``None``.
:type checkpoint: Union[None, str, H2OEstimator], optional
:param col_sample_rate_change_per_level: Relative change of the column sampling rate for every level (must be >
0.0 and <= 2.0)
Defaults to ``1.0``.
:type col_sample_rate_change_per_level: float
:param col_sample_rate_per_tree: Column sample rate per tree (from 0.0 to 1.0)
Defaults to ``1.0``.
:type col_sample_rate_per_tree: float
:param min_split_improvement: Minimum relative improvement in squared error reduction for a split to happen
Defaults to ``1e-05``.
:type min_split_improvement: float
:param histogram_type: What type of histogram to use for finding optimal split points
Defaults to ``"auto"``.
:type histogram_type: Literal["auto", "uniform_adaptive", "random", "quantiles_global", "round_robin"]
:param categorical_encoding: Encoding scheme for categorical features
Defaults to ``"auto"``.
:type categorical_encoding: Literal["auto", "enum", "one_hot_internal", "one_hot_explicit", "binary", "eigen", "label_encoder",
"sort_by_response", "enum_limited"]
:param calibrate_model: Use Platt Scaling to calculate calibrated class probabilities. Calibration can provide
more accurate estimates of class probabilities.
Defaults to ``False``.
:type calibrate_model: bool
:param calibration_frame: Calibration frame for Platt Scaling
Defaults to ``None``.
:type calibration_frame: Union[None, str, H2OFrame], optional
:param distribution: Distribution function
Defaults to ``"auto"``.
:type distribution: Literal["auto", "bernoulli", "multinomial", "gaussian", "poisson", "gamma", "tweedie", "laplace",
"quantile", "huber"]
:param custom_metric_func: Reference to custom evaluation function, format: `language:keyName=funcName`
Defaults to ``None``.
:type custom_metric_func: str, optional
:param export_checkpoints_dir: Automatically export generated models to this directory.
Defaults to ``None``.
:type export_checkpoints_dir: str, optional
:param check_constant_response: Check if response column is constant. If enabled, then an exception is thrown if
the response column is a constant value. If disabled, then the model will train regardless of the response
column being a constant value or not.
Defaults to ``True``.
:type check_constant_response: bool
:param gainslift_bins: Gains/Lift table number of bins. 0 means disabled. Default value -1 means automatic
binning.
Defaults to ``-1``.
:type gainslift_bins: int
:param auc_type: Set default multinomial AUC type.
Defaults to ``"auto"``.
:type auc_type: Literal["auto", "none", "macro_ovr", "weighted_ovr", "macro_ovo", "weighted_ovo"]
"""
super(H2ORandomForestEstimator, self).__init__()
self._parms = {}
self._id = self._parms['model_id'] = model_id
self.training_frame = training_frame
self.validation_frame = validation_frame
self.nfolds = nfolds
self.keep_cross_validation_models = keep_cross_validation_models
self.keep_cross_validation_predictions = keep_cross_validation_predictions
self.keep_cross_validation_fold_assignment = keep_cross_validation_fold_assignment
self.score_each_iteration = score_each_iteration
self.score_tree_interval = score_tree_interval
self.fold_assignment = fold_assignment
self.fold_column = fold_column
self.response_column = response_column
self.ignored_columns = ignored_columns
self.ignore_const_cols = ignore_const_cols
self.weights_column = weights_column
self.balance_classes = balance_classes
self.class_sampling_factors = class_sampling_factors
self.max_after_balance_size = max_after_balance_size
self.max_confusion_matrix_size = max_confusion_matrix_size
self.ntrees = ntrees
self.max_depth = max_depth
self.min_rows = min_rows
self.nbins = nbins
self.nbins_top_level = nbins_top_level
self.nbins_cats = nbins_cats
self.r2_stopping = r2_stopping
self.stopping_rounds = stopping_rounds
self.stopping_metric = stopping_metric
self.stopping_tolerance = stopping_tolerance
self.max_runtime_secs = max_runtime_secs
self.seed = seed
self.build_tree_one_node = build_tree_one_node
self.mtries = mtries
self.sample_rate = sample_rate
self.sample_rate_per_class = sample_rate_per_class
self.binomial_double_trees = binomial_double_trees
self.checkpoint = checkpoint
self.col_sample_rate_change_per_level = col_sample_rate_change_per_level
self.col_sample_rate_per_tree = col_sample_rate_per_tree
self.min_split_improvement = min_split_improvement
self.histogram_type = histogram_type
self.categorical_encoding = categorical_encoding
self.calibrate_model = calibrate_model
self.calibration_frame = calibration_frame
self.distribution = distribution
self.custom_metric_func = custom_metric_func
self.export_checkpoints_dir = export_checkpoints_dir
self.check_constant_response = check_constant_response
self.gainslift_bins = gainslift_bins
self.auc_type = auc_type
@property
def training_frame(self):
"""
Id of the training data frame.
Type: ``Union[None, str, H2OFrame]``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> train, valid = cars.split_frame(ratios=[.8],
... seed=1234)
>>> cars_drf = H2ORandomForestEstimator(seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> cars_drf.auc(valid=True)
"""
return self._parms.get("training_frame")
@training_frame.setter
def training_frame(self, training_frame):
self._parms["training_frame"] = H2OFrame._validate(training_frame, 'training_frame')
@property
def validation_frame(self):
"""
Id of the validation data frame.
Type: ``Union[None, str, H2OFrame]``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> train, valid = cars.split_frame(ratios=[.8],
... seed=1234)
>>> cars_drf = H2ORandomForestEstimator(seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> cars_drf.auc(valid=True)
"""
return self._parms.get("validation_frame")
@validation_frame.setter
def validation_frame(self, validation_frame):
self._parms["validation_frame"] = H2OFrame._validate(validation_frame, 'validation_frame')
@property
def nfolds(self):
"""
Number of folds for K-fold cross-validation (0 to disable or >= 2).
Type: ``int``, defaults to ``0``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> folds = 5
>>> cars_drf = H2ORandomForestEstimator(nfolds=folds,
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=cars)
>>> cars_drf.auc(xval=True)
"""
return self._parms.get("nfolds")
@nfolds.setter
def nfolds(self, nfolds):
assert_is_type(nfolds, None, int)
self._parms["nfolds"] = nfolds
@property
def keep_cross_validation_models(self):
"""
Whether to keep the cross-validation models.
Type: ``bool``, defaults to ``True``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> train, valid = cars.split_frame(ratios=[.8], seed=1234)
>>> cars_drf = H2ORandomForestEstimator(keep_cross_validation_models=True,
... nfolds=5,
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train)
>>> cars_drf.auc()
"""
return self._parms.get("keep_cross_validation_models")
@keep_cross_validation_models.setter
def keep_cross_validation_models(self, keep_cross_validation_models):
assert_is_type(keep_cross_validation_models, None, bool)
self._parms["keep_cross_validation_models"] = keep_cross_validation_models
@property
def keep_cross_validation_predictions(self):
"""
Whether to keep the predictions of the cross-validation models.
Type: ``bool``, defaults to ``False``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> train, valid = cars.split_frame(ratios=[.8], seed=1234)
>>> cars_drf = H2ORandomForestEstimator(keep_cross_validation_predictions=True,
... nfolds=5,
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train)
>>> cars_drf.cross_validation_predictions()
"""
return self._parms.get("keep_cross_validation_predictions")
@keep_cross_validation_predictions.setter
def keep_cross_validation_predictions(self, keep_cross_validation_predictions):
assert_is_type(keep_cross_validation_predictions, None, bool)
self._parms["keep_cross_validation_predictions"] = keep_cross_validation_predictions
@property
def keep_cross_validation_fold_assignment(self):
"""
Whether to keep the cross-validation fold assignment.
Type: ``bool``, defaults to ``False``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> train, valid = cars.split_frame(ratios=[.8], seed=1234)
>>> cars_drf = H2ORandomForestEstimator(keep_cross_validation_fold_assignment=True,
... nfolds=5,
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train)
>>> cars_drf.cross_validation_fold_assignment()
"""
return self._parms.get("keep_cross_validation_fold_assignment")
@keep_cross_validation_fold_assignment.setter
def keep_cross_validation_fold_assignment(self, keep_cross_validation_fold_assignment):
assert_is_type(keep_cross_validation_fold_assignment, None, bool)
self._parms["keep_cross_validation_fold_assignment"] = keep_cross_validation_fold_assignment
@property
def score_each_iteration(self):
"""
Whether to score during each iteration of model training.
Type: ``bool``, defaults to ``False``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg
>>> train, valid = cars.split_frame(ratios=[.8], seed=1234)
>>> cars_drf = H2ORandomForestEstimator(score_each_iteration=True,
... ntrees=55,
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame = valid)
>>> cars_drf.scoring_history()
"""
return self._parms.get("score_each_iteration")
@score_each_iteration.setter
def score_each_iteration(self, score_each_iteration):
assert_is_type(score_each_iteration, None, bool)
self._parms["score_each_iteration"] = score_each_iteration
@property
def score_tree_interval(self):
"""
Score the model after every so many trees. Disabled if set to 0.
Type: ``int``, defaults to ``0``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> train, valid = cars.split_frame(ratios=[.8], seed=1234)
>>> cars_drf = H2ORandomForestEstimator(score_tree_interval=5,
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> cars_drf.scoring_history()
"""
return self._parms.get("score_tree_interval")
@score_tree_interval.setter
def score_tree_interval(self, score_tree_interval):
assert_is_type(score_tree_interval, None, int)
self._parms["score_tree_interval"] = score_tree_interval
@property
def fold_assignment(self):
"""
Cross-validation fold assignment scheme, if fold_column is not specified. The 'Stratified' option will stratify
the folds based on the response variable, for classification problems.
Type: ``Literal["auto", "random", "modulo", "stratified"]``, defaults to ``"auto"``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> assignment_type = "Random"
>>> cars_drf = H2ORandomForestEstimator(fold_assignment=assignment_type,
... nfolds=5,
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=cars)
>>> cars_drf.auc(xval=True)
"""
return self._parms.get("fold_assignment")
@fold_assignment.setter
def fold_assignment(self, fold_assignment):
assert_is_type(fold_assignment, None, Enum("auto", "random", "modulo", "stratified"))
self._parms["fold_assignment"] = fold_assignment
@property
def fold_column(self):
"""
Column with cross-validation fold index assignment per observation.
Type: ``str``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> fold_numbers = cars.kfold_column(n_folds=5, seed=1234)
>>> fold_numbers.set_names(["fold_numbers"])
>>> cars = cars.cbind(fold_numbers)
>>> print(cars['fold_numbers'])
>>> cars_drf = H2ORandomForestEstimator(seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=cars,
... fold_column="fold_numbers")
>>> cars_drf.auc(xval=True)
"""
return self._parms.get("fold_column")
@fold_column.setter
def fold_column(self, fold_column):
assert_is_type(fold_column, None, str)
self._parms["fold_column"] = fold_column
@property
def response_column(self):
"""
Response variable column.
Type: ``str``.
"""
return self._parms.get("response_column")
@response_column.setter
def response_column(self, response_column):
assert_is_type(response_column, None, str)
self._parms["response_column"] = response_column
@property
def ignored_columns(self):
"""
Names of columns to ignore for training.
Type: ``List[str]``.
"""
return self._parms.get("ignored_columns")
@ignored_columns.setter
def ignored_columns(self, ignored_columns):
assert_is_type(ignored_columns, None, [str])
self._parms["ignored_columns"] = ignored_columns
@property
def ignore_const_cols(self):
"""
Ignore constant columns.
Type: ``bool``, defaults to ``True``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> cars["const_1"] = 6
>>> cars["const_2"] = 7
>>> train, valid = cars.split_frame(ratios=[.8], seed=1234)
>>> cars_drf = H2ORandomForestEstimator(seed=1234,
... ignore_const_cols=True)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> cars_drf.auc(valid=True)
"""
return self._parms.get("ignore_const_cols")
@ignore_const_cols.setter
def ignore_const_cols(self, ignore_const_cols):
assert_is_type(ignore_const_cols, None, bool)
self._parms["ignore_const_cols"] = ignore_const_cols
@property
def weights_column(self):
"""
Column with observation weights. Giving some observation a weight of zero is equivalent to excluding it from the
dataset; giving an observation a relative weight of 2 is equivalent to repeating that row twice. Negative
weights are not allowed. Note: Weights are per-row observation weights and do not increase the size of the data
frame. This is typically the number of times a row is repeated, but non-integer values are supported as well.
During training, rows with higher weights matter more, due to the larger loss function pre-factor.
Type: ``str``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","acceleration","year"]
>>> response = "economy_20mpg"
>>> train, valid = cars.split_frame(ratios=[.8],
... seed=1234)
>>> cars_drf = H2ORandomForestEstimator(seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid,
... weights_column="weight")
>>> cars_drf.auc(valid=True)
"""
return self._parms.get("weights_column")
@weights_column.setter
def weights_column(self, weights_column):
assert_is_type(weights_column, None, str)
self._parms["weights_column"] = weights_column
@property
def balance_classes(self):
"""
Balance training data class counts via over/under-sampling (for imbalanced data).
Type: ``bool``, defaults to ``False``.
:examples:
>>> covtype = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/covtype/covtype.20k.data")
>>> covtype[54] = covtype[54].asfactor()
>>> predictors = covtype.columns[0:54]
>>> response = 'C55'
>>> train, valid = covtype.split_frame(ratios=[.8], seed=1234)
>>> cov_drf = H2ORandomForestEstimator(balance_classes=True,
... seed=1234)
>>> cov_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print('logloss', cov_drf.logloss(valid=True))
"""
return self._parms.get("balance_classes")
@balance_classes.setter
def balance_classes(self, balance_classes):
assert_is_type(balance_classes, None, bool)
self._parms["balance_classes"] = balance_classes
@property
def class_sampling_factors(self):
"""
Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will
be automatically computed to obtain class balance during training. Requires balance_classes.
Type: ``List[float]``.
:examples:
>>> covtype = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/covtype/covtype.20k.data")
>>> covtype[54] = covtype[54].asfactor()
>>> predictors = covtype.columns[0:54]
>>> response = 'C55'
>>> train, valid = covtype.split_frame(ratios=[.8], seed=1234)
>>> print(covtype[54].table())
>>> sample_factors = [1., 0.5, 1., 1., 1., 1., 1.]
>>> cov_drf = H2ORandomForestEstimator(balance_classes=True,
... class_sampling_factors=sample_factors,
... seed=1234)
>>> cov_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print('logloss', cov_drf.logloss(valid=True))
"""
return self._parms.get("class_sampling_factors")
@class_sampling_factors.setter
def class_sampling_factors(self, class_sampling_factors):
assert_is_type(class_sampling_factors, None, [float])
self._parms["class_sampling_factors"] = class_sampling_factors
@property
def max_after_balance_size(self):
"""
Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires
balance_classes.
Type: ``float``, defaults to ``5.0``.
:examples:
>>> covtype = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/covtype/covtype.20k.data")
>>> covtype[54] = covtype[54].asfactor()
>>> predictors = covtype.columns[0:54]
>>> response = 'C55'
>>> train, valid = covtype.split_frame(ratios=[.8], seed=1234)
>>> print(covtype[54].table())
>>> max = .85
>>> cov_drf = H2ORandomForestEstimator(balance_classes=True,
... max_after_balance_size=max,
... seed=1234)
>>> cov_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print('logloss', cov_drf.logloss(valid=True))
"""
return self._parms.get("max_after_balance_size")
@max_after_balance_size.setter
def max_after_balance_size(self, max_after_balance_size):
assert_is_type(max_after_balance_size, None, float)
self._parms["max_after_balance_size"] = max_after_balance_size
@property
def max_confusion_matrix_size(self):
"""
[Deprecated] Maximum size (# classes) for confusion matrices to be printed in the Logs
Type: ``int``, defaults to ``20``.
"""
return self._parms.get("max_confusion_matrix_size")
@max_confusion_matrix_size.setter
def max_confusion_matrix_size(self, max_confusion_matrix_size):
assert_is_type(max_confusion_matrix_size, None, int)
self._parms["max_confusion_matrix_size"] = max_confusion_matrix_size
@property
def ntrees(self):
"""
Number of trees.
Type: ``int``, defaults to ``50``.
:examples:
>>> titanic = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/gbm_test/titanic.csv")
>>> titanic['survived'] = titanic['survived'].asfactor()
>>> predictors = titanic.columns
>>> del predictors[1:3]
>>> response = 'survived'
>>> train, valid = titanic.split_frame(ratios=[.8],
... seed=1234)
>>> tree_num = [20, 50, 80, 110,
... 140, 170, 200]
>>> label = ["20", "50", "80", "110",
... "140", "170", "200"]
>>> for key, num in enumerate(tree_num):
# Input an integer for 'num' and 'key'
>>> titanic_drf = H2ORandomForestEstimator(ntrees=num,
... seed=1234)
>>> titanic_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print(label[key], 'training score',
... titanic_drf.auc(train=True))
>>> print(label[key], 'validation score',
... titanic_drf.auc(valid=True))
"""
return self._parms.get("ntrees")
@ntrees.setter
def ntrees(self, ntrees):
assert_is_type(ntrees, None, int)
self._parms["ntrees"] = ntrees
@property
def max_depth(self):
"""
Maximum tree depth (0 for unlimited).
Type: ``int``, defaults to ``20``.
:examples:
>>> df = h2o.import_file(path = "http://s3.amazonaws.com/h2o-public-test-data/smalldata/gbm_test/titanic.csv")
>>> response = "survived"
>>> df[response] = df[response].asfactor()
>>> predictors = df.columns
>>> del predictors[1:3]
>>> train, valid, test = df.split_frame(ratios=[0.6,0.2],
... seed=1234,
... destination_frames=
... ['train.hex','valid.hex','test.hex'])
>>> drf = H2ORandomForestEstimator()
>>> drf.train(x=predictors,
... y=response,
... training_frame=train)
>>> perf = drf.model_performance(valid)
>>> print(perf.auc())
"""
return self._parms.get("max_depth")
@max_depth.setter
def max_depth(self, max_depth):
assert_is_type(max_depth, None, int)
self._parms["max_depth"] = max_depth
@property
def min_rows(self):
"""
Fewest allowed (weighted) observations in a leaf.
Type: ``float``, defaults to ``1.0``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> train, valid = cars.split_frame(ratios=[.8], seed=1234)
>>> cars_drf = H2ORandomForestEstimator(min_rows=16,
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print(cars_drf.auc(valid=True))
"""
return self._parms.get("min_rows")
@min_rows.setter
def min_rows(self, min_rows):
assert_is_type(min_rows, None, numeric)
self._parms["min_rows"] = min_rows
@property
def nbins(self):
"""
For numerical columns (real/int), build a histogram of (at least) this many bins, then split at the best point
Type: ``int``, defaults to ``20``.
:examples:
>>> eeg = h2o.import_file("https://h2o-public-test-data.s3.amazonaws.com/smalldata/eeg/eeg_eyestate.csv")
>>> eeg['eyeDetection'] = eeg['eyeDetection'].asfactor()
>>> predictors = eeg.columns[:-1]
>>> response = 'eyeDetection'
>>> train, valid = eeg.split_frame(ratios=[.8], seed=1234)
>>> bin_num = [16, 32, 64, 128, 256, 512]
>>> label = ["16", "32", "64", "128", "256", "512"]
>>> for key, num in enumerate(bin_num):
# Insert integer for 'num' and 'key'
>>> eeg_drf = H2ORandomForestEstimator(nbins=num, seed=1234)
>>> eeg_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print(label[key], 'training score',
... eeg_drf.auc(train=True))
>>> print(label[key], 'validation score',
... eeg_drf.auc(valid=True))
"""
return self._parms.get("nbins")
@nbins.setter
def nbins(self, nbins):
assert_is_type(nbins, None, int)
self._parms["nbins"] = nbins
@property
def nbins_top_level(self):
"""
For numerical columns (real/int), build a histogram of (at most) this many bins at the root level, then decrease
by factor of two per level
Type: ``int``, defaults to ``1024``.
:examples:
>>> eeg = h2o.import_file("https://h2o-public-test-data.s3.amazonaws.com/smalldata/eeg/eeg_eyestate.csv")
>>> eeg['eyeDetection'] = eeg['eyeDetection'].asfactor()
>>> predictors = eeg.columns[:-1]
>>> response = 'eyeDetection'
>>> train, valid = eeg.split_frame(ratios=[.8],
... seed=1234)
>>> bin_num = [32, 64, 128, 256, 512,
... 1024, 2048, 4096]
>>> label = ["32", "64", "128", "256",
... "512", "1024", "2048", "4096"]
>>> for key, num in enumerate(bin_num):
# Insert integer for 'num' and 'key'
>>> eeg_drf = H2ORandomForestEstimator(nbins_top_level=num,
... seed=1234)
>>> eeg_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print(label[key], 'training score',
... eeg_drf.auc(train=True))
>>> print(label[key], 'validation score',
... eeg_drf.auc(valid=True))
"""
return self._parms.get("nbins_top_level")
@nbins_top_level.setter
def nbins_top_level(self, nbins_top_level):
assert_is_type(nbins_top_level, None, int)
self._parms["nbins_top_level"] = nbins_top_level
@property
def nbins_cats(self):
"""
For categorical columns (factors), build a histogram of this many bins, then split at the best point. Higher
values can lead to more overfitting.
Type: ``int``, defaults to ``1024``.
:examples:
>>> airlines= h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
>>> airlines["Year"] = airlines["Year"].asfactor()
>>> airlines["Month"] = airlines["Month"].asfactor()
>>> airlines["DayOfWeek"] = airlines["DayOfWeek"].asfactor()
>>> airlines["Cancelled"] = airlines["Cancelled"].asfactor()
>>> airlines['FlightNum'] = airlines['FlightNum'].asfactor()
>>> predictors = ["Origin", "Dest", "Year", "UniqueCarrier",
... "DayOfWeek", "Month", "Distance", "FlightNum"]
>>> response = "IsDepDelayed"
>>> train, valid= airlines.split_frame(ratios=[.8], seed=1234)
>>> bin_num = [8, 16, 32, 64, 128, 256,
... 512, 1024, 2048, 4096]
>>> label = ["8", "16", "32", "64", "128",
... "256", "512", "1024", "2048", "4096"]
>>> for key, num in enumerate(bin_num):
# Insert integer for 'num' and 'key'
>>> airlines_drf = H2ORandomForestEstimator(nbins_cats=num,
... seed=1234)
>>> airlines_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print(label[key], 'training score',
... airlines_drf.auc(train=True))
>>> print(label[key], 'validation score',
... airlines_drf.auc(valid=True))
"""
return self._parms.get("nbins_cats")
@nbins_cats.setter
def nbins_cats(self, nbins_cats):
assert_is_type(nbins_cats, None, int)
self._parms["nbins_cats"] = nbins_cats
@property
def r2_stopping(self):
"""
r2_stopping is no longer supported and will be ignored if set - please use stopping_rounds, stopping_metric and
stopping_tolerance instead. Previous versions of H2O would stop making trees when the R^2 metric equaled or
exceeded this value.
Type: ``float``, defaults to ``∞``.
"""
return self._parms.get("r2_stopping")
@r2_stopping.setter
def r2_stopping(self, r2_stopping):
assert_is_type(r2_stopping, None, numeric)
self._parms["r2_stopping"] = r2_stopping
@property
def stopping_rounds(self):
"""
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the
stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)
Type: ``int``, defaults to ``0``.
:examples:
>>> airlines= h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
>>> airlines["Year"] = airlines["Year"].asfactor()
>>> airlines["Month"] = airlines["Month"].asfactor()
>>> airlines["DayOfWeek"] = airlines["DayOfWeek"].asfactor()
>>> airlines["Cancelled"] = airlines["Cancelled"].asfactor()
>>> airlines['FlightNum'] = airlines['FlightNum'].asfactor()
>>> predictors = ["Origin", "Dest", "Year", "UniqueCarrier",
... "DayOfWeek", "Month", "Distance", "FlightNum"]
>>> response = "IsDepDelayed"
>>> train, valid= airlines.split_frame(ratios=[.8],
... seed=1234)
>>> airlines_drf = H2ORandomForestEstimator(stopping_metric="auc",
... stopping_rounds=3,
... stopping_tolerance=1e-2,
... seed=1234)
>>> airlines_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> airlines_drf.auc(valid=True)
"""
return self._parms.get("stopping_rounds")
@stopping_rounds.setter
def stopping_rounds(self, stopping_rounds):
assert_is_type(stopping_rounds, None, int)
self._parms["stopping_rounds"] = stopping_rounds
@property
def stopping_metric(self):
"""
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression, and anomaly_score
for Isolation Forest). Note that custom and custom_increasing can only be used in GBM and DRF with the Python
client.
Type: ``Literal["auto", "deviance", "logloss", "mse", "rmse", "mae", "rmsle", "auc", "aucpr", "lift_top_group",
"misclassification", "mean_per_class_error", "custom", "custom_increasing"]``, defaults to ``"auto"``.
:examples:
>>> airlines= h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
>>> airlines["Year"] = airlines["Year"].asfactor()
>>> airlines["Month"] = airlines["Month"].asfactor()
>>> airlines["DayOfWeek"] = airlines["DayOfWeek"].asfactor()
>>> airlines["Cancelled"] = airlines["Cancelled"].asfactor()
>>> airlines['FlightNum'] = airlines['FlightNum'].asfactor()
>>> predictors = ["Origin", "Dest", "Year", "UniqueCarrier",
... "DayOfWeek", "Month", "Distance", "FlightNum"]
>>> response = "IsDepDelayed"
>>> train, valid= airlines.split_frame(ratios=[.8],
... seed=1234)
>>> airlines_drf = H2ORandomForestEstimator(stopping_metric="auc",
... stopping_rounds=3,
... stopping_tolerance=1e-2,
... seed=1234)
>>> airlines_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> airlines_drf.auc(valid=True)
"""
return self._parms.get("stopping_metric")
@stopping_metric.setter
def stopping_metric(self, stopping_metric):
assert_is_type(stopping_metric, None, Enum("auto", "deviance", "logloss", "mse", "rmse", "mae", "rmsle", "auc", "aucpr", "lift_top_group", "misclassification", "mean_per_class_error", "custom", "custom_increasing"))
self._parms["stopping_metric"] = stopping_metric
@property
def stopping_tolerance(self):
"""
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)
Type: ``float``, defaults to ``0.001``.
:examples:
>>> airlines= h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
>>> airlines["Year"] = airlines["Year"].asfactor()
>>> airlines["Month"] = airlines["Month"].asfactor()
>>> airlines["DayOfWeek"] = airlines["DayOfWeek"].asfactor()
>>> airlines["Cancelled"] = airlines["Cancelled"].asfactor()
>>> airlines['FlightNum'] = airlines['FlightNum'].asfactor()
>>> predictors = ["Origin", "Dest", "Year", "UniqueCarrier",
... "DayOfWeek", "Month", "Distance", "FlightNum"]
>>> response = "IsDepDelayed"
>>> train, valid= airlines.split_frame(ratios=[.8],
... seed=1234)
>>> airlines_drf = H2ORandomForestEstimator(stopping_metric="auc",
... stopping_rounds=3,
... stopping_tolerance=1e-2,
... seed=1234)
>>> airlines_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> airlines_drf.auc(valid=True)
"""
return self._parms.get("stopping_tolerance")
@stopping_tolerance.setter
def stopping_tolerance(self, stopping_tolerance):
assert_is_type(stopping_tolerance, None, numeric)
self._parms["stopping_tolerance"] = stopping_tolerance
@property
def max_runtime_secs(self):
"""
Maximum allowed runtime in seconds for model training. Use 0 to disable.
Type: ``float``, defaults to ``0.0``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> train, valid = cars.split_frame(ratios=[.8], seed=1234)
>>> cars_drf = H2ORandomForestEstimator(max_runtime_secs=10,
... ntrees=10000,
... max_depth=10,
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> cars_drf.auc(valid = True)
"""
return self._parms.get("max_runtime_secs")
@max_runtime_secs.setter
def max_runtime_secs(self, max_runtime_secs):
assert_is_type(max_runtime_secs, None, numeric)
self._parms["max_runtime_secs"] = max_runtime_secs
@property
def seed(self):
"""
Seed for pseudo random number generator (if applicable)
Type: ``int``, defaults to ``-1``.
:examples:
>>> airlines= h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
>>> airlines["Year"] = airlines["Year"].asfactor()
>>> airlines["Month"] = airlines["Month"].asfactor()
>>> airlines["DayOfWeek"] = airlines["DayOfWeek"].asfactor()
>>> airlines["Cancelled"] = airlines["Cancelled"].asfactor()
>>> airlines['FlightNum'] = airlines['FlightNum'].asfactor()
>>> predictors = ["Origin", "Dest", "Year", "UniqueCarrier",
... "DayOfWeek", "Month", "Distance", "FlightNum"]
>>> response = "IsDepDelayed"
>>> train, valid= airlines.split_frame(ratios=[.8], seed=1234)
>>> drf_w_seed_1 = H2ORandomForestEstimator(seed=1234)
>>> drf_w_seed_1.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print('auc for the 1st model build with a seed:',
... drf_w_seed_1.auc(valid=True))
"""
return self._parms.get("seed")
@seed.setter
def seed(self, seed):
assert_is_type(seed, None, int)
self._parms["seed"] = seed
@property
def build_tree_one_node(self):
"""
Run on one node only; no network overhead but fewer cpus used. Suitable for small datasets.
Type: ``bool``, defaults to ``False``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> train, valid = cars.split_frame(ratios=[.8], seed=1234)
>>> cars_drf = H2ORandomForestEstimator(build_tree_one_node=True,
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> cars_drf.auc(valid=True)
"""
return self._parms.get("build_tree_one_node")
@build_tree_one_node.setter
def build_tree_one_node(self, build_tree_one_node):
assert_is_type(build_tree_one_node, None, bool)
self._parms["build_tree_one_node"] = build_tree_one_node
@property
def mtries(self):
"""
Number of variables randomly sampled as candidates at each split. If set to -1, defaults to sqrt(p) for
classification and p/3 for regression (where p is the number of predictors).
Type: ``int``, defaults to ``-1``.
:examples:
>>> covtype = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/covtype/covtype.20k.data")
>>> covtype[54] = covtype[54].asfactor()
>>> predictors = covtype.columns[0:54]
>>> response = 'C55'
>>> train, valid = covtype.split_frame(ratios=[.8], seed=1234)
>>> cov_drf = H2ORandomForestEstimator(mtries=30, seed=1234)
>>> cov_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print('logloss', cov_drf.logloss(valid=True))
"""
return self._parms.get("mtries")
@mtries.setter
def mtries(self, mtries):
assert_is_type(mtries, None, int)
self._parms["mtries"] = mtries
@property
def sample_rate(self):
"""
Row sample rate per tree (from 0.0 to 1.0)
Type: ``float``, defaults to ``0.632``.
:examples:
>>> airlines= h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
>>> airlines["Year"] = airlines["Year"].asfactor()
>>> airlines["Month"] = airlines["Month"].asfactor()
>>> airlines["DayOfWeek"] = airlines["DayOfWeek"].asfactor()
>>> airlines["Cancelled"] = airlines["Cancelled"].asfactor()
>>> airlines['FlightNum'] = airlines['FlightNum'].asfactor()
>>> predictors = ["Origin", "Dest", "Year", "UniqueCarrier",
... "DayOfWeek", "Month", "Distance", "FlightNum"]
>>> response = "IsDepDelayed"
>>> train, valid= airlines.split_frame(ratios=[.8],
... seed=1234)
>>> airlines_drf = H2ORandomForestEstimator(sample_rate=.7,
... seed=1234)
>>> airlines_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print(airlines_drf.auc(valid=True))
"""
return self._parms.get("sample_rate")
@sample_rate.setter
def sample_rate(self, sample_rate):
assert_is_type(sample_rate, None, numeric)
self._parms["sample_rate"] = sample_rate
@property
def sample_rate_per_class(self):
"""
A list of row sample rates per class (relative fraction for each class, from 0.0 to 1.0), for each tree
Type: ``List[float]``.
:examples:
>>> covtype = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/covtype/covtype.20k.data")
>>> covtype[54] = covtype[54].asfactor()
>>> predictors = covtype.columns[0:54]
>>> response = 'C55'
>>> train, valid = covtype.split_frame(ratios=[.8],
... seed=1234)
>>> print(train[response].table())
>>> rate_per_class_list = [1, .4, 1, 1, 1, 1, 1]
>>> cov_drf = H2ORandomForestEstimator(sample_rate_per_class=rate_per_class_list,
... seed=1234)
>>> cov_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print('logloss', cov_drf.logloss(valid=True))
"""
return self._parms.get("sample_rate_per_class")
@sample_rate_per_class.setter
def sample_rate_per_class(self, sample_rate_per_class):
assert_is_type(sample_rate_per_class, None, [numeric])
self._parms["sample_rate_per_class"] = sample_rate_per_class
@property
def binomial_double_trees(self):
"""
For binary classification: Build 2x as many trees (one per class) - can lead to higher accuracy.
Type: ``bool``, defaults to ``False``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> train, valid = cars.split_frame(ratios=[.8], seed=1234)
>>> cars_drf = H2ORandomForestEstimator(binomial_double_trees=False,
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print('without binomial_double_trees:',
... cars_drf.auc(valid=True))
>>> cars_drf_2 = H2ORandomForestEstimator(binomial_double_trees=True,
... seed=1234)
>>> cars_drf_2.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print('with binomial_double_trees:', cars_drf_2.auc(valid=True))
"""
return self._parms.get("binomial_double_trees")
@binomial_double_trees.setter
def binomial_double_trees(self, binomial_double_trees):
assert_is_type(binomial_double_trees, None, bool)
self._parms["binomial_double_trees"] = binomial_double_trees
@property
def checkpoint(self):
"""
Model checkpoint to resume training with.
Type: ``Union[None, str, H2OEstimator]``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> train, valid = cars.split_frame(ratios=[.8],
... seed=1234)
>>> cars_drf = H2ORandomForestEstimator(ntrees=1,
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print(cars_drf.auc(valid=True))
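>>> # A sketch of resuming from the model above as a checkpoint (this
>>> # continuation is illustrative and not part of the original example;
>>> # the new model must request more trees than the checkpointed one).
>>> cars_drf_continued = H2ORandomForestEstimator(checkpoint=cars_drf,
... ntrees=3,
... seed=1234)
>>> cars_drf_continued.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print(cars_drf_continued.auc(valid=True))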
"""
return self._parms.get("checkpoint")
@checkpoint.setter
def checkpoint(self, checkpoint):
assert_is_type(checkpoint, None, str, H2OEstimator)
self._parms["checkpoint"] = checkpoint
@property
def col_sample_rate_change_per_level(self):
"""
Relative change of the column sampling rate for every level (must be > 0.0 and <= 2.0)
Type: ``float``, defaults to ``1.0``.
:examples:
>>> airlines= h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
>>> airlines["Year"] = airlines["Year"].asfactor()
>>> airlines["Month"] = airlines["Month"].asfactor()
>>> airlines["DayOfWeek"] = airlines["DayOfWeek"].asfactor()
>>> airlines["Cancelled"] = airlines["Cancelled"].asfactor()
>>> airlines['FlightNum'] = airlines['FlightNum'].asfactor()
>>> predictors = ["Origin", "Dest", "Year", "UniqueCarrier",
... "DayOfWeek", "Month", "Distance", "FlightNum"]
>>> response = "IsDepDelayed"
>>> train, valid= airlines.split_frame(ratios=[.8], seed=1234)
>>> airlines_drf = H2ORandomForestEstimator(col_sample_rate_change_per_level=.9,
... seed=1234)
>>> airlines_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print(airlines_drf.auc(valid=True))
"""
return self._parms.get("col_sample_rate_change_per_level")
@col_sample_rate_change_per_level.setter
def col_sample_rate_change_per_level(self, col_sample_rate_change_per_level):
assert_is_type(col_sample_rate_change_per_level, None, numeric)
self._parms["col_sample_rate_change_per_level"] = col_sample_rate_change_per_level
@property
def col_sample_rate_per_tree(self):
"""
Column sample rate per tree (from 0.0 to 1.0)
Type: ``float``, defaults to ``1.0``.
:examples:
>>> airlines= h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
>>> airlines["Year"] = airlines["Year"].asfactor()
>>> airlines["Month"] = airlines["Month"].asfactor()
>>> airlines["DayOfWeek"] = airlines["DayOfWeek"].asfactor()
>>> airlines["Cancelled"] = airlines["Cancelled"].asfactor()
>>> airlines['FlightNum'] = airlines['FlightNum'].asfactor()
>>> predictors = ["Origin", "Dest", "Year", "UniqueCarrier",
... "DayOfWeek", "Month", "Distance", "FlightNum"]
>>> response = "IsDepDelayed"
>>> train, valid= airlines.split_frame(ratios=[.8], seed=1234)
>>> airlines_drf = H2ORandomForestEstimator(col_sample_rate_per_tree=.7,
... seed=1234)
>>> airlines_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print(airlines_drf.auc(valid=True))
"""
return self._parms.get("col_sample_rate_per_tree")
@col_sample_rate_per_tree.setter
def col_sample_rate_per_tree(self, col_sample_rate_per_tree):
assert_is_type(col_sample_rate_per_tree, None, numeric)
self._parms["col_sample_rate_per_tree"] = col_sample_rate_per_tree
@property
def min_split_improvement(self):
"""
Minimum relative improvement in squared error reduction for a split to happen
Type: ``float``, defaults to ``1e-05``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> cars["economy_20mpg"] = cars["economy_20mpg"].asfactor()
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "economy_20mpg"
>>> train, valid = cars.split_frame(ratios=[.8], seed=1234)
>>> cars_drf = H2ORandomForestEstimator(min_split_improvement=1e-3,
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print(cars_drf.auc(valid=True))
"""
return self._parms.get("min_split_improvement")
@min_split_improvement.setter
def min_split_improvement(self, min_split_improvement):
assert_is_type(min_split_improvement, None, numeric)
self._parms["min_split_improvement"] = min_split_improvement
@property
def histogram_type(self):
"""
What type of histogram to use for finding optimal split points
Type: ``Literal["auto", "uniform_adaptive", "random", "quantiles_global", "round_robin"]``, defaults to
``"auto"``.
:examples:
>>> airlines= h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
>>> airlines["Year"] = airlines["Year"].asfactor()
>>> airlines["Month"] = airlines["Month"].asfactor()
>>> airlines["DayOfWeek"] = airlines["DayOfWeek"].asfactor()
>>> airlines["Cancelled"] = airlines["Cancelled"].asfactor()
>>> airlines['FlightNum'] = airlines['FlightNum'].asfactor()
>>> predictors = ["Origin", "Dest", "Year", "UniqueCarrier",
... "DayOfWeek", "Month", "Distance", "FlightNum"]
>>> response = "IsDepDelayed"
>>> train, valid= airlines.split_frame(ratios=[.8], seed=1234)
>>> airlines_drf = H2ORandomForestEstimator(histogram_type="UniformAdaptive",
... seed=1234)
>>> airlines_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> print(airlines_drf.auc(valid=True))
"""
return self._parms.get("histogram_type")
@histogram_type.setter
def histogram_type(self, histogram_type):
assert_is_type(histogram_type, None, Enum("auto", "uniform_adaptive", "random", "quantiles_global", "round_robin"))
self._parms["histogram_type"] = histogram_type
@property
def categorical_encoding(self):
"""
Encoding scheme for categorical features
Type: ``Literal["auto", "enum", "one_hot_internal", "one_hot_explicit", "binary", "eigen", "label_encoder",
"sort_by_response", "enum_limited"]``, defaults to ``"auto"``.
:examples:
>>> airlines= h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
>>> airlines["Year"] = airlines["Year"].asfactor()
>>> airlines["Month"] = airlines["Month"].asfactor()
>>> airlines["DayOfWeek"] = airlines["DayOfWeek"].asfactor()
>>> airlines["Cancelled"] = airlines["Cancelled"].asfactor()
>>> airlines['FlightNum'] = airlines['FlightNum'].asfactor()
>>> predictors = ["Origin", "Dest", "Year", "UniqueCarrier",
... "DayOfWeek", "Month", "Distance", "FlightNum"]
>>> response = "IsDepDelayed"
>>> train, valid= airlines.split_frame(ratios=[.8], seed=1234)
>>> encoding = "one_hot_explicit"
>>> airlines_drf = H2ORandomForestEstimator(categorical_encoding=encoding,
... seed=1234)
>>> airlines_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> airlines_drf.auc(valid=True)
"""
return self._parms.get("categorical_encoding")
@categorical_encoding.setter
def categorical_encoding(self, categorical_encoding):
assert_is_type(categorical_encoding, None, Enum("auto", "enum", "one_hot_internal", "one_hot_explicit", "binary", "eigen", "label_encoder", "sort_by_response", "enum_limited"))
self._parms["categorical_encoding"] = categorical_encoding
@property
def calibrate_model(self):
"""
Use Platt Scaling to calculate calibrated class probabilities. Calibration can provide more accurate estimates
of class probabilities.
Type: ``bool``, defaults to ``False``.
:examples:
>>> ecology = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/gbm_test/ecology_model.csv")
>>> ecology['Angaus'] = ecology['Angaus'].asfactor()
>>> from h2o.estimators.random_forest import H2ORandomForestEstimator
>>> response = 'Angaus'
>>> predictors = ecology.columns[3:13]
>>> train, calib = ecology.split_frame(seed=12354)
>>> w = h2o.create_frame(binary_fraction=1,
... binary_ones_fraction=0.5,
... missing_fraction=0,
... rows=744, cols=1)
>>> w.set_names(["weight"])
>>> train = train.cbind(w)
>>> ecology_drf = H2ORandomForestEstimator(ntrees=10,
... max_depth=5,
... min_rows=10,
... distribution="multinomial",
... weights_column="weight",
... calibrate_model=True,
... calibration_frame=calib)
>>> ecology_drf.train(x=predictors,
... y="Angaus",
... training_frame=train)
>>> predicted = ecology_drf.predict(calib)
"""
return self._parms.get("calibrate_model")
@calibrate_model.setter
def calibrate_model(self, calibrate_model):
assert_is_type(calibrate_model, None, bool)
self._parms["calibrate_model"] = calibrate_model
@property
def calibration_frame(self):
"""
Calibration frame for Platt Scaling
Type: ``Union[None, str, H2OFrame]``.
:examples:
>>> ecology = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/gbm_test/ecology_model.csv")
>>> ecology['Angaus'] = ecology['Angaus'].asfactor()
>>> response = 'Angaus'
>>> predictors = ecology.columns[3:13]
>>> train, calib = ecology.split_frame(seed = 12354)
>>> w = h2o.create_frame(binary_fraction=1,
... binary_ones_fraction=0.5,
... missing_fraction=0,
... rows=744, cols=1)
>>> w.set_names(["weight"])
>>> train = train.cbind(w)
>>> ecology_drf = H2ORandomForestEstimator(ntrees=10,
... max_depth=5,
... min_rows=10,
... distribution="multinomial",
... calibrate_model=True,
... calibration_frame=calib)
>>> ecology_drf.train(x=predictors,
... y="Angaus,
... training_frame=train,
... weights_column="weight")
>>> predicted = ecology_drf.predict(train)
"""
return self._parms.get("calibration_frame")
@calibration_frame.setter
def calibration_frame(self, calibration_frame):
self._parms["calibration_frame"] = H2OFrame._validate(calibration_frame, 'calibration_frame')
@property
def distribution(self):
"""
Distribution function
Type: ``Literal["auto", "bernoulli", "multinomial", "gaussian", "poisson", "gamma", "tweedie", "laplace",
"quantile", "huber"]``, defaults to ``"auto"``.
:examples:
>>> cars = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv")
>>> predictors = ["displacement","power","weight","acceleration","year"]
>>> response = "cylinders"
>>> train, valid = cars.split_frame(ratios=[.8], seed=1234)
>>> cars_drf = H2ORandomForestEstimator(distribution="poisson",
... seed=1234)
>>> cars_drf.train(x=predictors,
... y=response,
... training_frame=train,
... validation_frame=valid)
>>> cars_drf.mse(valid=True)
"""
return self._parms.get("distribution")
@distribution.setter
def distribution(self, distribution):
assert_is_type(distribution, None, Enum("auto", "bernoulli", "multinomial", "gaussian", "poisson", "gamma", "tweedie", "laplace", "quantile", "huber"))
self._parms["distribution"] = distribution
@property
def custom_metric_func(self):
"""
Reference to custom evaluation function, format: `language:keyName=funcName`
Type: ``str``.
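:examples:
>>> # A minimal sketch (illustrative, not the canonical recipe): the reference
>>> # is normally obtained from h2o.upload_custom_metric(); the metric class
>>> # and file name below are assumptions made for the sake of the example.
>>> class CustomMaeFunc:
...     def map(self, pred, act, w, o, model):
...         return [abs(act[0] - pred[0]), 1]
...     def reduce(self, l, r):
...         return [l[0] + r[0], l[1] + r[1]]
...     def metric(self, l):
...         return l[0] / l[1]
>>> custom_mae = h2o.upload_custom_metric(CustomMaeFunc,
... func_name="mae",
... func_file="mm_mae.py")
>>> drf = H2ORandomForestEstimator(ntrees=3, custom_metric_func=custom_mae)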
"""
return self._parms.get("custom_metric_func")
@custom_metric_func.setter
def custom_metric_func(self, custom_metric_func):
assert_is_type(custom_metric_func, None, str)
self._parms["custom_metric_func"] = custom_metric_func
@property
def export_checkpoints_dir(self):
"""
Automatically export generated models to this directory.
Type: ``str``.
:examples:
>>> import tempfile
>>> from os import listdir
>>> from h2o.grid.grid_search import H2OGridSearch
>>> airlines = h2o.import_file("http://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip", destination_frame="air.hex")
>>> predictors = ["DayofMonth", "DayOfWeek"]
>>> response = "IsDepDelayed"
>>> hyper_parameters = {'ntrees': [5,10]}
>>> search_crit = {'strategy': "RandomDiscrete",
... 'max_models': 5,
... 'seed': 1234,
... 'stopping_rounds': 3,
... 'stopping_metric': "AUTO",
... 'stopping_tolerance': 1e-2}
>>> checkpoints_dir = tempfile.mkdtemp()
>>> air_grid = H2OGridSearch(H2ORandomForestEstimator,
... hyper_params=hyper_parameters,
... search_criteria=search_crit)
>>> air_grid.train(x=predictors,
... y=response,
... training_frame=airlines,
... distribution="bernoulli",
... max_depth=3,
... export_checkpoints_dir=checkpoints_dir)
>>> num_files = len(listdir(checkpoints_dir))
>>> num_files
"""
return self._parms.get("export_checkpoints_dir")
@export_checkpoints_dir.setter
def export_checkpoints_dir(self, export_checkpoints_dir):
assert_is_type(export_checkpoints_dir, None, str)
self._parms["export_checkpoints_dir"] = export_checkpoints_dir
@property
def check_constant_response(self):
"""
Check if response column is constant. If enabled, then an exception is thrown if the response column is a
constant value. If disabled, then the model will train regardless of the response column being a constant value or
not.
Type: ``bool``, defaults to ``True``.
:examples:
>>> train = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/iris/iris_train.csv")
>>> train["constantCol"] = 1
>>> my_drf = H2ORandomForestEstimator(check_constant_response=False)
>>> my_drf.train(x=list(range(1,5)),
... y="constantCol",
... training_frame=train)
"""
return self._parms.get("check_constant_response")
@check_constant_response.setter
def check_constant_response(self, check_constant_response):
assert_is_type(check_constant_response, None, bool)
self._parms["check_constant_response"] = check_constant_response
@property
def gainslift_bins(self):
"""
Gains/Lift table number of bins. 0 means disabled. Default value -1 means automatic binning.
Type: ``int``, defaults to ``-1``.
:examples:
>>> airlines= h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/testng/airlines_train.csv")
>>> model = H2ORandomForestEstimator(ntrees=1, gainslift_bins=20)
>>> model.train(x=["Origin", "Distance"],
... y="IsDepDelayed",
... training_frame=airlines)
>>> model.gains_lift()
"""
return self._parms.get("gainslift_bins")
@gainslift_bins.setter
def gainslift_bins(self, gainslift_bins):
assert_is_type(gainslift_bins, None, int)
self._parms["gainslift_bins"] = gainslift_bins
@property
def auc_type(self):
"""
Set default multinomial AUC type.
Type: ``Literal["auto", "none", "macro_ovr", "weighted_ovr", "macro_ovo", "weighted_ovo"]``, defaults to
``"auto"``.
"""
return self._parms.get("auc_type")
@auc_type.setter
def auc_type(self, auc_type):
assert_is_type(auc_type, None, Enum("auto", "none", "macro_ovr", "weighted_ovr", "macro_ovo", "weighted_ovo"))
self._parms["auc_type"] = auc_type
offset_column = deprecated_property('offset_column', None)
|
apache-2.0
|
piyush1911/git-cola
|
cola/widgets/browse.py
|
3
|
28400
|
from __future__ import division, absolute_import, unicode_literals
from PyQt4 import QtGui
from PyQt4 import QtCore
from PyQt4.QtCore import Qt
from PyQt4.QtCore import SIGNAL
from cola import cmds
from cola import core
from cola import difftool
from cola import gitcmds
from cola import hotkeys
from cola import icons
from cola import utils
from cola import qtutils
from cola.cmds import BaseCommand
from cola.git import git
from cola.i18n import N_
from cola.interaction import Interaction
from cola.models import main
from cola.models.browse import GitRepoEntryStore
from cola.models.browse import GitRepoModel
from cola.models.browse import GitRepoNameItem
from cola.models.selection import State
from cola.models.selection import selection_model
from cola.widgets import defs
from cola.widgets import standard
from cola.widgets.selectcommits import select_commits
from cola.compat import ustr
def worktree_browser_widget(parent, update=True, settings=None):
"""Return a widget for immediate use."""
view = Browser(parent, update=update, settings=settings)
view.tree.setModel(GitRepoModel(view.tree))
view.ctl = BrowserController(view.tree)
if update:
view.tree.refresh()
return view
def worktree_browser(update=True, settings=None):
"""Launch a new worktree browser session."""
view = worktree_browser_widget(None, update=update, settings=settings)
view.show()
return view
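# A minimal usage sketch (illustrative only, not part of the original module):
# like any PyQt4 widget, the browser needs a running QApplication before it is
# shown, e.g.:
#
#     app = QtGui.QApplication([])
#     view = worktree_browser(update=True)
#     app.exec_()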
class Browser(standard.Widget):
def __init__(self, parent, update=True, settings=None):
standard.Widget.__init__(self, parent)
self.settings = settings
self.tree = RepoTreeView(self)
self.mainlayout = qtutils.hbox(defs.no_margin, defs.spacing, self.tree)
self.setLayout(self.mainlayout)
self.connect(self, SIGNAL('updated()'),
self._updated_callback, Qt.QueuedConnection)
self.model = main.model()
self.model.add_observer(self.model.message_updated, self.model_updated)
if parent is None:
qtutils.add_close_action(self)
if update:
self.model_updated()
# Restore saved settings
if not self.restore_state(settings=settings):
self.resize(720, 420)
# Read-only mode property
mode = property(lambda self: self.model.mode)
def model_updated(self):
"""Update the title with the current branch and directory name."""
self.emit(SIGNAL('updated()'))
def _updated_callback(self):
branch = self.model.currentbranch
curdir = core.getcwd()
msg = N_('Repository: %s') % curdir
msg += '\n'
msg += N_('Branch: %s') % branch
self.setToolTip(msg)
scope = dict(project=self.model.project, branch=branch)
title = N_('%(project)s: %(branch)s - Browse') % scope
if self.mode == self.model.mode_amend:
title += ' %s' % N_('(Amending)')
self.setWindowTitle(title)
class RepoTreeView(standard.TreeView):
"""Provides a filesystem-like view of a git repository."""
def __init__(self, parent):
standard.TreeView.__init__(self, parent)
self.saved_selection = []
self.saved_current_path = None
self.saved_open_folders = set()
self.restoring_selection = False
self.setDragEnabled(True)
self.setRootIsDecorated(False)
self.setSortingEnabled(False)
self.setSelectionMode(QtGui.QAbstractItemView.ExtendedSelection)
# Observe model updates
model = main.model()
model.add_observer(model.message_about_to_update,
self.emit_about_to_update)
model.add_observer(model.message_updated, self.emit_update)
self.connect(self, SIGNAL('about_to_update()'), self.save_selection,
Qt.QueuedConnection)
self.connect(self, SIGNAL('update()'), self.update_actions,
Qt.QueuedConnection)
# The non-Qt cola application model
self.connect(self, SIGNAL('expanded(QModelIndex)'), self.size_columns)
self.connect(self, SIGNAL('expanded(QModelIndex)'), self.expanded)
self.connect(self, SIGNAL('collapsed(QModelIndex)'), self.size_columns)
self.connect(self, SIGNAL('collapsed(QModelIndex)'), self.collapsed)
# Sync selection before the key press event changes the model index
self.connect(self, SIGNAL('indexAboutToChange()'), self.sync_selection,
Qt.QueuedConnection)
self.action_history = qtutils.add_action_with_status_tip(
self, N_('View History...'),
N_('View history for selected path(s)'),
self.view_history, hotkeys.HISTORY)
self.action_stage = qtutils.add_action_with_status_tip(
self, cmds.StageOrUnstage.name(),
N_('Stage/unstage selected path(s) for commit'),
cmds.run(cmds.StageOrUnstage), hotkeys.STAGE_SELECTION)
self.action_untrack = qtutils.add_action_with_status_tip(
self, N_('Untrack Selected'),
N_('Stop tracking path(s)'),
self.untrack_selected)
self.action_difftool = qtutils.add_action_with_status_tip(
self, cmds.LaunchDifftool.name(),
N_('Launch git-difftool on the current path.'),
cmds.run(cmds.LaunchDifftool), hotkeys.DIFF)
self.action_difftool_predecessor = qtutils.add_action_with_status_tip(
self, N_('Diff Against Predecessor...'),
N_('Launch git-difftool against previous versions.'),
self.difftool_predecessor, hotkeys.DIFF_SECONDARY)
self.action_revert_unstaged = qtutils.add_action_with_status_tip(
self, cmds.RevertUnstagedEdits.name(),
N_('Revert unstaged changes to selected paths.'),
cmds.run(cmds.RevertUnstagedEdits), hotkeys.REVERT)
self.action_revert_uncommitted = qtutils.add_action_with_status_tip(
self, cmds.RevertUncommittedEdits.name(),
N_('Revert uncommitted changes to selected paths.'),
cmds.run(cmds.RevertUncommittedEdits), hotkeys.UNDO)
self.action_editor = qtutils.add_action_with_status_tip(
self, cmds.LaunchEditor.name(),
N_('Edit selected path(s).'),
cmds.run(cmds.LaunchEditor), hotkeys.EDIT)
self.action_refresh = qtutils.add_action(
self, N_('Refresh'), cmds.run(cmds.Refresh), hotkeys.REFRESH)
self.x_width = QtGui.QFontMetrics(self.font()).width('x')
self.size_columns()
def expanded(self, index):
item = self.model().itemFromIndex(index)
self.saved_open_folders.add(item.path)
def collapsed(self, index):
item = self.model().itemFromIndex(index)
self.saved_open_folders.remove(item.path)
def refresh(self):
self.model().refresh()
def size_columns(self):
"""Set the column widths."""
self.resizeColumnToContents(0)
self.resizeColumnToContents(1)
self.resizeColumnToContents(2)
self.resizeColumnToContents(3)
self.resizeColumnToContents(4)
def sizeHintForColumn(self, column):
x_width = self.x_width
if column == 1:
# Status
size = x_width * 10
elif column == 2:
# Summary
size = x_width * 64
elif column == 3:
# Author
size = x_width * 18
elif column == 4:
# Age
size = x_width * 16
else:
# Filename and others use the actual content
size = super(RepoTreeView, self).sizeHintForColumn(column)
return size
def emit_update(self):
self.emit(SIGNAL('update()'))
def emit_about_to_update(self):
self.emit(SIGNAL('about_to_update()'))
def save_selection(self):
selection = self.selected_paths()
if selection:
self.saved_selection = selection
current = self.current_item()
if current:
self.saved_current_path = current.path
def restore(self):
selection = self.selectionModel()
flags = selection.Select | selection.Rows
self.restoring_selection = True
# Restore opened folders
for path in sorted(self.saved_open_folders):
row = self.model().row(path, create=False)
if not row:
continue
index = row[0].index()
if index.isValid():
self.setExpanded(index, True)
# Restore the current item. We do this first, otherwise
# setCurrentIndex() can mess with the selection we set below
current_index = None
current_path = self.saved_current_path
if current_path:
row = self.model().row(current_path, create=False)
if row:
current_index = row[0].index()
if current_index and current_index.isValid():
self.setCurrentIndex(current_index)
# Restore selected items
for path in self.saved_selection:
row = self.model().row(path, create=False)
if not row:
continue
index = row[0].index()
if index.isValid():
self.scrollTo(index)
selection.select(index, flags)
self.restoring_selection = False
self.size_columns()
self.update_diff()
def update_actions(self):
"""Enable/disable actions."""
selection = self.selected_paths()
selected = bool(selection)
staged = bool(self.selected_staged_paths(selection=selection))
modified = bool(self.selected_modified_paths(selection=selection))
unstaged = bool(self.selected_unstaged_paths(selection=selection))
tracked = bool(self.selected_tracked_paths(selection=selection))
revertable = staged or modified
self.action_history.setEnabled(selected)
self.action_stage.setEnabled(staged or unstaged)
self.action_untrack.setEnabled(tracked)
self.action_difftool.setEnabled(staged or modified)
self.action_difftool_predecessor.setEnabled(tracked)
self.action_revert_unstaged.setEnabled(revertable)
self.action_revert_uncommitted.setEnabled(revertable)
def contextMenuEvent(self, event):
"""Create a context menu."""
self.update_actions()
menu = QtGui.QMenu(self)
menu.addAction(self.action_editor)
menu.addAction(self.action_stage)
menu.addSeparator()
menu.addAction(self.action_history)
menu.addAction(self.action_difftool)
menu.addAction(self.action_difftool_predecessor)
menu.addSeparator()
menu.addAction(self.action_revert_unstaged)
menu.addAction(self.action_revert_uncommitted)
menu.addAction(self.action_untrack)
menu.exec_(self.mapToGlobal(event.pos()))
def mousePressEvent(self, event):
"""Synchronize the selection on mouse-press."""
result = QtGui.QTreeView.mousePressEvent(self, event)
self.sync_selection()
return result
def sync_selection(self):
"""Push selection into the selection model."""
staged = []
unmerged = []
modified = []
untracked = []
state = State(staged, unmerged, modified, untracked)
paths = self.selected_paths()
model = main.model()
model_staged = utils.add_parents(model.staged)
model_modified = utils.add_parents(model.modified)
model_unmerged = utils.add_parents(model.unmerged)
model_untracked = utils.add_parents(model.untracked)
for path in paths:
if path in model_unmerged:
unmerged.append(path)
elif path in model_untracked:
untracked.append(path)
elif path in model_staged:
staged.append(path)
elif path in model_modified:
modified.append(path)
else:
staged.append(path)
# Push the new selection into the model.
selection_model().set_selection(state)
return paths
def selectionChanged(self, old, new):
"""Override selectionChanged to update available actions."""
result = QtGui.QTreeView.selectionChanged(self, old, new)
if not self.restoring_selection:
self.update_actions()
self.update_diff()
return result
def update_diff(self):
paths = self.sync_selection()
if paths and self.model().path_is_interesting(paths[0]):
cached = paths[0] in main.model().staged
cmds.do(cmds.Diff, paths[0], cached)
def setModel(self, model):
"""Set the concrete QAbstractItemModel instance."""
QtGui.QTreeView.setModel(self, model)
self.connect(model, SIGNAL('restore()'), self.restore,
Qt.QueuedConnection)
def item_from_index(self, model_index):
"""Return the name item corresponding to the model index."""
index = model_index.sibling(model_index.row(), 0)
return self.model().itemFromIndex(index)
def paths_from_indexes(self, indexes):
return qtutils.paths_from_indexes(self.model(), indexes,
item_type=GitRepoNameItem.TYPE)
def selected_paths(self):
"""Return the selected paths."""
return self.paths_from_indexes(self.selectedIndexes())
def selected_staged_paths(self, selection=None):
"""Return selected staged paths."""
if selection is None:
selection = self.selected_paths()
staged = utils.add_parents(main.model().staged)
return [p for p in selection if p in staged]
def selected_modified_paths(self, selection=None):
"""Return selected modified paths."""
if selection is None:
selection = self.selected_paths()
model = main.model()
modified = utils.add_parents(model.modified)
return [p for p in selection if p in modified]
def selected_unstaged_paths(self, selection=None):
"""Return selected unstaged paths."""
if selection is None:
selection = self.selected_paths()
model = main.model()
modified = utils.add_parents(model.modified)
untracked = utils.add_parents(model.untracked)
unstaged = modified.union(untracked)
return [p for p in selection if p in unstaged]
def selected_tracked_paths(self, selection=None):
"""Return selected tracked paths."""
if selection is None:
selection = self.selected_paths()
model = main.model()
staged = set(self.selected_staged_paths(selection=selection))
modified = set(self.selected_modified_paths(selection=selection))
untracked = utils.add_parents(model.untracked)
tracked = staged.union(modified)
return [p for p in selection
if p not in untracked or p in tracked]
def view_history(self):
"""Signal that we should view history for paths."""
self.emit(SIGNAL('history(PyQt_PyObject)'), self.selected_paths())
def untrack_selected(self):
    """Untrack the selected paths."""
cmds.do(cmds.Untrack, self.selected_tracked_paths())
def difftool_predecessor(self):
"""Diff paths against previous versions."""
paths = self.selected_tracked_paths()
self.emit(SIGNAL('difftool_predecessor(PyQt_PyObject)'), paths)
def current_path(self):
"""Return the path for the current item."""
index = self.currentIndex()
if not index.isValid():
return None
return self.item_from_index(index).path
class BrowserController(QtCore.QObject):
def __init__(self, view=None):
QtCore.QObject.__init__(self, view)
self.model = main.model()
self.view = view
self.runtask = qtutils.RunTask(parent=self)
self.connect(view, SIGNAL('history(PyQt_PyObject)'),
self.view_history)
self.connect(view, SIGNAL('expanded(QModelIndex)'),
self.query_model)
self.connect(view, SIGNAL('difftool_predecessor(PyQt_PyObject)'),
self.difftool_predecessor)
def view_history(self, entries):
"""Launch the configured history browser path-limited to entries."""
entries = list(map(ustr, entries))
cmds.do(cmds.VisualizePaths, entries)
def query_model(self, model_index):
"""Update information about a directory as it is expanded."""
item = self.view.item_from_index(model_index)
if item.cached:
return
path = item.path
GitRepoEntryStore.entry(path, self.view, self.runtask).update()
entry = GitRepoEntryStore.entry
for row in range(item.rowCount()):
path = item.child(row, 0).path
entry(path, self.view, self.runtask).update()
item.cached = True
def difftool_predecessor(self, paths):
"""Prompt for an older commit and launch difftool against it."""
args = ['--'] + paths
revs, summaries = gitcmds.log_helper(all=False, extra_args=args)
commits = select_commits(N_('Select Previous Version'),
revs, summaries, multiselect=False)
if not commits:
return
commit = commits[0]
difftool.launch(left=commit, paths=paths)
class BrowseModel(object):
def __init__(self, ref):
self.ref = ref
self.relpath = None
self.filename = None
class SaveBlob(BaseCommand):
def __init__(self, model):
BaseCommand.__init__(self)
self.model = model
def do(self):
model = self.model
cmd = ['git', 'show', '%s:%s' % (model.ref, model.relpath)]
with core.xopen(model.filename, 'wb') as fp:
proc = core.start_command(cmd, stdout=fp)
out, err = proc.communicate()
status = proc.returncode
msg = (N_('Saved "%(filename)s" from "%(ref)s" to "%(destination)s"') %
dict(filename=model.relpath,
ref=model.ref,
destination=model.filename))
Interaction.log_status(status, msg, '')
Interaction.information(
N_('File Saved'),
N_('File saved to "%s"') % model.filename)
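# A minimal standalone sketch (not used by cola itself) of the mechanism that
# SaveBlob wraps above: stream `git show <ref>:<path>` straight into a local
# file. The ref, path, and destination below are purely illustrative.
def _save_blob_sketch(ref='HEAD', relpath='README.md', destination='/tmp/README.md'):
    import subprocess
    with open(destination, 'wb') as fp:
        return subprocess.call(['git', 'show', '%s:%s' % (ref, relpath)], stdout=fp)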
class BrowseDialog(QtGui.QDialog):
@staticmethod
def browse(ref):
parent = qtutils.active_window()
model = BrowseModel(ref)
dlg = BrowseDialog(model, parent=parent)
dlg_model = GitTreeModel(ref, dlg)
dlg.setModel(dlg_model)
dlg.setWindowTitle(N_('Browsing %s') % model.ref)
if hasattr(parent, 'width'):
dlg.resize(parent.width()*3//4, 333)
else:
dlg.resize(420, 333)
dlg.show()
dlg.raise_()
if dlg.exec_() != dlg.Accepted:
return None
return dlg
@staticmethod
def select_file(ref):
parent = qtutils.active_window()
model = BrowseModel(ref)
dlg = BrowseDialog(model, select_file=True, parent=parent)
dlg_model = GitTreeModel(ref, dlg)
dlg.setModel(dlg_model)
dlg.setWindowTitle(N_('Select file from "%s"') % model.ref)
dlg.resize(parent.width()*3//4, 333)
dlg.show()
dlg.raise_()
if dlg.exec_() != dlg.Accepted:
return None
return model.filename
@staticmethod
def select_file_from_list(file_list, title=N_('Select File')):
parent = qtutils.active_window()
model = BrowseModel(None)
dlg = BrowseDialog(model, select_file=True, parent=parent)
dlg_model = GitFileTreeModel(dlg)
dlg_model.add_files(file_list)
dlg.setModel(dlg_model)
dlg.expandAll()
dlg.setWindowTitle(title)
dlg.resize(parent.width()*3//4, 333)
dlg.show()
dlg.raise_()
if dlg.exec_() != dlg.Accepted:
return None
return model.filename
def __init__(self, model, select_file=False, parent=None):
QtGui.QDialog.__init__(self, parent)
self.setAttribute(Qt.WA_MacMetalStyle)
if parent is not None:
self.setWindowModality(Qt.WindowModal)
# updated for use by commands
self.model = model
# widgets
self.tree = GitTreeWidget(parent=self)
self.close = qtutils.close_button()
if select_file:
text = N_('Select')
else:
text = N_('Save')
self.save = qtutils.create_button(text=text, enabled=False,
default=True)
# layouts
self.btnlayt = qtutils.hbox(defs.margin, defs.spacing,
qtutils.STRETCH, self.close, self.save)
self.layt = qtutils.vbox(defs.margin, defs.spacing,
self.tree, self.btnlayt)
self.setLayout(self.layt)
# connections
if select_file:
self.connect(self.tree, SIGNAL('path_chosen(PyQt_PyObject)'),
self.path_chosen)
else:
self.connect(self.tree, SIGNAL('path_chosen(PyQt_PyObject)'),
self.save_path)
self.connect(self.tree, SIGNAL('selectionChanged()'),
self.selection_changed, Qt.QueuedConnection)
qtutils.connect_button(self.close, self.reject)
qtutils.connect_button(self.save, self.save_blob)
def expandAll(self):
self.tree.expandAll()
def setModel(self, model):
self.tree.setModel(model)
def path_chosen(self, path, close=True):
"""Update the model from the view"""
model = self.model
model.relpath = path
model.filename = path
if close:
self.accept()
def save_path(self, path):
"""Choose an output filename based on the selected path"""
self.path_chosen(path, close=False)
model = self.model
filename = qtutils.save_as(model.filename)
if not filename:
return
model.filename = filename
cmds.do(SaveBlob, model)
self.accept()
def save_blob(self):
"""Save the currently selected file"""
filenames = self.tree.selected_files()
if not filenames:
return
self.path_chosen(filenames[0], close=True)
def selection_changed(self):
"""Update actions based on the current selection"""
filenames = self.tree.selected_files()
self.save.setEnabled(bool(filenames))
class GitTreeWidget(standard.TreeView):
def __init__(self, parent=None):
standard.TreeView.__init__(self, parent)
self.setHeaderHidden(True)
self.connect(self, SIGNAL('doubleClicked(QModelIndex)'),
self.double_clicked)
def double_clicked(self, index):
item = self.model().itemFromIndex(index)
if item is None:
return
if item.is_dir:
return
self.emit(SIGNAL('path_chosen(PyQt_PyObject)'), item.path)
def selected_files(self):
items = self.selected_items()
return [i.path for i in items if not i.is_dir]
def selectionChanged(self, old_selection, new_selection):
QtGui.QTreeView.selectionChanged(self, old_selection, new_selection)
self.emit(SIGNAL('selectionChanged()'))
def select_first_file(self):
"""Select the first filename in the tree"""
model = self.model()
idx = self.indexAt(QtCore.QPoint(0, 0))
item = model.itemFromIndex(idx)
while idx and idx.isValid() and item and item.is_dir:
idx = self.indexBelow(idx)
item = model.itemFromIndex(idx)
if idx and idx.isValid() and item:
self.setCurrentIndex(idx)
class GitFileTreeModel(QtGui.QStandardItemModel):
"""Presents a list of file paths as a hierarchical tree."""
def __init__(self, parent):
QtGui.QStandardItemModel.__init__(self, parent)
self.dir_entries = {'': self.invisibleRootItem()}
self.dir_rows = {}
def clear(self):
QtGui.QStandardItemModel.clear(self)
self.dir_rows = {}
self.dir_entries = {'': self.invisibleRootItem()}
def add_files(self, files):
"""Add a list of files"""
add_file = self.add_file
for f in files:
add_file(f)
def add_file(self, path):
"""Add a file to the model."""
dirname = utils.dirname(path)
dir_entries = self.dir_entries
try:
parent = dir_entries[dirname]
except KeyError:
parent = dir_entries[dirname] = self.create_dir_entry(dirname)
row_items = self.create_row(path, False)
parent.appendRow(row_items)
def add_directory(self, parent, path):
"""Add a directory entry to the model."""
# Create model items
row_items = self.create_row(path, True)
try:
parent_path = parent.path
except AttributeError: # root QStandardItem
parent_path = ''
# Insert directories before file paths
try:
row = self.dir_rows[parent_path]
except KeyError:
row = self.dir_rows[parent_path] = 0
parent.insertRow(row, row_items)
self.dir_rows[parent_path] += 1
self.dir_entries[path] = row_items[0]
return row_items[0]
def create_row(self, path, is_dir):
"""Return a list of items representing a row."""
return [GitTreeItem(path, is_dir)]
def create_dir_entry(self, dirname):
"""
Create a directory entry for the model.
This ensures that directories are always listed before files.
"""
entries = dirname.split('/')
curdir = []
parent = self.invisibleRootItem()
curdir_append = curdir.append
self_add_directory = self.add_directory
dir_entries = self.dir_entries
for entry in entries:
curdir_append(entry)
path = '/'.join(curdir)
try:
parent = dir_entries[path]
except KeyError:
grandparent = parent
parent = self_add_directory(grandparent, path)
dir_entries[path] = parent
return parent
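# A simplified, PyQt-free sketch of the "directories before files" strategy that
# add_directory()/create_dir_entry() above implement: each parent keeps a count
# of how many directory rows it already holds, new directories are inserted at
# that index, and plain files are appended after them. Names are illustrative.
def _dirs_before_files_sketch(paths):
    children = {'': []}   # parent path -> ordered child labels
    dir_rows = {}         # parent path -> number of leading directory rows
    def add_dir(parent, path):
        row = dir_rows.setdefault(parent, 0)
        children.setdefault(parent, []).insert(row, path + '/')
        dir_rows[parent] = row + 1
        children.setdefault(path, [])
    for path in paths:
        parent, _, name = path.rpartition('/')
        if parent and parent not in children:
            built = []
            for part in parent.split('/'):
                built.append(part)
                cur = '/'.join(built)
                if cur not in children:
                    add_dir('/'.join(built[:-1]), cur)
        children.setdefault(parent, []).append(name)
    return children

# _dirs_before_files_sketch(['a.txt', 'src/b.py', 'src/util/c.py']) would yield
# {'': ['src/', 'a.txt'], 'src': ['src/util/', 'b.py'], 'src/util': ['c.py']}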
class GitTreeModel(GitFileTreeModel):
def __init__(self, ref, parent):
GitFileTreeModel.__init__(self, parent)
self.ref = ref
self._initialize()
def _initialize(self):
"""Iterate over git-ls-tree and create GitTreeItems."""
status, out, err = git.ls_tree('--full-tree', '-r', '-t', '-z',
self.ref)
if status != 0:
Interaction.log_status(status, out, err)
return
if not out:
return
for line in out[:-1].split('\0'):
# .....6 ...4 ......................................40
# 040000 tree c127cde9a0c644a3a8fef449a244f47d5272dfa6 relative
# 100644 blob 139e42bf4acaa4927ec9be1ec55a252b97d3f1e2 relative/path
objtype = line[7]
relpath = line[6 + 1 + 4 + 1 + 40 + 1:]
if objtype == 't':
parent = self.dir_entries[utils.dirname(relpath)]
self.add_directory(parent, relpath)
elif objtype == 'b':
self.add_file(relpath)
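# Self-contained sketch of the fixed-width parsing used in _initialize() above,
# with an illustrative sample record: the object type is the single character
# at index 7 ('t' for tree, 'b' for blob) and the path starts after the mode,
# type, SHA-1 and separators, i.e. at offset 6 + 1 + 4 + 1 + 40 + 1 == 53.
_sample_record = ('100644 blob 139e42bf4acaa4927ec9be1ec55a252b97d3f1e2\t'
                  'relative/path')
assert _sample_record[7] == 'b'
assert _sample_record[6 + 1 + 4 + 1 + 40 + 1:] == 'relative/path'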
class GitTreeItem(QtGui.QStandardItem):
"""
Represents a cell in a tree view.
Unlike the worktree browser, whose GitRepoItems map several cells to a
single repository path, this tree has only a single column, so each
GitTreeItem corresponds to exactly one path.
"""
def __init__(self, path, is_dir):
QtGui.QStandardItem.__init__(self)
self.is_dir = is_dir
self.path = path
self.setEditable(False)
self.setDragEnabled(False)
self.setText(utils.basename(path))
if is_dir:
icon = icons.directory()
else:
icon = icons.file_text()
self.setIcon(icon)
|
gpl-2.0
|
AnishShah/tensorflow
|
tensorflow/python/autograph/utils/type_check_test.py
|
11
|
1545
|
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for type_check."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy
from tensorflow.python.autograph.utils import type_check
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import test_util
from tensorflow.python.platform import test
class TypeCheckTest(test.TestCase):
def test_checks(self):
self.assertTrue(type_check.is_tensor(constant_op.constant([1, 2, 3])))
self.assertTrue(
type_check.is_tensor(test_util.variables.Variable([1, 2, 3])))
self.assertTrue(
type_check.is_tensor(
test_util.array_ops.placeholder(test_util.dtypes.float32)))
self.assertFalse(type_check.is_tensor(3))
self.assertFalse(type_check.is_tensor(numpy.eye(3)))
if __name__ == '__main__':
test.main()
|
apache-2.0
|
jank3/django
|
tests/migrate_signals/tests.py
|
324
|
3585
|
from django.apps import apps
from django.core import management
from django.db.models import signals
from django.test import TestCase, override_settings
from django.utils import six
APP_CONFIG = apps.get_app_config('migrate_signals')
PRE_MIGRATE_ARGS = ['app_config', 'verbosity', 'interactive', 'using']
MIGRATE_DATABASE = 'default'
MIGRATE_VERBOSITY = 1
MIGRATE_INTERACTIVE = False
class PreMigrateReceiver(object):
def __init__(self):
self.call_counter = 0
self.call_args = None
def __call__(self, signal, sender, **kwargs):
self.call_counter = self.call_counter + 1
self.call_args = kwargs
class OneTimeReceiver(object):
"""
Special receiver to handle the fact that the test runner calls migrate for
several databases, and several times for some of them.
"""
def __init__(self):
self.call_counter = 0
self.call_args = None
def __call__(self, signal, sender, **kwargs):
# Although test runner calls migrate for several databases,
# testing for only one of them is quite sufficient.
if kwargs['using'] == MIGRATE_DATABASE:
self.call_counter = self.call_counter + 1
self.call_args = kwargs
# we need to test only one call of migrate
signals.pre_migrate.disconnect(pre_migrate_receiver, sender=APP_CONFIG)
# We connect the receiver here, and not in the unit test code, because we need
# to connect it before the test runner creates the database. That is, the
# sequence of actions is:
#
# 1. Test runner imports this module.
# 2. We connect the receiver.
# 3. Test runner calls migrate to create the default database.
# 4. Test runner executes our unit test code.
pre_migrate_receiver = OneTimeReceiver()
signals.pre_migrate.connect(pre_migrate_receiver, sender=APP_CONFIG)
class MigrateSignalTests(TestCase):
available_apps = ['migrate_signals']
def test_pre_migrate_call_time(self):
self.assertEqual(pre_migrate_receiver.call_counter, 1)
def test_pre_migrate_args(self):
r = PreMigrateReceiver()
signals.pre_migrate.connect(r, sender=APP_CONFIG)
management.call_command('migrate', database=MIGRATE_DATABASE,
verbosity=MIGRATE_VERBOSITY, interactive=MIGRATE_INTERACTIVE,
stdout=six.StringIO())
args = r.call_args
self.assertEqual(r.call_counter, 1)
self.assertEqual(set(args), set(PRE_MIGRATE_ARGS))
self.assertEqual(args['app_config'], APP_CONFIG)
self.assertEqual(args['verbosity'], MIGRATE_VERBOSITY)
self.assertEqual(args['interactive'], MIGRATE_INTERACTIVE)
self.assertEqual(args['using'], 'default')
@override_settings(MIGRATION_MODULES={'migrate_signals': 'migrate_signals.custom_migrations'})
def test_pre_migrate_migrations_only(self):
"""
If all apps have migrations, pre_migrate should be sent.
"""
r = PreMigrateReceiver()
signals.pre_migrate.connect(r, sender=APP_CONFIG)
stdout = six.StringIO()
management.call_command('migrate', database=MIGRATE_DATABASE,
verbosity=MIGRATE_VERBOSITY, interactive=MIGRATE_INTERACTIVE,
stdout=stdout)
args = r.call_args
self.assertEqual(r.call_counter, 1)
self.assertEqual(set(args), set(PRE_MIGRATE_ARGS))
self.assertEqual(args['app_config'], APP_CONFIG)
self.assertEqual(args['verbosity'], MIGRATE_VERBOSITY)
self.assertEqual(args['interactive'], MIGRATE_INTERACTIVE)
self.assertEqual(args['using'], 'default')
|
bsd-3-clause
|
Forage/Gramps
|
gramps/gen/svn_revision.py
|
1
|
1853
|
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2012 Doug Blank <[email protected]>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# $Id$
import sys
import subprocess
if sys.version_info[0] < 3:
cuni = unicode
else:
def to_utf8(s):
return s.decode("utf-8", errors = 'replace')
cuni = to_utf8
def get_svn_revision(path=""):
stdout = ""
try:
p = subprocess.Popen("svnversion -n \"%s\"" % path, shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout, stderr) = p.communicate()
except:
return "" # subprocess failed
# subprocess worked
if stdout: # has output
stdout = cuni(stdout) # get a proper string
if (" " in stdout) or (stdout == "exported"):
# one of svnversion's 1.7 non-version responses:
# 'Unversioned directory'
# 'Unversioned file'
# 'Uncommitted local addition, copy or move'
# svnversion's 1.6 non-version response:
# 'exported'
return ""
else:
return "-r" + stdout
else: # no output from svnversion
return ""
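if __name__ == "__main__":
    # Quick manual check, illustrative only: running this module directly prints
    # the detected Subversion revision (e.g. "-r20345" in a checkout) or an
    # empty string for exports and unversioned directories.
    print("svn revision: %s" % get_svn_revision("."))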
|
gpl-2.0
|
SebasSBM/django
|
tests/forms_tests/tests/tests.py
|
89
|
16458
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import datetime
from django.core.files.uploadedfile import SimpleUploadedFile
from django.db import models
from django.forms import (
CharField, FileField, Form, ModelChoiceField, ModelForm,
)
from django.forms.models import ModelFormMetaclass
from django.test import SimpleTestCase, TestCase
from django.utils import six
from ..models import (
BoundaryModel, ChoiceFieldModel, ChoiceModel, ChoiceOptionModel, Defaults,
FileModel, Group, OptionalMultiChoiceModel,
)
class ChoiceFieldForm(ModelForm):
class Meta:
model = ChoiceFieldModel
fields = '__all__'
class OptionalMultiChoiceModelForm(ModelForm):
class Meta:
model = OptionalMultiChoiceModel
fields = '__all__'
class ChoiceFieldExclusionForm(ModelForm):
multi_choice = CharField(max_length=50)
class Meta:
exclude = ['multi_choice']
model = ChoiceFieldModel
class EmptyCharLabelChoiceForm(ModelForm):
class Meta:
model = ChoiceModel
fields = ['name', 'choice']
class EmptyIntegerLabelChoiceForm(ModelForm):
class Meta:
model = ChoiceModel
fields = ['name', 'choice_integer']
class EmptyCharLabelNoneChoiceForm(ModelForm):
class Meta:
model = ChoiceModel
fields = ['name', 'choice_string_w_none']
class FileForm(Form):
file1 = FileField()
class TestTicket12510(TestCase):
''' It is not necessary to generate choices for ModelChoiceField (regression test for #12510). '''
def setUp(self):
self.groups = [Group.objects.create(name=name) for name in 'abc']
def test_choices_not_fetched_when_not_rendering(self):
# only one query is required to pull the model from DB
with self.assertNumQueries(1):
field = ModelChoiceField(Group.objects.order_by('-name'))
self.assertEqual('a', field.clean(self.groups[0].pk).name)
class TestTicket14567(TestCase):
"""
Check that the return values of ModelMultipleChoiceFields are QuerySets
"""
def test_empty_queryset_return(self):
"If a model's ManyToManyField has blank=True and is saved with no data, a queryset is returned."
option = ChoiceOptionModel.objects.create(name='default')
form = OptionalMultiChoiceModelForm({'multi_choice_optional': '', 'multi_choice': [option.pk]})
self.assertTrue(form.is_valid())
# Check that the empty value is a QuerySet
self.assertIsInstance(form.cleaned_data['multi_choice_optional'], models.query.QuerySet)
# While we're at it, test whether a QuerySet is returned if there *is* a value.
self.assertIsInstance(form.cleaned_data['multi_choice'], models.query.QuerySet)
class ModelFormCallableModelDefault(TestCase):
def test_no_empty_option(self):
"If a model's ForeignKey has blank=False and a default, no empty option is created (Refs #10792)."
option = ChoiceOptionModel.objects.create(name='default')
choices = list(ChoiceFieldForm().fields['choice'].choices)
self.assertEqual(len(choices), 1)
self.assertEqual(choices[0], (option.pk, six.text_type(option)))
def test_callable_initial_value(self):
"The initial value for a callable default returning a queryset is the pk (refs #13769)"
ChoiceOptionModel.objects.create(id=1, name='default')
ChoiceOptionModel.objects.create(id=2, name='option 2')
ChoiceOptionModel.objects.create(id=3, name='option 3')
self.assertHTMLEqual(
ChoiceFieldForm().as_p(),
"""<p><label for="id_choice">Choice:</label> <select name="choice" id="id_choice">
<option value="1" selected="selected">ChoiceOption 1</option>
<option value="2">ChoiceOption 2</option>
<option value="3">ChoiceOption 3</option>
</select><input type="hidden" name="initial-choice" value="1" id="initial-id_choice" /></p>
<p><label for="id_choice_int">Choice int:</label> <select name="choice_int" id="id_choice_int">
<option value="1" selected="selected">ChoiceOption 1</option>
<option value="2">ChoiceOption 2</option>
<option value="3">ChoiceOption 3</option>
</select><input type="hidden" name="initial-choice_int" value="1" id="initial-id_choice_int" /></p>
<p><label for="id_multi_choice">Multi choice:</label>
<select multiple="multiple" name="multi_choice" id="id_multi_choice">
<option value="1" selected="selected">ChoiceOption 1</option>
<option value="2">ChoiceOption 2</option>
<option value="3">ChoiceOption 3</option>
</select><input type="hidden" name="initial-multi_choice" value="1" id="initial-id_multi_choice_0" /></p>
<p><label for="id_multi_choice_int">Multi choice int:</label>
<select multiple="multiple" name="multi_choice_int" id="id_multi_choice_int">
<option value="1" selected="selected">ChoiceOption 1</option>
<option value="2">ChoiceOption 2</option>
<option value="3">ChoiceOption 3</option>
</select><input type="hidden" name="initial-multi_choice_int" value="1" id="initial-id_multi_choice_int_0" /></p>"""
)
def test_initial_instance_value(self):
"Initial instances for model fields may also be instances (refs #7287)"
ChoiceOptionModel.objects.create(id=1, name='default')
obj2 = ChoiceOptionModel.objects.create(id=2, name='option 2')
obj3 = ChoiceOptionModel.objects.create(id=3, name='option 3')
self.assertHTMLEqual(
ChoiceFieldForm(initial={
'choice': obj2,
'choice_int': obj2,
'multi_choice': [obj2, obj3],
'multi_choice_int': ChoiceOptionModel.objects.exclude(name="default"),
}).as_p(),
"""<p><label for="id_choice">Choice:</label> <select name="choice" id="id_choice">
<option value="1">ChoiceOption 1</option>
<option value="2" selected="selected">ChoiceOption 2</option>
<option value="3">ChoiceOption 3</option>
</select><input type="hidden" name="initial-choice" value="2" id="initial-id_choice" /></p>
<p><label for="id_choice_int">Choice int:</label> <select name="choice_int" id="id_choice_int">
<option value="1">ChoiceOption 1</option>
<option value="2" selected="selected">ChoiceOption 2</option>
<option value="3">ChoiceOption 3</option>
</select><input type="hidden" name="initial-choice_int" value="2" id="initial-id_choice_int" /></p>
<p><label for="id_multi_choice">Multi choice:</label>
<select multiple="multiple" name="multi_choice" id="id_multi_choice">
<option value="1">ChoiceOption 1</option>
<option value="2" selected="selected">ChoiceOption 2</option>
<option value="3" selected="selected">ChoiceOption 3</option>
</select><input type="hidden" name="initial-multi_choice" value="2" id="initial-id_multi_choice_0" />
<input type="hidden" name="initial-multi_choice" value="3" id="initial-id_multi_choice_1" /></p>
<p><label for="id_multi_choice_int">Multi choice int:</label>
<select multiple="multiple" name="multi_choice_int" id="id_multi_choice_int">
<option value="1">ChoiceOption 1</option>
<option value="2" selected="selected">ChoiceOption 2</option>
<option value="3" selected="selected">ChoiceOption 3</option>
</select><input type="hidden" name="initial-multi_choice_int" value="2" id="initial-id_multi_choice_int_0" />
<input type="hidden" name="initial-multi_choice_int" value="3" id="initial-id_multi_choice_int_1" /></p>"""
)
class FormsModelTestCase(TestCase):
def test_unicode_filename(self):
# FileModel with unicode filename and data #########################
file1 = SimpleUploadedFile('我隻氣墊船裝滿晒鱔.txt', 'मेरी मँडराने वाली नाव सर्पमीनों से भरी ह'.encode('utf-8'))
f = FileForm(data={}, files={'file1': file1}, auto_id=False)
self.assertTrue(f.is_valid())
self.assertIn('file1', f.cleaned_data)
m = FileModel.objects.create(file=f.cleaned_data['file1'])
self.assertEqual(m.file.name, 'tests/\u6211\u96bb\u6c23\u588a\u8239\u88dd\u6eff\u6652\u9c54.txt')
m.delete()
def test_boundary_conditions(self):
# Boundary conditions on a PositiveIntegerField #########################
class BoundaryForm(ModelForm):
class Meta:
model = BoundaryModel
fields = '__all__'
f = BoundaryForm({'positive_integer': 100})
self.assertTrue(f.is_valid())
f = BoundaryForm({'positive_integer': 0})
self.assertTrue(f.is_valid())
f = BoundaryForm({'positive_integer': -100})
self.assertFalse(f.is_valid())
def test_formfield_initial(self):
# Formfield initial values ########
# If the model has default values for some fields, they are used as the formfield
# initial values.
class DefaultsForm(ModelForm):
class Meta:
model = Defaults
fields = '__all__'
self.assertEqual(DefaultsForm().fields['name'].initial, 'class default value')
self.assertEqual(DefaultsForm().fields['def_date'].initial, datetime.date(1980, 1, 1))
self.assertEqual(DefaultsForm().fields['value'].initial, 42)
r1 = DefaultsForm()['callable_default'].as_widget()
r2 = DefaultsForm()['callable_default'].as_widget()
self.assertNotEqual(r1, r2)
# In a ModelForm that is passed an instance, the initial values come from the
# instance's values, not the model's defaults.
foo_instance = Defaults(name='instance value', def_date=datetime.date(1969, 4, 4), value=12)
instance_form = DefaultsForm(instance=foo_instance)
self.assertEqual(instance_form.initial['name'], 'instance value')
self.assertEqual(instance_form.initial['def_date'], datetime.date(1969, 4, 4))
self.assertEqual(instance_form.initial['value'], 12)
from django.forms import CharField
class ExcludingForm(ModelForm):
name = CharField(max_length=255)
class Meta:
model = Defaults
exclude = ['name', 'callable_default']
f = ExcludingForm({'name': 'Hello', 'value': 99, 'def_date': datetime.date(1999, 3, 2)})
self.assertTrue(f.is_valid())
self.assertEqual(f.cleaned_data['name'], 'Hello')
obj = f.save()
self.assertEqual(obj.name, 'class default value')
self.assertEqual(obj.value, 99)
self.assertEqual(obj.def_date, datetime.date(1999, 3, 2))
class RelatedModelFormTests(SimpleTestCase):
def test_invalid_loading_order(self):
"""
Test for issue 10405
"""
class A(models.Model):
ref = models.ForeignKey("B", models.CASCADE)
class Meta:
model = A
fields = '__all__'
self.assertRaises(ValueError, ModelFormMetaclass, str('Form'), (ModelForm,), {'Meta': Meta})
class B(models.Model):
pass
def test_valid_loading_order(self):
"""
Test for issue 10405
"""
class C(models.Model):
ref = models.ForeignKey("D", models.CASCADE)
class D(models.Model):
pass
class Meta:
model = C
fields = '__all__'
self.assertTrue(issubclass(ModelFormMetaclass(str('Form'), (ModelForm,), {'Meta': Meta}), ModelForm))
class ManyToManyExclusionTestCase(TestCase):
def test_m2m_field_exclusion(self):
# Issue 12337. save_instance should honor the passed-in exclude keyword.
opt1 = ChoiceOptionModel.objects.create(id=1, name='default')
opt2 = ChoiceOptionModel.objects.create(id=2, name='option 2')
opt3 = ChoiceOptionModel.objects.create(id=3, name='option 3')
initial = {
'choice': opt1,
'choice_int': opt1,
}
data = {
'choice': opt2.pk,
'choice_int': opt2.pk,
'multi_choice': 'string data!',
'multi_choice_int': [opt1.pk],
}
instance = ChoiceFieldModel.objects.create(**initial)
instance.multi_choice = instance.multi_choice_int = [opt2, opt3]
form = ChoiceFieldExclusionForm(data=data, instance=instance)
self.assertTrue(form.is_valid())
self.assertEqual(form.cleaned_data['multi_choice'], data['multi_choice'])
form.save()
self.assertEqual(form.instance.choice.pk, data['choice'])
self.assertEqual(form.instance.choice_int.pk, data['choice_int'])
self.assertEqual(list(form.instance.multi_choice.all()), [opt2, opt3])
self.assertEqual([obj.pk for obj in form.instance.multi_choice_int.all()], data['multi_choice_int'])
class EmptyLabelTestCase(TestCase):
def test_empty_field_char(self):
f = EmptyCharLabelChoiceForm()
self.assertHTMLEqual(
f.as_p(),
"""<p><label for="id_name">Name:</label> <input id="id_name" maxlength="10" name="name" type="text" /></p>
<p><label for="id_choice">Choice:</label> <select id="id_choice" name="choice">
<option value="" selected="selected">No Preference</option>
<option value="f">Foo</option>
<option value="b">Bar</option>
</select></p>"""
)
def test_empty_field_char_none(self):
f = EmptyCharLabelNoneChoiceForm()
self.assertHTMLEqual(
f.as_p(),
"""<p><label for="id_name">Name:</label> <input id="id_name" maxlength="10" name="name" type="text" /></p>
<p><label for="id_choice_string_w_none">Choice string w none:</label>
<select id="id_choice_string_w_none" name="choice_string_w_none">
<option value="" selected="selected">No Preference</option>
<option value="f">Foo</option>
<option value="b">Bar</option>
</select></p>"""
)
def test_save_empty_label_forms(self):
# Test that saving a form with a blank choice results in the expected
# value being stored in the database.
tests = [
(EmptyCharLabelNoneChoiceForm, 'choice_string_w_none', None),
(EmptyIntegerLabelChoiceForm, 'choice_integer', None),
(EmptyCharLabelChoiceForm, 'choice', ''),
]
for form, key, expected in tests:
f = form({'name': 'some-key', key: ''})
self.assertTrue(f.is_valid())
m = f.save()
self.assertEqual(expected, getattr(m, key))
self.assertEqual('No Preference',
getattr(m, 'get_{}_display'.format(key))())
def test_empty_field_integer(self):
f = EmptyIntegerLabelChoiceForm()
self.assertHTMLEqual(
f.as_p(),
"""<p><label for="id_name">Name:</label> <input id="id_name" maxlength="10" name="name" type="text" /></p>
<p><label for="id_choice_integer">Choice integer:</label>
<select id="id_choice_integer" name="choice_integer">
<option value="" selected="selected">No Preference</option>
<option value="1">Foo</option>
<option value="2">Bar</option>
</select></p>"""
)
def test_get_display_value_on_none(self):
m = ChoiceModel.objects.create(name='test', choice='', choice_integer=None)
self.assertIsNone(m.choice_integer)
self.assertEqual('No Preference', m.get_choice_integer_display())
def test_html_rendering_of_prepopulated_models(self):
none_model = ChoiceModel(name='none-test', choice_integer=None)
f = EmptyIntegerLabelChoiceForm(instance=none_model)
self.assertHTMLEqual(
f.as_p(),
"""<p><label for="id_name">Name:</label>
<input id="id_name" maxlength="10" name="name" type="text" value="none-test"/></p>
<p><label for="id_choice_integer">Choice integer:</label>
<select id="id_choice_integer" name="choice_integer">
<option value="" selected="selected">No Preference</option>
<option value="1">Foo</option>
<option value="2">Bar</option>
</select></p>"""
)
foo_model = ChoiceModel(name='foo-test', choice_integer=1)
f = EmptyIntegerLabelChoiceForm(instance=foo_model)
self.assertHTMLEqual(
f.as_p(),
"""<p><label for="id_name">Name:</label>
<input id="id_name" maxlength="10" name="name" type="text" value="foo-test"/></p>
<p><label for="id_choice_integer">Choice integer:</label>
<select id="id_choice_integer" name="choice_integer">
<option value="">No Preference</option>
<option value="1" selected="selected">Foo</option>
<option value="2">Bar</option>
</select></p>"""
)
|
bsd-3-clause
|
Rjtsahu/School-Bus-Tracking
|
BusTrack/repository/main.py
|
1
|
1036
|
from BusTrack.repository import Base
from BusTrack.repository import engine
# import all relevant db models here.
from BusTrack.repository.models.Bus import Bus
from BusTrack.repository.models.UserType import UserType
from BusTrack.repository.models.User import User
from BusTrack.repository.models.UserLogin import UserLogin
from BusTrack.repository.models.Feedback import Feedback
from BusTrack.repository.models.Kid import Kid
from BusTrack.repository.models.Journey import Journey
from BusTrack.repository.models.Location import Location
from BusTrack.repository.models.Attendance import Attendance
def create_database():
print('creating database from given mappings')
Base.metadata.create_all(engine)
Bus.__create_default_bus__()
UserType.__create_default_role__()
tables = [User.__tablename__, UserLogin.__tablename__, Feedback.__tablename__
, Kid.__tablename__, Journey.__tablename__, Location.__tablename__,
Attendance.__tablename__]
print('created mapping for tables:', tables)
|
gpl-3.0
|
le9i0nx/ansible
|
lib/ansible/modules/network/nxos/nxos_ntp_auth.py
|
19
|
9675
|
#!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = '''
---
module: nxos_ntp_auth
extends_documentation_fragment: nxos
version_added: "2.2"
short_description: Manages NTP authentication.
description:
- Manages NTP authentication.
author:
- Jason Edelman (@jedelman8)
notes:
- Tested against NXOSv 7.3.(0)D1(1) on VIRL
- If C(state=absent), the module will attempt to remove the given key configuration.
If a matching key configuration isn't found on the device, the module will fail.
- If C(state=absent) and C(authentication=on), authentication will be turned off.
- If C(state=absent) and C(authentication=off), authentication will be turned on.
options:
key_id:
description:
- Authentication key identifier (numeric).
required: true
md5string:
description:
- MD5 String.
required: true
default: null
auth_type:
description:
- Whether the given md5string is in cleartext or
has been encrypted. If in cleartext, the device
will encrypt it before storing it.
required: false
default: text
choices: ['text', 'encrypt']
trusted_key:
description:
- Whether the given key is required to be supplied by a time source
for the device to synchronize to the time source.
required: false
default: false
choices: ['true', 'false']
authentication:
description:
- Turns NTP authentication on or off.
required: false
default: null
choices: ['on', 'off']
state:
description:
- Manage the state of the resource.
required: false
default: present
choices: ['present','absent']
'''
EXAMPLES = '''
# Basic NTP authentication configuration
- nxos_ntp_auth:
key_id: 32
md5string: hello
auth_type: text
'''
RETURN = '''
commands:
description: command sent to the device
returned: always
type: list
sample: ["ntp authentication-key 32 md5 helloWorld 0", "ntp trusted-key 32"]
'''
import re
from ansible.module_utils.network.nxos.nxos import get_config, load_config, run_commands
from ansible.module_utils.network.nxos.nxos import nxos_argument_spec, check_args
from ansible.module_utils.basic import AnsibleModule
def execute_show_command(command, module):
if 'show run' not in command:
command = {
'command': command,
'output': 'json',
}
else:
command = {
'command': command,
'output': 'text',
}
return run_commands(module, [command])
def flatten_list(command_lists):
flat_command_list = []
for command in command_lists:
if isinstance(command, list):
flat_command_list.extend(command)
else:
flat_command_list.append(command)
return flat_command_list
def get_ntp_auth(module):
command = 'show ntp authentication-status'
body = execute_show_command(command, module)[0]
ntp_auth_str = body['authentication']
if 'enabled' in ntp_auth_str:
ntp_auth = True
else:
ntp_auth = False
return ntp_auth
def get_ntp_trusted_key(module):
trusted_key_list = []
command = 'show run | inc ntp.trusted-key'
trusted_key_str = execute_show_command(command, module)[0]
if trusted_key_str:
trusted_keys = trusted_key_str.splitlines()
else:
trusted_keys = []
for line in trusted_keys:
if line:
trusted_key_list.append(str(line.split()[2]))
return trusted_key_list
def get_ntp_auth_key(key_id, module):
authentication_key = {}
command = 'show run | inc ntp.authentication-key.{0}'.format(key_id)
auth_regex = (r".*ntp\sauthentication-key\s(?P<key_id>\d+)\s"
r"md5\s(?P<md5string>\S+).*")
body = execute_show_command(command, module)[0]
try:
match_authentication = re.match(auth_regex, body, re.DOTALL)
group_authentication = match_authentication.groupdict()
key_id = group_authentication["key_id"]
md5string = group_authentication['md5string']
authentication_key['key_id'] = key_id
authentication_key['md5string'] = md5string
except (AttributeError, TypeError):
authentication_key = {}
return authentication_key
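# Standalone illustration (the sample line below is made up) of how auth_regex
# in get_ntp_auth_key() extracts the key id and MD5 string from a line of the
# running configuration:
_sample_line = 'ntp authentication-key 32 md5 helloWorld 0'
_sample_match = re.match(r".*ntp\sauthentication-key\s(?P<key_id>\d+)\s"
                         r"md5\s(?P<md5string>\S+).*", _sample_line, re.DOTALL)
assert _sample_match.groupdict() == {'key_id': '32', 'md5string': 'helloWorld'}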
def get_ntp_auth_info(key_id, module):
auth_info = get_ntp_auth_key(key_id, module)
trusted_key_list = get_ntp_trusted_key(module)
auth_power = get_ntp_auth(module)
if key_id in trusted_key_list:
auth_info['trusted_key'] = 'true'
else:
auth_info['trusted_key'] = 'false'
if auth_power:
auth_info['authentication'] = 'on'
else:
auth_info['authentication'] = 'off'
return auth_info
def auth_type_to_num(auth_type):
if auth_type == 'encrypt':
return '7'
else:
return '0'
def set_ntp_auth_key(key_id, md5string, auth_type, trusted_key, authentication):
ntp_auth_cmds = []
auth_type_num = auth_type_to_num(auth_type)
ntp_auth_cmds.append(
'ntp authentication-key {0} md5 {1} {2}'.format(
key_id, md5string, auth_type_num))
if trusted_key == 'true':
ntp_auth_cmds.append(
'ntp trusted-key {0}'.format(key_id))
elif trusted_key == 'false':
ntp_auth_cmds.append(
'no ntp trusted-key {0}'.format(key_id))
if authentication == 'on':
ntp_auth_cmds.append(
'ntp authenticate')
elif authentication == 'off':
ntp_auth_cmds.append(
'no ntp authenticate')
return ntp_auth_cmds
def remove_ntp_auth_key(key_id, md5string, auth_type, trusted_key, authentication):
auth_remove_cmds = []
auth_type_num = auth_type_to_num(auth_type)
auth_remove_cmds.append(
'no ntp authentication-key {0} md5 {1} {2}'.format(
key_id, md5string, auth_type_num))
if authentication == 'on':
auth_remove_cmds.append(
'no ntp authenticate')
elif authentication == 'off':
auth_remove_cmds.append(
'ntp authenticate')
return auth_remove_cmds
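# Quick, device-free illustration of the command lists the two builders above
# produce (the key id and MD5 string are arbitrary sample values):
assert set_ntp_auth_key('32', 'helloWorld', 'text', 'true', 'on') == [
    'ntp authentication-key 32 md5 helloWorld 0',
    'ntp trusted-key 32',
    'ntp authenticate',
]
assert remove_ntp_auth_key('32', 'helloWorld', 'text', 'true', 'off') == [
    'no ntp authentication-key 32 md5 helloWorld 0',
    'ntp authenticate',
]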
def main():
argument_spec = dict(
key_id=dict(required=True, type='str'),
md5string=dict(required=True, type='str'),
auth_type=dict(choices=['text', 'encrypt'], default='text'),
trusted_key=dict(choices=['true', 'false'], default='false'),
authentication=dict(choices=['on', 'off']),
state=dict(choices=['absent', 'present'], default='present'),
)
argument_spec.update(nxos_argument_spec)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True)
warnings = list()
check_args(module, warnings)
key_id = module.params['key_id']
md5string = module.params['md5string']
auth_type = module.params['auth_type']
trusted_key = module.params['trusted_key']
authentication = module.params['authentication']
state = module.params['state']
args = dict(key_id=key_id, md5string=md5string,
auth_type=auth_type, trusted_key=trusted_key,
authentication=authentication)
changed = False
proposed = dict((k, v) for k, v in args.items() if v is not None)
existing = get_ntp_auth_info(key_id, module)
end_state = existing
delta = dict(set(proposed.items()).difference(existing.items()))
commands = []
if state == 'present':
if delta:
command = set_ntp_auth_key(
key_id, md5string, auth_type, trusted_key, delta.get('authentication'))
if command:
commands.append(command)
elif state == 'absent':
if existing:
auth_toggle = None
if authentication == existing.get('authentication'):
auth_toggle = authentication
command = remove_ntp_auth_key(
key_id, md5string, auth_type, trusted_key, auth_toggle)
if command:
commands.append(command)
cmds = flatten_list(commands)
if cmds:
if module.check_mode:
module.exit_json(changed=True, commands=cmds)
else:
load_config(module, cmds)
end_state = get_ntp_auth_info(key_id, module)
delta = dict(set(end_state.items()).difference(existing.items()))
if delta or (len(existing) != len(end_state)):
changed = True
if 'configure' in cmds:
cmds.pop(0)
results = {}
results['proposed'] = proposed
results['existing'] = existing
results['updates'] = cmds
results['changed'] = changed
results['warnings'] = warnings
results['end_state'] = end_state
module.exit_json(**results)
if __name__ == '__main__':
main()
|
gpl-3.0
|
n0trax/ansible
|
lib/ansible/modules/network/nxos/nxos_gir.py
|
15
|
12164
|
#!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = '''
---
module: nxos_gir
extends_documentation_fragment: nxos
version_added: "2.2"
short_description: Trigger a graceful removal or insertion (GIR) of the switch.
description:
- Trigger a graceful removal or insertion (GIR) of the switch.
author:
- Gabriele Gerbino (@GGabriele)
notes:
- Tested against NXOSv 7.3.(0)D1(1) on VIRL
- C(state) has effect only in combination with
C(system_mode_maintenance_timeout) or
C(system_mode_maintenance_on_reload_reset_reason).
- Using C(system_mode_maintenance) and
C(system_mode_maintenance_dont_generate_profile) would make the module
fail, but the system mode will be triggered anyway.
options:
system_mode_maintenance:
description:
- When C(system_mode_maintenance=true) it puts all enabled
protocols in maintenance mode (using the isolate command).
When C(system_mode_maintenance=false) it puts all enabled
protocols in normal mode (using the no isolate command).
required: false
default: null
choices: ['true','false']
system_mode_maintenance_dont_generate_profile:
description:
- When C(system_mode_maintenance_dont_generate_profile=true) it
prevents the dynamic searching of enabled protocols and executes
commands configured in a maintenance-mode profile.
Use this option if you want the system to use a maintenance-mode
profile that you have created.
When C(system_mode_maintenance_dont_generate_profile=false) it
prevents the dynamic searching of enabled protocols and executes
commands configured in a normal-mode profile. Use this option if
you want the system to use a normal-mode profile that
you have created.
required: false
default: null
choices: ['true','false']
system_mode_maintenance_timeout:
description:
- Keeps the switch in maintenance mode for a specified
number of minutes. Range is 5-65535.
required: false
default: null
system_mode_maintenance_shutdown:
description:
- Shuts down all protocols, vPC domains, and interfaces except
the management interface (using the shutdown command).
This option is disruptive while C(system_mode_maintenance)
(which uses the isolate command) is not.
required: false
default: null
choices: ['true','false']
system_mode_maintenance_on_reload_reset_reason:
description:
- Boots the switch into maintenance mode automatically in the
event of a specified system crash.
required: false
default: null
choices: ['hw_error','svc_failure','kern_failure','wdog_timeout',
'fatal_error','lc_failure','match_any','manual_reload']
state:
description:
- Specify desired state of the resource.
required: true
default: present
choices: ['present','absent']
'''
EXAMPLES = '''
# Trigger system maintenance mode
- nxos_gir:
system_mode_maintenance: true
host: "{{ inventory_hostname }}"
username: "{{ un }}"
password: "{{ pwd }}"
# Trigger system normal mode
- nxos_gir:
system_mode_maintenance: false
host: "{{ inventory_hostname }}"
username: "{{ un }}"
password: "{{ pwd }}"
# Configure on-reload reset-reason for maintenance mode
- nxos_gir:
system_mode_maintenance_on_reload_reset_reason: manual_reload
state: present
host: "{{ inventory_hostname }}"
username: "{{ un }}"
password: "{{ pwd }}"
# Add on-reload reset-reason for maintenance mode
- nxos_gir:
system_mode_maintenance_on_reload_reset_reason: hw_error
state: present
host: "{{ inventory_hostname }}"
username: "{{ un }}"
password: "{{ pwd }}"
# Remove on-reload reset-reason for maintenance mode
- nxos_gir:
system_mode_maintenance_on_reload_reset_reason: manual_reload
state: absent
host: "{{ inventory_hostname }}"
username: "{{ un }}"
password: "{{ pwd }}"
# Set timeout for maintenance mode
- nxos_gir:
system_mode_maintenance_timeout: 30
state: present
host: "{{ inventory_hostname }}"
username: "{{ un }}"
password: "{{ pwd }}"
# Remove timeout for maintenance mode
- nxos_gir:
system_mode_maintenance_timeout: 30
state: absent
host: "{{ inventory_hostname }}"
username: "{{ un }}"
password: "{{ pwd }}"
'''
RETURN = '''
final_system_mode:
description: describe the last system mode
returned: verbose mode
type: string
sample: normal
updates:
description: commands sent to the device
returned: verbose mode
type: list
sample: ["terminal dont-ask", "system mode maintenance timeout 10"]
changed:
description: check to see if a change was made on the device
returned: always
type: boolean
sample: true
'''
import re
from ansible.module_utils.nxos import get_config, load_config, run_commands
from ansible.module_utils.nxos import nxos_argument_spec, check_args
from ansible.module_utils.basic import AnsibleModule
def execute_show_command(command, module, command_type='cli_show_ascii'):
cmds = [command]
provider = module.params['provider']
if provider['transport'] == 'cli':
body = run_commands(module, cmds)
elif provider['transport'] == 'nxapi':
body = run_commands(module, cmds)
return body
def get_system_mode(module):
command = 'show system mode'
body = execute_show_command(command, module)[0]
if 'normal' in body.lower():
mode = 'normal'
else:
mode = 'maintenance'
return mode
def get_maintenance_timeout(module):
command = 'show maintenance timeout'
body = execute_show_command(command, module)[0]
timeout = body.split()[4]
return timeout
def get_reset_reasons(module):
command = 'show maintenance on-reload reset-reasons'
body = execute_show_command(command, module)[0]
return body
def get_commands(module, state, mode):
commands = list()
system_mode = ''
if module.params['system_mode_maintenance'] is True and mode == 'normal':
commands.append('system mode maintenance')
elif (module.params['system_mode_maintenance'] is False and
mode == 'maintenance'):
commands.append('no system mode maintenance')
elif (module.params[
'system_mode_maintenance_dont_generate_profile'] is True and
mode == 'normal'):
commands.append('system mode maintenance dont-generate-profile')
elif (module.params[
'system_mode_maintenance_dont_generate_profile'] is False and
mode == 'maintenance'):
commands.append('no system mode maintenance dont-generate-profile')
elif module.params['system_mode_maintenance_timeout']:
timeout = get_maintenance_timeout(module)
if (state == 'present' and
timeout != module.params['system_mode_maintenance_timeout']):
commands.append('system mode maintenance timeout {0}'.format(
module.params['system_mode_maintenance_timeout']))
elif (state == 'absent' and
timeout == module.params['system_mode_maintenance_timeout']):
commands.append('no system mode maintenance timeout {0}'.format(
module.params['system_mode_maintenance_timeout']))
elif module.params['system_mode_maintenance_shutdown'] is True:
commands.append('system mode maintenance shutdown')
elif module.params['system_mode_maintenance_on_reload_reset_reason']:
reset_reasons = get_reset_reasons(module)
if (state == 'present' and
module.params['system_mode_maintenance_on_reload_reset_reason'].lower() not in reset_reasons.lower()):
commands.append('system mode maintenance on-reload '
'reset-reason {0}'.format(
module.params[
'system_mode_maintenance_on_reload_reset_reason']))
elif (state == 'absent' and
module.params[
'system_mode_maintenance_on_reload_reset_reason'].lower() in
reset_reasons.lower()):
commands.append('no system mode maintenance on-reload '
'reset-reason {0}'.format(
module.params[
'system_mode_maintenance_on_reload_reset_reason']))
if commands:
commands.insert(0, 'terminal dont-ask')
return commands
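# Offline sketch of the simplest branch above: when maintenance mode is
# requested and the switch is currently in 'normal' mode, get_commands() only
# reads module.params, so a tiny stand-in object (hypothetical, no device
# access) is enough to show the generated command list:
class _FakeModule(object):
    params = {'system_mode_maintenance': True}

assert get_commands(_FakeModule(), 'present', 'normal') == [
    'terminal dont-ask', 'system mode maintenance']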
def main():
argument_spec = dict(
system_mode_maintenance=dict(required=False, type='bool'),
system_mode_maintenance_dont_generate_profile=dict(required=False,
type='bool'),
system_mode_maintenance_timeout=dict(required=False, type='str'),
system_mode_maintenance_shutdown=dict(required=False, type='bool'),
system_mode_maintenance_on_reload_reset_reason=dict(required=False,
choices=['hw_error','svc_failure','kern_failure',
'wdog_timeout','fatal_error','lc_failure',
'match_any','manual_reload']),
state=dict(choices=['absent', 'present', 'default'],
default='present', required=False)
)
argument_spec.update(nxos_argument_spec)
module = AnsibleModule(argument_spec=argument_spec,
mutually_exclusive=[[
'system_mode_maintenance',
'system_mode_maintenance_dont_generate_profile',
'system_mode_maintenance_timeout',
'system_mode_maintenance_shutdown',
'system_mode_maintenance_on_reload_reset_reason'
]],
required_one_of=[[
'system_mode_maintenance',
'system_mode_maintenance_dont_generate_profile',
'system_mode_maintenance_timeout',
'system_mode_maintenance_shutdown',
'system_mode_maintenance_on_reload_reset_reason'
]],
supports_check_mode=True)
warnings = list()
check_args(module, warnings)
state = module.params['state']
mode = get_system_mode(module)
commands = get_commands(module, state, mode)
changed = False
if commands:
if module.check_mode:
module.exit_json(changed=True, commands=commands)
else:
load_config(module, commands)
changed = True
result = {}
result['changed'] = changed
if module._verbosity > 0:
final_system_mode = get_system_mode(module)
result['final_system_mode'] = final_system_mode
result['updates'] = commands
result['warnings'] = warnings
module.exit_json(**result)
if __name__ == '__main__':
main()
|
gpl-3.0
|
ldoktor/avocado-vt
|
virttest/utils_libvirtd.py
|
6
|
10167
|
"""
Module to control libvirtd service.
"""
import re
import logging
import aexpect
from avocado.utils import path
from avocado.utils import process
from avocado.utils import wait
from . import remote
from . import utils_misc
from .staging import service
from .utils_gdb import GDB
try:
path.find_command("libvirtd")
LIBVIRTD = "libvirtd"
except path.CmdNotFoundError:
LIBVIRTD = None
class Libvirtd(object):
"""
Class to manage libvirtd service on host or guest.
"""
def __init__(self, session=None):
"""
        Initialize a service object for libvirtd.
        :param session: A session to a guest or remote host.
"""
self.session = session
if self.session:
self.remote_runner = remote.RemoteRunner(session=self.session)
runner = self.remote_runner.run
else:
runner = process.run
if LIBVIRTD is None:
logging.warning("Libvirtd service is not available in host, "
"utils_libvirtd module will not function normally")
self.libvirtd = service.Factory.create_service(LIBVIRTD, run=runner)
def _wait_for_start(self, timeout=60):
"""
        Wait up to ``timeout`` seconds for libvirtd to start. Default is 60 seconds.
"""
def _check_start():
virsh_cmd = "virsh list"
try:
if self.session:
self.session.cmd(virsh_cmd, timeout=2)
else:
process.run(virsh_cmd, timeout=2)
return True
except Exception:
return False
return utils_misc.wait_for(_check_start, timeout=timeout)
def start(self, reset_failed=True):
if reset_failed:
self.libvirtd.reset_failed()
if not self.libvirtd.start():
return False
return self._wait_for_start()
def stop(self):
return self.libvirtd.stop()
def restart(self, reset_failed=True):
if reset_failed:
self.libvirtd.reset_failed()
if not self.libvirtd.restart():
return False
return self._wait_for_start()
def is_running(self):
return self.libvirtd.status()
class LibvirtdSession(object):
"""
    Class to interact with a libvirtd session by calling the libvirtd command
    directly. The session can optionally be started with gdb debugging support.
"""
def __init__(self, gdb=False,
logging_handler=None,
logging_params=(),
logging_pattern=r'.*'):
"""
        :param gdb: Whether to start the session with gdb debugging support
        :param logging_handler: Callback function to handle logging
        :param logging_params: Extra positional arguments passed to the logging handler
        :param logging_pattern: Regex for filtering specific log lines
"""
self.gdb = None
self.tail = None
self.running = False
self.pid = None
self.bundle = {"stop-info": None}
self.libvirtd_service = Libvirtd()
self.was_running = self.libvirtd_service.is_running()
if self.was_running:
logging.debug('Stopping libvirtd service')
self.libvirtd_service.stop()
self.logging_handler = logging_handler
self.logging_params = logging_params
self.logging_pattern = logging_pattern
if gdb:
self.gdb = GDB(LIBVIRTD)
self.gdb.set_callback('stop', self._stop_callback, self.bundle)
self.gdb.set_callback('start', self._start_callback, self.bundle)
self.gdb.set_callback('termination', self._termination_callback)
def _output_handler(self, line):
"""
        Output callback for aexpect; forwards lines matching logging_pattern
        to the user-supplied logging handler.
"""
if self.logging_handler is not None:
if re.match(self.logging_pattern, line):
self.logging_handler(line, *self.logging_params)
def _termination_handler(self, status):
"""
        Termination handler passed to aexpect.
"""
self.running = False
self.exit_status = status
self.pid = None
def _termination_callback(self, gdb, status):
"""
Termination handler function triggered when libvirtd exited.
:param gdb: Instance of the gdb session
:param status: Return code of exited libvirtd session
"""
self.running = False
self.exit_status = status
self.pid = None
def _stop_callback(self, gdb, info, params):
"""
Stop handler function triggered when gdb libvirtd stopped.
:param gdb: Instance of the gdb session
        :param info: Stop information reported by gdb
        :param params: Bundle dict in which the stop info is stored
"""
self.running = False
params['stop-info'] = info
def _start_callback(self, gdb, info, params):
"""
        Start handler function triggered when gdb libvirtd started.
        :param gdb: Instance of the gdb session
        :param info: Start information reported by gdb
        :param params: Bundle dict in which the stop info is cleared
"""
self.running = True
params['stop-info'] = None
def set_callback(self, callback_type, callback_func, callback_params=None):
"""
Set a customized gdb callback function.
"""
if self.gdb:
self.gdb.set_callback(
callback_type, callback_func, callback_params)
else:
logging.error("Only gdb session supports setting callback")
def start(self, arg_str='', wait_for_working=True):
"""
Start libvirtd session.
        :param arg_str: Arguments passed to the session
        :param wait_for_working: Whether to wait for libvirtd to finish loading
"""
if self.gdb:
self.gdb.run(arg_str=arg_str)
self.pid = self.gdb.pid
else:
self.tail = aexpect.Tail(
"%s %s" % (LIBVIRTD, arg_str),
output_func=self._output_handler,
termination_func=self._termination_handler,
)
self.running = True
if wait_for_working:
self.wait_for_working()
def cont(self):
"""
Continue a stopped libvirtd session.
"""
if self.gdb:
self.gdb.cont()
else:
logging.error("Only gdb session supports continue")
def kill(self):
"""
Kill the libvirtd session.
"""
if self.gdb:
self.gdb.kill()
else:
self.tail.kill()
def restart(self, arg_str='', wait_for_working=True):
"""
Restart the libvirtd session.
        :param arg_str: Arguments passed to the session
        :param wait_for_working: Whether to wait for libvirtd to finish loading
"""
logging.debug("Restarting libvirtd session")
self.kill()
self.start(arg_str=arg_str, wait_for_working=wait_for_working)
def wait_for_working(self, timeout=60):
"""
Wait for libvirtd to work.
:param timeout: Max wait time
"""
logging.debug('Waiting for libvirtd to work')
return utils_misc.wait_for(
self.is_working,
timeout=timeout,
)
def back_trace(self):
"""
Get the backtrace from gdb session.
"""
if self.gdb:
return self.gdb.back_trace()
else:
logging.warning('Can not get back trace without gdb')
def insert_break(self, break_func):
"""
Insert a function breakpoint.
        :param break_func: Function at which the breakpoint is inserted
"""
if self.gdb:
return self.gdb.insert_break(break_func)
else:
logging.warning('Can not insert breakpoint without gdb')
def is_working(self):
"""
        Check whether libvirtd is working by running 'virsh list'.
"""
virsh_cmd = "virsh list"
try:
process.run(virsh_cmd, timeout=2)
return True
except process.CmdError:
return False
def wait_for_stop(self, timeout=60, step=0.1):
"""
Wait for libvirtd to stop.
:param timeout: Max wait time
:param step: Checking interval
"""
logging.debug('Waiting for libvirtd to stop')
if self.gdb:
return self.gdb.wait_for_stop(timeout=timeout)
else:
return wait.wait_for(
lambda: not self.running,
timeout=timeout,
step=step,
)
def wait_for_termination(self, timeout=60):
"""
Wait for libvirtd gdb session to exit.
:param timeout: Max wait time
"""
logging.debug('Waiting for libvirtd to terminate')
if self.gdb:
return self.gdb.wait_for_termination(timeout=timeout)
else:
logging.error("Only gdb session supports wait_for_termination.")
def exit(self):
"""
Exit the libvirtd session.
"""
if self.gdb:
self.gdb.exit()
else:
if self.tail:
self.tail.close()
if self.was_running:
self.libvirtd_service.start()
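# Usage sketch (editor's addition, not part of the original module). It only
# illustrates the intended call pattern; the handler and the breakpoint symbol
# below are hypothetical.
#
#     def log_line(line):
#         logging.info("libvirtd: %s", line)
#
#     session = LibvirtdSession(gdb=True,
#                               logging_handler=log_line,
#                               logging_pattern=r'.*error.*')
#     session.start()
#     session.insert_break('virNetServerRun')   # hypothetical symbol name
#     session.wait_for_stop()
#     print(session.back_trace())
#     session.exit()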
def deprecation_warning():
"""
    As the utils_libvirtd.libvirtd_xxx interfaces are deprecated,
    this function prints a deprecation warning to the user.
"""
logging.warning("This function was deprecated, Please use "
"class utils_libvirtd.Libvirtd to manage "
"libvirtd service.")
def libvirtd_start():
libvirtd_instance = Libvirtd()
deprecation_warning()
return libvirtd_instance.start()
def libvirtd_is_running():
libvirtd_instance = Libvirtd()
deprecation_warning()
return libvirtd_instance.is_running()
def libvirtd_stop():
libvirtd_instance = Libvirtd()
deprecation_warning()
return libvirtd_instance.stop()
def libvirtd_restart():
libvirtd_instance = Libvirtd()
deprecation_warning()
return libvirtd_instance.restart()
def service_libvirtd_control(action, session=None):
libvirtd_instance = Libvirtd(session)
deprecation_warning()
getattr(libvirtd_instance, action)()
|
gpl-2.0
|
patricklaw/pip
|
pip/req/req_uninstall.py
|
87
|
7107
|
from __future__ import absolute_import
import imp
import logging
import os
import sys
import tempfile
from pip.compat import uses_pycache, WINDOWS
from pip.exceptions import UninstallationError
from pip.utils import (rmtree, ask, is_local, dist_is_local, renames,
normalize_path)
from pip.utils.logging import indent_log
logger = logging.getLogger(__name__)
class UninstallPathSet(object):
"""A set of file paths to be removed in the uninstallation of a
requirement."""
def __init__(self, dist):
self.paths = set()
self._refuse = set()
self.pth = {}
self.dist = dist
self.save_dir = None
self._moved_paths = []
def _permitted(self, path):
"""
Return True if the given path is one we are permitted to
remove/modify, False otherwise.
"""
return is_local(path)
def _can_uninstall(self):
if not dist_is_local(self.dist):
logger.info(
"Not uninstalling %s at %s, outside environment %s",
self.dist.project_name,
normalize_path(self.dist.location),
sys.prefix,
)
return False
return True
def add(self, path):
path = normalize_path(path)
if not os.path.exists(path):
return
if self._permitted(path):
self.paths.add(path)
else:
self._refuse.add(path)
# __pycache__ files can show up after 'installed-files.txt' is created,
# due to imports
if os.path.splitext(path)[1] == '.py' and uses_pycache:
self.add(imp.cache_from_source(path))
def add_pth(self, pth_file, entry):
pth_file = normalize_path(pth_file)
if self._permitted(pth_file):
if pth_file not in self.pth:
self.pth[pth_file] = UninstallPthEntries(pth_file)
self.pth[pth_file].add(entry)
else:
self._refuse.add(pth_file)
def compact(self, paths):
"""Compact a path set to contain the minimal number of paths
necessary to contain all paths in the set. If /a/path/ and
/a/path/to/a/file.txt are both in the set, leave only the
shorter path."""
short_paths = set()
for path in sorted(paths, key=len):
if not any([
(path.startswith(shortpath) and
path[len(shortpath.rstrip(os.path.sep))] == os.path.sep)
for shortpath in short_paths]):
short_paths.add(path)
return short_paths
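    # Illustrative note (editor's addition, not part of pip): given the set
    # {'/a/path/', '/a/path/to/a/file.txt', '/b/file.py'}, compact() keeps only
    # '/a/path/' and '/b/file.py', because the longer path sits under the
    # shorter one and is removed along with that directory.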
def _stash(self, path):
return os.path.join(
self.save_dir, os.path.splitdrive(path)[1].lstrip(os.path.sep))
def remove(self, auto_confirm=False):
"""Remove paths in ``self.paths`` with confirmation (unless
``auto_confirm`` is True)."""
if not self._can_uninstall():
return
if not self.paths:
logger.info(
"Can't uninstall '%s'. No files were found to uninstall.",
self.dist.project_name,
)
return
logger.info(
'Uninstalling %s-%s:',
self.dist.project_name, self.dist.version
)
with indent_log():
paths = sorted(self.compact(self.paths))
if auto_confirm:
response = 'y'
else:
for path in paths:
logger.info(path)
response = ask('Proceed (y/n)? ', ('y', 'n'))
if self._refuse:
logger.info('Not removing or modifying (outside of prefix):')
for path in self.compact(self._refuse):
logger.info(path)
if response == 'y':
self.save_dir = tempfile.mkdtemp(suffix='-uninstall',
prefix='pip-')
for path in paths:
new_path = self._stash(path)
logger.debug('Removing file or directory %s', path)
self._moved_paths.append(path)
renames(path, new_path)
for pth in self.pth.values():
pth.remove()
logger.info(
'Successfully uninstalled %s-%s',
self.dist.project_name, self.dist.version
)
def rollback(self):
"""Rollback the changes previously made by remove()."""
if self.save_dir is None:
logger.error(
"Can't roll back %s; was not uninstalled",
self.dist.project_name,
)
return False
logger.info('Rolling back uninstall of %s', self.dist.project_name)
for path in self._moved_paths:
tmp_path = self._stash(path)
logger.debug('Replacing %s', path)
renames(tmp_path, path)
for pth in self.pth.values():
pth.rollback()
def commit(self):
"""Remove temporary save dir: rollback will no longer be possible."""
if self.save_dir is not None:
rmtree(self.save_dir)
self.save_dir = None
self._moved_paths = []
class UninstallPthEntries(object):
def __init__(self, pth_file):
if not os.path.isfile(pth_file):
raise UninstallationError(
"Cannot remove entries from nonexistent file %s" % pth_file
)
self.file = pth_file
self.entries = set()
self._saved_lines = None
def add(self, entry):
entry = os.path.normcase(entry)
# On Windows, os.path.normcase converts the entry to use
# backslashes. This is correct for entries that describe absolute
# paths outside of site-packages, but all the others use forward
# slashes.
if WINDOWS and not os.path.splitdrive(entry)[0]:
entry = entry.replace('\\', '/')
self.entries.add(entry)
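    # Illustrative note (editor's addition, not part of pip): os.path.normcase
    # lowercases and backslashes the entry on Windows, so 'C:/Tools/pkg' becomes
    # 'c:\tools\pkg' and is kept as-is (it has a drive), while a drive-less
    # entry such as 'some\relative\pkg' is rewritten to 'some/relative/pkg' to
    # match the forward slashes normally used in .pth files.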
def remove(self):
logger.debug('Removing pth entries from %s:', self.file)
with open(self.file, 'rb') as fh:
# windows uses '\r\n' with py3k, but uses '\n' with py2.x
lines = fh.readlines()
self._saved_lines = lines
if any(b'\r\n' in line for line in lines):
endline = '\r\n'
else:
endline = '\n'
for entry in self.entries:
try:
logger.debug('Removing entry: %s', entry)
lines.remove((entry + endline).encode("utf-8"))
except ValueError:
pass
with open(self.file, 'wb') as fh:
fh.writelines(lines)
def rollback(self):
if self._saved_lines is None:
logger.error(
'Cannot roll back changes to %s, none were made', self.file
)
return False
logger.debug('Rolling %s back to previous state', self.file)
with open(self.file, 'wb') as fh:
fh.writelines(self._saved_lines)
return True
|
mit
|
brandond/ansible
|
lib/ansible/modules/network/cnos/cnos_interface.py
|
52
|
18994
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2017 Lenovo, Inc.
# (c) 2017, Ansible by Red Hat, inc
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# Module to work on Interfaces with Lenovo Switches
# Lenovo Networking
#
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = """
---
module: cnos_interface
version_added: "2.3"
author: "Anil Kumar Muraleedharan(@amuraleedhar)"
short_description: Manage Interface on Lenovo CNOS network devices
description:
- This module provides declarative management of Interfaces
on Lenovo CNOS network devices.
notes:
- Tested against CNOS 10.8.1
options:
name:
description:
- Name of the Interface.
required: true
version_added: "2.8"
description:
description:
- Description of Interface.
version_added: "2.8"
enabled:
description:
- Interface link status.
type: bool
default: True
version_added: "2.8"
speed:
description:
- Interface link speed.
version_added: "2.8"
mtu:
description:
- Maximum size of transmit packet.
version_added: "2.8"
duplex:
description:
      - Interface duplex mode.
default: auto
choices: ['full', 'half', 'auto']
version_added: "2.8"
tx_rate:
description:
- Transmit rate in bits per second (bps).
      - This is a state check parameter only.
- Supports conditionals, see L(Conditionals in Networking Modules,
../network/user_guide/network_working_with_command_output.html)
version_added: "2.8"
rx_rate:
description:
      - Receive rate in bits per second (bps).
      - This is a state check parameter only.
- Supports conditionals, see L(Conditionals in Networking Modules,
../network/user_guide/network_working_with_command_output.html)
version_added: "2.8"
neighbors:
description:
- Check operational state of given interface C(name) for LLDP neighbor.
- The following suboptions are available.
version_added: "2.8"
suboptions:
host:
description:
- "LLDP neighbor host for given interface C(name)."
port:
description:
- "LLDP neighbor port to which interface C(name) is connected."
aggregate:
description: List of Interfaces definitions.
version_added: "2.8"
delay:
description:
      - Time in seconds to wait before checking for the operational state on
        remote device. This wait is applicable for operational state arguments,
        which are I(state) with values C(up)/C(down), I(tx_rate) and I(rx_rate).
default: 20
version_added: "2.8"
state:
description:
- State of the Interface configuration, C(up) means present and
operationally up and C(down) means present and operationally C(down)
default: present
version_added: "2.8"
choices: ['present', 'absent', 'up', 'down']
provider:
description:
- B(Deprecated)
- "Starting with Ansible 2.5 we recommend using C(connection: network_cli)."
- For more information please see the L(CNOS Platform Options guide, ../network/user_guide/platform_cnos.html).
- HORIZONTALLINE
- A dict object containing connection details.
version_added: "2.8"
suboptions:
host:
description:
- Specifies the DNS host name or address for connecting to the remote
device over the specified transport. The value of host is used as
the destination address for the transport.
required: true
port:
description:
- Specifies the port to use when building the connection to the remote device.
default: 22
username:
description:
- Configures the username to use to authenticate the connection to
the remote device. This value is used to authenticate
the SSH session. If the value is not specified in the task, the
value of environment variable C(ANSIBLE_NET_USERNAME) will be used instead.
password:
description:
- Specifies the password to use to authenticate the connection to
the remote device. This value is used to authenticate
the SSH session. If the value is not specified in the task, the
value of environment variable C(ANSIBLE_NET_PASSWORD) will be used instead.
timeout:
description:
- Specifies the timeout in seconds for communicating with the network device
for either connecting or sending commands. If the timeout is
exceeded before the operation is completed, the module will error.
default: 10
ssh_keyfile:
description:
- Specifies the SSH key to use to authenticate the connection to
the remote device. This value is the path to the
key used to authenticate the SSH session. If the value is not specified
in the task, the value of environment variable C(ANSIBLE_NET_SSH_KEYFILE)
will be used instead.
authorize:
description:
- Instructs the module to enter privileged mode on the remote device
before sending any commands. If not specified, the device will
attempt to execute all commands in non-privileged mode. If the value
is not specified in the task, the value of environment variable
C(ANSIBLE_NET_AUTHORIZE) will be used instead.
type: bool
default: 'no'
auth_pass:
description:
- Specifies the password to use if required to enter privileged mode
on the remote device. If I(authorize) is false, then this argument
does nothing. If the value is not specified in the task, the value of
environment variable C(ANSIBLE_NET_AUTH_PASS) will be used instead.
"""
EXAMPLES = """
- name: configure interface
cnos_interface:
name: Ethernet1/33
description: test-interface
speed: 100
duplex: half
mtu: 999
- name: remove interface
cnos_interface:
name: loopback3
state: absent
- name: make interface up
cnos_interface:
name: Ethernet1/33
enabled: True
- name: make interface down
cnos_interface:
name: Ethernet1/33
enabled: False
- name: Check intent arguments
cnos_interface:
name: Ethernet1/33
state: up
tx_rate: ge(0)
rx_rate: le(0)
- name: Check neighbors intent arguments
cnos_interface:
name: Ethernet1/33
neighbors:
- port: eth0
host: netdev
- name: Config + intent
cnos_interface:
name: Ethernet1/33
enabled: False
state: down
- name: Add interface using aggregate
cnos_interface:
aggregate:
- { name: Ethernet1/33, mtu: 256, description: test-interface-1 }
- { name: Ethernet1/44, mtu: 516, description: test-interface-2 }
duplex: full
speed: 100
state: present
- name: Delete interface using aggregate
cnos_interface:
aggregate:
- name: loopback3
- name: loopback6
state: absent
"""
RETURN = """
commands:
description: The list of configuration mode commands to send to the device.
returned: always, except for the platforms that use Netconf transport to
manage the device.
type: list
sample:
- interface Ethernet1/33
- description test-interface
- duplex half
- mtu 512
"""
import re
from copy import deepcopy
from time import sleep
from ansible.module_utils._text import to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.connection import exec_command
from ansible.module_utils.network.cnos.cnos import get_config, load_config
from ansible.module_utils.network.cnos.cnos import cnos_argument_spec
from ansible.module_utils.network.cnos.cnos import debugOutput, check_args
from ansible.module_utils.network.common.config import NetworkConfig
from ansible.module_utils.network.common.utils import conditional
from ansible.module_utils.network.common.utils import remove_default_spec
def validate_mtu(value, module):
if value and not 64 <= int(value) <= 9216:
module.fail_json(msg='mtu must be between 64 and 9216')
def validate_param_values(module, obj, param=None):
if param is None:
param = module.params
for key in obj:
# validate the param value (if validator func exists)
validator = globals().get('validate_%s' % key)
if callable(validator):
validator(param.get(key), module)
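# Example (editor's addition): for the key 'mtu', globals() resolves the name
# 'validate_mtu' to the function defined above, so an out-of-range value such
# as mtu=30 fails the module with 'mtu must be between 64 and 9216'.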
def parse_shutdown(configobj, name):
cfg = configobj['interface %s' % name]
cfg = '\n'.join(cfg.children)
match = re.search(r'^shutdown', cfg, re.M)
if match:
return True
else:
return False
def parse_config_argument(configobj, name, arg=None):
cfg = configobj['interface %s' % name]
cfg = '\n'.join(cfg.children)
match = re.search(r'%s (.+)$' % arg, cfg, re.M)
if match:
return match.group(1)
def search_obj_in_list(name, lst):
for o in lst:
if o['name'] == name:
return o
return None
def add_command_to_interface(interface, cmd, commands):
if interface not in commands:
commands.append(interface)
commands.append(cmd)
def map_config_to_obj(module):
config = get_config(module)
configobj = NetworkConfig(indent=1, contents=config)
match = re.findall(r'^interface (\S+)', config, re.M)
if not match:
return list()
instances = list()
for item in set(match):
obj = {
'name': item,
'description': parse_config_argument(configobj, item, 'description'),
'speed': parse_config_argument(configobj, item, 'speed'),
'duplex': parse_config_argument(configobj, item, 'duplex'),
'mtu': parse_config_argument(configobj, item, 'mtu'),
'disable': True if parse_shutdown(configobj, item) else False,
'state': 'present'
}
instances.append(obj)
return instances
def map_params_to_obj(module):
obj = []
aggregate = module.params.get('aggregate')
if aggregate:
for item in aggregate:
for key in item:
if item.get(key) is None:
item[key] = module.params[key]
validate_param_values(module, item, item)
d = item.copy()
if d['enabled']:
d['disable'] = False
else:
d['disable'] = True
obj.append(d)
else:
params = {
'name': module.params['name'],
'description': module.params['description'],
'speed': module.params['speed'],
'mtu': module.params['mtu'],
'duplex': module.params['duplex'],
'state': module.params['state'],
'delay': module.params['delay'],
'tx_rate': module.params['tx_rate'],
'rx_rate': module.params['rx_rate'],
'neighbors': module.params['neighbors']
}
validate_param_values(module, params)
if module.params['enabled']:
params.update({'disable': False})
else:
params.update({'disable': True})
obj.append(params)
return obj
def map_obj_to_commands(updates):
commands = list()
want, have = updates
args = ('speed', 'description', 'duplex', 'mtu')
for w in want:
name = w['name']
disable = w['disable']
state = w['state']
obj_in_have = search_obj_in_list(name, have)
interface = 'interface ' + name
if state == 'absent' and obj_in_have:
commands.append('no ' + interface)
elif state in ('present', 'up', 'down'):
if obj_in_have:
for item in args:
candidate = w.get(item)
running = obj_in_have.get(item)
if candidate != running:
if candidate:
cmd = item + ' ' + str(candidate)
add_command_to_interface(interface, cmd, commands)
if disable and not obj_in_have.get('disable', False):
add_command_to_interface(interface, 'shutdown', commands)
elif not disable and obj_in_have.get('disable', False):
add_command_to_interface(interface, 'no shutdown', commands)
else:
commands.append(interface)
for item in args:
value = w.get(item)
if value:
commands.append(item + ' ' + str(value))
if disable:
commands.append('no shutdown')
return commands
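# Example sketch (editor's addition, not part of the module): if the desired
# state asks for Ethernet1/33 with mtu 512 and the running configuration has a
# different (or no) mtu on that interface, map_obj_to_commands() returns
# roughly:
#     ['interface Ethernet1/33', 'mtu 512']
# Only the attributes among speed/description/duplex/mtu that differ, plus any
# shutdown/no shutdown toggle, end up in the command list.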
def check_declarative_intent_params(module, want, result):
failed_conditions = []
have_neighbors_lldp = None
for w in want:
want_state = w.get('state')
want_tx_rate = w.get('tx_rate')
want_rx_rate = w.get('rx_rate')
want_neighbors = w.get('neighbors')
if want_state not in ('up', 'down') and not want_tx_rate and not want_rx_rate and not want_neighbors:
continue
if result['changed']:
sleep(w['delay'])
command = 'show interface %s brief' % w['name']
rc, out, err = exec_command(module, command)
if rc != 0:
module.fail_json(msg=to_text(err, errors='surrogate_then_replace'), command=command, rc=rc)
if want_state in ('up', 'down'):
state_data = out.strip().lower().split(w['name'])
have_state = None
have_state = state_data[1].split()[3]
if have_state is None or not conditional(want_state, have_state.strip()):
failed_conditions.append('state ' + 'eq(%s)' % want_state)
command = 'show interface %s' % w['name']
rc, out, err = exec_command(module, command)
have_tx_rate = None
have_rx_rate = None
rates = out.splitlines()
for s in rates:
s = s.strip()
if 'output rate' in s and 'input rate' in s:
sub = s.split()
if want_tx_rate:
have_tx_rate = sub[8]
if have_tx_rate is None or not conditional(want_tx_rate, have_tx_rate.strip(), cast=int):
failed_conditions.append('tx_rate ' + want_tx_rate)
if want_rx_rate:
have_rx_rate = sub[2]
if have_rx_rate is None or not conditional(want_rx_rate, have_rx_rate.strip(), cast=int):
failed_conditions.append('rx_rate ' + want_rx_rate)
if want_neighbors:
have_host = []
have_port = []
# Process LLDP neighbors
if have_neighbors_lldp is None:
rc, have_neighbors_lldp, err = exec_command(module, 'show lldp neighbors detail')
if rc != 0:
module.fail_json(msg=to_text(err,
errors='surrogate_then_replace'),
command=command, rc=rc)
if have_neighbors_lldp:
lines = have_neighbors_lldp.strip().split('Local Port ID: ')
for line in lines:
field = line.split('\n')
if field[0].strip() == w['name']:
for item in field:
if item.startswith('System Name:'):
have_host.append(item.split(':')[1].strip())
if item.startswith('Port Description:'):
have_port.append(item.split(':')[1].strip())
for item in want_neighbors:
host = item.get('host')
port = item.get('port')
if host and host not in have_host:
failed_conditions.append('host ' + host)
if port and port not in have_port:
failed_conditions.append('port ' + port)
return failed_conditions
def main():
""" main entry point for module execution
"""
neighbors_spec = dict(
host=dict(),
port=dict()
)
element_spec = dict(
name=dict(),
description=dict(),
speed=dict(),
mtu=dict(),
duplex=dict(default='auto', choices=['full', 'half', 'auto']),
enabled=dict(default=True, type='bool'),
tx_rate=dict(),
rx_rate=dict(),
neighbors=dict(type='list', elements='dict', options=neighbors_spec),
delay=dict(default=20, type='int'),
state=dict(default='present',
choices=['present', 'absent', 'up', 'down'])
)
aggregate_spec = deepcopy(element_spec)
aggregate_spec['name'] = dict(required=True)
# remove default in aggregate spec, to handle common arguments
remove_default_spec(aggregate_spec)
argument_spec = dict(
aggregate=dict(type='list', elements='dict', options=aggregate_spec),
)
argument_spec.update(element_spec)
argument_spec.update(cnos_argument_spec)
required_one_of = [['name', 'aggregate']]
mutually_exclusive = [['name', 'aggregate']]
module = AnsibleModule(argument_spec=argument_spec,
required_one_of=required_one_of,
mutually_exclusive=mutually_exclusive,
supports_check_mode=True)
warnings = list()
check_args(module, warnings)
result = {'changed': False}
if warnings:
result['warnings'] = warnings
want = map_params_to_obj(module)
have = map_config_to_obj(module)
commands = map_obj_to_commands((want, have))
result['commands'] = commands
if commands:
if not module.check_mode:
load_config(module, commands)
result['changed'] = True
failed_conditions = check_declarative_intent_params(module, want, result)
if failed_conditions:
msg = 'One or more conditional statements have not been satisfied'
module.fail_json(msg=msg, failed_conditions=failed_conditions)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
gpl-3.0
|
engr-hasanuzzaman/engr-hasanuzzaman.github.io
|
node_modules/npm/node_modules/node-gyp/gyp/PRESUBMIT.py
|
1369
|
3662
|
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Top-level presubmit script for GYP.
See http://dev.chromium.org/developers/how-tos/depottools/presubmit-scripts
for more details about the presubmit API built into gcl.
"""
PYLINT_BLACKLIST = [
# TODO: fix me.
# From SCons, not done in google style.
'test/lib/TestCmd.py',
'test/lib/TestCommon.py',
'test/lib/TestGyp.py',
]
PYLINT_DISABLED_WARNINGS = [
# TODO: fix me.
# Many tests include modules they don't use.
'W0611',
# Possible unbalanced tuple unpacking with sequence.
'W0632',
# Attempting to unpack a non-sequence.
'W0633',
# Include order doesn't properly include local files?
'F0401',
# Some use of built-in names.
'W0622',
# Some unused variables.
'W0612',
# Operator not preceded/followed by space.
'C0323',
'C0322',
# Unnecessary semicolon.
'W0301',
# Unused argument.
'W0613',
# String has no effect (docstring in wrong place).
'W0105',
# map/filter on lambda could be replaced by comprehension.
'W0110',
# Use of eval.
'W0123',
# Comma not followed by space.
'C0324',
# Access to a protected member.
'W0212',
# Bad indent.
'W0311',
# Line too long.
'C0301',
# Undefined variable.
'E0602',
  # No exception type specified.
'W0702',
# No member of that name.
'E1101',
# Dangerous default {}.
'W0102',
# Cyclic import.
'R0401',
# Others, too many to sort.
'W0201', 'W0232', 'E1103', 'W0621', 'W0108', 'W0223', 'W0231',
'R0201', 'E0101', 'C0321',
# ************* Module copy
# W0104:427,12:_test.odict.__setitem__: Statement seems to have no effect
'W0104',
]
def CheckChangeOnUpload(input_api, output_api):
report = []
report.extend(input_api.canned_checks.PanProjectChecks(
input_api, output_api))
return report
def CheckChangeOnCommit(input_api, output_api):
report = []
# Accept any year number from 2009 to the current year.
current_year = int(input_api.time.strftime('%Y'))
allowed_years = (str(s) for s in reversed(xrange(2009, current_year + 1)))
years_re = '(' + '|'.join(allowed_years) + ')'
# The (c) is deprecated, but tolerate it until it's removed from all files.
license = (
r'.*? Copyright (\(c\) )?%(year)s Google Inc\. All rights reserved\.\n'
r'.*? Use of this source code is governed by a BSD-style license that '
r'can be\n'
r'.*? found in the LICENSE file\.\n'
) % {
'year': years_re,
}
report.extend(input_api.canned_checks.PanProjectChecks(
input_api, output_api, license_header=license))
report.extend(input_api.canned_checks.CheckTreeIsOpen(
input_api, output_api,
'http://gyp-status.appspot.com/status',
'http://gyp-status.appspot.com/current'))
import os
import sys
old_sys_path = sys.path
try:
sys.path = ['pylib', 'test/lib'] + sys.path
blacklist = PYLINT_BLACKLIST
if sys.platform == 'win32':
blacklist = [os.path.normpath(x).replace('\\', '\\\\')
for x in PYLINT_BLACKLIST]
report.extend(input_api.canned_checks.RunPylint(
input_api,
output_api,
black_list=blacklist,
disabled_warnings=PYLINT_DISABLED_WARNINGS))
finally:
sys.path = old_sys_path
return report
TRYBOTS = [
'linux_try',
'mac_try',
'win_try',
]
def GetPreferredTryMasters(_, change):
return {
'client.gyp': { t: set(['defaulttests']) for t in TRYBOTS },
}
|
apache-2.0
|
intlabs/cannyos-backend-dashboard
|
horizon-master/openstack_dashboard/dashboards/admin/volumes/volume_types/qos_specs/tables.py
|
14
|
2408
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from django.core.urlresolvers import reverse
from django.utils.translation import ugettext_lazy as _
from horizon import tables
from openstack_dashboard import api
class SpecCreateKeyValuePair(tables.LinkAction):
# this is to create a spec key-value pair for an existing QOS Spec
name = "create"
verbose_name = _("Create")
url = "horizon:admin:volumes:volume_types:qos_specs:create"
classes = ("ajax-modal",)
icon = "plus"
def get_link_url(self, qos_spec=None):
qos_spec_id = self.table.kwargs['qos_spec_id']
return reverse(self.url, args=[qos_spec_id])
class SpecDeleteKeyValuePair(tables.DeleteAction):
data_type_singular = _("Spec")
data_type_plural = _("Specs")
def delete(self, request, obj_ids):
qos_spec_id = self.table.kwargs['qos_spec_id']
# use "unset" api to remove this key-value pair from QOS Spec
api.cinder.qos_spec_unset_keys(request,
qos_spec_id,
[obj_ids])
class SpecEditKeyValuePair(tables.LinkAction):
name = "edit"
verbose_name = _("Edit")
url = "horizon:admin:volumes:volume_types:qos_specs:edit"
classes = ("ajax-modal",)
icon = "pencil"
def get_link_url(self, qos_spec):
return reverse(self.url, args=[qos_spec.id, qos_spec.key])
class SpecsTable(tables.DataTable):
key = tables.Column('key', verbose_name=_('Key'))
value = tables.Column('value', verbose_name=_('Value'))
class Meta:
name = "specs"
verbose_name = _("Key-Value Pairs")
table_actions = (SpecCreateKeyValuePair, SpecDeleteKeyValuePair)
row_actions = (SpecEditKeyValuePair, SpecDeleteKeyValuePair)
def get_object_id(self, datum):
return datum.key
def get_object_display(self, datum):
return datum.key
|
mit
|
MCMic/Sick-Beard
|
lib/requests/packages/chardet/euctwprober.py
|
2994
|
1676
|
######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
from .mbcharsetprober import MultiByteCharSetProber
from .codingstatemachine import CodingStateMachine
from .chardistribution import EUCTWDistributionAnalysis
from .mbcssm import EUCTWSMModel
class EUCTWProber(MultiByteCharSetProber):
def __init__(self):
MultiByteCharSetProber.__init__(self)
self._mCodingSM = CodingStateMachine(EUCTWSMModel)
self._mDistributionAnalyzer = EUCTWDistributionAnalysis()
self.reset()
def get_charset_name(self):
return "EUC-TW"
|
gpl-3.0
|
lokirius/python-for-android
|
python3-alpha/python3-src/Lib/test/test_runpy.py
|
48
|
17205
|
# Test the runpy module
import unittest
import os
import os.path
import sys
import re
import tempfile
import py_compile
from test.support import forget, make_legacy_pyc, run_unittest, unload, verbose
from test.script_helper import (
make_pkg, make_script, make_zip_pkg, make_zip_script, temp_dir)
from runpy import _run_code, _run_module_code, run_module, run_path
# Note: This module can't safely test _run_module_as_main as it
# runs its tests in the current process, which would mess with the
# real __main__ module (usually test.regrtest)
# See test_cmd_line_script for a test that executes that code path
# Set up the test code and expected results
class RunModuleCodeTest(unittest.TestCase):
"""Unit tests for runpy._run_code and runpy._run_module_code"""
expected_result = ["Top level assignment", "Lower level reference"]
test_source = (
"# Check basic code execution\n"
"result = ['Top level assignment']\n"
"def f():\n"
" result.append('Lower level reference')\n"
"f()\n"
"# Check the sys module\n"
"import sys\n"
"run_argv0 = sys.argv[0]\n"
"run_name_in_sys_modules = __name__ in sys.modules\n"
"if run_name_in_sys_modules:\n"
" module_in_sys_modules = globals() is sys.modules[__name__].__dict__\n"
"# Check nested operation\n"
"import runpy\n"
"nested = runpy._run_module_code('x=1\\n', mod_name='<run>')\n"
)
def test_run_code(self):
saved_argv0 = sys.argv[0]
d = _run_code(self.test_source, {})
self.assertEqual(d["result"], self.expected_result)
self.assertIs(d["__name__"], None)
self.assertIs(d["__file__"], None)
self.assertIs(d["__cached__"], None)
self.assertIs(d["__loader__"], None)
self.assertIs(d["__package__"], None)
self.assertIs(d["run_argv0"], saved_argv0)
self.assertNotIn("run_name", d)
self.assertIs(sys.argv[0], saved_argv0)
def test_run_module_code(self):
initial = object()
name = "<Nonsense>"
file = "Some other nonsense"
loader = "Now you're just being silly"
package = '' # Treat as a top level module
d1 = dict(initial=initial)
saved_argv0 = sys.argv[0]
d2 = _run_module_code(self.test_source,
d1,
name,
file,
loader,
package)
self.assertNotIn("result", d1)
self.assertIs(d2["initial"], initial)
self.assertEqual(d2["result"], self.expected_result)
self.assertEqual(d2["nested"]["x"], 1)
self.assertIs(d2["__name__"], name)
self.assertTrue(d2["run_name_in_sys_modules"])
self.assertTrue(d2["module_in_sys_modules"])
self.assertIs(d2["__file__"], file)
self.assertIs(d2["__cached__"], None)
self.assertIs(d2["run_argv0"], file)
self.assertIs(d2["__loader__"], loader)
self.assertIs(d2["__package__"], package)
self.assertIs(sys.argv[0], saved_argv0)
self.assertNotIn(name, sys.modules)
class RunModuleTest(unittest.TestCase):
"""Unit tests for runpy.run_module"""
def expect_import_error(self, mod_name):
try:
run_module(mod_name)
except ImportError:
pass
else:
self.fail("Expected import error for " + mod_name)
def test_invalid_names(self):
# Builtin module
self.expect_import_error("sys")
# Non-existent modules
self.expect_import_error("sys.imp.eric")
self.expect_import_error("os.path.half")
self.expect_import_error("a.bee")
self.expect_import_error(".howard")
self.expect_import_error("..eaten")
# Package without __main__.py
self.expect_import_error("multiprocessing")
def test_library_module(self):
run_module("runpy")
def _add_pkg_dir(self, pkg_dir):
os.mkdir(pkg_dir)
pkg_fname = os.path.join(pkg_dir, "__init__.py")
pkg_file = open(pkg_fname, "w")
pkg_file.close()
return pkg_fname
def _make_pkg(self, source, depth, mod_base="runpy_test"):
pkg_name = "__runpy_pkg__"
test_fname = mod_base+os.extsep+"py"
pkg_dir = sub_dir = tempfile.mkdtemp()
if verbose: print(" Package tree in:", sub_dir)
sys.path.insert(0, pkg_dir)
if verbose: print(" Updated sys.path:", sys.path[0])
for i in range(depth):
sub_dir = os.path.join(sub_dir, pkg_name)
pkg_fname = self._add_pkg_dir(sub_dir)
if verbose: print(" Next level in:", sub_dir)
if verbose: print(" Created:", pkg_fname)
mod_fname = os.path.join(sub_dir, test_fname)
mod_file = open(mod_fname, "w")
mod_file.write(source)
mod_file.close()
if verbose: print(" Created:", mod_fname)
mod_name = (pkg_name+".")*depth + mod_base
return pkg_dir, mod_fname, mod_name
def _del_pkg(self, top, depth, mod_name):
for entry in list(sys.modules):
if entry.startswith("__runpy_pkg__"):
del sys.modules[entry]
if verbose: print(" Removed sys.modules entries")
del sys.path[0]
if verbose: print(" Removed sys.path entry")
for root, dirs, files in os.walk(top, topdown=False):
for name in files:
try:
os.remove(os.path.join(root, name))
except OSError as ex:
if verbose: print(ex) # Persist with cleaning up
for name in dirs:
fullname = os.path.join(root, name)
try:
os.rmdir(fullname)
except OSError as ex:
if verbose: print(ex) # Persist with cleaning up
try:
os.rmdir(top)
if verbose: print(" Removed package tree")
except OSError as ex:
if verbose: print(ex) # Persist with cleaning up
def _check_module(self, depth):
pkg_dir, mod_fname, mod_name = (
self._make_pkg("x=1\n", depth))
forget(mod_name)
try:
if verbose: print("Running from source:", mod_name)
d1 = run_module(mod_name) # Read from source
self.assertIn("x", d1)
self.assertEqual(d1["x"], 1)
del d1 # Ensure __loader__ entry doesn't keep file open
__import__(mod_name)
os.remove(mod_fname)
make_legacy_pyc(mod_fname)
unload(mod_name) # In case loader caches paths
if verbose: print("Running from compiled:", mod_name)
d2 = run_module(mod_name) # Read from bytecode
self.assertIn("x", d2)
self.assertEqual(d2["x"], 1)
del d2 # Ensure __loader__ entry doesn't keep file open
finally:
self._del_pkg(pkg_dir, depth, mod_name)
if verbose: print("Module executed successfully")
def _check_package(self, depth):
pkg_dir, mod_fname, mod_name = (
self._make_pkg("x=1\n", depth, "__main__"))
pkg_name, _, _ = mod_name.rpartition(".")
forget(mod_name)
try:
if verbose: print("Running from source:", pkg_name)
d1 = run_module(pkg_name) # Read from source
self.assertIn("x", d1)
self.assertTrue(d1["x"] == 1)
del d1 # Ensure __loader__ entry doesn't keep file open
__import__(mod_name)
os.remove(mod_fname)
make_legacy_pyc(mod_fname)
unload(mod_name) # In case loader caches paths
if verbose: print("Running from compiled:", pkg_name)
d2 = run_module(pkg_name) # Read from bytecode
self.assertIn("x", d2)
self.assertTrue(d2["x"] == 1)
del d2 # Ensure __loader__ entry doesn't keep file open
finally:
self._del_pkg(pkg_dir, depth, pkg_name)
if verbose: print("Package executed successfully")
def _add_relative_modules(self, base_dir, source, depth):
if depth <= 1:
raise ValueError("Relative module test needs depth > 1")
pkg_name = "__runpy_pkg__"
module_dir = base_dir
for i in range(depth):
parent_dir = module_dir
module_dir = os.path.join(module_dir, pkg_name)
# Add sibling module
sibling_fname = os.path.join(module_dir, "sibling.py")
sibling_file = open(sibling_fname, "w")
sibling_file.close()
if verbose: print(" Added sibling module:", sibling_fname)
# Add nephew module
uncle_dir = os.path.join(parent_dir, "uncle")
self._add_pkg_dir(uncle_dir)
if verbose: print(" Added uncle package:", uncle_dir)
cousin_dir = os.path.join(uncle_dir, "cousin")
self._add_pkg_dir(cousin_dir)
if verbose: print(" Added cousin package:", cousin_dir)
nephew_fname = os.path.join(cousin_dir, "nephew.py")
nephew_file = open(nephew_fname, "w")
nephew_file.close()
if verbose: print(" Added nephew module:", nephew_fname)
def _check_relative_imports(self, depth, run_name=None):
contents = r"""\
from __future__ import absolute_import
from . import sibling
from ..uncle.cousin import nephew
"""
pkg_dir, mod_fname, mod_name = (
self._make_pkg(contents, depth))
try:
self._add_relative_modules(pkg_dir, contents, depth)
pkg_name = mod_name.rpartition('.')[0]
if verbose: print("Running from source:", mod_name)
d1 = run_module(mod_name, run_name=run_name) # Read from source
self.assertIn("__package__", d1)
self.assertTrue(d1["__package__"] == pkg_name)
self.assertIn("sibling", d1)
self.assertIn("nephew", d1)
del d1 # Ensure __loader__ entry doesn't keep file open
__import__(mod_name)
os.remove(mod_fname)
make_legacy_pyc(mod_fname)
unload(mod_name) # In case the loader caches paths
if verbose: print("Running from compiled:", mod_name)
d2 = run_module(mod_name, run_name=run_name) # Read from bytecode
self.assertIn("__package__", d2)
self.assertTrue(d2["__package__"] == pkg_name)
self.assertIn("sibling", d2)
self.assertIn("nephew", d2)
del d2 # Ensure __loader__ entry doesn't keep file open
finally:
self._del_pkg(pkg_dir, depth, mod_name)
if verbose: print("Module executed successfully")
def test_run_module(self):
for depth in range(4):
if verbose: print("Testing package depth:", depth)
self._check_module(depth)
def test_run_package(self):
for depth in range(1, 4):
if verbose: print("Testing package depth:", depth)
self._check_package(depth)
def test_explicit_relative_import(self):
for depth in range(2, 5):
if verbose: print("Testing relative imports at depth:", depth)
self._check_relative_imports(depth)
def test_main_relative_import(self):
for depth in range(2, 5):
if verbose: print("Testing main relative imports at depth:", depth)
self._check_relative_imports(depth, "__main__")
class RunPathTest(unittest.TestCase):
"""Unit tests for runpy.run_path"""
# Based on corresponding tests in test_cmd_line_script
test_source = """\
# Script may be run with optimisation enabled, so don't rely on assert
# statements being executed
def assertEqual(lhs, rhs):
if lhs != rhs:
raise AssertionError('%r != %r' % (lhs, rhs))
def assertIs(lhs, rhs):
if lhs is not rhs:
raise AssertionError('%r is not %r' % (lhs, rhs))
# Check basic code execution
result = ['Top level assignment']
def f():
result.append('Lower level reference')
f()
assertEqual(result, ['Top level assignment', 'Lower level reference'])
# Check the sys module
import sys
assertIs(globals(), sys.modules[__name__].__dict__)
argv0 = sys.argv[0]
"""
def _make_test_script(self, script_dir, script_basename, source=None):
if source is None:
source = self.test_source
return make_script(script_dir, script_basename, source)
def _check_script(self, script_name, expected_name, expected_file,
expected_argv0, expected_package):
result = run_path(script_name)
self.assertEqual(result["__name__"], expected_name)
self.assertEqual(result["__file__"], expected_file)
self.assertEqual(result["__cached__"], None)
self.assertIn("argv0", result)
self.assertEqual(result["argv0"], expected_argv0)
self.assertEqual(result["__package__"], expected_package)
def _check_import_error(self, script_name, msg):
msg = re.escape(msg)
self.assertRaisesRegex(ImportError, msg, run_path, script_name)
def test_basic_script(self):
with temp_dir() as script_dir:
mod_name = 'script'
script_name = self._make_test_script(script_dir, mod_name)
self._check_script(script_name, "<run_path>", script_name,
script_name, None)
def test_script_compiled(self):
with temp_dir() as script_dir:
mod_name = 'script'
script_name = self._make_test_script(script_dir, mod_name)
compiled_name = py_compile.compile(script_name, doraise=True)
os.remove(script_name)
self._check_script(compiled_name, "<run_path>", compiled_name,
compiled_name, None)
def test_directory(self):
with temp_dir() as script_dir:
mod_name = '__main__'
script_name = self._make_test_script(script_dir, mod_name)
self._check_script(script_dir, "<run_path>", script_name,
script_dir, '')
def test_directory_compiled(self):
with temp_dir() as script_dir:
mod_name = '__main__'
script_name = self._make_test_script(script_dir, mod_name)
compiled_name = py_compile.compile(script_name, doraise=True)
os.remove(script_name)
legacy_pyc = make_legacy_pyc(script_name)
self._check_script(script_dir, "<run_path>", legacy_pyc,
script_dir, '')
def test_directory_error(self):
with temp_dir() as script_dir:
mod_name = 'not_main'
script_name = self._make_test_script(script_dir, mod_name)
msg = "can't find '__main__' module in %r" % script_dir
self._check_import_error(script_dir, msg)
def test_zipfile(self):
with temp_dir() as script_dir:
mod_name = '__main__'
script_name = self._make_test_script(script_dir, mod_name)
zip_name, fname = make_zip_script(script_dir, 'test_zip', script_name)
self._check_script(zip_name, "<run_path>", fname, zip_name, '')
def test_zipfile_compiled(self):
with temp_dir() as script_dir:
mod_name = '__main__'
script_name = self._make_test_script(script_dir, mod_name)
compiled_name = py_compile.compile(script_name, doraise=True)
zip_name, fname = make_zip_script(script_dir, 'test_zip',
compiled_name)
self._check_script(zip_name, "<run_path>", fname, zip_name, '')
def test_zipfile_error(self):
with temp_dir() as script_dir:
mod_name = 'not_main'
script_name = self._make_test_script(script_dir, mod_name)
zip_name, fname = make_zip_script(script_dir, 'test_zip', script_name)
msg = "can't find '__main__' module in %r" % zip_name
self._check_import_error(zip_name, msg)
def test_main_recursion_error(self):
with temp_dir() as script_dir, temp_dir() as dummy_dir:
mod_name = '__main__'
source = ("import runpy\n"
"runpy.run_path(%r)\n") % dummy_dir
script_name = self._make_test_script(script_dir, mod_name, source)
zip_name, fname = make_zip_script(script_dir, 'test_zip', script_name)
msg = "recursion depth exceeded"
self.assertRaisesRegex(RuntimeError, msg, run_path, zip_name)
def test_encoding(self):
with temp_dir() as script_dir:
filename = os.path.join(script_dir, 'script.py')
with open(filename, 'w', encoding='latin1') as f:
f.write("""
#coding:latin1
"non-ASCII: h\xe9"
""")
result = run_path(filename)
self.assertEqual(result['__doc__'], "non-ASCII: h\xe9")
def test_main():
run_unittest(
RunModuleCodeTest,
RunModuleTest,
RunPathTest
)
if __name__ == "__main__":
test_main()
|
apache-2.0
|
rabid-inventor/ShiftOutPi
|
shiftout.py
|
1
|
2547
|
import RPi.GPIO as GPIO
from time import sleep as sleep
'''
Class to handle shifting data out to a shift register
OutPin = output pin
ClkPin = clock pin
ClearPin = clear pin
ByteLength = length of data in bytes (default 1 byte)
Speed = delay between each bit (default 0.001 sec)
'''
# Keyword args, ie: def my_function(*args, **kwargs):
# my_instanace = ShiftOut(1,2,outpin=1,blah_blah=1)
# Retrieve with
# kwargs.get('name',default_value)
class ShiftOut():
def __init__(self, OutPin, ClkPin, ClearPin, ByteLength=1, Speed=0.001):
self.OutPin = OutPin
self.ClkPin = ClkPin
self.ClearPin = ClearPin
self.ByteLength = ByteLength
self.Speed = Speed
self.setpins()
self.DEBUG = 1
def setpins(self):
#Set pin number to BCM
GPIO.setmode(GPIO.BCM)
#Set required pins as output
GPIO.setup(self.OutPin, GPIO.OUT)
GPIO.output(self.OutPin, GPIO.LOW)
GPIO.setup(self.ClkPin, GPIO.OUT)
GPIO.output(self.ClkPin, GPIO.LOW)
GPIO.setup(self.ClearPin, GPIO.OUT)
GPIO.output(self.ClearPin, GPIO.LOW)
def sendbit(self, Bit):
if(self.DEBUG == 1):
print('Shiftout' ,Bit)
#Load Bit to output pin
GPIO.output(self.OutPin, Bit)
#toggle ClockPin
GPIO.output(self.ClkPin, GPIO.HIGH)
sleep(self.Speed)
GPIO.output(self.ClkPin, GPIO.LOW)
GPIO.output(self.OutPin,GPIO.LOW)
sleep(self.Speed)
'''
    ShiftOut.shiftout(data, length)
    Shifts out the given integer LSB first.
    data   = integer to shift out
    length = number of bytes to send (defaults to self.ByteLength)
'''
def shiftout(self, data, length = 0):
bitmask = 0x00
bytemask = 0x00
#If Length is not defined then set to default
if (length == 0):
length = self.ByteLength
try:
for ByteSelector in range(length):
#select current byte
bytemask = 0xFF << (ByteSelector * 8 )
ByteToSend = bytemask & data
ByteToSend = ByteToSend >> (ByteSelector * 8)
if(self.DEBUG == 1):
print('Byte Mask = ' , bytemask , 'Current Byte = ' ,ByteToSend)
#bit selector
for BitSelector in range(8):
bytemask = 1 << BitSelector
BitToSend = bytemask & ByteToSend
                    BitToSend = BitToSend // bytemask  # integer divide keeps the bit value as 0/1
self.sendbit(BitToSend)
except KeyboardInterrupt:
GPIO.cleanup()
exit()
def reset(self):
        GPIO.output(self.ClearPin, GPIO.HIGH)
        sleep(self.Speed)
        GPIO.output(self.ClearPin, GPIO.LOW)
        sleep(self.Speed)
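# Usage sketch (editor's addition, not part of the original module). The GPIO
# pin numbers below are hypothetical; adjust them to match your wiring.
if __name__ == '__main__':
    shifter = ShiftOut(OutPin=17, ClkPin=27, ClearPin=22, ByteLength=2)
    shifter.shiftout(0xBEEF)   # shift two bytes out, LSB first
    shifter.reset()            # pulse the clear pin
    GPIO.cleanup()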
|
gpl-3.0
|
moomou/heron
|
heron/common/tests/python/utils/outgoing_tuple_helper_unittest.py
|
10
|
1912
|
# Copyright 2016 Twitter. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=missing-docstring
import unittest
import heron.common.tests.python.utils.mock_generator as mock_generator
class OutgoingTupleHelperTest(unittest.TestCase):
DEFAULT_STREAM_ID = "stream_id"
def setUp(self):
pass
def test_sample_success(self):
out_helper = mock_generator.MockOutgoingTupleHelper()
prim_data_tuple, size = mock_generator.make_data_tuple_from_list(mock_generator.prim_list)
# Check adding data tuples and try sending out
out_helper.add_data_tuple(self.DEFAULT_STREAM_ID, prim_data_tuple, size)
# check if init_new_data_tuple() was properly called
self.assertTrue(out_helper.called_init_new_data)
self.assertEqual(out_helper.current_data_tuple_set.stream.id, self.DEFAULT_STREAM_ID)
# check if it was properly added
self.assertEqual(out_helper.current_data_tuple_size_in_bytes, size)
# try sending out
out_helper.send_out_tuples()
self.assertEqual(out_helper.current_data_tuple_size_in_bytes, 0)
self.assertEqual(out_helper.total_data_emitted_in_bytes, size)
self.assertIsNone(out_helper.current_data_tuple_set)
sent_data_tuple_set = out_helper.out_stream.poll().data
self.assertEqual(sent_data_tuple_set.stream.id, self.DEFAULT_STREAM_ID)
self.assertEqual(sent_data_tuple_set.tuples[0], prim_data_tuple)
|
apache-2.0
|
mogoweb/chromium-crosswalk
|
native_client_sdk/src/build_tools/build_updater.py
|
28
|
6330
|
#!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Build script to generate a new sdk_tools bundle.
This script packages the files necessary to generate the SDK updater -- the
tool users run to download new bundles, update existing bundles, etc.
"""
import buildbot_common
import build_version
import glob
import optparse
import os
import sys
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
SDK_SRC_DIR = os.path.dirname(SCRIPT_DIR)
SDK_DIR = os.path.dirname(SDK_SRC_DIR)
SRC_DIR = os.path.dirname(SDK_DIR)
NACL_DIR = os.path.join(SRC_DIR, 'native_client')
CYGTAR = os.path.join(NACL_DIR, 'build', 'cygtar.py')
sys.path.append(os.path.join(SDK_SRC_DIR, 'tools'))
import oshelpers
UPDATER_FILES = [
# launch scripts
('build_tools/naclsdk', 'nacl_sdk/naclsdk'),
('build_tools/naclsdk.bat', 'nacl_sdk/naclsdk.bat'),
# base manifest
('build_tools/json/naclsdk_manifest0.json',
'nacl_sdk/sdk_cache/naclsdk_manifest2.json'),
# SDK tools
('build_tools/sdk_tools/cacerts.txt', 'nacl_sdk/sdk_tools/cacerts.txt'),
('build_tools/sdk_tools/*.py', 'nacl_sdk/sdk_tools/'),
('build_tools/sdk_tools/command/*.py', 'nacl_sdk/sdk_tools/command/'),
('build_tools/sdk_tools/third_party/*.py', 'nacl_sdk/sdk_tools/third_party/'),
('build_tools/sdk_tools/third_party/fancy_urllib/*.py',
'nacl_sdk/sdk_tools/third_party/fancy_urllib/'),
('build_tools/sdk_tools/third_party/fancy_urllib/README',
'nacl_sdk/sdk_tools/third_party/fancy_urllib/README'),
('build_tools/manifest_util.py', 'nacl_sdk/sdk_tools/manifest_util.py'),
('LICENSE', 'nacl_sdk/sdk_tools/LICENSE'),
(CYGTAR, 'nacl_sdk/sdk_tools/cygtar.py'),
]
def MakeUpdaterFilesAbsolute(out_dir):
"""Return the result of changing all relative paths in UPDATER_FILES to
absolute paths.
Args:
out_dir: The output directory.
Returns:
A list of 2-tuples. The first element in each tuple is the source path and
the second is the destination path.
"""
assert os.path.isabs(out_dir)
result = []
for in_file, out_file in UPDATER_FILES:
if not os.path.isabs(in_file):
in_file = os.path.join(SDK_SRC_DIR, in_file)
out_file = os.path.join(out_dir, out_file)
result.append((in_file, out_file))
return result
def GlobFiles(files):
"""Expand wildcards for 2-tuples of sources/destinations.
This function will also convert destinations from directories into filenames.
For example:
('foo/*.py', 'bar/') => [('foo/a.py', 'bar/a.py'), ('foo/b.py', 'bar/b.py')]
Args:
files: A list of 2-tuples of (source, dest) paths.
Returns:
A new list of 2-tuples, after the sources have been wildcard-expanded, and
the destinations have been changed from directories to filenames.
"""
result = []
for in_file_glob, out_file in files:
if out_file.endswith('/'):
for in_file in glob.glob(in_file_glob):
result.append((in_file,
os.path.join(out_file, os.path.basename(in_file))))
else:
result.append((in_file_glob, out_file))
return result
def CopyFiles(files):
"""Given a list of 2-tuples (source, dest), copy each source file to a dest
file.
Args:
files: A list of 2-tuples."""
for in_file, out_file in files:
buildbot_common.MakeDir(os.path.dirname(out_file))
buildbot_common.CopyFile(in_file, out_file)
def UpdateRevisionNumber(out_dir, revision_number):
"""Update the sdk_tools bundle to have the given revision number.
This function finds all occurrences of the string "{REVISION}" in
sdk_update_main.py and replaces them with |revision_number|. The only
observable effect of this change should be that running:
naclsdk -v
will contain the new |revision_number|.
Args:
out_dir: The output directory containing the scripts to update.
revision_number: The revision number as an integer, or None to use the
current Chrome revision (as retrieved through svn/git).
"""
if revision_number is None:
revision_number = build_version.ChromeRevision()
SDK_UPDATE_MAIN = os.path.join(out_dir,
'nacl_sdk/sdk_tools/sdk_update_main.py')
contents = open(SDK_UPDATE_MAIN, 'r').read().replace(
'{REVISION}', str(revision_number))
open(SDK_UPDATE_MAIN, 'w').write(contents)
def BuildUpdater(out_dir, revision_number=None):
"""Build naclsdk.zip and sdk_tools.tgz in |out_dir|.
Args:
out_dir: The output directory.
revision_number: The revision number of this updater, as an integer. Or
None, to use the current Chrome revision."""
buildbot_common.BuildStep('Create Updater')
out_dir = os.path.abspath(out_dir)
# Build SDK directory
buildbot_common.RemoveDir(os.path.join(out_dir, 'nacl_sdk'))
updater_files = MakeUpdaterFilesAbsolute(out_dir)
updater_files = GlobFiles(updater_files)
CopyFiles(updater_files)
UpdateRevisionNumber(out_dir, revision_number)
out_files = [os.path.relpath(out_file, out_dir)
for _, out_file in updater_files]
# Make zip
buildbot_common.RemoveFile(os.path.join(out_dir, 'nacl_sdk.zip'))
buildbot_common.Run([sys.executable, oshelpers.__file__, 'zip',
'nacl_sdk.zip'] + out_files,
cwd=out_dir)
# Tar of all files under nacl_sdk/sdk_tools
sdktoolsdir = os.path.join('nacl_sdk', 'sdk_tools')
tarname = os.path.join(out_dir, 'sdk_tools.tgz')
files_to_tar = [os.path.relpath(out_file, sdktoolsdir)
for out_file in out_files if out_file.startswith(sdktoolsdir)]
buildbot_common.RemoveFile(tarname)
buildbot_common.Run([sys.executable, CYGTAR, '-C',
os.path.join(out_dir, sdktoolsdir), '-czf', tarname] + files_to_tar)
sys.stdout.write('\n')
def main(args):
parser = optparse.OptionParser()
parser.add_option('-o', '--out', help='output directory',
dest='out_dir', default=os.path.join(SRC_DIR, 'out'))
parser.add_option('-r', '--revision', help='revision number of this updater',
dest='revision', default=None)
options, args = parser.parse_args(args[1:])
if options.revision:
options.revision = int(options.revision)
BuildUpdater(options.out_dir, options.revision)
if __name__ == '__main__':
sys.exit(main(sys.argv))
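# Hedged example invocations of this script (the output directory below is
# made up; only the -o/--out and -r/--revision flags defined above exist):
#   python build_updater.py -o /tmp/sdk_out              # use current Chrome revision
#   python build_updater.py -o /tmp/sdk_out -r 123456    # pin an explicit revision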
|
bsd-3-clause
|
jaymin-panchal/zang-python
|
tests/inboundxml/test_say.py
|
2
|
1404
|
import unittest
from zang.inboundxml.elements.say import Say
from zang.inboundxml.elements.base_node import BaseNode
from zang.inboundxml.elements.enums.voice import Voice
class TestSay(unittest.TestCase):
def setUp(self):
self.text = 'Hello from Zang'
def test_init_with_required_values(self):
expected = '<Say>' + self.text + '</Say>'
assert Say(self.text).xml == expected
def test_init_with_optional_attributes(self):
loop = 100
say = Say(self.text, loop=loop)
expected = '<Say loop="%s">%s</Say>' % (loop, self.text)
assert say.xml == expected
def test_init_with_unsupported_attributes(self):
self.assertRaises(TypeError, lambda: Say(self.text, foo='bar'))
def test_with_update_attributes(self):
say = Say(self.text)
text = 'Now I will not stop talking'
voice = Voice.MALE
say.text = text
say.voice = voice
expected = '<Say voice="%s">%s</Say>' % (voice.value, text)
assert say.xml == expected
def test_undefined_method_with_primitive_type(self):
self.assertRaises(
AttributeError, lambda: Say(self.text).addElement('bar'))
def test_undefined_method_with_base_node(self):
self.assertRaises(
AttributeError, lambda: Say(self.text).addElement(BaseNode()))
if __name__ == '__main__':
unittest.main()
|
mit
|
underarmour/destalinator
|
tests/test_destalinator.py
|
1
|
28465
|
# pylint: disable=W0201
from datetime import date, datetime, timedelta
import mock
import os
import unittest
import destalinator
import slacker
import slackbot
sample_slack_messages = [
{
"type": "message",
"channel": "C2147483705",
"user": "U2147483697",
"text": "Human human human.",
"ts": "1355517523.000005",
"edited": {
"user": "U2147483697",
"ts": "1355517536.000001"
}
},
{
"type": "message",
"subtype": "bot_message",
"text": "Robot robot robot.",
"ts": "1403051575.000407",
"user": "U023BEAD1"
},
{
"type": "message",
"subtype": "channel_name",
"text": "#stalin has been renamed <C2147483705|khrushchev>",
"ts": "1403051575.000407",
"user": "U023BECGF"
},
{
"type": "message",
"channel": "C2147483705",
"user": "U2147483697",
"text": "Contemplating existence.",
"ts": "1355517523.000005"
},
{
"type": "message",
"subtype": "bot_message",
"attachments": [
{
"fallback": "Required plain-text summary of the attachment.",
"color": "#36a64f",
"pretext": "Optional text that appears above the attachment block",
"author_name": "Bobby Tables",
"author_link": "http://flickr.com/bobby/",
"author_icon": "http://flickr.com/icons/bobby.jpg",
"title": "Slack API Documentation",
"title_link": "https://api.slack.com/",
"text": "Optional text that appears within the attachment",
"fields": [
{
"title": "Priority",
"value": "High",
"short": False
}
],
"image_url": "http://my-website.com/path/to/image.jpg",
"thumb_url": "http://example.com/path/to/thumb.png",
"footer": "Slack API",
"footer_icon": "https://platform.slack-edge.com/img/default_application_icon.png",
"ts": 123456789
}
],
"ts": "1403051575.000407",
"user": "U023BEAD1"
}
]
sample_warning_messages = [
{
"user": "U023BCDA1",
"text": "This is a channel warning! Put on your helmets!",
"username": "bot",
"bot_id": "B0T8EDVLY",
"attachments": [{"fallback": "channel_warning", "id": 1}],
"type": "message",
"subtype": "bot_message",
"ts": "1496855882.185855"
}
]
class MockValidator(object):
def __init__(self, validator):
# validator is a function that takes a single argument and returns a bool.
self.validator = validator
def __eq__(self, other):
return bool(self.validator(other))
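# Hedged usage sketch (the mock object and message below are made up, not part
# of the original suite): because mock compares expected and recorded call
# arguments with ==, MockValidator lets an arbitrary predicate stand in for an
# expected argument, as the archive tests further down do with member names.
#   poster = mock.MagicMock()
#   poster.post_message("general", "warning: this channel looks stale")
#   poster.post_message.assert_called_once_with(
#       "general", MockValidator(lambda text: "warning" in text))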
class SlackerMock(slacker.Slacker):
def get_users(self):
pass
def get_channels(self):
pass
class DestalinatorChannelMarkupTestCase(unittest.TestCase):
def setUp(self):
self.slacker = SlackerMock("testing", "token")
self.slackbot = slackbot.Slackbot("testing", "token")
@mock.patch('tests.test_destalinator.SlackerMock')
def test_add_slack_channel_markup(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
input_text = "Please find my #general channel reference."
mock_slacker.add_channel_markup.return_value = "<#ABC123|general>"
self.assertEqual(
self.destalinator.add_slack_channel_markup(input_text),
"Please find my <#ABC123|general> channel reference."
)
@mock.patch('tests.test_destalinator.SlackerMock')
def test_add_slack_channel_markup_multiple(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
input_text = "Please find my #general multiple #general channel #general references."
mock_slacker.add_channel_markup.return_value = "<#ABC123|general>"
self.assertEqual(
self.destalinator.add_slack_channel_markup(input_text),
"Please find my <#ABC123|general> multiple <#ABC123|general> channel <#ABC123|general> references."
)
@mock.patch('tests.test_destalinator.SlackerMock')
def test_add_slack_channel_markup_hyphens(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
input_text = "Please find my #channel-with-hyphens references."
mock_slacker.add_channel_markup.return_value = "<#EXA456|channel-with-hyphens>"
self.assertEqual(
self.destalinator.add_slack_channel_markup(input_text),
"Please find my <#EXA456|channel-with-hyphens> references."
)
@mock.patch('tests.test_destalinator.SlackerMock')
def test_add_slack_channel_markup_ignore_screaming(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
input_text = "Please find my #general channel reference and ignore my #HASHTAGSCREAMING thanks."
mock_slacker.add_channel_markup.return_value = "<#ABC123|general>"
self.assertEqual(
self.destalinator.add_slack_channel_markup(input_text),
"Please find my <#ABC123|general> channel reference and ignore my #HASHTAGSCREAMING thanks."
)
class DestalinatorChannelMinimumAgeTestCase(unittest.TestCase):
def setUp(self):
self.slacker = SlackerMock("testing", "token")
self.slackbot = slackbot.Slackbot("testing", "token")
@mock.patch('tests.test_destalinator.SlackerMock')
def test_channel_is_old(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.get_channel_info.return_value = {'age': 86400 * 60}
self.assertTrue(self.destalinator.channel_minimum_age("testing", 30))
@mock.patch('tests.test_destalinator.SlackerMock')
def test_channel_is_exactly_expected_age(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.get_channel_info.return_value = {'age': 86400 * 30}
self.assertFalse(self.destalinator.channel_minimum_age("testing", 30))
@mock.patch('tests.test_destalinator.SlackerMock')
def test_channel_is_young(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.get_channel_info.return_value = {'age': 86400 * 1}
self.assertFalse(self.destalinator.channel_minimum_age("testing", 30))
target_archive_date = date.today() + timedelta(days=10)
target_archive_date_string = target_archive_date.isoformat()
class DestalinatorGetEarliestArchiveDateTestCase(unittest.TestCase):
def setUp(self):
self.slacker = SlackerMock("testing", "token")
self.slackbot = slackbot.Slackbot("testing", "token")
@mock.patch.dict(os.environ, {'EARLIEST_ARCHIVE_DATE': target_archive_date_string})
def test_env_var_name_set_in_config(self):
self.destalinator = destalinator.Destalinator(self.slacker, self.slackbot, activated=True)
self.destalinator.config.config['earliest_archive_date_env_varname'] = 'EARLIEST_ARCHIVE_DATE'
self.assertEqual(self.destalinator.get_earliest_archive_date(), target_archive_date)
def test_archive_date_set_in_config(self):
self.destalinator = destalinator.Destalinator(self.slacker, self.slackbot, activated=True)
self.destalinator.config.config['earliest_archive_date_env_varname'] = None
self.destalinator.config.config['earliest_archive_date'] = target_archive_date_string
self.assertEqual(self.destalinator.get_earliest_archive_date(), target_archive_date)
def test_falls_back_to_past_date(self):
self.destalinator = destalinator.Destalinator(self.slacker, self.slackbot, activated=True)
self.destalinator.config.config['earliest_archive_date_env_varname'] = None
self.destalinator.config.config['earliest_archive_date'] = None
self.assertEqual(
self.destalinator.get_earliest_archive_date(),
datetime.strptime(destalinator.PAST_DATE_STRING, "%Y-%m-%d").date()
)
class DestalinatorGetMessagesTestCase(unittest.TestCase):
def setUp(self):
self.slacker = SlackerMock("testing", "token")
self.slackbot = slackbot.Slackbot("testing", "token")
@mock.patch('tests.test_destalinator.SlackerMock')
def test_with_default_included_subtypes(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.get_channelid.return_value = "123456"
mock_slacker.get_messages_in_time_range.return_value = sample_slack_messages
self.assertEqual(len(self.destalinator.get_messages("general", 30)), len(sample_slack_messages))
@mock.patch('tests.test_destalinator.SlackerMock')
def test_with_empty_included_subtypes(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
self.destalinator.config.config['included_subtypes'] = []
mock_slacker.get_channelid.return_value = "123456"
mock_slacker.get_messages_in_time_range.return_value = sample_slack_messages
self.assertEqual(
len(self.destalinator.get_messages("general", 30)),
sum('subtype' not in m for m in sample_slack_messages)
)
@mock.patch('tests.test_destalinator.SlackerMock')
def test_with_limited_included_subtypes(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
self.destalinator.config.config['included_subtypes'] = ['bot_message']
mock_slacker.get_channelid.return_value = "123456"
mock_slacker.get_messages_in_time_range.return_value = sample_slack_messages
self.assertEqual(
len(self.destalinator.get_messages("general", 30)),
sum(m.get('subtype', None) in (None, 'bot_message') for m in sample_slack_messages)
)
class DestalinatorGetStaleChannelsTestCase(unittest.TestCase):
def setUp(self):
self.slacker = SlackerMock("testing", "token")
self.slackbot = slackbot.Slackbot("testing", "token")
@mock.patch('tests.test_destalinator.SlackerMock')
def test_with_no_stale_channels_but_all_minimum_age_with_default_ignore_users(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.channels_by_name = {'leninists': 'C012839', 'stalinists': 'C102843'}
mock_slacker.get_channel_info.return_value = {'age': 60 * 86400}
self.destalinator.get_messages = mock.MagicMock(return_value=sample_slack_messages)
self.assertEqual(len(self.destalinator.get_stale_channels(30)), 0)
@mock.patch('tests.test_destalinator.SlackerMock')
def test_with_no_stale_channels_but_all_minimum_age_with_specific_ignore_users(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
self.destalinator.config.config['ignore_users'] = [m['user'] for m in sample_slack_messages if m.get('user')]
mock_slacker.channels_by_name = {'leninists': 'C012839', 'stalinists': 'C102843'}
mock_slacker.get_channel_info.return_value = {'age': 60 * 86400}
self.destalinator.get_messages = mock.MagicMock(return_value=sample_slack_messages)
self.assertEqual(len(self.destalinator.get_stale_channels(30)), 2)
class DestalinatorIgnoreChannelTestCase(unittest.TestCase):
def setUp(self):
self.slacker = SlackerMock("testing", "token")
self.slackbot = slackbot.Slackbot("testing", "token")
def test_with_explicit_ignore_channel(self):
self.destalinator = destalinator.Destalinator(self.slacker, self.slackbot, activated=True)
self.destalinator.config.config['ignore_channels'] = ['stalinists']
self.assertTrue(self.destalinator.ignore_channel('stalinists'))
def test_with_matching_ignore_channel_pattern(self):
self.destalinator = destalinator.Destalinator(self.slacker, self.slackbot, activated=True)
self.destalinator.config.config['ignore_channel_patterns'] = ['^stal']
self.assertTrue(self.destalinator.ignore_channel('stalinists'))
@mock.patch('tests.test_destalinator.SlackerMock')
def test_with_non_matching_ignore_channel_pattern(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
self.destalinator.config.config['ignore_channel_patterns'] = ['^len']
self.assertFalse(self.destalinator.ignore_channel('stalinists'))
def test_with_many_matching_ignore_channel_patterns(self):
self.destalinator = destalinator.Destalinator(self.slacker, self.slackbot, activated=True)
self.destalinator.config.config['ignore_channel_patterns'] = ['^len', 'lin', '^st']
self.assertTrue(self.destalinator.ignore_channel('stalinists'))
def test_with_empty_ignore_channel_config(self):
self.destalinator = destalinator.Destalinator(self.slacker, self.slackbot, activated=True)
self.destalinator.config.config['ignore_channels'] = []
self.destalinator.config.config['ignore_channel_patterns'] = []
self.assertFalse(self.destalinator.ignore_channel('stalinists'))
class DestalinatorPostMarkedUpMessageTestCase(unittest.TestCase):
def setUp(self):
self.slacker = SlackerMock("testing", "token")
self.slackbot = slackbot.Slackbot("testing", "token")
def test_with_a_string_having_a_channel(self):
self.destalinator = destalinator.Destalinator(self.slacker, self.slackbot, activated=True)
self.slacker.channels_by_name = {'leninists': 'C012839', 'stalinists': 'C102843'}
self.slacker.post_message = mock.MagicMock(return_value={})
self.destalinator.post_marked_up_message('stalinists', "Really great message about #leninists.")
self.slacker.post_message.assert_called_once_with('stalinists',
"Really great message about <#C012839|leninists>.")
def test_with_a_string_having_many_channels(self):
self.destalinator = destalinator.Destalinator(self.slacker, self.slackbot, activated=True)
self.slacker.channels_by_name = {'leninists': 'C012839', 'stalinists': 'C102843'}
self.slacker.post_message = mock.MagicMock(return_value={})
self.destalinator.post_marked_up_message('stalinists', "Really great message about #leninists and #stalinists.")
self.slacker.post_message.assert_called_once_with(
'stalinists',
"Really great message about <#C012839|leninists> and <#C102843|stalinists>."
)
def test_with_a_string_having_no_channels(self):
self.destalinator = destalinator.Destalinator(self.slacker, self.slackbot, activated=True)
self.slacker.channels_by_name = {'leninists': 'C012839', 'stalinists': 'C102843'}
self.slacker.post_message = mock.MagicMock(return_value={})
self.destalinator.post_marked_up_message('stalinists', "Really great message.")
self.slacker.post_message.assert_called_once_with('stalinists', "Really great message.")
class DestalinatorStaleTestCase(unittest.TestCase):
def setUp(self):
self.slacker = SlackerMock("testing", "token")
self.slackbot = slackbot.Slackbot("testing", "token")
@mock.patch('tests.test_destalinator.SlackerMock')
def test_with_all_sample_messages(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.get_channel_info.return_value = {'age': 60 * 86400}
self.destalinator.get_messages = mock.MagicMock(return_value=sample_slack_messages)
self.assertFalse(self.destalinator.stale('stalinists', 30))
@mock.patch('tests.test_destalinator.SlackerMock')
def test_with_all_users_ignored(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
self.destalinator.config.config['ignore_users'] = [m['user'] for m in sample_slack_messages if m.get('user')]
mock_slacker.get_channel_info.return_value = {'age': 60 * 86400}
self.destalinator.get_messages = mock.MagicMock(return_value=sample_slack_messages)
self.assertTrue(self.destalinator.stale('stalinists', 30))
@mock.patch('tests.test_destalinator.SlackerMock')
def test_with_only_a_dolphin_message(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.get_channel_info.return_value = {'age': 60 * 86400}
messages = [
{
"type": "message",
"channel": "C2147483705",
"user": "U2147483697",
"text": ":dolphin:",
"ts": "1355517523.000005"
}
]
self.destalinator.get_messages = mock.MagicMock(return_value=messages)
self.assertTrue(self.destalinator.stale('stalinists', 30))
@mock.patch('tests.test_destalinator.SlackerMock')
def test_with_only_an_attachment_message(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.get_channel_info.return_value = {'age': 60 * 86400}
self.destalinator.get_messages = mock.MagicMock(return_value=[m for m in sample_slack_messages if 'attachments' in m])
self.assertFalse(self.destalinator.stale('stalinists', 30))
class DestalinatorArchiveTestCase(unittest.TestCase):
def setUp(self):
self.slacker = SlackerMock("testing", "token")
self.slackbot = slackbot.Slackbot("testing", "token")
@mock.patch('tests.test_destalinator.SlackerMock')
def test_skips_ignored_channel(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.post_message.return_value = {}
mock_slacker.archive.return_value = {'ok': True}
self.destalinator.config.config['ignore_channels'] = ['stalinists']
self.destalinator.archive("stalinists")
self.assertFalse(mock_slacker.post_message.called)
@mock.patch('tests.test_destalinator.SlackerMock')
def test_skips_when_destalinator_not_activated(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=False)
mock_slacker.post_message.return_value = {}
self.destalinator.archive("stalinists")
self.assertFalse(mock_slacker.post_message.called)
@mock.patch('tests.test_destalinator.SlackerMock')
def test_announces_closure_with_closure_text(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.post_message.return_value = {}
mock_slacker.archive.return_value = {'ok': True}
mock_slacker.get_channel_member_names.return_value = ['sridhar', 'jane']
self.destalinator.archive("stalinists")
self.assertIn(
mock.call('stalinists', mock.ANY, message_type='channel_archive'),
mock_slacker.post_message.mock_calls
)
@mock.patch('tests.test_destalinator.SlackerMock')
def test_announces_members_at_channel_closing(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.post_message.return_value = {}
mock_slacker.archive.return_value = {'ok': True}
names = ['sridhar', 'jane']
mock_slacker.get_channel_member_names.return_value = names
self.destalinator.archive("stalinists")
self.assertIn(
mock.call('stalinists', MockValidator(lambda s: all(name in s for name in names)), message_type=mock.ANY),
mock_slacker.post_message.mock_calls
)
@mock.patch('tests.test_destalinator.SlackerMock')
def test_calls_archive_method(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.post_message.return_value = {}
mock_slacker.archive.return_value = {'ok': True}
self.destalinator.archive("stalinists")
mock_slacker.archive.assert_called_once_with('stalinists')
class DestalinatorSafeArchiveTestCase(unittest.TestCase):
def setUp(self):
self.slacker = SlackerMock("testing", "token")
self.slackbot = slackbot.Slackbot("testing", "token")
@mock.patch('tests.test_destalinator.SlackerMock')
def test_skips_channel_with_only_restricted_users(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.post_message.return_value = {}
mock_slacker.archive.return_value = {'ok': True}
mock_slacker.channel_has_only_restricted_members.return_value = True
self.destalinator.safe_archive("stalinists")
self.assertFalse(mock_slacker.archive.called)
@mock.patch('tests.test_destalinator.SlackerMock')
def test_skips_archiving_if_before_earliest_archive_date(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.post_message.return_value = {}
self.destalinator.archive = mock.MagicMock(return_value=True)
mock_slacker.channel_has_only_restricted_members.return_value = False
today = date.today()
self.destalinator.earliest_archive_date = today + timedelta(days=1)  # tomorrow, safe at month end
self.destalinator.safe_archive("stalinists")
self.assertFalse(self.destalinator.archive.called)
@mock.patch('tests.test_destalinator.SlackerMock')
def test_calls_archive_method(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.post_message.return_value = {}
self.destalinator.archive = mock.MagicMock(return_value=True)
mock_slacker.channel_has_only_restricted_members.return_value = False
self.destalinator.safe_archive("stalinists")
self.destalinator.archive.assert_called_once_with('stalinists')
class DestalinatorSafeArchiveAllTestCase(unittest.TestCase):
def setUp(self):
self.slacker = SlackerMock("testing", "token")
self.slackbot = slackbot.Slackbot("testing", "token")
@mock.patch('tests.test_destalinator.SlackerMock')
def test_calls_stale_once_for_each_channel(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.channels_by_name = {'leninists': 'C012839', 'stalinists': 'C102843'}
self.destalinator.stale = mock.MagicMock(return_value=False)
days = self.destalinator.config.archive_threshold
self.destalinator.safe_archive_all(days)
self.assertEqual(self.destalinator.stale.mock_calls, [mock.call('leninists', days), mock.call('stalinists', days)])
@mock.patch('tests.test_destalinator.SlackerMock')
def test_only_archives_stale_channels(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.channels_by_name = {'leninists': 'C012839', 'stalinists': 'C102843'}
def fake_stale(channel, days):
return {'leninists': True, 'stalinists': False}[channel]
self.destalinator.stale = mock.MagicMock(side_effect=fake_stale)
days = self.destalinator.config.archive_threshold
self.destalinator.safe_archive = mock.MagicMock()
self.destalinator.safe_archive_all(days)
self.destalinator.safe_archive.assert_called_once_with('leninists')
@mock.patch('tests.test_destalinator.SlackerMock')
def test_does_not_archive_ignored_channels(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
self.destalinator.config.config['ignore_channels'] = ['leninists']
mock_slacker.channels_by_name = {'leninists': 'C012839', 'stalinists': 'C102843'}
def fake_stale(channel, days):
return {'leninists': True, 'stalinists': False}[channel]
self.destalinator.stale = mock.MagicMock(side_effect=fake_stale)
mock_slacker.channel_has_only_restricted_members.return_value = False
self.destalinator.earliest_archive_date = date.today()
self.destalinator.safe_archive_all(self.destalinator.config.archive_threshold)
self.assertFalse(mock_slacker.archive.called)
class DestalinatorWarnTestCase(unittest.TestCase):
def setUp(self):
self.slacker = SlackerMock("testing", "token")
self.slackbot = slackbot.Slackbot("testing", "token")
@mock.patch('tests.test_destalinator.SlackerMock')
def test_warns_by_posting_message(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.channel_has_only_restricted_members.return_value = False
mock_slacker.get_messages_in_time_range.return_value = sample_slack_messages
self.destalinator.warn("stalinists", 30)
mock_slacker.post_message.assert_called_with("stalinists",
self.destalinator.warning_text,
message_type='channel_warning')
def test_warns_by_posting_message_with_channel_names(self):
self.destalinator = destalinator.Destalinator(self.slacker, self.slackbot, activated=True)
warning_text = self.destalinator.warning_text + " #leninists"
self.destalinator.warning_text = warning_text
self.slacker.channels_by_name = {'leninists': 'C012839', 'stalinists': 'C102843'}
self.slacker.channel_has_only_restricted_members = mock.MagicMock(return_value=False)
self.slacker.get_messages_in_time_range = mock.MagicMock(return_value=sample_slack_messages)
self.slacker.post_message = mock.MagicMock(return_value={})
self.destalinator.warn("stalinists", 30)
self.slacker.post_message.assert_called_with("stalinists",
self.destalinator.add_slack_channel_markup(warning_text),
message_type='channel_warning')
@mock.patch('tests.test_destalinator.SlackerMock')
def test_does_not_warn_when_previous_warning_found(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.channel_has_only_restricted_members.return_value = False
mock_slacker.get_messages_in_time_range.return_value = [
{
"text": self.destalinator.warning_text,
"user": "ABC123",
"attachments": [{"fallback": "channel_warning"}]
}
]
self.destalinator.warn("stalinists", 30)
self.assertFalse(mock_slacker.post_message.called)
@mock.patch('tests.test_destalinator.SlackerMock')
def test_does_not_warn_when_previous_warning_with_changed_text_found(self, mock_slacker):
self.destalinator = destalinator.Destalinator(mock_slacker, self.slackbot, activated=True)
mock_slacker.channel_has_only_restricted_members.return_value = False
mock_slacker.get_messages_in_time_range.return_value = [
{
"text": self.destalinator.warning_text + "Some new stuff",
"user": "ABC123",
"attachments": [{"fallback": "channel_warning"}]
}
]
self.destalinator.warn("stalinists", 30)
self.assertFalse(mock_slacker.post_message.called)
if __name__ == '__main__':
unittest.main()
|
apache-2.0
|
frascoweb/frasco
|
frasco/billing/eu_vat/data.py
|
1
|
5059
|
from frasco import current_app
from frasco.ext import get_extension_state
from suds.client import Client as SudsClient
from suds import WebFault
import xml.etree.ElementTree as ET
import requests
import datetime
EU_COUNTRIES = {
"AT": "EUR", # Austria
"BE": "EUR", # Belgium
"BG": "BGN", # Bulgaria
"DE": "EUR", # Germany
"CY": "EUR", # Cyprus
"CZ": "CZK", # Czech Republic
"DK": "DKK", # Denmark
"EE": "EUR", # Estonia
"ES": "EUR", # Spain
"FI": "EUR", # Finland
"FR": "EUR", # France,
"GR": "EUR", # Greece
"HR": "HRK", # Croatia
"HU": "HUF", # Hungary
"IE": "EUR", # Ireland
"IT": "EUR", # Italy
"LT": "EUR", # Lithuania
"LV": "EUR", # Latvia
"LU": "EUR", # Luxembourg
"MT": "EUR", # Malta
"NL": "EUR", # Netherlands
"PL": "PLN", # Poland
"PT": "EUR", # Portugal
"RO": "RON", # Romania
"SE": "SEK", # Sweden
"SI": "EUR", # Slovenia
"SK": "EUR" # Slovakia
}
KNOWN_VAT_RATES = {
"AT": 20.0, # Austria
"BE": 21.0, # Belgium
"BG": 20.0, # Bulgaria
"DE": 19.0, # Germany
"CY": 19.0, # Cyprus
"CZ": 21.0, # Czech Republic
"DK": 25.0, # Denmark
"EE": 20.0, # Estonia
"ES": 21.0, # Spain
"FI": 24.0, # Finland
"FR": 20.0, # France,
"GR": 23.0, # Greece
"HR": 25.0, # Croatia
"HU": 27.0, # Hungary
"IE": 23.0, # Ireland
"IT": 22.0, # Italy
"LT": 21.0, # Lithuania
"LV": 21.0, # Latvia
"LU": 15.0, # Luxembourg
"MT": 18.0, # Malta
"NL": 21.0, # Netherlands
"PL": 23.0, # Poland
"PT": 23.0, # Portugal
"RO": 24.0, # Romania
"SE": 25.0, # Sweden
"SI": 22.0, # Slovenia
"SK": 20.0 # Slovakia
}
ECB_EUROFXREF_URL = 'http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml'
ECB_EUROFXREF_XML_NS = 'http://www.ecb.int/vocabulary/2002-08-01/eurofxref'
VIES_SOAP_WSDL_URL = 'http://ec.europa.eu/taxation_customs/vies/checkVatService.wsdl'
TIC_SOAP_WSDL_URL = 'http://ec.europa.eu/taxation_customs/tic/VatRateWebService.wsdl'
def is_eu_country(country_code):
return country_code and country_code.upper() in EU_COUNTRIES
def should_charge_vat(country_code, eu_vat_number=None):
return is_eu_country(country_code) and (
get_extension_state('frasco_eu_vat').options['own_country'] == country_code or not eu_vat_number)
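# Illustrative outcomes of should_charge_vat, assuming the extension's
# 'own_country' option is "FR" (hypothetical configuration):
#   should_charge_vat("FR", "FR123456789")  -> True   (domestic sale)
#   should_charge_vat("DE", "DE123456789")  -> False  (B2B reverse charge)
#   should_charge_vat("DE", None)           -> True   (consumer without VAT number)
#   should_charge_vat("US")                 -> False  (outside the EU)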
_exchange_rates_cache = {}
_vat_rates_cache = {}
VIESClient = None
def get_vies_soap_client():
global VIESClient
if not VIESClient:
VIESClient = SudsClient(VIES_SOAP_WSDL_URL)
return VIESClient
TICClient = None
def get_ticc_soap_client():
global TICClient
if not TICClient:
TICClient = SudsClient(TIC_SOAP_WSDL_URL)
return TICClient
class EUVATError(Exception):
pass
def get_vat_rate(country_code, rate_type='standard'):
country_code = country_code.upper()
if not is_eu_country(country_code):
raise EUVATError('Not an EU country')
if country_code not in _vat_rates_cache:
_vat_rates_cache[country_code] = {}
try:
r = get_ticc_soap_client().service.getRates(dict(memberState=country_code,
requestDate=datetime.date.today().isoformat()))
for rate in r.ratesResponse.rate:
_vat_rates_cache[country_code][rate.type.lower()] = float(rate.value)
except Exception as e:
current_app.logger.debug(e)
_vat_rates_cache.pop(country_code)
return KNOWN_VAT_RATES.get(country_code)
return _vat_rates_cache[country_code].get(rate_type.lower())
def validate_vat_number(vat_number, invalid_format_raise_error=False):
if len(vat_number) < 3:
if invalid_format_raise_error:
raise EUVATError('VAT number too short')
return False
try:
r = get_vies_soap_client().service.checkVat(vat_number[0:2].upper(), vat_number[2:])
return r.valid
except WebFault:
pass
return False
def fetch_exchange_rates():
today = datetime.date.today()
if today in _exchange_rates_cache:
return _exchange_rates_cache[today]
rates = {'EUR': 1.0}
try:
r = requests.get(ECB_EUROFXREF_URL)
root = ET.fromstring(r.text)
for cube in root.findall('eu:Cube/eu:Cube/eu:Cube', {'eu': ECB_EUROFXREF_XML_NS}):
rates[cube.attrib['currency']] = float(cube.attrib['rate'])
_exchange_rates_cache[today] = rates
except Exception as e:
current_app.logger.debug(e)
return rates
def get_exchange_rate(country_code, src_currency='EUR'):
if not is_eu_country(country_code):
raise EUVATError('Not an EU country')
dest_currency = EU_COUNTRIES[country_code]
rates = fetch_exchange_rates()
if src_currency == dest_currency:
return 1.0
if src_currency == 'EUR':
return rates.get(dest_currency, 1.0)
if src_currency not in rates:
raise EUVATError('Can only use a currency listed in the ECB rates')
return round(1.0 / rates[src_currency] * rates.get(dest_currency, 1.0), 5)
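# Worked example of the cross-rate arithmetic above, with made-up ECB rates
# (EUR is the pivot currency). Converting SEK to PLN, i.e.
# get_exchange_rate("PL", src_currency="SEK"):
#   rates = {"EUR": 1.0, "SEK": 11.0, "PLN": 4.4}
#   round(1.0 / rates["SEK"] * rates["PLN"], 5)  # -> 0.4, so 1 SEK ~= 0.4 PLN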
|
mit
|
daoluan/decode-Django
|
Django-1.5.1/tests/regressiontests/introspection/tests.py
|
44
|
7272
|
from __future__ import absolute_import, unicode_literals
from functools import update_wrapper
from django.db import connection
from django.test import TestCase, skipUnlessDBFeature, skipIfDBFeature
from django.utils import six, unittest
from .models import Reporter, Article
if connection.vendor == 'oracle':
expectedFailureOnOracle = unittest.expectedFailure
else:
expectedFailureOnOracle = lambda f: f
# The introspection module is optional, so methods tested here might raise
# NotImplementedError. This is perfectly acceptable behavior for the backend
# in question, but the tests need to handle this without failing. Ideally we'd
# skip these tests, but until #4788 is done we'll just ignore them.
#
# The easiest way to accomplish this is to decorate every test case with a
# wrapper that ignores the exception.
#
# The metaclass is just for fun.
def ignore_not_implemented(func):
def _inner(*args, **kwargs):
try:
return func(*args, **kwargs)
except NotImplementedError:
return None
update_wrapper(_inner, func)
return _inner
class IgnoreNotimplementedError(type):
def __new__(cls, name, bases, attrs):
for k, v in attrs.items():
if k.startswith('test'):
attrs[k] = ignore_not_implemented(v)
return type.__new__(cls, name, bases, attrs)
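# Illustrative sketch (class and method names are made up): the metaclass wraps
# every attribute whose name starts with "test", so a backend raising
# NotImplementedError is tolerated instead of being reported as an error.
#   class ExampleTests(six.with_metaclass(IgnoreNotimplementedError, TestCase)):
#       def test_optional_introspection_hook(self):
#           raise NotImplementedError  # swallowed by ignore_not_implemented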
class IntrospectionTests(six.with_metaclass(IgnoreNotimplementedError, TestCase)):
def test_table_names(self):
tl = connection.introspection.table_names()
self.assertEqual(tl, sorted(tl))
self.assertTrue(Reporter._meta.db_table in tl,
"'%s' isn't in table_list()." % Reporter._meta.db_table)
self.assertTrue(Article._meta.db_table in tl,
"'%s' isn't in table_list()." % Article._meta.db_table)
def test_django_table_names(self):
cursor = connection.cursor()
cursor.execute('CREATE TABLE django_ixn_test_table (id INTEGER);')
tl = connection.introspection.django_table_names()
cursor.execute("DROP TABLE django_ixn_test_table;")
self.assertTrue('django_ixn_test_table' not in tl,
"django_table_names() returned a non-Django table")
def test_django_table_names_retval_type(self):
# Ticket #15216
cursor = connection.cursor()
cursor.execute('CREATE TABLE django_ixn_test_table (id INTEGER);')
tl = connection.introspection.django_table_names(only_existing=True)
self.assertIs(type(tl), list)
tl = connection.introspection.django_table_names(only_existing=False)
self.assertIs(type(tl), list)
def test_installed_models(self):
tables = [Article._meta.db_table, Reporter._meta.db_table]
models = connection.introspection.installed_models(tables)
self.assertEqual(models, set([Article, Reporter]))
def test_sequence_list(self):
sequences = connection.introspection.sequence_list()
expected = {'table': Reporter._meta.db_table, 'column': 'id'}
self.assertTrue(expected in sequences,
'Reporter sequence not found in sequence_list()')
def test_get_table_description_names(self):
cursor = connection.cursor()
desc = connection.introspection.get_table_description(cursor, Reporter._meta.db_table)
self.assertEqual([r[0] for r in desc],
[f.column for f in Reporter._meta.fields])
def test_get_table_description_types(self):
cursor = connection.cursor()
desc = connection.introspection.get_table_description(cursor, Reporter._meta.db_table)
self.assertEqual(
[datatype(r[1], r) for r in desc],
['IntegerField', 'CharField', 'CharField', 'CharField', 'BigIntegerField']
)
# The following test fails on Oracle due to #17202 (can't correctly
# inspect the length of character columns).
@expectedFailureOnOracle
def test_get_table_description_col_lengths(self):
cursor = connection.cursor()
desc = connection.introspection.get_table_description(cursor, Reporter._meta.db_table)
self.assertEqual(
[r[3] for r in desc if datatype(r[1], r) == 'CharField'],
[30, 30, 75]
)
# Oracle forces null=True under the hood in some cases (see
# https://docs.djangoproject.com/en/dev/ref/databases/#null-and-empty-strings)
# so its idea about null_ok in cursor.description is different from ours.
@skipIfDBFeature('interprets_empty_strings_as_nulls')
def test_get_table_description_nullable(self):
cursor = connection.cursor()
desc = connection.introspection.get_table_description(cursor, Reporter._meta.db_table)
self.assertEqual(
[r[6] for r in desc],
[False, False, False, False, True]
)
# Regression test for #9991 - 'real' types in postgres
@skipUnlessDBFeature('has_real_datatype')
def test_postgresql_real_type(self):
cursor = connection.cursor()
cursor.execute("CREATE TABLE django_ixn_real_test_table (number REAL);")
desc = connection.introspection.get_table_description(cursor, 'django_ixn_real_test_table')
cursor.execute('DROP TABLE django_ixn_real_test_table;')
self.assertEqual(datatype(desc[0][1], desc[0]), 'FloatField')
def test_get_relations(self):
cursor = connection.cursor()
relations = connection.introspection.get_relations(cursor, Article._meta.db_table)
# Older versions of MySQL don't have the chops to report on this stuff,
# so just skip it if no relations come back. If they do, though, we
# should test that the response is correct.
if relations:
# That's {field_index: (field_index_other_table, other_table)}
self.assertEqual(relations, {3: (0, Reporter._meta.db_table)})
def test_get_key_columns(self):
cursor = connection.cursor()
key_columns = connection.introspection.get_key_columns(cursor, Article._meta.db_table)
self.assertEqual(key_columns, [('reporter_id', Reporter._meta.db_table, 'id')])
def test_get_primary_key_column(self):
cursor = connection.cursor()
primary_key_column = connection.introspection.get_primary_key_column(cursor, Article._meta.db_table)
self.assertEqual(primary_key_column, 'id')
def test_get_indexes(self):
cursor = connection.cursor()
indexes = connection.introspection.get_indexes(cursor, Article._meta.db_table)
self.assertEqual(indexes['reporter_id'], {'unique': False, 'primary_key': False})
def test_get_indexes_multicol(self):
"""
Test that multicolumn indexes are not included in the introspection
results.
"""
cursor = connection.cursor()
indexes = connection.introspection.get_indexes(cursor, Reporter._meta.db_table)
self.assertNotIn('first_name', indexes)
self.assertIn('id', indexes)
def datatype(dbtype, description):
"""Helper to convert a data type into a string."""
dt = connection.introspection.get_field_type(dbtype, description)
if type(dt) is tuple:
return dt[0]
else:
return dt
|
gpl-2.0
|
ondra-novak/chromium.src
|
tools/telemetry/telemetry/results/gtest_progress_reporter.py
|
6
|
4171
|
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import time
from telemetry.results import progress_reporter
from telemetry.value import failure
from telemetry.value import skip
class GTestProgressReporter(progress_reporter.ProgressReporter):
"""A progress reporter that outputs the progress report in gtest style."""
def __init__(self, output_stream, output_skipped_tests_summary=False):
super(GTestProgressReporter, self).__init__()
self._output_stream = output_stream
self._timestamp = None
self._output_skipped_tests_summary = output_skipped_tests_summary
def _GetMs(self):
assert self._timestamp is not None, 'Did not call WillRunPage.'
return (time.time() - self._timestamp) * 1000
def DidAddValue(self, value):
super(GTestProgressReporter, self).DidAddValue(value)
if isinstance(value, failure.FailureValue):
print >> self._output_stream, failure.GetStringFromExcInfo(
value.exc_info)
self._output_stream.flush()
elif isinstance(value, skip.SkipValue):
print >> self._output_stream, '===== SKIPPING TEST %s: %s =====' % (
value.page.display_name, value.reason)
# TODO(chrishenry): Consider outputting metric values as well. For
# e.g., it can replace BuildbotOutputFormatter in
# --output-format=html, which we used only so that users can grep
# the results without opening results.html.
def WillRunPage(self, page_test_results):
super(GTestProgressReporter, self).WillRunPage(page_test_results)
print >> self._output_stream, '[ RUN ]', (
page_test_results.current_page.display_name)
self._output_stream.flush()
self._timestamp = time.time()
def DidRunPage(self, page_test_results):
super(GTestProgressReporter, self).DidRunPage(page_test_results)
page = page_test_results.current_page
if page_test_results.current_page_run.failed:
print >> self._output_stream, '[ FAILED ]', page.display_name, (
'(%0.f ms)' % self._GetMs())
else:
print >> self._output_stream, '[ OK ]', page.display_name, (
'(%0.f ms)' % self._GetMs())
self._output_stream.flush()
def WillAttemptPageRun(self, page_test_results, attempt_count, max_attempts):
super(GTestProgressReporter, self).WillAttemptPageRun(
page_test_results, attempt_count, max_attempts)
# A failed attempt will have at least 1 value.
if attempt_count != 1:
print >> self._output_stream, (
'===== RETRYING PAGE RUN (attempt %s out of %s allowed) =====' % (
attempt_count, max_attempts))
print >> self._output_stream, (
'Page run attempt failed and will be retried. '
'Discarding previous results.')
def DidFinishAllTests(self, page_test_results):
super(GTestProgressReporter, self).DidFinishAllTests(page_test_results)
successful_runs = []
failed_runs = []
for run in page_test_results.all_page_runs:
if run.failed:
failed_runs.append(run)
else:
successful_runs.append(run)
unit = 'test' if len(successful_runs) == 1 else 'tests'
print >> self._output_stream, '[ PASSED ]', (
'%d %s.' % (len(successful_runs), unit))
if len(failed_runs) > 0:
unit = 'test' if len(failed_runs) == 1 else 'tests'
print >> self._output_stream, '[ FAILED ]', (
'%d %s, listed below:' % (len(page_test_results.failures), unit))
for failed_run in failed_runs:
print >> self._output_stream, '[ FAILED ] ', (
failed_run.page.display_name)
print >> self._output_stream
count = len(failed_runs)
unit = 'TEST' if count == 1 else 'TESTS'
print >> self._output_stream, '%d FAILED %s' % (count, unit)
print >> self._output_stream
if self._output_skipped_tests_summary:
if len(page_test_results.skipped_values) > 0:
print >> self._output_stream, 'Skipped pages:\n%s\n' % ('\n'.join(
v.page.display_name for v in page_test_results.skipped_values))
self._output_stream.flush()
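# Hedged sample of the gtest-style stream this reporter produces (the page name
# and timing are made up; the literal markers come from the code above):
#   [ RUN ] http://example.com/
#   [ OK ] http://example.com/ (1234 ms)
#   [ PASSED ] 1 test.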
|
bsd-3-clause
|
Xanwar/android_kernel_asus_a500cg
|
tools/perf/scripts/python/sctop.py
|
11180
|
1924
|
# system call top
# (c) 2010, Tom Zanussi <[email protected]>
# Licensed under the terms of the GNU GPL License version 2
#
# Periodically displays system-wide system call totals, broken down by
# syscall. If a [comm] arg is specified, only syscalls called by
# [comm] are displayed. If an [interval] arg is specified, the display
# will be refreshed every [interval] seconds. The default interval is
# 3 seconds.
import os, sys, thread, time
sys.path.append(os.environ['PERF_EXEC_PATH'] + \
'/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
from perf_trace_context import *
from Core import *
from Util import *
usage = "perf script -s sctop.py [comm] [interval]\n";
for_comm = None
default_interval = 3
interval = default_interval
if len(sys.argv) > 3:
sys.exit(usage)
if len(sys.argv) > 2:
for_comm = sys.argv[1]
interval = int(sys.argv[2])
elif len(sys.argv) > 1:
try:
interval = int(sys.argv[1])
except ValueError:
for_comm = sys.argv[1]
interval = default_interval
syscalls = autodict()
def trace_begin():
thread.start_new_thread(print_syscall_totals, (interval,))
pass
def raw_syscalls__sys_enter(event_name, context, common_cpu,
common_secs, common_nsecs, common_pid, common_comm,
id, args):
if for_comm is not None:
if common_comm != for_comm:
return
try:
syscalls[id] += 1
except TypeError:
syscalls[id] = 1
def print_syscall_totals(interval):
while 1:
clear_term()
if for_comm is not None:
print "\nsyscall events for %s:\n\n" % (for_comm),
else:
print "\nsyscall events:\n\n",
print "%-40s %10s\n" % ("event", "count"),
print "%-40s %10s\n" % ("----------------------------------------", \
"----------"),
for id, val in sorted(syscalls.iteritems(), key = lambda(k, v): (v, k), \
reverse = True):
try:
print "%-40s %10d\n" % (syscall_name(id), val),
except TypeError:
pass
syscalls.clear()
time.sleep(interval)
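# Hedged example invocations, following the usage string above (a trace with
# raw_syscalls:sys_enter events is assumed to have been recorded already):
#   perf script -s sctop.py           # system-wide totals, refreshed every 3 s
#   perf script -s sctop.py 5         # system-wide totals, refreshed every 5 s
#   perf script -s sctop.py sshd 5    # only syscalls made by "sshd", every 5 s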
|
gpl-2.0
|
facebookexperimental/eden
|
eden/scm/edenscm/mercurial/pathutil.py
|
2
|
9984
|
# Portions Copyright (c) Facebook, Inc. and its affiliates.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2.
# Copyright 2013 Mercurial Contributors
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
from __future__ import absolute_import
import errno
import os
import posixpath
import stat
from . import encoding, error, pycompat, util
from .i18n import _
def _lowerclean(s):
return encoding.hfsignoreclean(s.lower())
class pathauditor(object):
"""ensure that a filesystem path contains no banned components.
the following properties of a path are checked:
- ends with a directory separator
- under top-level .hg
- starts at the root of a windows drive
- contains ".."
More checks are also done about the file system state:
- traverses a symlink (e.g. a/symlink_here/b)
- inside a nested repository (a callback can be used to approve
some nested repositories, e.g., subrepositories)
The file system checks are only done when 'realfs' is set to True (the
default). They should be disabled when we are auditing paths for operations
on stored history.
If 'cached' is set to True, audited paths and sub-directories are cached.
Be careful to not keep the cache of unmanaged directories for long because
audited paths may be replaced with symlinks.
"""
def __init__(self, root, callback=None, realfs=True, cached=False):
self.audited = set()
self.auditeddir = set()
self.root = root
self._realfs = realfs
self._cached = cached
self.callback = callback
if os.path.lexists(root) and not util.fscasesensitive(root):
self.normcase = util.normcase
else:
self.normcase = lambda x: x
def __call__(self, path, mode=None):
"""Check the relative path.
path may contain a pattern (e.g. foodir/**.txt)"""
path = util.localpath(path)
normpath = self.normcase(path)
if normpath in self.audited:
return
# AIX ignores "/" at end of path, others raise EISDIR.
if util.endswithsep(path):
raise error.Abort(_("path ends in directory separator: %s") % path)
parts = util.splitpath(path)
if (
os.path.splitdrive(path)[0]
or _lowerclean(parts[0]) in (".hg", ".hg.", "")
or os.pardir in parts
):
raise error.Abort(_("path contains illegal component: %s") % path)
# Windows shortname aliases
for p in parts:
if "~" in p:
first, last = p.split("~", 1)
if last.isdigit() and first.upper() in ["HG", "HG8B6C"]:
raise error.Abort(_("path contains illegal component: %s") % path)
if ".hg" in _lowerclean(path):
lparts = [_lowerclean(p) for p in parts]
for p in ".hg", ".hg.":
if p in lparts[1:]:
pos = lparts.index(p)
base = os.path.join(*parts[:pos])
raise error.Abort(
_("path '%s' is inside nested repo %r") % (path, base)
)
normparts = util.splitpath(normpath)
assert len(parts) == len(normparts)
parts.pop()
normparts.pop()
prefixes = []
# It's important that we check the path parts starting from the root.
# This means we won't accidentally traverse a symlink into some other
# filesystem (which is potentially expensive to access).
for i in range(len(parts)):
prefix = pycompat.ossep.join(parts[: i + 1])
normprefix = pycompat.ossep.join(normparts[: i + 1])
if normprefix in self.auditeddir:
continue
if self._realfs:
self._checkfs(prefix, path)
prefixes.append(normprefix)
if self._cached:
self.audited.add(normpath)
# only add prefixes to the cache after checking everything: we don't
# want to add "foo/bar/baz" before checking if there's a "foo/.hg"
self.auditeddir.update(prefixes)
def _checkfs(self, prefix, path):
"""raise exception if a file system backed check fails"""
curpath = os.path.join(self.root, prefix)
try:
st = os.lstat(curpath)
except OSError as err:
# EINVAL can be raised as invalid path syntax under win32.
# They must be ignored so that patterns can still be checked.
if err.errno not in (errno.ENOENT, errno.ENOTDIR, errno.EINVAL):
raise
else:
if stat.S_ISLNK(st.st_mode):
msg = _("path %r traverses symbolic link %r") % (path, prefix)
raise error.Abort(msg)
elif stat.S_ISDIR(st.st_mode) and os.path.isdir(
os.path.join(curpath, ".hg")
):
if not self.callback or not self.callback(curpath):
msg = _("path '%s' is inside nested repo %r")
raise error.Abort(msg % (path, prefix))
def check(self, path):
try:
self(path)
return True
except (OSError, error.Abort):
return False
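# Hedged usage sketch (the root and paths below are illustrative only):
#   audit = pathauditor("/repo", realfs=False)   # skip the on-disk checks
#   audit("src/module.py")                       # accepted silently
#   audit(".hg/store/data")                      # raises error.Abort (illegal component)
#   audit.check("foo/../../etc/passwd")          # returns False instead of raising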
def canonpath(root, cwd, myname, auditor=None):
"""return the canonical path of myname, given cwd and root
>>> def check(root, cwd, myname):
... a = pathauditor(root, realfs=False)
... try:
... return canonpath(root, cwd, myname, a)
... except error.Abort:
... return 'aborted'
>>> def unixonly(root, cwd, myname, expected='aborted'):
... if pycompat.iswindows:
... return expected
... return check(root, cwd, myname)
>>> def winonly(root, cwd, myname, expected='aborted'):
... if not pycompat.iswindows:
... return expected
... return check(root, cwd, myname)
>>> winonly(b'd:\\\\repo', b'c:\\\\dir', b'filename')
'aborted'
>>> winonly(b'c:\\\\repo', b'c:\\\\dir', b'filename')
'aborted'
>>> winonly(b'c:\\\\repo', b'c:\\\\', b'filename')
'aborted'
>>> winonly(b'c:\\\\repo', b'c:\\\\', b'repo\\\\filename',
... b'filename')
'filename'
>>> winonly(b'c:\\\\repo', b'c:\\\\repo', b'filename', b'filename')
'filename'
>>> winonly(b'c:\\\\repo', b'c:\\\\repo\\\\subdir', b'filename',
... b'subdir/filename')
'subdir/filename'
>>> unixonly(b'/repo', b'/dir', b'filename')
'aborted'
>>> unixonly(b'/repo', b'/', b'filename')
'aborted'
>>> unixonly(b'/repo', b'/', b'repo/filename', b'filename')
'filename'
>>> unixonly(b'/repo', b'/repo', b'filename', b'filename')
'filename'
>>> unixonly(b'/repo', b'/repo/subdir', b'filename', b'subdir/filename')
'subdir/filename'
"""
if util.endswithsep(root):
rootsep = root
else:
rootsep = root + pycompat.ossep
name = myname
if not os.path.isabs(name):
name = os.path.join(root, cwd, name)
name = os.path.normpath(name)
if auditor is None:
auditor = pathauditor(root)
if name != rootsep and name.startswith(rootsep):
name = name[len(rootsep) :]
auditor(name)
return util.pconvert(name)
elif name == root:
return ""
else:
# Determine whether `name' is in the hierarchy at or beneath `root',
# by iterating name=dirname(name) until that causes no change (can't
# check name == '/', because that doesn't work on windows). The list
# `rel' holds the reversed list of components making up the relative
# file name we want.
rel = []
while True:
try:
s = util.samefile(name, root)
except OSError:
s = False
if s:
if not rel:
# name was actually the same as root (maybe a symlink)
return ""
rel.reverse()
name = os.path.join(*rel)
auditor(name)
return util.pconvert(name)
dirname, basename = util.split(name)
rel.append(basename)
if dirname == name:
break
name = dirname
# A common mistake is to use -R, but specify a file relative to the repo
# instead of cwd. Detect that case, and provide a hint to the user.
hint = None
try:
if cwd != root:
canonpath(root, root, myname, auditor)
relpath = util.pathto(root, cwd, "")
if relpath[-1] == pycompat.ossep:
relpath = relpath[:-1]
hint = _("consider using '--cwd %s'") % relpath
except error.Abort:
pass
raise error.Abort(_("%s not under root '%s'") % (myname, root), hint=hint)
def normasprefix(path):
"""normalize the specified path as path prefix
Returned value can be used safely for "p.startswith(prefix)",
"p[len(prefix):]", and so on.
For efficiency, this expects "path" argument to be already
normalized by "os.path.normpath", "os.path.realpath", and so on.
See also issue3033 for detail about need of this function.
>>> normasprefix(b'/foo/bar').replace(pycompat.ossep, b'/')
'/foo/bar/'
>>> normasprefix(b'/').replace(pycompat.ossep, b'/')
'/'
"""
d, p = os.path.splitdrive(path)
if len(p) != len(pycompat.ossep):
return path + pycompat.ossep
else:
return path
# forward two methods from posixpath that do what we need, but we'd
# rather not let our internals know that we're thinking in posix terms
# - instead we'll let them be oblivious.
join = posixpath.join
dirname = posixpath.dirname
|
gpl-2.0
|
piyush82/icclab-rcb-web
|
virtualenv/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py
|
17
|
7569
|
"""
PostgreSQL database backend for Django.
Requires psycopg 2: http://initd.org/projects/psycopg2
"""
import logging
import sys
from django.db.backends import *
from django.db.backends.postgresql_psycopg2.operations import DatabaseOperations
from django.db.backends.postgresql_psycopg2.client import DatabaseClient
from django.db.backends.postgresql_psycopg2.creation import DatabaseCreation
from django.db.backends.postgresql_psycopg2.version import get_version
from django.db.backends.postgresql_psycopg2.introspection import DatabaseIntrospection
from django.utils.encoding import force_str
from django.utils.functional import cached_property
from django.utils.safestring import SafeText, SafeBytes
from django.utils.timezone import utc
try:
import psycopg2 as Database
import psycopg2.extensions
except ImportError as e:
from django.core.exceptions import ImproperlyConfigured
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
DatabaseError = Database.DatabaseError
IntegrityError = Database.IntegrityError
psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
psycopg2.extensions.register_adapter(SafeBytes, psycopg2.extensions.QuotedString)
psycopg2.extensions.register_adapter(SafeText, psycopg2.extensions.QuotedString)
logger = logging.getLogger('django.db.backends')
def utc_tzinfo_factory(offset):
if offset != 0:
raise AssertionError("database connection isn't set to UTC")
return utc
class DatabaseFeatures(BaseDatabaseFeatures):
needs_datetime_string_cast = False
can_return_id_from_insert = True
requires_rollback_on_dirty_transaction = True
has_real_datatype = True
can_defer_constraint_checks = True
has_select_for_update = True
has_select_for_update_nowait = True
has_bulk_insert = True
uses_savepoints = True
supports_tablespaces = True
supports_transactions = True
can_distinct_on_fields = True
class DatabaseWrapper(BaseDatabaseWrapper):
vendor = 'postgresql'
operators = {
'exact': '= %s',
'iexact': '= UPPER(%s)',
'contains': 'LIKE %s',
'icontains': 'LIKE UPPER(%s)',
'regex': '~ %s',
'iregex': '~* %s',
'gt': '> %s',
'gte': '>= %s',
'lt': '< %s',
'lte': '<= %s',
'startswith': 'LIKE %s',
'endswith': 'LIKE %s',
'istartswith': 'LIKE UPPER(%s)',
'iendswith': 'LIKE UPPER(%s)',
}
Database = Database
def __init__(self, *args, **kwargs):
super(DatabaseWrapper, self).__init__(*args, **kwargs)
opts = self.settings_dict["OPTIONS"]
RC = psycopg2.extensions.ISOLATION_LEVEL_READ_COMMITTED
self.isolation_level = opts.get('isolation_level', RC)
self.features = DatabaseFeatures(self)
self.ops = DatabaseOperations(self)
self.client = DatabaseClient(self)
self.creation = DatabaseCreation(self)
self.introspection = DatabaseIntrospection(self)
self.validation = BaseDatabaseValidation(self)
def get_connection_params(self):
settings_dict = self.settings_dict
if not settings_dict['NAME']:
from django.core.exceptions import ImproperlyConfigured
raise ImproperlyConfigured(
"settings.DATABASES is improperly configured. "
"Please supply the NAME value.")
conn_params = {
'database': settings_dict['NAME'],
}
conn_params.update(settings_dict['OPTIONS'])
if 'autocommit' in conn_params:
del conn_params['autocommit']
if 'isolation_level' in conn_params:
del conn_params['isolation_level']
if settings_dict['USER']:
conn_params['user'] = settings_dict['USER']
if settings_dict['PASSWORD']:
conn_params['password'] = force_str(settings_dict['PASSWORD'])
if settings_dict['HOST']:
conn_params['host'] = settings_dict['HOST']
if settings_dict['PORT']:
conn_params['port'] = settings_dict['PORT']
return conn_params
def get_new_connection(self, conn_params):
return Database.connect(**conn_params)
def init_connection_state(self):
settings_dict = self.settings_dict
self.connection.set_client_encoding('UTF8')
tz = 'UTC' if settings.USE_TZ else settings_dict.get('TIME_ZONE')
if tz:
try:
get_parameter_status = self.connection.get_parameter_status
except AttributeError:
# psycopg2 < 2.0.12 doesn't have get_parameter_status
conn_tz = None
else:
conn_tz = get_parameter_status('TimeZone')
if conn_tz != tz:
# Set the time zone in autocommit mode (see #17062)
self.set_autocommit(True)
self.connection.cursor().execute(
self.ops.set_time_zone_sql(), [tz])
self.connection.set_isolation_level(self.isolation_level)
def create_cursor(self):
cursor = self.connection.cursor()
cursor.tzinfo_factory = utc_tzinfo_factory if settings.USE_TZ else None
return cursor
def close(self):
self.validate_thread_sharing()
if self.connection is None:
return
try:
self.connection.close()
self.connection = None
except Database.Error:
# In some cases (database restart, network connection lost etc...)
# the connection to the database is lost without giving Django a
# notification. If we don't set self.connection to None, the error
# will occur at every request. If we don't set self.connection to None, the error
self.connection = None
logger.warning('psycopg2 error while closing the connection.',
exc_info=sys.exc_info()
)
raise
finally:
self.set_clean()
def _set_isolation_level(self, isolation_level):
assert isolation_level in range(1, 5) # Use set_autocommit for level = 0
if self.psycopg2_version >= (2, 4, 2):
self.connection.set_session(isolation_level=isolation_level)
else:
self.connection.set_isolation_level(isolation_level)
def _set_autocommit(self, autocommit):
if self.psycopg2_version >= (2, 4, 2):
self.connection.autocommit = autocommit
else:
if autocommit:
level = psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT
else:
level = self.isolation_level
self.connection.set_isolation_level(level)
def check_constraints(self, table_names=None):
"""
To check constraints, we set constraints to immediate. Then, when we're done, we must ensure they
are returned to deferred.
"""
self.cursor().execute('SET CONSTRAINTS ALL IMMEDIATE')
self.cursor().execute('SET CONSTRAINTS ALL DEFERRED')
def is_usable(self):
try:
# Use a psycopg cursor directly, bypassing Django's utilities.
self.connection.cursor().execute("SELECT 1")
except DatabaseError:
return False
else:
return True
@cached_property
def psycopg2_version(self):
version = psycopg2.__version__.split(' ', 1)[0]
return tuple(int(v) for v in version.split('.'))
@cached_property
def pg_version(self):
with self.temporary_connection():
return get_version(self.connection)
|
apache-2.0
|
XianliangJ/collections
|
Jellyfish/pox/pox/controllers/distributed_controller.py
|
7
|
5602
|
#!/usr/bin/env python
# Nom nom nom nom
# TODO: there is currently a dependency on the order of initialization of
# client and server... For example:
# $ pox.py nom_client nom_server # blocks indefinitely
# whereas
# $ pox.py nom_server nom_client # works
from pox.core import core, UpEvent
from pox.lib.revent.revent import EventMixin
import pox.messenger.messenger as messenger
import pox.topology.topology as topology
import sys
import threading
import signal
import time
import copy
import socket
import pickle
from collections import namedtuple
UpdateACK = namedtuple('UpdateACK', 'xid controller_name')
class DistributedController(EventMixin, topology.Controller):
"""
Keeps a copy of the Nom in its cache. Arbitrary controller applications
can be implemented on top of NomClient through inheritance. Mutating calls to
self.nom transparently write-through to the NomServer
Visually, NomClient's connect to the NomServer through
the following interfaces:
========================== ==========================
| NomClient | | NomServer |
| | any mutating operation | |
| | --------------------> |server.put(nom) |
| | | |
| client. | cache invalidation, or | |
| update_nom()| network event | |
| | <------------------- | |
========================== ==========================
"""
def __init__(self, name):
"""
Note that server may be a direct reference to the NomServer (for simulation), or a Pyro4 proxy
(for emulation)
pre: name is unique across the network
"""
EventMixin.__init__(self)
# We are a "controller" entity in pox.topology.
# (Actually injecting ourself into pox.topology is handled
# by nom_server)
topology.Controller.__init__(self, name)
self.name = name
self.log = core.getLogger(name)
# Construct an empty topology
# The "master" copy topology will soon be merged into this guy
self.topology = topology.Topology("topo:%s" % self.name)
# Register subclass' event handlers
self.listenTo(self.topology, "topology")
self._server_connection = None
self._queued_commits = []
# For simulation: can't connect to the NomServer until the Messenger is listening for new connections
# TODO: for emulation, this should be removed / refactored --
# just assume that the NomServer machine is up
core.messenger.addListener(messenger.MessengerListening, self._register_with_server)
def _register_with_server(self, event):
self.log.debug("Attempting to register with NomServer")
sock = socket.socket()
# TODO: don't assume localhost -> should point to machine NomServer is running on
# TODO: magic numbers should be re-factored as constants
sock.connect(("localhost",7790))
self._server_connection = messenger.TCPMessengerConnection(socket = sock)
self._server_connection.addListener(messenger.MessageReceived, self._handle_MessageReceived)
self.log.debug("Sending nom_server handshake")
self._server_connection.send({"nom_server_handshake":self.name})
self.log.debug("nom_server handhsake sent -- sending get request")
# Answer comes back asynchronously as a call to nom_update
self._server_connection.send({"get":None})
self.log.debug("get request sent. Starting listen task")
self._server_connection.start()
def _handle_MessageReceived (self, event, msg):
# TODO: event.claim() should be factored out -- I want to claim the connection
# before the first MessageReceived event occurs.
event.claim()
if event.con.isReadable():
r = event.con.read()
if type(r) is not dict:
self.log.warn("message was not a dict!")
return
self.log.debug("Message received, type: %s-" % r.keys())
if "nom_update" in r:
self.nom_update(r["nom_update"])
else:
self.log.debug("- conversation finished")
def nom_update(self, update):
"""
According to Scott's philosophy of SDN, a control application is a
function: F(view) => configuration
This method is the entry point for the POX platform to update the
view.
The POX platform may invoke it in two situations:
i. NomServer will invalidate this client's cache in the
case where another client modifies its copy of the NOM
ii. Either POX or this client (should) register this method as a
handler for network events.
"""
xid, id2entity = update
self.log.debug("nom_update %d" % xid)
self.topology.deserializeAndMerge(id2entity)
update_ack = UpdateACK(xid, self.name)
self._server_connection.send({"nom_update_ack":update_ack})
self.log.debug("Sent nom_update_ack %d, %s" % update_ack)
# TODO: react to the change in the topology by firing queued events to the
# subclass?
return True
def commit_nom_change(self):
self.log.debug("Committing NOM update")
if self._server_connection:
self._server_connection.send({"put":self.topology.serialize()})
else:
self.log.debug("Queuing nom commit")
self._queued_commits.append(copy.deepcopy(self.topology))
# TODO: need to commit nom changes whenever the learning switch updates its state...
|
gpl-3.0
|
mlperf/training_results_v0.6
|
NVIDIA/benchmarks/gnmt/implementations/pytorch/seq2seq/train/trainer.py
|
1
|
18009
|
import logging
import os
import time
from itertools import cycle
import numpy as np
import torch
import torch.optim
import torch.utils.data
from apex.parallel import DistributedDataParallel as DDP
from apex.optimizers import FusedAdam
from apex import amp
import mlperf_compliance
from seq2seq.train.fp_optimizers import Fp16Optimizer
from seq2seq.train.fp_optimizers import Fp32Optimizer
from seq2seq.train.fp_optimizers import AMPOptimizer
from seq2seq.utils import AverageMeter
from seq2seq.utils import mlperf_print
from seq2seq.utils import sync_workers
from seq2seq.utils import get_world_size
class Seq2SeqTrainer:
"""
Seq2SeqTrainer
"""
def __init__(self,
model,
criterion,
opt_config,
print_freq=10,
save_freq=1000,
grad_clip=float('inf'),
batch_first=False,
save_info={},
save_path='.',
train_iterations=0,
checkpoint_filename='checkpoint%s.pth',
keep_checkpoints=5,
math='fp32',
loss_scaling={},
cuda=True,
distributed=False,
distributed_overlap_allreduce=False,
distributed_overlap_num_allreduce_streams=1,
distributed_overlap_allreduce_messagesize=1e7,
distributed_overlap_allreduce_communicators=None,
intra_epoch_eval=0,
prealloc_mode='always',
iter_size=1,
verbose=False):
"""
Constructor for the Seq2SeqTrainer.
:param model: model to train
:param criterion: criterion (loss function)
:param opt_config: dictionary with options for the optimizer
:param print_freq: prints short summary every 'print_freq' iterations
:param save_freq: saves checkpoint every 'save_freq' iterations
:param grad_clip: coefficient for gradient clipping
:param batch_first: if True the model uses (batch, seq, feature) tensors,
if False the model uses (seq, batch, feature)
:param save_info: dict with additional state stored in each checkpoint
:param save_path: path to the directory for checkpoints
:param train_iterations: total number of training iterations to execute
:param checkpoint_filename: name of files with checkpoints
:param keep_checkpoints: max number of checkpoints to keep
:param math: arithmetic type
:param loss_scaling: options for dynamic loss scaling
:param cuda: if True use cuda, if False train on cpu
:param distributed: if True run distributed training
:param intra_epoch_eval: number of additional eval runs within each
training epoch
:param prealloc_mode: controls preallocation,
choices=['off', 'once', 'always']
:param iter_size: number of iterations between weight updates
:param verbose: enables verbose logging
"""
super(Seq2SeqTrainer, self).__init__()
self.model = model
self.criterion = criterion
self.epoch = 0
self.save_info = save_info
self.save_path = save_path
self.save_freq = save_freq
self.save_counter = 0
self.checkpoint_filename = checkpoint_filename
self.checkpoint_counter = cycle(range(keep_checkpoints))
self.opt_config = opt_config
self.cuda = cuda
self.distributed = distributed
self.print_freq = print_freq
self.batch_first = batch_first
self.verbose = verbose
self.loss = None
self.translator = None
self.scheduler = None
self.intra_epoch_eval = intra_epoch_eval
self.iter_size = iter_size
self.prealloc_mode = prealloc_mode
self.preallocated = False
self.retain_allreduce_buffers = True
self.gradient_average = False
if cuda:
self.model = self.model.cuda()
self.criterion = self.criterion.cuda()
params = self.model.parameters()
if math == 'fp16':
self.model = self.model.half()
if distributed:
self.model = DDP(self.model,
message_size=distributed_overlap_allreduce_messagesize,
delay_allreduce=(not distributed_overlap_allreduce),
retain_allreduce_buffers=self.retain_allreduce_buffers,
gradient_average=self.gradient_average)
self.fp_optimizer = Fp16Optimizer(
self.model, grad_clip,
loss_scale=loss_scaling['init_scale'],
dls_upscale_interval=loss_scaling['upscale_interval']
)
params = [self.fp_optimizer.fp32_params]
elif math == 'fp32':
if distributed:
self.model = DDP(self.model,
message_size=distributed_overlap_allreduce_messagesize,
delay_allreduce=(not distributed_overlap_allreduce))
self.fp_optimizer = Fp32Optimizer(self.model, grad_clip)
# params = self.model.parameters()
opt_name = opt_config.pop('optimizer')
if opt_name == 'FusedAdam':
if math == 'fp16' or math == 'fp32':
self.optimizer = FusedAdam(params, **opt_config)
else:
self.optimizer = FusedAdam(params, use_mt=True, max_grad_norm=grad_clip,
amp_scale_adjustment=get_world_size(), **opt_config)
else:
self.optimizer = torch.optim.__dict__[opt_name](params,
**opt_config)
if math == 'amp_fp16':
self.model, self.optimizer = amp.initialize(
self.model,
self.optimizer,
cast_model_outputs=torch.float16,
keep_batchnorm_fp32=False,
opt_level='O2')
self.fp_optimizer = AMPOptimizer(
self.model,
grad_clip,
loss_scale=loss_scaling['init_scale'],
dls_upscale_interval=loss_scaling['upscale_interval']
)
if distributed:
self.model = DDP(self.model,
message_size=distributed_overlap_allreduce_messagesize,
delay_allreduce=(not distributed_overlap_allreduce),
num_allreduce_streams=distributed_overlap_num_allreduce_streams,
allreduce_communicators=distributed_overlap_allreduce_communicators,
retain_allreduce_buffers=self.retain_allreduce_buffers,
gradient_average=self.gradient_average)
logging.info(f'Using optimizer: {self.optimizer}')
mlperf_print(key=mlperf_compliance.constants.OPT_BASE_LR,
value=opt_config['lr'])
def iterate(self, src, tgt, update=True, training=True):
"""
Performs one iteration of the training/validation.
:param src: batch of examples from the source language
:param tgt: batch of examples from the target language
:param update: if True, the optimizer performs the weight update
:param training: if True, executes the optimizer step
"""
src, src_length = src
tgt, tgt_length = tgt
src_length = torch.LongTensor(src_length)
tgt_length = torch.LongTensor(tgt_length)
num_toks = {}
num_toks['tgt'] = int(sum(tgt_length - 1))
num_toks['src'] = int(sum(src_length))
if self.cuda:
src = src.cuda()
src_length = src_length.cuda()
tgt = tgt.cuda()
if self.batch_first:
output = self.model(src, src_length, tgt[:, :-1])
tgt_labels = tgt[:, 1:]
T, B = output.size(1), output.size(0)
else:
output = self.model(src, src_length, tgt[:-1])
tgt_labels = tgt[1:]
T, B = output.size(0), output.size(1)
loss = self.criterion(output.view(T * B, -1),
tgt_labels.contiguous().view(-1))
loss_per_batch = loss.item()
loss /= (B * self.iter_size)
if training:
self.fp_optimizer.step(loss, self.optimizer, self.scheduler,
update)
loss_per_token = loss_per_batch / num_toks['tgt']
loss_per_sentence = loss_per_batch / B
return loss_per_token, loss_per_sentence, num_toks
def feed_data(self, data_loader, training=True):
"""
Runs training or validation on batches from data_loader.
:param data_loader: data loader
:param training: if True runs training else runs validation
"""
if training:
assert self.optimizer is not None
eval_fractions = np.linspace(0, 1, self.intra_epoch_eval+2)[1:-1]
iters_with_update = len(data_loader) // self.iter_size
eval_iters = (eval_fractions * iters_with_update).astype(int)
eval_iters = eval_iters * self.iter_size
eval_iters = set(eval_iters)
batch_time = AverageMeter(skip_first=False)
data_time = AverageMeter(skip_first=False)
losses_per_token = AverageMeter(skip_first=False)
losses_per_sentence = AverageMeter(skip_first=False)
tot_tok_time = AverageMeter(skip_first=False)
src_tok_time = AverageMeter(skip_first=False)
tgt_tok_time = AverageMeter(skip_first=False)
batch_size = data_loader.batch_size
end = time.time()
for i, (src, tgt) in enumerate(data_loader):
self.save_counter += 1
# measure data loading time
data_time.update(time.time() - end)
update = False
if i % self.iter_size == self.iter_size - 1:
update = True
# do a train/evaluate iteration
stats = self.iterate(src, tgt, update, training=training)
loss_per_token, loss_per_sentence, num_toks = stats
# measure accuracy and record loss
losses_per_token.update(loss_per_token, num_toks['tgt'])
losses_per_sentence.update(loss_per_sentence, batch_size)
# measure elapsed time
elapsed = time.time() - end
batch_time.update(elapsed)
src_tok_time.update(num_toks['src'] / elapsed)
tgt_tok_time.update(num_toks['tgt'] / elapsed)
tot_num_toks = num_toks['tgt'] + num_toks['src']
tot_tok_time.update(tot_num_toks / elapsed)
self.loss = losses_per_token.avg
if training and i in eval_iters:
assert self.translator is not None
test_bleu, _ = self.translator.run(calc_bleu=True,
epoch=self.epoch,
iteration=i)
log = []
log += [f'TRAIN [{self.epoch}][{i}/{len(data_loader)}]']
log += [f'BLEU: {test_bleu:.2f}']
log = '\t'.join(log)
logging.info(log)
self.model.train()
self.preallocate(data_loader.batch_size,
data_loader.dataset.max_len, training=True)
if i % self.print_freq == 0:
phase = 'TRAIN' if training else 'VALIDATION'
log = []
log += [f'{phase} [{self.epoch}][{i}/{len(data_loader)}]']
log += [f'Time {batch_time.val:.3f} ({batch_time.avg:.3f})']
log += [f'Data {data_time.val:.2e} ({data_time.avg:.2e})']
log += [f'Tok/s {tot_tok_time.val:.0f} ({tot_tok_time.avg:.0f})']
if self.verbose:
log += [f'Src tok/s {src_tok_time.val:.0f} ({src_tok_time.avg:.0f})']
log += [f'Tgt tok/s {tgt_tok_time.val:.0f} ({tgt_tok_time.avg:.0f})']
log += [f'Loss/sentence {losses_per_sentence.val:.1f} ({losses_per_sentence.avg:.1f})']
log += [f'Loss/tok {losses_per_token.val:.4f} ({losses_per_token.avg:.4f})']
if training:
lr = self.optimizer.param_groups[0]['lr']
log += [f'LR {lr:.3e}']
log = '\t'.join(log)
logging.info(log)
save_chkpt = (self.save_counter % self.save_freq) == (self.save_freq - 1)
if training and save_chkpt:
self.save_counter = 0
self.save_info['iteration'] = i
identifier = next(self.checkpoint_counter, -1)
if identifier != -1:
with sync_workers() as rank:
if rank == 0:
self.save(identifier=identifier)
end = time.time()
tot_tok_time.reduce('sum')
losses_per_token.reduce('mean')
return losses_per_token.avg, tot_tok_time.avg
def preallocate(self, batch_size, max_length, training):
"""
Generates maximum sequence length batch and runs forward and backward
pass without updating model parameters.
:param batch_size: batch size for preallocation
:param max_length: max sequence length for preallocation
:param training: if True preallocates memory for backward pass
"""
if self.prealloc_mode == 'always' or (self.prealloc_mode == 'once' and
not self.preallocated):
logging.info('Executing preallocation')
torch.cuda.empty_cache()
src_length = [max_length] * batch_size
tgt_length = [max_length] * batch_size
if self.batch_first:
shape = (batch_size, max_length)
else:
shape = (max_length, batch_size)
src = torch.full(shape, 4, dtype=torch.int64)
tgt = torch.full(shape, 4, dtype=torch.int64)
src = src, src_length
tgt = tgt, tgt_length
self.iterate(src, tgt, update=False, training=training)
self.model.zero_grad()
self.preallocated = True
def optimize(self, data_loader):
"""
Sets model in training mode, preallocates memory and runs training on
data provided by data_loader.
:param data_loader: data loader
"""
torch.set_grad_enabled(True)
self.model.train()
self.preallocate(data_loader.batch_size, data_loader.dataset.max_len,
training=True)
output = self.feed_data(data_loader, training=True)
self.model.zero_grad()
return output
def evaluate(self, data_loader):
"""
Sets model in eval mode, disables gradients, preallocates memory and
runs validation on data provided by data_loader.
:param data_loader: data loader
"""
torch.set_grad_enabled(False)
self.model.eval()
self.preallocate(data_loader.batch_size, data_loader.dataset.max_len,
training=False)
output = self.feed_data(data_loader, training=False)
self.model.zero_grad()
return output
def load(self, filename):
"""
Loads checkpoint from filename.
:param filename: path to the checkpoint file
"""
if os.path.isfile(filename):
checkpoint = torch.load(filename, map_location={'cuda:0': 'cpu'})
if self.distributed:
self.model.module.load_state_dict(checkpoint['state_dict'])
else:
self.model.load_state_dict(checkpoint['state_dict'])
self.fp_optimizer.initialize_model(self.model)
self.optimizer.load_state_dict(checkpoint['optimizer'])
assert self.scheduler is not None
self.scheduler.load_state_dict(checkpoint['scheduler'])
self.epoch = checkpoint['epoch']
self.loss = checkpoint['loss']
logging.info(f'Loaded checkpoint {filename} (epoch {self.epoch})')
else:
logging.error(f'Invalid checkpoint: {filename}')
def save(self, identifier=None, is_best=False, save_all=False):
"""
Stores checkpoint to a file.
:param identifier: identifier for periodic checkpoint
:param is_best: if True stores checkpoint to 'model_best.pth'
:param save_all: if True stores checkpoint after completed training
epoch
"""
def write_checkpoint(state, filename):
filename = os.path.join(self.save_path, filename)
logging.info(f'Saving model to {filename}')
torch.save(state, filename)
if self.distributed:
model_state = self.model.module.state_dict()
else:
model_state = self.model.state_dict()
assert self.scheduler is not None
state = {
'epoch': self.epoch,
'state_dict': model_state,
'optimizer': self.optimizer.state_dict(),
'scheduler': self.scheduler.state_dict(),
'loss': getattr(self, 'loss', None),
}
state = dict(list(state.items()) + list(self.save_info.items()))
if identifier is not None:
filename = self.checkpoint_filename % identifier
write_checkpoint(state, filename)
if is_best:
filename = 'model_best.pth'
write_checkpoint(state, filename)
if save_all:
filename = f'checkpoint_epoch_{self.epoch:03d}.pth'
write_checkpoint(state, filename)
|
apache-2.0
|
nicobustillos/odoo
|
addons/im_chat/im_chat.py
|
28
|
21908
|
# -*- coding: utf-8 -*-
import base64
import datetime
import logging
import time
import uuid
import random
import simplejson
import openerp
from openerp.http import request
from openerp.osv import osv, fields
from openerp.tools.misc import DEFAULT_SERVER_DATETIME_FORMAT
from openerp.addons.bus.bus import TIMEOUT
_logger = logging.getLogger(__name__)
DISCONNECTION_TIMER = TIMEOUT + 5
AWAY_TIMER = 600 # 10 minutes
#----------------------------------------------------------
# Models
#----------------------------------------------------------
class im_chat_conversation_state(osv.Model):
""" Adds a state on the m2m between user and session. """
_name = 'im_chat.conversation_state'
_table = "im_chat_session_res_users_rel"
_columns = {
"state" : fields.selection([('open', 'Open'), ('folded', 'Folded'), ('closed', 'Closed')]),
"session_id" : fields.many2one('im_chat.session', 'Session', required=True, ondelete="cascade"),
"user_id" : fields.many2one('res.users', 'Users', required=True, ondelete="cascade"),
}
_defaults = {
"state" : 'open'
}
class im_chat_session(osv.Model):
""" Conversations."""
_order = 'id desc'
_name = 'im_chat.session'
_rec_name = 'uuid'
_columns = {
'uuid': fields.char('UUID', size=50, select=True),
'message_ids': fields.one2many('im_chat.message', 'to_id', 'Messages'),
'user_ids': fields.many2many('res.users', 'im_chat_session_res_users_rel', 'session_id', 'user_id', "Session Users"),
'session_res_users_rel': fields.one2many('im_chat.conversation_state', 'session_id', 'Relation Session Users'),
}
_defaults = {
'uuid': lambda *args: '%s' % uuid.uuid4(),
}
def is_in_session(self, cr, uid, uuid, user_id, context=None):
""" return if the given user_id is in the session """
sids = self.search(cr, uid, [('uuid', '=', uuid)], context=context, limit=1)
for session in self.browse(cr, uid, sids, context=context):
return user_id and user_id in [u.id for u in session.user_ids]
return False
def users_infos(self, cr, uid, ids, context=None):
""" get the user infos for all the user in the session """
for session in self.pool["im_chat.session"].browse(cr, uid, ids, context=context):
users_infos = self.pool["res.users"].read(cr, uid, [u.id for u in session.user_ids], ['id','name', 'im_status'], context=context)
return users_infos
def is_private(self, cr, uid, ids, context=None):
""" return True if the session is private between users (no external messages) """
for session_id in ids:
mess_ids = self.pool["im_chat.message"].search(cr, uid, [('to_id','=',session_id),('from_id','=',None)], context=context)
return len(mess_ids) == 0
def session_info(self, cr, uid, ids, context=None):
""" get the session info/header of a given session """
for session in self.browse(cr, uid, ids, context=context):
info = {
'uuid': session.uuid,
'users': session.users_infos(),
'state': 'open',
}
# add uid_state if available
if uid:
domain = [('user_id','=',uid), ('session_id','=',session.id)]
uid_state = self.pool['im_chat.conversation_state'].search_read(cr, uid, domain, ['state'], context=context)
if uid_state:
info['state'] = uid_state[0]['state']
return info
def session_get(self, cr, uid, user_to, context=None):
""" returns the canonical session between 2 users, create it if needed """
session_id = False
if user_to:
sids = self.search(cr, uid, [('user_ids','in', user_to),('user_ids', 'in', [uid])], context=context, limit=1)
for sess in self.browse(cr, uid, sids, context=context):
if len(sess.user_ids) == 2 and sess.is_private():
session_id = sess.id
break
else:
session_id = self.create(cr, uid, { 'user_ids': [(6,0, (user_to, uid))] }, context=context)
return self.session_info(cr, uid, [session_id], context=context)
def update_state(self, cr, uid, uuid, state=None, context=None):
""" modify the fold_state of the given session, and broadcast to himself (e.i. : to sync multiple tabs) """
domain = [('user_id','=',uid), ('session_id.uuid','=',uuid)]
ids = self.pool['im_chat.conversation_state'].search(cr, uid, domain, context=context)
for sr in self.pool['im_chat.conversation_state'].browse(cr, uid, ids, context=context):
if not state:
state = sr.state
if sr.state == 'open':
state = 'folded'
else:
state = 'open'
self.pool['im_chat.conversation_state'].write(cr, uid, ids, {'state': state}, context=context)
self.pool['bus.bus'].sendone(cr, uid, (cr.dbname, 'im_chat.session', uid), sr.session_id.session_info())
def add_user(self, cr, uid, uuid, user_id, context=None):
""" add the given user to the given session """
sids = self.search(cr, uid, [('uuid', '=', uuid)], context=context, limit=1)
for session in self.browse(cr, uid, sids, context=context):
if user_id not in [u.id for u in session.user_ids]:
self.write(cr, uid, [session.id], {'user_ids': [(4, user_id)]}, context=context)
# notify all the channel users and the anonymous channel
notifications = []
for channel_user_id in session.user_ids:
info = self.session_info(cr, channel_user_id.id, [session.id], context=context)
notifications.append([(cr.dbname, 'im_chat.session', channel_user_id.id), info])
# Anonymous users are not notified when a new user is added: cannot exec session_info with uid = None
info = self.session_info(cr, openerp.SUPERUSER_ID, [session.id], context=context)
notifications.append([session.uuid, info])
self.pool['bus.bus'].sendmany(cr, uid, notifications)
# send a message to the conversation
user = self.pool['res.users'].read(cr, uid, user_id, ['name'], context=context)
self.pool["im_chat.message"].post(cr, uid, uid, session.uuid, "meta", user['name'] + " joined the conversation.", context=context)
def get_image(self, cr, uid, uuid, user_id, context=None):
""" get the avatar of a user in the given session """
#default image
image_b64 = 'R0lGODlhAQABAIABAP///wAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw=='
# get the session
if user_id:
session_id = self.pool["im_chat.session"].search(cr, uid, [('uuid','=',uuid), ('user_ids','in', user_id)])
if session_id:
# get the image of the user
res = self.pool["res.users"].read(cr, uid, [user_id], ["image_small"])[0]
if res["image_small"]:
image_b64 = res["image_small"]
return image_b64
class im_chat_message(osv.Model):
""" Sessions messsages type can be 'message' or 'meta'.
For anonymous message, the from_id is False.
Messages are sent to a session not to users.
"""
_name = 'im_chat.message'
_order = "id desc"
_columns = {
'create_date': fields.datetime('Create Date', required=True, select=True),
'from_id': fields.many2one('res.users', 'Author'),
'to_id': fields.many2one('im_chat.session', 'Session To', required=True, select=True, ondelete='cascade'),
'type': fields.selection([('message','Message'), ('meta','Meta')], 'Type'),
'message': fields.char('Message'),
}
_defaults = {
'type' : 'message',
}
def init_messages(self, cr, uid, context=None):
""" get unread messages and old messages received less than AWAY_TIMER
ago and the session_info for open or folded window
"""
# get the message since the AWAY_TIMER
threshold = datetime.datetime.now() - datetime.timedelta(seconds=AWAY_TIMER)
threshold = threshold.strftime(DEFAULT_SERVER_DATETIME_FORMAT)
domain = [('to_id.user_ids', 'in', [uid]), ('create_date','>',threshold)]
# get the message since the last poll of the user
presence_ids = self.pool['im_chat.presence'].search(cr, uid, [('user_id', '=', uid)], context=context)
if presence_ids:
presence = self.pool['im_chat.presence'].browse(cr, uid, presence_ids, context=context)[0]
threshold = presence.last_poll
domain.append(('create_date','>',threshold))
messages = self.search_read(cr, uid, domain, ['from_id','to_id','create_date','type','message'], order='id asc', context=context)
# get the session of the messages and the not-closed ones
session_ids = map(lambda m: m['to_id'][0], messages)
domain = [('user_id','=',uid), '|', ('state','!=','closed'), ('session_id', 'in', session_ids)]
session_rels_ids = self.pool['im_chat.conversation_state'].search(cr, uid, domain, context=context)
# re-open the sessions where a message has been received recently
session_rels = self.pool['im_chat.conversation_state'].browse(cr, uid, session_rels_ids, context=context)
reopening_session = []
notifications = []
for sr in session_rels:
si = sr.session_id.session_info()
si['state'] = sr.state
if sr.state == 'closed':
si['state'] = 'folded'
reopening_session.append(sr.id)
notifications.append([(cr.dbname,'im_chat.session', uid), si])
for m in messages:
notifications.append([(cr.dbname,'im_chat.session', uid), m])
self.pool['im_chat.conversation_state'].write(cr, uid, reopening_session, {'state': 'folded'}, context=context)
return notifications
def post(self, cr, uid, from_uid, uuid, message_type, message_content, context=None):
""" post and broadcast a message, return the message id """
message_id = False
Session = self.pool['im_chat.session']
session_ids = Session.search(cr, uid, [('uuid','=',uuid)], context=context)
notifications = []
for session in Session.browse(cr, uid, session_ids, context=context):
# build the new message
vals = {
"from_id": from_uid,
"to_id": session.id,
"type": message_type,
"message": message_content,
}
# save it
message_id = self.create(cr, uid, vals, context=context)
# broadcast it to channel (anonymous users) and users_ids
data = self.read(cr, uid, [message_id], ['from_id','to_id','create_date','type','message'], context=context)[0]
notifications.append([uuid, data])
for user in session.user_ids:
notifications.append([(cr.dbname, 'im_chat.session', user.id), data])
self.pool['bus.bus'].sendmany(cr, uid, notifications)
return message_id
def get_messages(self, cr, uid, uuid, last_id=False, limit=20, context=None):
""" get messages (id desc) from given last_id in the given session """
Session = self.pool['im_chat.session']
if Session.is_in_session(cr, uid, uuid, uid, context=context):
domain = [("to_id.uuid", "=", uuid)]
if last_id:
domain.append(("id", "<", last_id));
return self.search_read(cr, uid, domain, ['id', 'create_date','to_id','from_id', 'type', 'message'], limit=limit, context=context)
return False
class im_chat_presence(osv.Model):
""" im_chat_presence status can be: online, away or offline.
This model is a one2one, but is not attached to res_users to avoid database concurrency errors
"""
_name = 'im_chat.presence'
_columns = {
'user_id' : fields.many2one('res.users', 'Users', required=True, select=True),
'last_poll': fields.datetime('Last Poll'),
'last_presence': fields.datetime('Last Presence'),
'status' : fields.selection([('online','Online'), ('away','Away'), ('offline','Offline')], 'IM Status'),
}
_defaults = {
'last_poll' : fields.datetime.now,
'last_presence' : fields.datetime.now,
'status' : 'offline'
}
_sql_constraints = [('im_chat_user_status_unique','unique(user_id)', 'A user can only have one IM status.')]
def update(self, cr, uid, presence=True, context=None):
""" register the poll, and change its im status if necessary. It also notify the Bus if the status has changed. """
presence_ids = self.search(cr, uid, [('user_id', '=', uid)], context=context)
presences = self.browse(cr, uid, presence_ids, context=context)
# set the default values
send_notification = True
vals = {
'last_poll': time.strftime(DEFAULT_SERVER_DATETIME_FORMAT),
'status' : presences and presences[0].status or 'offline'
}
# update the user's presence or create a new one
if not presences:
vals['status'] = 'online'
vals['user_id'] = uid
self.create(cr, uid, vals, context=context)
else:
if presence:
vals['last_presence'] = time.strftime(DEFAULT_SERVER_DATETIME_FORMAT)
vals['status'] = 'online'
else:
threshold = datetime.datetime.now() - datetime.timedelta(seconds=AWAY_TIMER)
if datetime.datetime.strptime(presences[0].last_presence, DEFAULT_SERVER_DATETIME_FORMAT) < threshold:
vals['status'] = 'away'
send_notification = presences[0].status != vals['status']
# write only if the last_poll is older than TIMEOUT, or if the status has changed
delta = datetime.datetime.now() - datetime.datetime.strptime(presences[0].last_poll, DEFAULT_SERVER_DATETIME_FORMAT)
if (delta > datetime.timedelta(seconds=TIMEOUT) or send_notification):
self.write(cr, uid, presence_ids, vals, context=context)
# avoid TransactionRollbackError
cr.commit()
# notify if the status has changed
if send_notification:
self.pool['bus.bus'].sendone(cr, uid, (cr.dbname,'im_chat.presence'), {'id': uid, 'im_status': vals['status']})
# gc: disconnect users whose last_poll is too old. 1 in 100 chance to do it.
if random.random() < 0.01:
self.check_users_disconnection(cr, uid, context=context)
return True
def check_users_disconnection(self, cr, uid, context=None):
""" disconnect the users having a too old last_poll """
dt = (datetime.datetime.now() - datetime.timedelta(0, DISCONNECTION_TIMER)).strftime(DEFAULT_SERVER_DATETIME_FORMAT)
presence_ids = self.search(cr, uid, [('last_poll', '<', dt), ('status' , '!=', 'offline')], context=context)
self.write(cr, uid, presence_ids, {'status': 'offline'}, context=context)
presences = self.browse(cr, uid, presence_ids, context=context)
notifications = []
for presence in presences:
notifications.append([(cr.dbname,'im_chat.presence'), {'id': presence.user_id.id, 'im_status': presence.status}])
self.pool['bus.bus'].sendmany(cr, uid, notifications)
return True
class res_users(osv.Model):
_inherit = "res.users"
def _get_im_status(self, cr, uid, ids, fields, arg, context=None):
""" function computing the im_status field of the users """
r = dict((i, 'offline') for i in ids)
status_ids = self.pool['im_chat.presence'].search(cr, uid, [('user_id', 'in', ids)], context=context)
status = self.pool['im_chat.presence'].browse(cr, uid, status_ids, context=context)
for s in status:
r[s.user_id.id] = s.status
return r
_columns = {
'im_status' : fields.function(_get_im_status, type="char", string="IM Status"),
}
def im_search(self, cr, uid, name, limit=20, context=None):
""" search users with a name and return its id, name and im_status """
result = [];
# find the employee group
group_employee = self.pool['ir.model.data'].get_object_reference(cr, uid, 'base', 'group_user')[1]
where_clause_base = " U.active = 't' "
query_params = ()
if name:
where_clause_base += " AND P.name ILIKE %s "
query_params = query_params + ('%'+name+'%',)
# first query to find online employee
cr.execute('''SELECT U.id as id, P.name as name, COALESCE(S.status, 'offline') as im_status
FROM im_chat_presence S
JOIN res_users U ON S.user_id = U.id
JOIN res_partner P ON P.id = U.partner_id
WHERE '''+where_clause_base+'''
AND U.id != %s
AND EXISTS (SELECT 1 FROM res_groups_users_rel G WHERE G.gid = %s AND G.uid = U.id)
AND S.status = 'online'
ORDER BY P.name
LIMIT %s
''', query_params + (uid, group_employee, limit))
result = result + cr.dictfetchall()
# second query to find other online people
if(len(result) < limit):
cr.execute('''SELECT U.id as id, P.name as name, COALESCE(S.status, 'offline') as im_status
FROM im_chat_presence S
JOIN res_users U ON S.user_id = U.id
JOIN res_partner P ON P.id = U.partner_id
WHERE '''+where_clause_base+'''
AND U.id NOT IN %s
AND S.status = 'online'
ORDER BY P.name
LIMIT %s
''', query_params + (tuple([u["id"] for u in result]) + (uid,), limit-len(result)))
result = result + cr.dictfetchall()
# third query to find all other people
if(len(result) < limit):
cr.execute('''SELECT U.id as id, P.name as name, COALESCE(S.status, 'offline') as im_status
FROM res_users U
LEFT JOIN im_chat_presence S ON S.user_id = U.id
LEFT JOIN res_partner P ON P.id = U.partner_id
WHERE '''+where_clause_base+'''
AND U.id NOT IN %s
ORDER BY P.name
LIMIT %s
''', query_params + (tuple([u["id"] for u in result]) + (uid,), limit-len(result)))
result = result + cr.dictfetchall()
return result
#----------------------------------------------------------
# Controllers
#----------------------------------------------------------
class Controller(openerp.addons.bus.bus.Controller):
def _poll(self, dbname, channels, last, options):
if request.session.uid:
registry, cr, uid, context = request.registry, request.cr, request.session.uid, request.context
registry.get('im_chat.presence').update(cr, uid, options.get('im_presence', False), context=context)
## For performance reasons, the real-time status notification is disabled. This means a change of status is still broadcast
## but not received by anyone. Otherwise, all listening users would restart their longpolling at the same time and cause a 'ConnectionPool Full Error'
## since there are not enough cursors for everyone. Now, when a user opens his list of users, an RPC call is made to update his user status list.
##channels.append((request.db,'im_chat.presence'))
# channel to receive message
channels.append((request.db,'im_chat.session', request.uid))
return super(Controller, self)._poll(dbname, channels, last, options)
@openerp.http.route('/im_chat/init', type="json", auth="none")
def init(self):
registry, cr, uid, context = request.registry, request.cr, request.session.uid, request.context
notifications = registry['im_chat.message'].init_messages(cr, uid, context=context)
return notifications
@openerp.http.route('/im_chat/post', type="json", auth="none")
def post(self, uuid, message_type, message_content):
registry, cr, uid, context = request.registry, request.cr, request.session.uid, request.context
# execute the post method as SUPERUSER_ID
message_id = registry["im_chat.message"].post(cr, openerp.SUPERUSER_ID, uid, uuid, message_type, message_content, context=context)
return message_id
@openerp.http.route(['/im_chat/image/<string:uuid>/<string:user_id>'], type='http', auth="none")
def image(self, uuid, user_id):
registry, cr, context, uid = request.registry, request.cr, request.context, request.session.uid
# get the image
Session = registry.get("im_chat.session")
image_b64 = Session.get_image(cr, openerp.SUPERUSER_ID, uuid, simplejson.loads(user_id), context)
# build the response
image_data = base64.b64decode(image_b64)
headers = [('Content-Type', 'image/png')]
headers.append(('Content-Length', len(image_data)))
return request.make_response(image_data, headers)
@openerp.http.route(['/im_chat/history'], type="json", auth="none")
def history(self, uuid, last_id=False, limit=20):
registry, cr, uid, context = request.registry, request.cr, request.session.uid or openerp.SUPERUSER_ID, request.context
return registry["im_chat.message"].get_messages(cr, uid, uuid, last_id, limit, context=context)
# vim:et:
|
agpl-3.0
|
kerr-huang/SL4A
|
python/src/Lib/plat-mac/Carbon/Components.py
|
81
|
2301
|
# Generated from 'Components.h'
def FOUR_CHAR_CODE(x): return x
kAppleManufacturer = FOUR_CHAR_CODE('appl')
kComponentResourceType = FOUR_CHAR_CODE('thng')
kComponentAliasResourceType = FOUR_CHAR_CODE('thga')
kAnyComponentType = 0
kAnyComponentSubType = 0
kAnyComponentManufacturer = 0
kAnyComponentFlagsMask = 0
cmpIsMissing = 1L << 29
cmpWantsRegisterMessage = 1L << 31
kComponentOpenSelect = -1
kComponentCloseSelect = -2
kComponentCanDoSelect = -3
kComponentVersionSelect = -4
kComponentRegisterSelect = -5
kComponentTargetSelect = -6
kComponentUnregisterSelect = -7
kComponentGetMPWorkFunctionSelect = -8
kComponentExecuteWiredActionSelect = -9
kComponentGetPublicResourceSelect = -10
componentDoAutoVersion = (1 << 0)
componentWantsUnregister = (1 << 1)
componentAutoVersionIncludeFlags = (1 << 2)
componentHasMultiplePlatforms = (1 << 3)
componentLoadResident = (1 << 4)
defaultComponentIdentical = 0
defaultComponentAnyFlags = 1
defaultComponentAnyManufacturer = 2
defaultComponentAnySubType = 4
defaultComponentAnyFlagsAnyManufacturer = (defaultComponentAnyFlags + defaultComponentAnyManufacturer)
defaultComponentAnyFlagsAnyManufacturerAnySubType = (defaultComponentAnyFlags + defaultComponentAnyManufacturer + defaultComponentAnySubType)
registerComponentGlobal = 1
registerComponentNoDuplicates = 2
registerComponentAfterExisting = 4
registerComponentAliasesOnly = 8
platform68k = 1
platformPowerPC = 2
platformInterpreted = 3
platformWin32 = 4
platformPowerPCNativeEntryPoint = 5
mpWorkFlagDoWork = (1 << 0)
mpWorkFlagDoCompletion = (1 << 1)
mpWorkFlagCopyWorkBlock = (1 << 2)
mpWorkFlagDontBlock = (1 << 3)
mpWorkFlagGetProcessorCount = (1 << 4)
mpWorkFlagGetIsRunning = (1 << 6)
cmpAliasNoFlags = 0
cmpAliasOnlyThisFile = 1
uppComponentFunctionImplementedProcInfo = 0x000002F0
uppGetComponentVersionProcInfo = 0x000000F0
uppComponentSetTargetProcInfo = 0x000003F0
uppCallComponentOpenProcInfo = 0x000003F0
uppCallComponentCloseProcInfo = 0x000003F0
uppCallComponentCanDoProcInfo = 0x000002F0
uppCallComponentVersionProcInfo = 0x000000F0
uppCallComponentRegisterProcInfo = 0x000000F0
uppCallComponentTargetProcInfo = 0x000003F0
uppCallComponentUnregisterProcInfo = 0x000000F0
uppCallComponentGetMPWorkFunctionProcInfo = 0x00000FF0
uppCallComponentGetPublicResourceProcInfo = 0x00003BF0
|
apache-2.0
|
ahnchan2/linux
|
scripts/gdb/linux/modules.py
|
774
|
2718
|
#
# gdb helper commands and functions for Linux kernel debugging
#
# module tools
#
# Copyright (c) Siemens AG, 2013
#
# Authors:
# Jan Kiszka <[email protected]>
#
# This work is licensed under the terms of the GNU GPL version 2.
#
import gdb
from linux import cpus, utils
module_type = utils.CachedType("struct module")
def module_list():
global module_type
module_ptr_type = module_type.get_type().pointer()
modules = gdb.parse_and_eval("modules")
entry = modules['next']
end_of_list = modules.address
while entry != end_of_list:
yield utils.container_of(entry, module_ptr_type, "list")
entry = entry['next']
def find_module_by_name(name):
for module in module_list():
if module['name'].string() == name:
return module
return None
class LxModule(gdb.Function):
"""Find module by name and return the module variable.
$lx_module("MODULE"): Given the name MODULE, iterate over all loaded modules
of the target and return the module variable that matches MODULE."""
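# Hedged usage sketch at the gdb prompt (the module name is illustrative):
#   (gdb) print $lx_module("MODULE_NAME")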
def __init__(self):
super(LxModule, self).__init__("lx_module")
def invoke(self, mod_name):
mod_name = mod_name.string()
module = find_module_by_name(mod_name)
if module:
return module.dereference()
else:
raise gdb.GdbError("Unable to find MODULE " + mod_name)
LxModule()
class LxLsmod(gdb.Command):
"""List currently loaded modules."""
_module_use_type = utils.CachedType("struct module_use")
def __init__(self):
super(LxLsmod, self).__init__("lx-lsmod", gdb.COMMAND_DATA)
def invoke(self, arg, from_tty):
gdb.write(
"Address{0} Module Size Used by\n".format(
" " if utils.get_long_type().sizeof == 8 else ""))
for module in module_list():
gdb.write("{address} {name:<19} {size:>8} {ref}".format(
address=str(module['module_core']).split()[0],
name=module['name'].string(),
size=str(module['core_size']),
ref=str(module['refcnt']['counter'])))
source_list = module['source_list']
t = self._module_use_type.get_type().pointer()
entry = source_list['next']
first = True
while entry != source_list.address:
use = utils.container_of(entry, t, "source_list")
gdb.write("{separator}{name}".format(
separator=" " if first else ",",
name=use['source']['name'].string()))
first = False
entry = entry['next']
gdb.write("\n")
LxLsmod()
|
gpl-2.0
|
amyliu345/zulip
|
tools/documentation_crawler/documentation_crawler/spiders/check_help_documentation.py
|
16
|
2033
|
#!/usr/bin/env python
from __future__ import print_function
import os
from posixpath import basename
from six.moves.urllib.parse import urlparse
from .common.spiders import BaseDocumentationSpider
from typing import Any, List, Set
def get_help_images_dir(help_images_path):
# type: (str) -> str
# Resolve the given help images path to an absolute directory under the repository root
dir_path = os.path.dirname(os.path.realpath(__file__))
target_path = os.path.join(dir_path, os.path.join(*[os.pardir] * 4), help_images_path)
return os.path.realpath(target_path)
class HelpDocumentationSpider(BaseDocumentationSpider):
name = "help_documentation_crawler"
start_urls = ['http://localhost:9981/help']
deny_domains = [] # type: List[str]
deny = ['/privacy']
help_images_path = "static/images/help"
help_images_static_dir = get_help_images_dir(help_images_path)
def __init__(self, *args, **kwargs):
# type: (*Any, **Any) -> None
super(HelpDocumentationSpider, self).__init__(*args, **kwargs)
self.static_images = set() # type: Set
def _is_external_url(self, url):
# type: (str) -> bool
is_external = url.startswith('http') and 'localhost:9981/help' not in url
if self._has_extension(url) and 'localhost:9981/static/images/help' in url:
self.static_images.add(basename(urlparse(url).path))
return is_external or self._has_extension(url)
def closed(self, *args, **kwargs):
# type: (*Any, **Any) -> None
unused_images = set(os.listdir(self.help_images_static_dir)) - self.static_images
if unused_images:
exception_message = "The following images are not used in help documentation " \
"and can be removed: {}"
self._set_error_state()
unused_images_relatedpath = [
os.path.join(self.help_images_path, img) for img in unused_images]
raise Exception(exception_message.format(', '.join(unused_images_relatedpath)))
|
apache-2.0
|
sdx23/khal
|
khal/calendar_display.py
|
4
|
8355
|
# Copyright (c) 2013-2021 khal contributors
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import calendar
import datetime as dt
from locale import LC_ALL, LC_TIME, getlocale, setlocale
from click import style
from .terminal import colored
from .utils import get_month_abbr_len
setlocale(LC_ALL, '')
def get_weekheader(firstweekday):
try:
mylocale = '.'.join(getlocale(LC_TIME))
except TypeError:
mylocale = 'C'
_calendar = calendar.LocaleTextCalendar(firstweekday, locale=mylocale)
return _calendar.formatweekheader(2)
def getweeknumber(date):
"""return iso week number for datetime.date object
:param date: date
:type date: datetime.date()
:return: weeknumber
:rtype: int
"""
return dt.date.isocalendar(date)[1]
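# A quick worked example: ISO week 1 is by definition the week containing
# January 4th, so getweeknumber(dt.date(2021, 1, 4)) == 1.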
def get_calendar_color(calendar, default_color, collection):
"""Because multi-line lambdas would be un-Pythonic
"""
if collection._calendars[calendar]['color'] == '':
return default_color
return collection._calendars[calendar]['color']
def get_color_list(calendars, default_color, collection):
"""Get the list of possible colors for the day, taking into account priority
"""
dcolors = list(
map(lambda x: (get_calendar_color(x, default_color, collection),
collection._calendars[x]['priority']), calendars)
)
dcolors.sort(key=lambda x: x[1], reverse=True)
maxPriority = dcolors[0][1]
dcolors = list(
filter(lambda x: x[1] == maxPriority, dcolors)
)
dcolors = list(
map(lambda x: x[0], dcolors)
)
dcolors = list(set(dcolors))
return dcolors
def str_highlight_day(
day, calendars, hmethod, default_color, multiple, color, bold_for_light_color, collection):
"""returns a string with day highlighted according to configuration
"""
dstr = str(day.day).rjust(2)
if color == '':
dcolors = get_color_list(calendars, default_color, collection)
if len(dcolors) > 1:
if multiple == '':
if hmethod == "foreground" or hmethod == "fg":
return colored(dstr[:1], fg=dcolors[0],
bold_for_light_color=bold_for_light_color) + \
colored(dstr[1:], fg=dcolors[1], bold_for_light_color=bold_for_light_color)
else:
return colored(dstr[:1], bg=dcolors[0],
bold_for_light_color=bold_for_light_color) + \
colored(dstr[1:], bg=dcolors[1], bold_for_light_color=bold_for_light_color)
else:
dcolor = multiple
else:
dcolor = dcolors[0] or default_color
else:
dcolor = color
if dcolor != '':
if hmethod == "foreground" or hmethod == "fg":
return colored(dstr, fg=dcolor, bold_for_light_color=bold_for_light_color)
else:
return colored(dstr, bg=dcolor, bold_for_light_color=bold_for_light_color)
return dstr
def str_week(week, today, collection=None,
hmethod=None, default_color=None, multiple=None, color=None,
highlight_event_days=False, locale=None, bold_for_light_color=True):
"""returns a string representing one week,
if for day == today color is reversed
:param week: list of 7 datetime.date objects (one week)
:type day: list()
:param today: the date of today
:type today: datetime.date
:return: string, which if printed on terminal appears to have length 20,
but may contain ANSI escape sequences
:rtype: str
"""
strweek = ''
for day in week:
if day == today:
day = style(str(day.day).rjust(2), reverse=True)
elif highlight_event_days:
devents = list(collection.get_calendars_on(day))
if len(devents) > 0:
day = str_highlight_day(day, devents, hmethod, default_color,
multiple, color, bold_for_light_color, collection)
else:
day = str(day.day).rjust(2)
else:
day = str(day.day).rjust(2)
strweek = strweek + day + ' '
return strweek
def vertical_month(month=None,
year=None,
today=None,
weeknumber=False,
count=3,
firstweekday=0,
monthdisplay='firstday',
collection=None,
hmethod='fg',
default_color='',
multiple='',
color='',
highlight_event_days=False,
locale=None,
bold_for_light_color=True):
"""
returns a list() of str() of weeks for a vertical arranged calendar
:param month: first month of the calendar,
if none given, the current month is assumed
:type month: int
:param year: year of the first month included,
if none given, the current year is assumed
:type year: int
:param today: day highlighted, if none is given, the current date is assumed
:type today: datetime.date()
:param weeknumber: if not False the iso weeknumber will be shown for each
week, if weeknumber is 'right' it will be shown in its
own column, if it is 'left' it will be shown interleaved
with the month names
:type weeknumber: str/bool
:returns: calendar strings, may also include some
ANSI (color) escape strings
:rtype: list() of str()
"""
if month is None:
month = dt.date.today().month
if year is None:
year = dt.date.today().year
if today is None:
today = dt.date.today()
khal = list()
w_number = ' ' if weeknumber == 'right' else ''
calendar.setfirstweekday(firstweekday)
weekheaders = get_weekheader(firstweekday)
month_abbr_len = get_month_abbr_len()
khal.append(style(' ' * month_abbr_len + weekheaders + ' ' + w_number, bold=True))
_calendar = calendar.Calendar(firstweekday)
for _ in range(count):
for week in _calendar.monthdatescalendar(year, month):
if monthdisplay == 'firstday':
new_month = len([day for day in week if day.day == 1])
else:
new_month = len(week if week[0].day <= 7 else [])
strweek = str_week(week, today, collection, hmethod, default_color,
multiple, color, highlight_event_days, locale, bold_for_light_color)
if new_month:
m_name = style(calendar.month_abbr[week[6].month].ljust(month_abbr_len), bold=True)
elif weeknumber == 'left':
m_name = style(str(getweeknumber(week[0])).center(month_abbr_len), bold=True)
else:
m_name = ' ' * month_abbr_len
if weeknumber == 'right':
w_number = style('{:2}'.format(getweeknumber(week[0])), bold=True)
else:
w_number = ''
sweek = m_name + strweek + w_number
if sweek != khal[-1]:
khal.append(sweek)
month = month + 1
if month > 12:
month = 1
year = year + 1
return khal
|
mit
|
wildjan/Flask
|
Work/TriviaMVA/TriviaMVA/env/Lib/site-packages/werkzeug/contrib/sessions.py
|
295
|
12450
|
# -*- coding: utf-8 -*-
r"""
werkzeug.contrib.sessions
~~~~~~~~~~~~~~~~~~~~~~~~~
This module contains some helper classes that help one to add session
support to a python WSGI application. For full client-side session
storage see :mod:`~werkzeug.contrib.securecookie` which implements a
secure, client-side session storage.
Application Integration
=======================
::
from werkzeug.contrib.sessions import SessionMiddleware, \
FilesystemSessionStore
app = SessionMiddleware(app, FilesystemSessionStore())
The current session will then appear in the WSGI environment as
`werkzeug.session`. However it's recommended not to use the middleware
but the stores directly in the application; for very simple
scripts a session middleware could be sufficient.
This module does not implement methods or ways to check if a session is
expired. That should be done by a cronjob and is storage specific. For
example, to prune unused filesystem sessions one could check the modified
time of the files. If sessions are stored in the database, the new()
method should add an expiration timestamp for the session.
For better flexibility it's recommended to not use the middleware but the
store and session object directly in the application dispatching::
session_store = FilesystemSessionStore()
def application(environ, start_response):
request = Request(environ)
sid = request.cookies.get('cookie_name')
if sid is None:
request.session = session_store.new()
else:
request.session = session_store.get(sid)
response = get_the_response_object(request)
if request.session.should_save:
session_store.save(request.session)
response.set_cookie('cookie_name', request.session.sid)
return response(environ, start_response)
:copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
import re
import os
import sys
import tempfile
from os import path
from time import time
from random import random
from hashlib import sha1
from pickle import dump, load, HIGHEST_PROTOCOL
from werkzeug.datastructures import CallbackDict
from werkzeug.utils import dump_cookie, parse_cookie
from werkzeug.wsgi import ClosingIterator
from werkzeug.posixemulation import rename
from werkzeug._compat import PY2, text_type
_sha1_re = re.compile(r'^[a-f0-9]{40}$')
def _urandom():
if hasattr(os, 'urandom'):
return os.urandom(30)
return text_type(random()).encode('ascii')
def generate_key(salt=None):
if salt is None:
salt = repr(salt).encode('ascii')
return sha1(b''.join([
salt,
str(time()).encode('ascii'),
_urandom()
])).hexdigest()
class ModificationTrackingDict(CallbackDict):
__slots__ = ('modified',)
def __init__(self, *args, **kwargs):
def on_update(self):
self.modified = True
self.modified = False
CallbackDict.__init__(self, on_update=on_update)
dict.update(self, *args, **kwargs)
def copy(self):
"""Create a flat copy of the dict."""
missing = object()
result = object.__new__(self.__class__)
for name in self.__slots__:
val = getattr(self, name, missing)
if val is not missing:
setattr(result, name, val)
return result
def __copy__(self):
return self.copy()
class Session(ModificationTrackingDict):
"""Subclass of a dict that keeps track of direct object changes. Changes
in mutable structures are not tracked, for those you have to set
`modified` to `True` by hand.
"""
__slots__ = ModificationTrackingDict.__slots__ + ('sid', 'new')
def __init__(self, data, sid, new=False):
ModificationTrackingDict.__init__(self, data)
self.sid = sid
self.new = new
def __repr__(self):
return '<%s %s%s>' % (
self.__class__.__name__,
dict.__repr__(self),
self.should_save and '*' or ''
)
@property
def should_save(self):
"""True if the session should be saved.
.. versionchanged:: 0.6
By default the session is now only saved if the session is
modified, not if it is new like it was before.
"""
return self.modified
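# Illustrative helper (added; not part of the original module): as the Session docstring
# notes, in-place changes to mutable values are not tracked, so ``modified`` must be set
# by hand.  The helper name ``_append_to_session_list`` is hypothetical and assumes the
# key already holds a list.
def _append_to_session_list(session, key, value):
    """Append ``value`` to the list stored under ``key`` and mark the session dirty."""
    session[key].append(value)  # mutating the list in place is not seen by the tracking dict
    session.modified = True     # flag by hand so ``should_save`` becomes True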
class SessionStore(object):
"""Baseclass for all session stores. The Werkzeug contrib module does not
implement any useful stores besides the filesystem store, application
developers are encouraged to create their own stores.
:param session_class: The session class to use. Defaults to
:class:`Session`.
"""
def __init__(self, session_class=None):
if session_class is None:
session_class = Session
self.session_class = session_class
def is_valid_key(self, key):
"""Check if a key has the correct format."""
return _sha1_re.match(key) is not None
def generate_key(self, salt=None):
"""Simple function that generates a new session key."""
return generate_key(salt)
def new(self):
"""Generate a new session."""
return self.session_class({}, self.generate_key(), True)
def save(self, session):
"""Save a session."""
def save_if_modified(self, session):
"""Save if a session class wants an update."""
if session.should_save:
self.save(session)
def delete(self, session):
"""Delete a session."""
def get(self, sid):
"""Get a session for this sid or a new session object. This method
has to check if the session key is valid and create a new session if
that wasn't the case.
"""
return self.session_class({}, sid, True)
#: used for temporary files by the filesystem session store
_fs_transaction_suffix = '.__wz_sess'
class FilesystemSessionStore(SessionStore):
"""Simple example session store that saves sessions on the filesystem.
This store works best on POSIX systems and Windows Vista / Windows
Server 2008 and newer.
.. versionchanged:: 0.6
`renew_missing` was added. Previously this was considered `True`,
now the default changed to `False` and it can be explicitly
deactivated.
:param path: the path to the folder used for storing the sessions.
If not provided the default temporary directory is used.
:param filename_template: a string template used to give the session
a filename. ``%s`` is replaced with the
session id.
:param session_class: The session class to use. Defaults to
:class:`Session`.
:param renew_missing: set to `True` if you want the store to
give the user a new sid if the session was
not yet saved.
"""
def __init__(self, path=None, filename_template='werkzeug_%s.sess',
session_class=None, renew_missing=False, mode=0o644):
SessionStore.__init__(self, session_class)
if path is None:
path = tempfile.gettempdir()
self.path = path
if isinstance(filename_template, text_type) and PY2:
filename_template = filename_template.encode(
sys.getfilesystemencoding() or 'utf-8')
assert not filename_template.endswith(_fs_transaction_suffix), \
'filename templates may not end with %s' % _fs_transaction_suffix
self.filename_template = filename_template
self.renew_missing = renew_missing
self.mode = mode
def get_session_filename(self, sid):
# out of the box, this should be a strict ASCII subset but
# you might reconfigure the session object to have a more
# arbitrary string.
if isinstance(sid, text_type) and PY2:
sid = sid.encode(sys.getfilesystemencoding() or 'utf-8')
return path.join(self.path, self.filename_template % sid)
def save(self, session):
fn = self.get_session_filename(session.sid)
fd, tmp = tempfile.mkstemp(suffix=_fs_transaction_suffix,
dir=self.path)
f = os.fdopen(fd, 'wb')
try:
dump(dict(session), f, HIGHEST_PROTOCOL)
finally:
f.close()
try:
rename(tmp, fn)
os.chmod(fn, self.mode)
except (IOError, OSError):
pass
def delete(self, session):
fn = self.get_session_filename(session.sid)
try:
os.unlink(fn)
except OSError:
pass
def get(self, sid):
if not self.is_valid_key(sid):
return self.new()
try:
f = open(self.get_session_filename(sid), 'rb')
except IOError:
if self.renew_missing:
return self.new()
data = {}
else:
try:
try:
data = load(f)
except Exception:
data = {}
finally:
f.close()
return self.session_class(data, sid, False)
def list(self):
"""Lists all sessions in the store.
.. versionadded:: 0.6
"""
before, after = self.filename_template.split('%s', 1)
filename_re = re.compile(r'%s(.{5,})%s$' % (re.escape(before),
re.escape(after)))
result = []
for filename in os.listdir(self.path):
#: this is a session that is still being saved.
if filename.endswith(_fs_transaction_suffix):
continue
match = filename_re.match(filename)
if match is not None:
result.append(match.group(1))
return result
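# Hedged sketch (added; not part of the original module): the module docstring suggests
# pruning stale filesystem sessions from a cronjob by checking file modification times.
# The helper name ``prune_filesystem_sessions`` and the default ``max_age`` are
# hypothetical.
def prune_filesystem_sessions(store, max_age=24 * 60 * 60):
    """Delete session files in ``store.path`` not modified for ``max_age`` seconds."""
    now = time()
    for sid in store.list():
        fn = store.get_session_filename(sid)
        try:
            if now - os.path.getmtime(fn) > max_age:
                os.unlink(fn)
        except OSError:
            # the file may have been removed by a concurrent request; ignore it
            pass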
class SessionMiddleware(object):
"""A simple middleware that puts the session object of a store provided
into the WSGI environ. It automatically sets cookies and restores
sessions.
However, a middleware is not the preferred solution because it won't be as
fast as sessions managed by the application itself, and it puts a key into
the WSGI environment that is only relevant to the application, which is
against the concept of WSGI.
The cookie parameters are the same as for the :func:`~dump_cookie`
function just prefixed with ``cookie_``. Additionally `max_age` is
called `cookie_age` and not `cookie_max_age` because of backwards
compatibility.
"""
def __init__(self, app, store, cookie_name='session_id',
cookie_age=None, cookie_expires=None, cookie_path='/',
cookie_domain=None, cookie_secure=None,
cookie_httponly=False, environ_key='werkzeug.session'):
self.app = app
self.store = store
self.cookie_name = cookie_name
self.cookie_age = cookie_age
self.cookie_expires = cookie_expires
self.cookie_path = cookie_path
self.cookie_domain = cookie_domain
self.cookie_secure = cookie_secure
self.cookie_httponly = cookie_httponly
self.environ_key = environ_key
def __call__(self, environ, start_response):
cookie = parse_cookie(environ.get('HTTP_COOKIE', ''))
sid = cookie.get(self.cookie_name, None)
if sid is None:
session = self.store.new()
else:
session = self.store.get(sid)
environ[self.environ_key] = session
def injecting_start_response(status, headers, exc_info=None):
if session.should_save:
self.store.save(session)
headers.append(('Set-Cookie', dump_cookie(self.cookie_name,
session.sid, self.cookie_age,
self.cookie_expires, self.cookie_path,
self.cookie_domain, self.cookie_secure,
self.cookie_httponly)))
return start_response(status, headers, exc_info)
return ClosingIterator(self.app(environ, injecting_start_response),
lambda: self.store.save_if_modified(session))
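# Usage sketch (added; not part of the original module): the SessionMiddleware docstring
# explains that cookie options carry a ``cookie_`` prefix and that ``max_age`` is exposed
# as ``cookie_age``.  The demo application below is hypothetical and only constructed,
# not served.
if __name__ == '__main__':
    def _demo_app(environ, start_response):
        session = environ['werkzeug.session']
        session['hits'] = session.get('hits', 0) + 1
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [('hits: %d' % session['hits']).encode('ascii')]
    _wrapped = SessionMiddleware(_demo_app, FilesystemSessionStore(),
                                 cookie_name='demo_session', cookie_age=3600,
                                 cookie_httponly=True)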
|
apache-2.0
|
Edu-Glez/Bank_sentiment_analysis
|
env/lib/python3.6/site-packages/nltk/metrics/segmentation.py
|
5
|
7186
|
# Natural Language Toolkit: Text Segmentation Metrics
#
# Copyright (C) 2001-2017 NLTK Project
# Author: Edward Loper <[email protected]>
# Steven Bird <[email protected]>
# David Doukhan <[email protected]>
# URL: <http://nltk.org/>
# For license information, see LICENSE.TXT
"""
Text Segmentation Metrics
1. Windowdiff
Pevzner, L., and Hearst, M., A Critique and Improvement of
an Evaluation Metric for Text Segmentation,
Computational Linguistics 28, 19-36
2. Generalized Hamming Distance
Bookstein A., Kulyukin V.A., Raita T.
Generalized Hamming Distance
Information Retrieval 5, 2002, pp 353-375
Baseline implementation in C++
http://digital.cs.usu.edu/~vkulyukin/vkweb/software/ghd/ghd.html
Study describing benefits of Generalized Hamming Distance Versus
WindowDiff for evaluating text segmentation tasks
Begsten, Y. Quel indice pour mesurer l'efficacite en segmentation de textes ?
TALN 2009
3. Pk text segmentation metric
Beeferman D., Berger A., Lafferty J. (1999)
Statistical Models for Text Segmentation
Machine Learning, 34, 177-210
"""
try:
import numpy as np
except ImportError:
pass
from nltk.compat import xrange
def windowdiff(seg1, seg2, k, boundary="1", weighted=False):
"""
Compute the windowdiff score for a pair of segmentations. A
segmentation is any sequence over a vocabulary of two items
(e.g. "0", "1"), where the specified boundary value is used to
mark the edge of a segmentation.
>>> s1 = "000100000010"
>>> s2 = "000010000100"
>>> s3 = "100000010000"
>>> '%.2f' % windowdiff(s1, s1, 3)
'0.00'
>>> '%.2f' % windowdiff(s1, s2, 3)
'0.30'
>>> '%.2f' % windowdiff(s2, s3, 3)
'0.80'
:param seg1: a segmentation
:type seg1: str or list
:param seg2: a segmentation
:type seg2: str or list
:param k: window width
:type k: int
:param boundary: boundary value
:type boundary: str or int or bool
:param weighted: use the weighted variant of windowdiff
:type weighted: boolean
:rtype: float
"""
if len(seg1) != len(seg2):
raise ValueError("Segmentations have unequal length")
if k > len(seg1):
raise ValueError("Window width k should be smaller or equal than segmentation lengths")
wd = 0
for i in range(len(seg1) - k + 1):
ndiff = abs(seg1[i:i+k].count(boundary) - seg2[i:i+k].count(boundary))
if weighted:
wd += ndiff
else:
wd += min(1, ndiff)
return wd / (len(seg1) - k + 1.)
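# Added note: segmentations may also be given as lists rather than strings, e.g.
# ``windowdiff([0, 0, 1, 0], [0, 1, 0, 0], 2, boundary=1)``; the ``boundary`` value then
# has to match the element type of the sequences.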
# Generalized Hamming Distance
def _init_mat(nrows, ncols, ins_cost, del_cost):
mat = np.empty((nrows, ncols))
mat[0, :] = ins_cost * np.arange(ncols)
mat[:, 0] = del_cost * np.arange(nrows)
return mat
def _ghd_aux(mat, rowv, colv, ins_cost, del_cost, shift_cost_coeff):
for i, rowi in enumerate(rowv):
for j, colj in enumerate(colv):
shift_cost = shift_cost_coeff * abs(rowi - colj) + mat[i, j]
if rowi == colj:
# boundaries are at the same location, no transformation required
tcost = mat[i, j]
elif rowi > colj:
# boundary match through a deletion
tcost = del_cost + mat[i, j + 1]
else:
# boundary match through an insertion
tcost = ins_cost + mat[i + 1, j]
mat[i + 1, j + 1] = min(tcost, shift_cost)
def ghd(ref, hyp, ins_cost=2.0, del_cost=2.0, shift_cost_coeff=1.0, boundary='1'):
"""
Compute the Generalized Hamming Distance for a reference and a hypothetical
segmentation, corresponding to the cost related to the transformation
of the hypothetical segmentation into the reference segmentation
through boundary insertion, deletion and shift operations.
A segmentation is any sequence over a vocabulary of two items
(e.g. "0", "1"), where the specified boundary value is used to
mark the edge of a segmentation.
Recommended parameter values are a shift_cost_coeff of 2, with
ins_cost and del_cost equal to the mean segment length in the
reference segmentation.
>>> # Same examples as Kulyukin C++ implementation
>>> ghd('1100100000', '1100010000', 1.0, 1.0, 0.5)
0.5
>>> ghd('1100100000', '1100000001', 1.0, 1.0, 0.5)
2.0
>>> ghd('011', '110', 1.0, 1.0, 0.5)
1.0
>>> ghd('1', '0', 1.0, 1.0, 0.5)
1.0
>>> ghd('111', '000', 1.0, 1.0, 0.5)
3.0
>>> ghd('000', '111', 1.0, 2.0, 0.5)
6.0
:param ref: the reference segmentation
:type ref: str or list
:param hyp: the hypothetical segmentation
:type hyp: str or list
:param ins_cost: insertion cost
:type ins_cost: float
:param del_cost: deletion cost
:type del_cost: float
:param shift_cost_coeff: constant used to compute the cost of a shift.
shift cost = shift_cost_coeff * |i - j| where i and j are
the positions indicating the shift
:type shift_cost_coeff: float
:param boundary: boundary value
:type boundary: str or int or bool
:rtype: float
"""
ref_idx = [i for (i, val) in enumerate(ref) if val == boundary]
hyp_idx = [i for (i, val) in enumerate(hyp) if val == boundary]
nref_bound = len(ref_idx)
nhyp_bound = len(hyp_idx)
if nref_bound == 0 and nhyp_bound == 0:
return 0.0
elif nref_bound > 0 and nhyp_bound == 0:
return nref_bound * ins_cost
elif nref_bound == 0 and nhyp_bound > 0:
return nhyp_bound * del_cost
mat = _init_mat(nhyp_bound + 1, nref_bound + 1, ins_cost, del_cost)
_ghd_aux(mat, hyp_idx, ref_idx, ins_cost, del_cost, shift_cost_coeff)
return mat[-1, -1]
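# Hedged helper (added; not part of the original module): the ghd docstring recommends
# ins_cost and del_cost equal to the mean segment length of the reference.  The helper
# name ``_ghd_with_recommended_costs`` is hypothetical.
def _ghd_with_recommended_costs(ref, hyp, shift_cost_coeff=2.0, boundary='1'):
    """Call ``ghd`` with insertion/deletion costs set to the mean reference segment length."""
    nboundaries = sum(1 for val in ref if val == boundary)
    mean_seg_len = float(len(ref)) / max(nboundaries, 1)
    return ghd(ref, hyp, ins_cost=mean_seg_len, del_cost=mean_seg_len,
               shift_cost_coeff=shift_cost_coeff, boundary=boundary)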
# Beeferman's Pk text segmentation evaluation metric
def pk(ref, hyp, k=None, boundary='1'):
"""
Compute the Pk metric for a pair of segmentations. A segmentation
is any sequence over a vocabulary of two items (e.g. "0", "1"),
where the specified boundary value is used to mark the edge of a
segmentation.
>>> '%.2f' % pk('0100'*100, '1'*400, 2)
'0.50'
>>> '%.2f' % pk('0100'*100, '0'*400, 2)
'0.50'
>>> '%.2f' % pk('0100'*100, '0100'*100, 2)
'0.00'
:param ref: the reference segmentation
:type ref: str or list
:param hyp: the segmentation to evaluate
:type hyp: str or list
:param k: window size; if None, set to half of the average reference segment length
:type k: int
:param boundary: boundary value
:type boundary: str or int or bool
:rtype: float
"""
if k is None:
k = int(round(len(ref) / (ref.count(boundary) * 2.)))
err = 0
for i in xrange(len(ref) - k + 1):
r = ref[i:i+k].count(boundary) > 0
h = hyp[i:i+k].count(boundary) > 0
if r != h:
err += 1
return err / (len(ref) - k + 1.)
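# Worked example (added): for ref = '0100' * 100 there are ref.count('1') == 100
# boundaries over len(ref) == 400 positions, so the default window size is
# int(round(400 / (100 * 2.))) == 2, i.e. the k value used in the doctests above.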
# skip doctests if numpy is not installed
def setup_module(module):
from nose import SkipTest
try:
import numpy
except ImportError:
raise SkipTest("numpy is required for nltk.metrics.segmentation")
|
apache-2.0
|
jruiperezv/ANALYSE
|
cms/djangoapps/contentstore/features/course-outline.py
|
13
|
4617
|
# pylint: disable=C0111
# pylint: disable=W0621
from lettuce import world, step
from common import *
from nose.tools import assert_true, assert_false, assert_equal # pylint: disable=E0611
from logging import getLogger
logger = getLogger(__name__)
@step(u'I have a course with no sections$')
def have_a_course(step):
world.clear_courses()
course = world.CourseFactory.create()
@step(u'I have a course with 1 section$')
def have_a_course_with_1_section(step):
world.clear_courses()
course = world.CourseFactory.create()
section = world.ItemFactory.create(parent_location=course.location)
subsection1 = world.ItemFactory.create(
parent_location=section.location,
category='sequential',
display_name='Subsection One',)
@step(u'I have a course with multiple sections$')
def have_a_course_with_two_sections(step):
world.clear_courses()
course = world.CourseFactory.create()
section = world.ItemFactory.create(parent_location=course.location)
subsection1 = world.ItemFactory.create(
parent_location=section.location,
category='sequential',
display_name='Subsection One',)
section2 = world.ItemFactory.create(
parent_location=course.location,
display_name='Section Two',)
subsection2 = world.ItemFactory.create(
parent_location=section2.location,
category='sequential',
display_name='Subsection Alpha',)
subsection3 = world.ItemFactory.create(
parent_location=section2.location,
category='sequential',
display_name='Subsection Beta',)
@step(u'I navigate to the course outline page$')
def navigate_to_the_course_outline_page(step):
create_studio_user(is_staff=True)
log_into_studio()
course_locator = 'a.course-link'
world.css_click(course_locator)
@step(u'I navigate to the outline page of a course with multiple sections')
def nav_to_the_outline_page_of_a_course_with_multiple_sections(step):
step.given('I have a course with multiple sections')
step.given('I navigate to the course outline page')
@step(u'I add a section')
def i_add_a_section(step):
add_section()
@step(u'I press the section delete icon')
def i_press_the_section_delete_icon(step):
delete_locator = 'section .outline-section > .section-header a.delete-button'
world.css_click(delete_locator)
@step(u'I will confirm all alerts')
def i_confirm_all_alerts(step):
confirm_locator = '.prompt .nav-actions a.action-primary'
world.css_click(confirm_locator)
@step(u'I see the "([^"]*) All Sections" link$')
def i_see_the_collapse_expand_all_span(step, text):
if text == "Collapse":
span_locator = '.button-toggle-expand-collapse .collapse-all .label'
elif text == "Expand":
span_locator = '.button-toggle-expand-collapse .expand-all .label'
assert_true(world.css_visible(span_locator))
@step(u'I do not see the "([^"]*) All Sections" link$')
def i_do_not_see_the_collapse_expand_all_span(step, text):
if text == "Collapse":
span_locator = '.button-toggle-expand-collapse .collapse-all .label'
elif text == "Expand":
span_locator = '.button-toggle-expand-collapse .expand-all .label'
assert_false(world.css_visible(span_locator))
@step(u'I click the "([^"]*) All Sections" link$')
def i_click_the_collapse_expand_all_span(step, text):
if text == "Collapse":
span_locator = '.button-toggle-expand-collapse .collapse-all .label'
elif text == "Expand":
span_locator = '.button-toggle-expand-collapse .expand-all .label'
assert_true(world.browser.is_element_present_by_css(span_locator))
world.css_click(span_locator)
@step(u'I ([^"]*) the first section$')
def i_collapse_expand_a_section(step, text):
if text == "collapse":
locator = 'section .outline-section .ui-toggle-expansion'
elif text == "expand":
locator = 'section .outline-section .ui-toggle-expansion'
world.css_click(locator)
@step(u'all sections are ([^"]*)$')
def all_sections_are_collapsed_or_expanded(step, text):
subsection_locator = 'div.subsection-list'
subsections = world.css_find(subsection_locator)
for index in range(len(subsections)):
if text == "collapsed":
assert_false(world.css_visible(subsection_locator, index=index))
elif text == "expanded":
assert_true(world.css_visible(subsection_locator, index=index))
@step(u"I change an assignment's grading status")
def change_grading_status(step):
world.css_find('a.menu-toggle').click()
world.css_find('.menu li').first.click()
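# Illustrative sketch (added; not part of the original step file): a hypothetical lettuce
# scenario wired to the step definitions above.
#
#   Scenario: Collapse all sections at once
#       Given I have a course with multiple sections
#       And I navigate to the course outline page
#       When I click the "Collapse All Sections" link
#       Then all sections are collapsed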
|
agpl-3.0
|
franciscodominguezmateos/DeepLearningNanoDegree
|
transfer-learning/tensorflow_vgg/test_vgg19_trainable.py
|
152
|
1435
|
"""
Simple tester for the vgg19_trainable
"""
import tensorflow as tf
from tensorflow_vgg import vgg19_trainable as vgg19
from tensorflow_vgg import utils
img1 = utils.load_image("./test_data/tiger.jpeg")
img1_true_result = [1 if i == 292 else 0 for i in range(1000)] # 1-hot result for tiger
batch1 = img1.reshape((1, 224, 224, 3))
with tf.device('/cpu:0'):
sess = tf.Session()
images = tf.placeholder(tf.float32, [1, 224, 224, 3])
true_out = tf.placeholder(tf.float32, [1, 1000])
train_mode = tf.placeholder(tf.bool)
vgg = vgg19.Vgg19('./vgg19.npy')
vgg.build(images, train_mode)
# print number of variables used: 143667240 variables, i.e. ideal size = 548MB
print(vgg.get_var_count())
sess.run(tf.global_variables_initializer())
# test classification
prob = sess.run(vgg.prob, feed_dict={images: batch1, train_mode: False})
utils.print_prob(prob[0], './synset.txt')
# simple 1-step training
cost = tf.reduce_sum((vgg.prob - true_out) ** 2)
train = tf.train.GradientDescentOptimizer(0.0001).minimize(cost)
sess.run(train, feed_dict={images: batch1, true_out: [img1_true_result], train_mode: True})
# test classification again, should have a higher probability about tiger
prob = sess.run(vgg.prob, feed_dict={images: batch1, train_mode: False})
utils.print_prob(prob[0], './synset.txt')
# test save
vgg.save_npy(sess, './test-save.npy')
|
mit
|
adaur/SickRage
|
lib/html5lib/treewalkers/lxmletree.py
|
618
|
6033
|
from __future__ import absolute_import, division, unicode_literals
from six import text_type
from lxml import etree
from ..treebuilders.etree import tag_regexp
from gettext import gettext
_ = gettext
from . import _base
from .. import ihatexml
def ensure_str(s):
if s is None:
return None
elif isinstance(s, text_type):
return s
else:
return s.decode("utf-8", "strict")
class Root(object):
def __init__(self, et):
self.elementtree = et
self.children = []
if et.docinfo.internalDTD:
self.children.append(Doctype(self,
ensure_str(et.docinfo.root_name),
ensure_str(et.docinfo.public_id),
ensure_str(et.docinfo.system_url)))
root = et.getroot()
node = root
while node.getprevious() is not None:
node = node.getprevious()
while node is not None:
self.children.append(node)
node = node.getnext()
self.text = None
self.tail = None
def __getitem__(self, key):
return self.children[key]
def getnext(self):
return None
def __len__(self):
return 1
class Doctype(object):
def __init__(self, root_node, name, public_id, system_id):
self.root_node = root_node
self.name = name
self.public_id = public_id
self.system_id = system_id
self.text = None
self.tail = None
def getnext(self):
return self.root_node.children[1]
class FragmentRoot(Root):
def __init__(self, children):
self.children = [FragmentWrapper(self, child) for child in children]
self.text = self.tail = None
def getnext(self):
return None
class FragmentWrapper(object):
def __init__(self, fragment_root, obj):
self.root_node = fragment_root
self.obj = obj
if hasattr(self.obj, 'text'):
self.text = ensure_str(self.obj.text)
else:
self.text = None
if hasattr(self.obj, 'tail'):
self.tail = ensure_str(self.obj.tail)
else:
self.tail = None
def __getattr__(self, name):
return getattr(self.obj, name)
def getnext(self):
siblings = self.root_node.children
idx = siblings.index(self)
if idx < len(siblings) - 1:
return siblings[idx + 1]
else:
return None
def __getitem__(self, key):
return self.obj[key]
def __bool__(self):
return bool(self.obj)
def getparent(self):
return None
def __str__(self):
return str(self.obj)
def __unicode__(self):
return str(self.obj)
def __len__(self):
return len(self.obj)
class TreeWalker(_base.NonRecursiveTreeWalker):
def __init__(self, tree):
if hasattr(tree, "getroot"):
tree = Root(tree)
elif isinstance(tree, list):
tree = FragmentRoot(tree)
_base.NonRecursiveTreeWalker.__init__(self, tree)
self.filter = ihatexml.InfosetFilter()
def getNodeDetails(self, node):
if isinstance(node, tuple): # Text node
node, key = node
assert key in ("text", "tail"), _("Text nodes are text or tail, found %s") % key
return _base.TEXT, ensure_str(getattr(node, key))
elif isinstance(node, Root):
return (_base.DOCUMENT,)
elif isinstance(node, Doctype):
return _base.DOCTYPE, node.name, node.public_id, node.system_id
elif isinstance(node, FragmentWrapper) and not hasattr(node, "tag"):
return _base.TEXT, node.obj
elif node.tag == etree.Comment:
return _base.COMMENT, ensure_str(node.text)
elif node.tag == etree.Entity:
return _base.ENTITY, ensure_str(node.text)[1:-1] # strip &;
else:
# This is assumed to be an ordinary element
match = tag_regexp.match(ensure_str(node.tag))
if match:
namespace, tag = match.groups()
else:
namespace = None
tag = ensure_str(node.tag)
attrs = {}
for name, value in list(node.attrib.items()):
name = ensure_str(name)
value = ensure_str(value)
match = tag_regexp.match(name)
if match:
attrs[(match.group(1), match.group(2))] = value
else:
attrs[(None, name)] = value
return (_base.ELEMENT, namespace, self.filter.fromXmlName(tag),
attrs, len(node) > 0 or node.text)
def getFirstChild(self, node):
assert not isinstance(node, tuple), _("Text nodes have no children")
assert len(node) or node.text, "Node has no children"
if node.text:
return (node, "text")
else:
return node[0]
def getNextSibling(self, node):
if isinstance(node, tuple): # Text node
node, key = node
assert key in ("text", "tail"), _("Text nodes are text or tail, found %s") % key
if key == "text":
# XXX: we cannot use a "bool(node) and node[0] or None" construct here
# because node[0] might evaluate to False if it has no child element
if len(node):
return node[0]
else:
return None
else: # tail
return node.getnext()
return (node, "tail") if node.tail else node.getnext()
def getParentNode(self, node):
if isinstance(node, tuple): # Text node
node, key = node
assert key in ("text", "tail"), _("Text nodes are text or tail, found %s") % key
if key == "text":
return node
# else: fallback to "normal" processing
return node.getparent()
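# Usage sketch (added; not part of the original module): walking an lxml document with the
# TreeWalker defined above.  The markup and variable names are hypothetical.
if __name__ == '__main__':
    _doc = etree.fromstring('<html><body><p>hi</p></body></html>').getroottree()
    for _token in TreeWalker(_doc):
        print(_token)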
|
gpl-3.0
|
samueldotj/TeeRISC-Simulator
|
src/cpu/CheckerCPU.py
|
69
|
2022
|
# Copyright (c) 2007 The Regents of The University of Michigan
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Authors: Nathan Binkert
from m5.params import *
from BaseCPU import BaseCPU
class CheckerCPU(BaseCPU):
type = 'CheckerCPU'
abstract = True
cxx_header = "cpu/checker/cpu.hh"
exitOnError = Param.Bool(False, "Exit on an error")
updateOnError = Param.Bool(False,
"Update the checker with the main CPU's state on an error")
warnOnlyOnLoadError = Param.Bool(True,
"If a load result is incorrect, only print a warning and do not exit")
|
bsd-3-clause
|
ddico/odoo
|
addons/sale_stock/tests/test_anglo_saxon_valuation.py
|
1
|
45077
|
# -*- coding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.
from odoo.tests import Form, tagged
from odoo.tests.common import SavepointCase
from odoo.exceptions import UserError
@tagged('post_install', '-at_install')
class TestAngloSaxonValuation(SavepointCase):
@classmethod
def setUpClass(cls):
super(TestAngloSaxonValuation, cls).setUpClass()
cls.env.user.company_id.anglo_saxon_accounting = True
cls.product = cls.env['product.product'].create({
'name': 'product',
'type': 'product',
'categ_id': cls.env.ref('product.product_category_all').id,
})
cls.stock_input_account = cls.env['account.account'].create({
'name': 'Stock Input',
'code': 'StockIn',
'user_type_id': cls.env.ref('account.data_account_type_current_assets').id,
})
cls.stock_output_account = cls.env['account.account'].create({
'name': 'Stock Output',
'code': 'StockOut',
'reconcile': True,
'user_type_id': cls.env.ref('account.data_account_type_current_assets').id,
})
cls.stock_valuation_account = cls.env['account.account'].create({
'name': 'Stock Valuation',
'code': 'StockVal',
'user_type_id': cls.env.ref('account.data_account_type_current_assets').id,
})
cls.expense_account = cls.env['account.account'].create({
'name': 'Expense Account',
'code': 'Exp',
'user_type_id': cls.env.ref('account.data_account_type_expenses').id,
})
cls.income_account = cls.env['account.account'].create({
'name': 'Income Account',
'code': 'Inc',
'user_type_id': cls.env.ref('account.data_account_type_expenses').id,
})
cls.stock_journal = cls.env['account.journal'].create({
'name': 'Stock Journal',
'code': 'STJTEST',
'type': 'general',
})
cls.product.write({
'property_account_expense_id': cls.expense_account.id,
'property_account_income_id': cls.income_account.id,
})
cls.product.categ_id.write({
'property_stock_account_input_categ_id': cls.stock_input_account.id,
'property_stock_account_output_categ_id': cls.stock_output_account.id,
'property_stock_valuation_account_id': cls.stock_valuation_account.id,
'property_stock_journal': cls.stock_journal.id,
'property_valuation': 'real_time',
})
cls.stock_location = cls.env['stock.warehouse'].search([], limit=1).lot_stock_id
cls.recv_account = cls.env['account.account'].create({
'name': 'account receivable',
'code': 'RECV',
'user_type_id': cls.env.ref('account.data_account_type_receivable').id,
'reconcile': True,
})
cls.pay_account = cls.env['account.account'].create({
'name': 'account payable',
'code': 'PAY',
'user_type_id': cls.env.ref('account.data_account_type_payable').id,
'reconcile': True,
})
cls.customer = cls.env['res.partner'].create({
'name': 'customer',
'property_account_receivable_id': cls.recv_account.id,
'property_account_payable_id': cls.pay_account.id,
})
cls.journal_sale = cls.env['account.journal'].create({
'name': 'Sale Journal - Test',
'code': 'AJ-SALE',
'type': 'sale',
'company_id': cls.env.user.company_id.id,
})
cls.counterpart_account = cls.env['account.account'].create({
'name': 'Counterpart account',
'code': 'Count',
'user_type_id': cls.env.ref('account.data_account_type_expenses').id,
})
def _inv_adj_two_units(self):
inventory = self.env['stock.inventory'].create({
'name': 'test',
'location_ids': [(4, self.stock_location.id)],
'product_ids': [(4, self.product.id)],
})
inventory.action_start()
self.env['stock.inventory.line'].create({
'inventory_id': inventory.id,
'location_id': self.stock_location.id,
'product_id': self.product.id,
'product_qty': 2,
})
inventory.action_validate()
def _so_and_confirm_two_units(self):
sale_order = self.env['sale.order'].create({
'partner_id': self.customer.id,
'order_line': [
(0, 0, {
'name': self.product.name,
'product_id': self.product.id,
'product_uom_qty': 2.0,
'product_uom': self.product.uom_id.id,
'price_unit': 12,
'tax_id': False, # no love taxes amls
})],
})
sale_order.action_confirm()
return sale_order
def _fifo_in_one_eight_one_ten(self):
# Put two items in stock.
in_move_1 = self.env['stock.move'].create({
'name': 'a',
'product_id': self.product.id,
'location_id': self.env.ref('stock.stock_location_suppliers').id,
'location_dest_id': self.stock_location.id,
'product_uom': self.product.uom_id.id,
'product_uom_qty': 1,
'price_unit': 8,
})
in_move_1._action_confirm()
in_move_1.quantity_done = 1
in_move_1._action_done()
in_move_2 = self.env['stock.move'].create({
'name': 'a',
'product_id': self.product.id,
'location_id': self.env.ref('stock.stock_location_suppliers').id,
'location_dest_id': self.stock_location.id,
'product_uom': self.product.uom_id.id,
'product_uom_qty': 1,
'price_unit': 10,
})
in_move_2._action_confirm()
in_move_2.quantity_done = 1
in_move_2._action_done()
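# Added note: after this helper runs, two units are in stock as two FIFO valuation layers,
# one unit at 8 and one unit at 10; the FIFO tests below rely on that ordering.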
# -------------------------------------------------------------------------
# Standard Ordered
# -------------------------------------------------------------------------
def test_standard_ordered_invoice_pre_delivery(self):
"""Standard price set to 10. Get 2 units in stock. Sale order 2@12. Standard price set
to 14. Invoice 2 without delivering. The amount in Stock OUT and COGS should be 14*2.
"""
self.product.categ_id.property_cost_method = 'standard'
self.product.invoice_policy = 'order'
self.product.standard_price = 10.0
# Put two items in stock.
self._inv_adj_two_units()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# standard price to 14
self.product.standard_price = 14.0
# Invoice the sale order.
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 28)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 28)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 24)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 24)
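# Added note on the expected figures above: Stock OUT and COGS use the standard price in
# effect at invoicing time (14), hence 2 * 14 = 28, while receivable/income use the sale
# price, 2 * 12 = 24.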
def test_standard_ordered_invoice_post_partial_delivery_1(self):
"""Standard price set to 10. Get 2 units in stock. Sale order 2@12. Deliver 1, invoice 1,
change the standard price to 14, deliver one, change the standard price to 16, invoice 1.
The amounts used in Stock OUT and COGS should be 10 then 14."""
self.product.categ_id.property_cost_method = 'standard'
self.product.invoice_policy = 'order'
self.product.standard_price = 10.0
# Put two items in stock.
self._inv_adj_two_units()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Deliver one.
sale_order.picking_ids.move_lines.quantity_done = 1
wiz = sale_order.picking_ids.button_validate()
wiz = Form(self.env[wiz['res_model']].with_context(wiz['context'])).save()
wiz.process()
# Invoice 1
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice_form = Form(invoice)
with invoice_form.invoice_line_ids.edit(0) as invoice_line:
invoice_line.quantity = 1
invoice_form.save()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 10)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 10)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 12)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 12)
# change the standard price to 14
self.product.standard_price = 14.0
# deliver the backorder
sale_order.picking_ids[0].move_lines.quantity_done = 1
sale_order.picking_ids[0].button_validate()
# change the standard price to 16
self.product.standard_price = 16.0
# invoice 1
invoice2 = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice2.post()
amls = invoice2.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 14)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 14)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 12)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 12)
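# Added note on the expected figures above: the two invoices book 10 and then 14 in
# Stock OUT/COGS because each unit is valued at the standard price in effect when it was
# delivered, not at the price in effect when the invoice is posted (16 for the second one).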
def test_standard_ordered_invoice_post_delivery(self):
"""Standard price set to 10. Get 2 units in stock. Sale order 2@12. Deliver 1, change the
standard price to 14, deliver one, invoice 2. The amounts used in Stock OUT and COGS should
be 10 + 14 = 24, each unit being valued at the standard price in effect when it was delivered."""
self.product.categ_id.property_cost_method = 'standard'
self.product.invoice_policy = 'order'
self.product.standard_price = 10
# Put two items in stock.
self._inv_adj_two_units()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Deliver one.
sale_order.picking_ids.move_lines.quantity_done = 1
wiz = sale_order.picking_ids.button_validate()
wiz = Form(self.env[wiz['res_model']].with_context(wiz['context'])).save()
wiz.process()
# change the standard price to 14
self.product.standard_price = 14.0
# deliver the backorder
sale_order.picking_ids.filtered('backorder_id').move_lines.quantity_done = 1
sale_order.picking_ids.filtered('backorder_id').button_validate()
# Invoice the sale order.
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 24)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 24)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 24)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 24)
# -------------------------------------------------------------------------
# Standard Delivered
# -------------------------------------------------------------------------
def test_standard_delivered_invoice_pre_delivery(self):
"""Not possible to invoice pre delivery."""
self.product.categ_id.property_cost_method = 'standard'
self.product.invoice_policy = 'delivery'
self.product.standard_price = 10
# Put two items in stock.
self._inv_adj_two_units()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Invoice the sale order.
# Nothing delivered = nothing to invoice.
with self.assertRaises(UserError):
sale_order._create_invoices()
def test_standard_delivered_invoice_post_partial_delivery(self):
"""Standard price set to 10. Get 2 units in stock. Sale order 2@12. Deliver 1, invoice 1,
change the standard price to 14, deliver one, change the standard price to 16, invoice 1.
The amounts used in Stock OUT and COGS should be 10 then 14."""
self.product.categ_id.property_cost_method = 'standard'
self.product.invoice_policy = 'delivery'
self.product.standard_price = 10
# Put two items in stock.
self._inv_adj_two_units()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Deliver one.
sale_order.picking_ids.move_lines.quantity_done = 1
wiz = sale_order.picking_ids.button_validate()
wiz = Form(self.env[wiz['res_model']].with_context(wiz['context'])).save()
wiz.process()
# Invoice 1
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice_form = Form(invoice)
with invoice_form.invoice_line_ids.edit(0) as invoice_line:
invoice_line.quantity = 1
invoice_form.save()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 10)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 10)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 12)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 12)
# change the standard price to 14
self.product.standard_price = 14.0
# deliver the backorder
sale_order.picking_ids[0].move_lines.quantity_done = 1
sale_order.picking_ids[0].button_validate()
# change the standard price to 16
self.product.standard_price = 16.0
# invoice 1
invoice2 = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice2.post()
amls = invoice2.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 14)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 14)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 12)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 12)
def test_standard_delivered_invoice_post_delivery(self):
"""Standard price set to 10. Get 2 units in stock. Sale order 2@12. Deliver 1, change the
standard price to 14, deliver one, invoice 2. The amounts used in Stock OUT and COGS should
be 10 + 14 = 24, each unit being valued at the standard price in effect when it was delivered."""
self.product.categ_id.property_cost_method = 'standard'
self.product.invoice_policy = 'delivery'
self.product.standard_price = 10
# Put two items in stock.
self._inv_adj_two_units()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Deliver one.
sale_order.picking_ids.move_lines.quantity_done = 1
wiz = sale_order.picking_ids.button_validate()
wiz = Form(self.env[wiz['res_model']].with_context(wiz['context'])).save()
wiz.process()
# change the standard price to 14
self.product.standard_price = 14.0
# deliver the backorder
sale_order.picking_ids.filtered('backorder_id').move_lines.quantity_done = 1
sale_order.picking_ids.filtered('backorder_id').button_validate()
# Invoice the sale order.
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 24)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 24)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 24)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 24)
# -------------------------------------------------------------------------
# AVCO Ordered
# -------------------------------------------------------------------------
def test_avco_ordered_invoice_pre_delivery(self):
"""Standard price set to 10. Sale order 2@12. Invoice without delivering."""
self.product.categ_id.property_cost_method = 'average'
self.product.invoice_policy = 'order'
self.product.standard_price = 10
# Put two items in stock.
self._inv_adj_two_units()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Invoice the sale order.
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 20)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 20)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 24)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 24)
def test_avco_ordered_invoice_post_partial_delivery(self):
"""Standard price set to 10. Sale order 2@12. Invoice after delivering 1."""
self.product.categ_id.property_cost_method = 'average'
self.product.invoice_policy = 'order'
self.product.standard_price = 10
# Put two items in stock.
self._inv_adj_two_units()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Deliver one.
sale_order.picking_ids.move_lines.quantity_done = 1
wiz = sale_order.picking_ids.button_validate()
wiz = Form(self.env[wiz['res_model']].with_context(wiz['context'])).save()
wiz.process()
# Invoice the sale order.
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 20)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 20)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 24)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 24)
def test_avco_ordered_invoice_post_delivery(self):
"""Standard price set to 10. Sale order 2@12. Invoice after full delivery."""
self.product.categ_id.property_cost_method = 'average'
self.product.invoice_policy = 'order'
self.product.standard_price = 10
# Put two items in stock.
self._inv_adj_two_units()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Deliver both units.
sale_order.picking_ids.move_lines.quantity_done = 2
sale_order.picking_ids.button_validate()
# Invoice the sale order.
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 20)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 20)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 24)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 24)
# -------------------------------------------------------------------------
# AVCO Delivered
# -------------------------------------------------------------------------
def test_avco_delivered_invoice_pre_delivery(self):
"""Standard price set to 10. Sale order 2@12. Invoice without delivering. """
self.product.categ_id.property_cost_method = 'average'
self.product.invoice_policy = 'delivery'
self.product.standard_price = 10
# Put two items in stock.
self._inv_adj_two_units()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Invoice the sale order.
# Nothing delivered = nothing to invoice.
with self.assertRaises(UserError):
sale_order._create_invoices()
def test_avco_delivered_invoice_post_partial_delivery(self):
"""Standard price set to 10. Sale order 2@12. Invoice after delivering 1."""
self.product.categ_id.property_cost_method = 'average'
self.product.invoice_policy = 'delivery'
self.product.standard_price = 10
# Put two items in stock.
self._inv_adj_two_units()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Deliver one.
sale_order.picking_ids.move_lines.quantity_done = 1
wiz = sale_order.picking_ids.button_validate()
wiz = Form(self.env[wiz['res_model']].with_context(wiz['context'])).save()
wiz.process()
# Invoice the sale order.
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 10)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 10)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 12)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 12)
def test_avco_delivered_invoice_post_delivery(self):
"""Standard price set to 10. Sale order 2@12. Invoice after full delivery."""
self.product.categ_id.property_cost_method = 'average'
self.product.invoice_policy = 'delivery'
self.product.standard_price = 10
# Put two items in stock.
self._inv_adj_two_units()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Deliver both units.
sale_order.picking_ids.move_lines.quantity_done = 2
sale_order.picking_ids.button_validate()
# Invoice the sale order.
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 20)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 20)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 24)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 24)
# -------------------------------------------------------------------------
# FIFO Ordered
# -------------------------------------------------------------------------
def test_fifo_ordered_invoice_pre_delivery(self):
"""Receive at 8 then at 10. Sale order 2@12. Invoice without delivering.
No standard price is set; the Stock OUT and COGS amounts are anticipated at 8 per unit, i.e. 16 in total."""
self.product.categ_id.property_cost_method = 'fifo'
self.product.invoice_policy = 'order'
self._fifo_in_one_eight_one_ten()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Invoice the sale order.
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertAlmostEqual(stock_out_aml.credit, 16)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertAlmostEqual(cogs_aml.debit, 16)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 24)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 24)
def test_fifo_ordered_invoice_post_partial_delivery(self):
"""Receive 1@8, 1@10, so 2@12, standard price 12, deliver 1, invoice 2: the COGS amount
should be 20: 8 for the unit really delivered (first FIFO layer) and 12 (the standard price) for the other."""
self.product.categ_id.property_cost_method = 'fifo'
self.product.invoice_policy = 'order'
self._fifo_in_one_eight_one_ten()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Deliver one.
sale_order.picking_ids.move_lines.quantity_done = 1
wiz = sale_order.picking_ids.button_validate()
wiz = Form(self.env[wiz['res_model']].with_context(wiz['context'])).save()
wiz.process()
# update the standard price to 12
self.product.standard_price = 12
# Invoice 2
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice_form = Form(invoice)
with invoice_form.invoice_line_ids.edit(0) as invoice_line:
invoice_line.quantity = 2
invoice_form.save()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 20)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 20)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 24)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 24)
def test_fifo_ordered_invoice_post_delivery(self):
"""Receive at 8 then at 10. Sale order 2@12. Invoice after delivering everything."""
self.product.categ_id.property_cost_method = 'fifo'
self.product.invoice_policy = 'order'
self._fifo_in_one_eight_one_ten()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Deliver both units.
sale_order.picking_ids.move_lines.quantity_done = 2
sale_order.picking_ids.button_validate()
# Invoice the sale order.
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 18)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 18)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 24)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 24)
# -------------------------------------------------------------------------
# FIFO Delivered
# -------------------------------------------------------------------------
def test_fifo_delivered_invoice_pre_delivery(self):
self.product.categ_id.property_cost_method = 'fifo'
self.product.invoice_policy = 'delivery'
self.product.standard_price = 10
self._fifo_in_one_eight_one_ten()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Invoice the sale order.
# Nothing delivered = nothing to invoice.
with self.assertRaises(UserError):
invoice_id = sale_order._create_invoices()
def test_fifo_delivered_invoice_post_partial_delivery(self):
"""Receive 1@8, 1@10, so 2@12, standard price 12, deliver 1, invoice 2: the price used should be 10:
one at 8 and one at 10."""
self.product.categ_id.property_cost_method = 'fifo'
self.product.invoice_policy = 'delivery'
self._fifo_in_one_eight_one_ten()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Deliver one.
sale_order.picking_ids.move_lines.quantity_done = 1
wiz = sale_order.picking_ids.button_validate()
wiz = Form(self.env[wiz['res_model']].with_context(wiz['context'])).save()
wiz.process()
# update the standard price to 12
self.product.standard_price = 12
# Invoice 2
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice_form = Form(invoice)
with invoice_form.invoice_line_ids.edit(0) as invoice_line:
invoice_line.quantity = 2
invoice_form.save()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 20)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 20)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 24)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 24)
def test_fifo_delivered_invoice_post_delivery(self):
"""Receive at 8 then at 10. Sale order 2@12. Invoice after delivering everything."""
self.product.categ_id.property_cost_method = 'fifo'
self.product.invoice_policy = 'delivery'
self.product.standard_price = 10
self._fifo_in_one_eight_one_ten()
# Create and confirm a sale order for 2@12
sale_order = self._so_and_confirm_two_units()
# Deliver two.
sale_order.picking_ids.move_lines.quantity_done = 2
sale_order.picking_ids.button_validate()
# Invoice the sale order.
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# Check the resulting accounting entries
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 18)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 18)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 24)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 24)
def test_fifo_delivered_invoice_post_delivery_2(self):
"""Receive at 8 then at 10. Sale order 10@12 and deliver without receiving the 2 missing.
receive 2@12. Invoice."""
self.product.categ_id.property_cost_method = 'fifo'
self.product.invoice_policy = 'delivery'
self.product.standard_price = 10
in_move_1 = self.env['stock.move'].create({
'name': 'a',
'product_id': self.product.id,
'location_id': self.env.ref('stock.stock_location_suppliers').id,
'location_dest_id': self.stock_location.id,
'product_uom': self.product.uom_id.id,
'product_uom_qty': 8,
'price_unit': 10,
})
in_move_1._action_confirm()
in_move_1.quantity_done = 8
in_move_1._action_done()
# Create and confirm a sale order for 10@12
sale_order = self.env['sale.order'].create({
'partner_id': self.customer.id,
'order_line': [
(0, 0, {
'name': self.product.name,
'product_id': self.product.id,
'product_uom_qty': 10.0,
'product_uom': self.product.uom_id.id,
'price_unit': 12,
'tax_id': False,  # no taxes, to keep the account move lines simple
})],
})
sale_order.action_confirm()
# Deliver 10
sale_order.picking_ids.move_lines.quantity_done = 10
sale_order.picking_ids.button_validate()
# Make the second receipt
in_move_2 = self.env['stock.move'].create({
'name': 'a',
'product_id': self.product.id,
'location_id': self.env.ref('stock.stock_location_suppliers').id,
'location_dest_id': self.stock_location.id,
'product_uom': self.product.uom_id.id,
'product_uom_qty': 2,
'price_unit': 12,
})
in_move_2._action_confirm()
in_move_2.quantity_done = 2
in_move_2._action_done()
self.assertEqual(self.product.stock_valuation_layer_ids[-1].value, -4) # we sent two at 10 but they should have been sent at 12
self.assertEqual(self.product.stock_valuation_layer_ids[-1].quantity, 0)
self.assertEqual(sale_order.order_line.move_ids.stock_valuation_layer_ids[-1].quantity, 0)
# Invoice the sale order.
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# Check the resulting accounting entries
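# Expected figures: COGS = 8 units at 10 + 2 units revalued to 12 = 104; revenue = 10 units at 12 = 120.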
amls = invoice.line_ids
self.assertEqual(len(amls), 4)
stock_out_aml = amls.filtered(lambda aml: aml.account_id == self.stock_output_account)
self.assertEqual(stock_out_aml.debit, 0)
self.assertEqual(stock_out_aml.credit, 104)
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 104)
self.assertEqual(cogs_aml.credit, 0)
receivable_aml = amls.filtered(lambda aml: aml.account_id == self.recv_account)
self.assertEqual(receivable_aml.debit, 120)
self.assertEqual(receivable_aml.credit, 0)
income_aml = amls.filtered(lambda aml: aml.account_id == self.income_account)
self.assertEqual(income_aml.debit, 0)
self.assertEqual(income_aml.credit, 120)
def test_fifo_delivered_invoice_post_delivery_3(self):
"""Receive 5@8, receive 8@12, sale 1@20, deliver, sale 6@20, deliver. Make sure no rouding
issues appear on the second invoice."""
self.product.categ_id.property_cost_method = 'fifo'
self.product.invoice_policy = 'delivery'
# +5@8
in_move_1 = self.env['stock.move'].create({
'name': 'a',
'product_id': self.product.id,
'location_id': self.env.ref('stock.stock_location_suppliers').id,
'location_dest_id': self.stock_location.id,
'product_uom': self.product.uom_id.id,
'product_uom_qty': 5,
'price_unit': 8,
})
in_move_1._action_confirm()
in_move_1.quantity_done = 5
in_move_1._action_done()
# +8@12
in_move_2 = self.env['stock.move'].create({
'name': 'a',
'product_id': self.product.id,
'location_id': self.env.ref('stock.stock_location_suppliers').id,
'location_dest_id': self.stock_location.id,
'product_uom': self.product.uom_id.id,
'product_uom_qty': 8,
'price_unit': 12,
})
in_move_2._action_confirm()
in_move_2.quantity_done = 8
in_move_2._action_done()
# sale 1@20, deliver, invoice
sale_order = self.env['sale.order'].create({
'partner_id': self.customer.id,
'order_line': [
(0, 0, {
'name': self.product.name,
'product_id': self.product.id,
'product_uom_qty': 1,
'product_uom': self.product.uom_id.id,
'price_unit': 20,
'tax_id': False,
})],
})
sale_order.action_confirm()
sale_order.picking_ids.move_lines.quantity_done = 1
sale_order.picking_ids.button_validate()
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# sale 6@20, deliver, invoice
sale_order = self.env['sale.order'].create({
'partner_id': self.customer.id,
'order_line': [
(0, 0, {
'name': self.product.name,
'product_id': self.product.id,
'product_uom_qty': 6,
'product_uom': self.product.uom_id.id,
'price_unit': 20,
'tax_id': False,
})],
})
sale_order.action_confirm()
sale_order.picking_ids.move_lines.quantity_done = 6
sale_order.picking_ids.button_validate()
invoice = sale_order.with_context(default_journal_id=self.journal_sale.id)._create_invoices()
invoice.post()
# check the last anglo saxon invoice line
amls = invoice.line_ids
cogs_aml = amls.filtered(lambda aml: aml.account_id == self.expense_account)
self.assertEqual(cogs_aml.debit, 56)
self.assertEqual(cogs_aml.credit, 0)
|
agpl-3.0
|
spotify/luigi
|
luigi/contrib/hadoop.py
|
4
|
37516
|
# -*- coding: utf-8 -*-
#
# Copyright 2012-2015 Spotify AB
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Run Hadoop MapReduce jobs using Hadoop Streaming. To run a job, you need
to subclass :py:class:`luigi.contrib.hadoop.JobTask` and implement
``mapper`` and ``reducer`` methods. See :doc:`/example_top_artists` for
an example of how to run a Hadoop job.
"""
import abc
import datetime
import glob
import logging
import os
import pickle
import random
import re
import shutil
import signal
from io import StringIO
import subprocess
import sys
import tempfile
import warnings
from hashlib import md5
from itertools import groupby
from luigi import configuration
import luigi
import luigi.task
import luigi.contrib.gcs
import luigi.contrib.hdfs
import luigi.contrib.s3
from luigi.contrib import mrrunner
try:
# See benchmark at https://gist.github.com/mvj3/02dca2bcc8b0ef1bbfb5
import ujson as json
except ImportError:
import json
logger = logging.getLogger('luigi-interface')
_attached_packages = []
TRACKING_RE = re.compile(r'(tracking url|the url to track the job):\s+(?P<url>.+)$')
class hadoop(luigi.task.Config):
pool = luigi.OptionalParameter(
default=None,
description=(
'Hadoop pool to use for Hadoop tasks. To specify pools per task, '
'see BaseHadoopJobTask.pool'
),
)
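# Configuration sketch (hypothetical value): the ``pool`` option above corresponds to a
# ``[hadoop]`` section in luigi's configuration, e.g.
#
#     [hadoop]
#     pool=my-scheduler-pool
#
# and individual tasks can still override it through ``BaseHadoopJobTask.pool``.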
def attach(*packages):
"""
Attach a python package to hadoop map reduce tarballs to make those packages available
on the hadoop cluster.
"""
_attached_packages.extend(packages)
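# Usage sketch (hypothetical packages): calling ``attach(dateutil, my_shared_lib)`` before the
# job is submitted adds both packages to the tarball shipped to the cluster, so mappers and
# reducers can import them on the worker nodes.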
def dereference(f):
if os.path.islink(f):
# by joining with the dirname we are certain to get the absolute path
return dereference(os.path.join(os.path.dirname(f), os.readlink(f)))
else:
return f
def get_extra_files(extra_files):
result = []
for f in extra_files:
if isinstance(f, str):
src, dst = f, os.path.basename(f)
elif isinstance(f, tuple):
src, dst = f
else:
raise Exception('extra_files element must be a string or a (src, dst) tuple, got: %r' % (f,))
if os.path.isdir(src):
src_prefix = os.path.join(src, '')
for base, dirs, files in os.walk(src):
for f in files:
f_src = os.path.join(base, f)
f_src_stripped = f_src[len(src_prefix):]
f_dst = os.path.join(dst, f_src_stripped)
result.append((f_src, f_dst))
else:
result.append((src, dst))
return result
def create_packages_archive(packages, filename):
"""
Create a tar archive which will contain the files for the packages listed in packages.
"""
import tarfile
tar = tarfile.open(filename, "w")
def add(src, dst):
logger.debug('adding to tar: %s -> %s', src, dst)
tar.add(src, dst)
def add_files_for_package(sub_package_path, root_package_path, root_package_name):
for root, dirs, files in os.walk(sub_package_path):
if '.svn' in dirs:
dirs.remove('.svn')
for f in files:
if not f.endswith(".pyc") and not f.startswith("."):
add(dereference(root + "/" + f), root.replace(root_package_path, root_package_name) + "/" + f)
for package in packages:
# Put a submodule's entire package in the archive. This is the
# magic that usually packages everything you need without
# having to attach packages/modules explicitly
if not getattr(package, "__path__", None) and '.' in package.__name__:
package = __import__(package.__name__.rpartition('.')[0], None, None, 'non_empty')
n = package.__name__.replace(".", "/")
if getattr(package, "__path__", None):
# TODO: (BUG) picking only the first path does not
# properly deal with namespaced packages in different
# directories
p = package.__path__[0]
if p.endswith('.egg') and os.path.isfile(p):
raise Exception('egg files not supported!!!')
# Add the entire egg file
# p = p[:p.find('.egg') + 4]
# add(dereference(p), os.path.basename(p))
else:
# include __init__ files from parent projects
root = []
for parent in package.__name__.split('.')[0:-1]:
root.append(parent)
module_name = '.'.join(root)
directory = '/'.join(root)
add(dereference(__import__(module_name, None, None, 'non_empty').__path__[0] + "/__init__.py"),
directory + "/__init__.py")
add_files_for_package(p, p, n)
# include egg-info directories that are parallel:
for egg_info_path in glob.glob(p + '*.egg-info'):
logger.debug(
'Adding package metadata to archive for "%s" found at "%s"',
package.__name__,
egg_info_path
)
add_files_for_package(egg_info_path, p, n)
else:
f = package.__file__
if f.endswith("pyc"):
f = f[:-3] + "py"
if n.find(".") == -1:
add(dereference(f), os.path.basename(f))
else:
add(dereference(f), n + ".py")
tar.close()
def flatten(sequence):
"""
A simple generator which flattens a sequence.
Only one level is flattened.
.. code-block:: python
(1, (2, 3), 4) -> (1, 2, 3, 4)
"""
for item in sequence:
if hasattr(item, "__iter__") and not isinstance(item, str) and not isinstance(item, bytes):
for i in item:
yield i
else:
yield item
class HadoopRunContext:
def __init__(self):
self.job_id = None
self.application_id = None
def __enter__(self):
self.__old_signal = signal.getsignal(signal.SIGTERM)
signal.signal(signal.SIGTERM, self.kill_job)
return self
def kill_job(self, captured_signal=None, stack_frame=None):
if self.application_id:
logger.info('Job interrupted, killing application %s', self.application_id)
subprocess.call(['yarn', 'application', '-kill', self.application_id])
elif self.job_id:
logger.info('Job interrupted, killing job %s', self.job_id)
subprocess.call(['mapred', 'job', '-kill', self.job_id])
if captured_signal is not None:
# adding 128 gives the exit code corresponding to a signal
sys.exit(128 + captured_signal)
def __exit__(self, exc_type, exc_val, exc_tb):
if exc_type is KeyboardInterrupt:
self.kill_job()
signal.signal(signal.SIGTERM, self.__old_signal)
class HadoopJobError(RuntimeError):
def __init__(self, message, out=None, err=None):
super(HadoopJobError, self).__init__(message, out, err)
self.message = message
self.out = out
self.err = err
def __str__(self):
return self.message
def run_and_track_hadoop_job(arglist, tracking_url_callback=None, env=None):
"""
Runs the job by invoking the command from the given arglist.
Finds tracking urls from the output and attempts to fetch errors using those urls if the job fails.
Throws HadoopJobError with information about the error
(including stdout and stderr from the process)
on failure and returns normally otherwise.
:param arglist: the full streaming command to run, as a list of arguments.
:param tracking_url_callback: optional callable invoked with the tracking url once it is found.
:param env: optional environment mapping passed to the subprocess.
:return: a ``(stdout, stderr)`` tuple from the completed process.
"""
logger.info('%s', subprocess.list2cmdline(arglist))
def write_luigi_history(arglist, history):
"""
Writes history to a file in the job's output directory in JSON format.
Currently just for tracking the job ID in a configuration where
no history is stored in the output directory by Hadoop.
"""
history_filename = configuration.get_config().get('core', 'history-filename', '')
if history_filename and '-output' in arglist:
output_dir = arglist[arglist.index('-output') + 1]
f = luigi.contrib.hdfs.HdfsTarget(os.path.join(output_dir, history_filename)).open('w')
f.write(json.dumps(history))
f.close()
def track_process(arglist, tracking_url_callback, env=None):
# Dump stdout to a temp file, poll stderr and log it
temp_stdout = tempfile.TemporaryFile('w+t')
proc = subprocess.Popen(arglist, stdout=temp_stdout, stderr=subprocess.PIPE, env=env, close_fds=True, universal_newlines=True)
# We parse the output to try to find the tracking URL.
# This URL is useful for fetching the logs of the job.
tracking_url = None
job_id = None
application_id = None
err_lines = []
with HadoopRunContext() as hadoop_context:
while proc.poll() is None:
err_line = proc.stderr.readline()
err_lines.append(err_line)
err_line = err_line.strip()
if err_line:
logger.info('%s', err_line)
err_line = err_line.lower()
tracking_url_match = TRACKING_RE.search(err_line)
if tracking_url_match:
tracking_url = tracking_url_match.group('url')
try:
tracking_url_callback(tracking_url)
except Exception as e:
logger.error("Error in tracking_url_callback, disabling! %s", e)
def tracking_url_callback(x):
return None
if err_line.find('running job') != -1:
# hadoop jar output
job_id = err_line.split('running job: ')[-1]
if err_line.find('submitted hadoop job:') != -1:
# scalding output
job_id = err_line.split('submitted hadoop job: ')[-1]
if err_line.find('submitted application ') != -1:
application_id = err_line.split('submitted application ')[-1]
hadoop_context.job_id = job_id
hadoop_context.application_id = application_id
# Read the rest + stdout
err = ''.join(err_lines + [an_err_line for an_err_line in proc.stderr])
temp_stdout.seek(0)
out = ''.join(temp_stdout.readlines())
if proc.returncode == 0:
write_luigi_history(arglist, {'job_id': job_id})
return (out, err)
# Try to fetch error logs if possible
message = 'Streaming job failed with exit code %d. ' % proc.returncode
if not tracking_url:
raise HadoopJobError(message + 'Also, no tracking url found.', out, err)
try:
task_failures = fetch_task_failures(tracking_url)
except Exception as e:
raise HadoopJobError(message + 'Additionally, an error occurred when fetching data from %s: %s' %
(tracking_url, e), out, err)
if not task_failures:
raise HadoopJobError(message + 'Also, could not fetch output from tasks.', out, err)
else:
raise HadoopJobError(message + 'Output from tasks below:\n%s' % task_failures, out, err)
if tracking_url_callback is None:
def tracking_url_callback(x): return None
return track_process(arglist, tracking_url_callback, env)
def fetch_task_failures(tracking_url):
"""
Uses mechanize to fetch the actual task logs from the task tracker.
This is highly opportunistic, and we might not succeed.
So we set a low timeout and hope it works.
If it does not, it's not the end of the world.
TODO: Yarn has a REST API that we should probably use instead:
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html
"""
import mechanize
timeout = 3.0
failures_url = tracking_url.replace('jobdetails.jsp', 'jobfailures.jsp') + '&cause=failed'
logger.debug('Fetching data from %s', failures_url)
b = mechanize.Browser()
b.open(failures_url, timeout=timeout)
links = list(b.links(text_regex='Last 4KB')) # For some reason text_regex='All' doesn't work... no idea why
links = random.sample(links, min(10, len(links))) # Fetch a random subset of all failed tasks, so not to be biased towards the early fails
error_text = []
for link in links:
task_url = link.url.replace('&start=-4097', '&start=-100000') # Increase the offset
logger.debug('Fetching data from %s', task_url)
b2 = mechanize.Browser()
try:
r = b2.open(task_url, timeout=timeout)
data = r.read()
except Exception as e:
logger.debug('Error fetching data from %s: %s', task_url, e)
continue
# Try to get the hex-encoded traceback back from the output
if isinstance(data, bytes):
data = data.decode('utf-8', errors='replace')
for exc in re.findall(r'luigi-exc-hex=[0-9a-f]+', data):
error_text.append('---------- %s:' % task_url)
error_text.append(bytes.fromhex(exc.split('=')[-1]).decode('utf-8', errors='replace'))
return '\n'.join(error_text)
class JobRunner:
run_job = NotImplemented
class HadoopJobRunner(JobRunner):
"""
Takes care of uploading & executing a Hadoop job using Hadoop streaming.
TODO: add code to support Elastic Mapreduce (using boto) and local execution.
"""
def __init__(self, streaming_jar, modules=None, streaming_args=None,
libjars=None, libjars_in_hdfs=None, jobconfs=None,
input_format=None, output_format=None,
end_job_with_atomic_move_dir=True, archives=None):
def get(x, default):
return x is not None and x or default
self.streaming_jar = streaming_jar
self.modules = get(modules, [])
self.streaming_args = get(streaming_args, [])
self.libjars = get(libjars, [])
self.libjars_in_hdfs = get(libjars_in_hdfs, [])
self.archives = get(archives, [])
self.jobconfs = get(jobconfs, {})
self.input_format = input_format
self.output_format = output_format
self.end_job_with_atomic_move_dir = end_job_with_atomic_move_dir
self.tmp_dir = False
def run_job(self, job, tracking_url_callback=None):
if tracking_url_callback is not None:
warnings.warn("tracking_url_callback argument is deprecated, task.set_tracking_url is "
"used instead.", DeprecationWarning)
packages = [luigi] + self.modules + job.extra_modules() + list(_attached_packages)
# find the module containing the job
packages.append(__import__(job.__module__, None, None, 'dummy'))
# find the path to our runner.py
runner_path = mrrunner.__file__
# assume source is next to compiled
if runner_path.endswith("pyc"):
runner_path = runner_path[:-3] + "py"
base_tmp_dir = configuration.get_config().get('core', 'tmp-dir', None)
if base_tmp_dir:
warnings.warn("The core.tmp-dir configuration item is"
" deprecated, please use the TMPDIR"
" environment variable if you wish"
" to control where luigi.contrib.hadoop may"
" create temporary files and directories.")
self.tmp_dir = os.path.join(base_tmp_dir, 'hadoop_job_%016x' % random.getrandbits(64))
os.makedirs(self.tmp_dir)
else:
self.tmp_dir = tempfile.mkdtemp()
logger.debug("Tmp dir: %s", self.tmp_dir)
# build arguments
config = configuration.get_config()
python_executable = config.get('hadoop', 'python-executable', 'python')
runner_arg = 'mrrunner.pex' if job.package_binary is not None else 'mrrunner.py'
command = '{0} {1} {{step}}'.format(python_executable, runner_arg)
map_cmd = command.format(step='map')
cmb_cmd = command.format(step='combiner')
red_cmd = command.format(step='reduce')
output_final = job.output().path
# atomic output: replace output with a temporary work directory
if self.end_job_with_atomic_move_dir:
illegal_targets = (
luigi.contrib.s3.S3FlagTarget, luigi.contrib.gcs.GCSFlagTarget)
if isinstance(job.output(), illegal_targets):
raise TypeError("end_job_with_atomic_move_dir is not supported"
" for {}".format(illegal_targets))
output_hadoop = '{output}-temp-{time}'.format(
output=output_final,
time=datetime.datetime.now().isoformat().replace(':', '-'))
else:
output_hadoop = output_final
arglist = luigi.contrib.hdfs.load_hadoop_cmd() + ['jar', self.streaming_jar]
# 'libjars' is a generic option, so place it first
libjars = [libjar for libjar in self.libjars]
for libjar in self.libjars_in_hdfs:
run_cmd = luigi.contrib.hdfs.load_hadoop_cmd() + ['fs', '-get', libjar, self.tmp_dir]
logger.debug(subprocess.list2cmdline(run_cmd))
subprocess.call(run_cmd)
libjars.append(os.path.join(self.tmp_dir, os.path.basename(libjar)))
if libjars:
arglist += ['-libjars', ','.join(libjars)]
# 'archives' is also a generic option
archives = []
extra_archives = job.extra_archives()
if self.archives:
archives = self.archives
if extra_archives:
archives += extra_archives
if archives:
arglist += ['-archives', ','.join(archives)]
# Add static files and directories
extra_files = get_extra_files(job.extra_files())
files = []
for src, dst in extra_files:
dst_tmp = '%s_%09d' % (dst.replace('/', '_'), random.randint(0, 999999999))
files += ['%s#%s' % (src, dst_tmp)]
# -files doesn't support subdirectories, so we need to create the dst_tmp -> dst manually
job.add_link(dst_tmp, dst)
if files:
arglist += ['-files', ','.join(files)]
jobconfs = job.jobconfs()
for k, v in self.jobconfs.items():
jobconfs.append('%s=%s' % (k, v))
for conf in jobconfs:
arglist += ['-D', conf]
arglist += self.streaming_args
# Add additional non-generic per-job streaming args
extra_streaming_args = job.extra_streaming_arguments()
for (arg, value) in extra_streaming_args:
if not arg.startswith('-'): # safety first
arg = '-' + arg
arglist += [arg, value]
arglist += ['-mapper', map_cmd]
if job.combiner != NotImplemented:
arglist += ['-combiner', cmb_cmd]
if job.reducer != NotImplemented:
arglist += ['-reducer', red_cmd]
packages_fn = 'mrrunner.pex' if job.package_binary is not None else 'packages.tar'
files = [
runner_path if job.package_binary is None else None,
os.path.join(self.tmp_dir, packages_fn),
os.path.join(self.tmp_dir, 'job-instance.pickle'),
]
for f in filter(None, files):
arglist += ['-file', f]
if self.output_format:
arglist += ['-outputformat', self.output_format]
if self.input_format:
arglist += ['-inputformat', self.input_format]
allowed_input_targets = (
luigi.contrib.hdfs.HdfsTarget,
luigi.contrib.s3.S3Target,
luigi.contrib.gcs.GCSTarget)
for target in luigi.task.flatten(job.input_hadoop()):
if not isinstance(target, allowed_input_targets):
raise TypeError('target must be one of: {}'.format(
allowed_input_targets))
arglist += ['-input', target.path]
allowed_output_targets = (
luigi.contrib.hdfs.HdfsTarget,
luigi.contrib.s3.S3FlagTarget,
luigi.contrib.gcs.GCSFlagTarget)
if not isinstance(job.output(), allowed_output_targets):
raise TypeError('output must be one of: {}'.format(
allowed_output_targets))
arglist += ['-output', output_hadoop]
# submit job
if job.package_binary is not None:
shutil.copy(job.package_binary, os.path.join(self.tmp_dir, 'mrrunner.pex'))
else:
create_packages_archive(packages, os.path.join(self.tmp_dir, 'packages.tar'))
job.dump(self.tmp_dir)
run_and_track_hadoop_job(arglist, tracking_url_callback=job.set_tracking_url)
if self.end_job_with_atomic_move_dir:
luigi.contrib.hdfs.HdfsTarget(output_hadoop).move_dir(output_final)
self.finish()
def finish(self):
# FIXME: check for isdir?
if self.tmp_dir and os.path.exists(self.tmp_dir):
logger.debug('Removing directory %s', self.tmp_dir)
shutil.rmtree(self.tmp_dir)
def __del__(self):
self.finish()
class DefaultHadoopJobRunner(HadoopJobRunner):
"""
The default job runner just reads the streaming jar location from config and builds a plain HadoopJobRunner from it.
"""
def __init__(self):
config = configuration.get_config()
streaming_jar = config.get('hadoop', 'streaming-jar')
super(DefaultHadoopJobRunner, self).__init__(streaming_jar=streaming_jar)
# TODO: add more configurable options
class LocalJobRunner(JobRunner):
"""
Will run the job locally.
This is useful for debugging and also unit testing. Tries to mimic Hadoop Streaming.
TODO: integrate with JobTask
"""
def __init__(self, samplelines=None):
self.samplelines = samplelines
def sample(self, input_stream, n, output):
for i, line in enumerate(input_stream):
if n is not None and i >= n:
break
output.write(line)
def group(self, input_stream):
output = StringIO()
lines = []
for i, line in enumerate(input_stream):
parts = line.rstrip('\n').split('\t')
blob = md5(str(i).encode('ascii')).hexdigest() # pseudo-random blob to make sure the input isn't sorted
lines.append((parts[:-1], blob, line))
for _, _, line in sorted(lines):
output.write(line)
output.seek(0)
return output
def run_job(self, job):
map_input = StringIO()
for i in luigi.task.flatten(job.input_hadoop()):
self.sample(i.open('r'), self.samplelines, map_input)
map_input.seek(0)
if job.reducer == NotImplemented:
# Map only job; no combiner, no reducer
map_output = job.output().open('w')
job.run_mapper(map_input, map_output)
map_output.close()
return
# run job now...
map_output = StringIO()
job.run_mapper(map_input, map_output)
map_output.seek(0)
if job.combiner == NotImplemented:
reduce_input = self.group(map_output)
else:
combine_input = self.group(map_output)
combine_output = StringIO()
job.run_combiner(combine_input, combine_output)
combine_output.seek(0)
reduce_input = self.group(combine_output)
reduce_output = job.output().open('w')
job.run_reducer(reduce_input, reduce_output)
reduce_output.close()
class BaseHadoopJobTask(luigi.Task):
pool = luigi.OptionalParameter(default=None, significant=False, positional=False)
# This value can be set to change the default batching increment. Default is 1 for backwards compatibility.
batch_counter_default = 1
final_mapper = NotImplemented
final_combiner = NotImplemented
final_reducer = NotImplemented
mr_priority = NotImplemented
package_binary = None
_counter_dict = {}
task_id = None
def _get_pool(self):
""" Protected method """
if self.pool:
return self.pool
if hadoop().pool:
return hadoop().pool
@abc.abstractmethod
def job_runner(self):
pass
def jobconfs(self):
jcs = []
jcs.append('mapred.job.name=%s' % self)
if self.mr_priority != NotImplemented:
jcs.append('mapred.job.priority=%s' % self.mr_priority())
pool = self._get_pool()
if pool is not None:
# Supporting two schedulers: fair (default) and capacity using the same option
scheduler_type = configuration.get_config().get('hadoop', 'scheduler', 'fair')
if scheduler_type == 'fair':
jcs.append('mapred.fairscheduler.pool=%s' % pool)
elif scheduler_type == 'capacity':
jcs.append('mapred.job.queue.name=%s' % pool)
return jcs
def init_local(self):
"""
Implement here any work needed to set up internal data structures.
You can add extra input using the requires_local/input_local methods.
Anything you set on the object will be pickled and available on the Hadoop nodes.
"""
pass
def init_hadoop(self):
pass
# available formats are "python" and "json".
data_interchange_format = "python"
def run(self):
# The best solution is to store them as lazy `cached_property`, but it
# has extraneous dependency. And `property` is slow (need to be
# calculated every time when called), so we save them as attributes
# directly.
self.serialize = DataInterchange[self.data_interchange_format]['serialize']
self.internal_serialize = DataInterchange[self.data_interchange_format]['internal_serialize']
self.deserialize = DataInterchange[self.data_interchange_format]['deserialize']
self.init_local()
self.job_runner().run_job(self)
def requires_local(self):
"""
Default impl - override this method if you need any local input to be accessible in init().
"""
return []
def requires_hadoop(self):
return self.requires() # default impl
def input_local(self):
return luigi.task.getpaths(self.requires_local())
def input_hadoop(self):
return luigi.task.getpaths(self.requires_hadoop())
def deps(self):
# Overrides the default implementation
return luigi.task.flatten(self.requires_hadoop()) + luigi.task.flatten(self.requires_local())
def on_failure(self, exception):
if isinstance(exception, HadoopJobError):
return """Hadoop job failed with message: {message}
stdout:
{stdout}
stderr:
{stderr}
""".format(message=exception.message, stdout=exception.out, stderr=exception.err)
else:
return super(BaseHadoopJobTask, self).on_failure(exception)
DataInterchange = {
"python": {"serialize": str,
"internal_serialize": repr,
"deserialize": eval},
"json": {"serialize": json.dumps,
"internal_serialize": json.dumps,
"deserialize": json.loads}
}
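# Selection sketch (hypothetical task name): a job opts into json records by overriding the
# class attribute, which switches serialization from str/repr/eval to json.dumps/json.loads:
#
#     class MyJsonJob(JobTask):
#         data_interchange_format = "json"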
class JobTask(BaseHadoopJobTask):
jobconf_truncate = 20000
n_reduce_tasks = 25
reducer = NotImplemented
def jobconfs(self):
jcs = super(JobTask, self).jobconfs()
if self.reducer == NotImplemented:
jcs.append('mapred.reduce.tasks=0')
else:
jcs.append('mapred.reduce.tasks=%s' % self.n_reduce_tasks)
if self.jobconf_truncate >= 0:
jcs.append('stream.jobconf.truncate.limit=%i' % self.jobconf_truncate)
return jcs
def init_mapper(self):
pass
def init_combiner(self):
pass
def init_reducer(self):
pass
def _setup_remote(self):
self._setup_links()
def job_runner(self):
# We recommend that you define a subclass, override this method and set up your own config
"""
Get the MapReduce runner for this job.
If all outputs are HdfsTargets, the DefaultHadoopJobRunner will be used.
Otherwise, the LocalJobRunner which streams all data through the local machine
will be used (great for testing).
"""
outputs = luigi.task.flatten(self.output())
for output in outputs:
if not isinstance(output, luigi.contrib.hdfs.HdfsTarget):
warnings.warn("Job is using one or more non-HdfsTarget outputs" +
" so it will be run in local mode")
return LocalJobRunner()
else:
return DefaultHadoopJobRunner()
def reader(self, input_stream):
"""
Reader is a method which iterates over input lines and outputs records.
The default implementation yields one argument containing the line for each line in the input."""
for line in input_stream:
yield line,
def writer(self, outputs, stdout, stderr=sys.stderr):
"""
Writer format is a method which iterates over the output records
from the reducer and formats them for output.
The default implementation outputs tab separated items.
"""
for output in outputs:
try:
output = flatten(output)
if self.data_interchange_format == "json":
# With json, the parts are already serialized; drop empty parts (the key or the value may be empty).
output = filter(lambda x: x, output)
else:
# json output was already serialized above, so only apply `self.serialize` in the non-json case.
output = map(self.serialize, output)
print("\t".join(output), file=stdout)
except BaseException:
print(output, file=stderr)
raise
def mapper(self, item):
"""
Re-define to process an input item (usually a line of input data).
Defaults to identity mapper that sends all lines to the same reducer.
"""
yield None, item
combiner = NotImplemented
def incr_counter(self, *args, **kwargs):
"""
Increments a Hadoop counter.
Since counters can be a bit slow to update, this batches the updates.
"""
threshold = kwargs.get("threshold", self.batch_counter_default)
if len(args) == 2:
# backwards compatibility with existing hadoop jobs
group_name, count = args
key = (group_name,)
else:
group, name, count = args
key = (group, name)
ct = self._counter_dict.get(key, 0)
ct += count
if ct >= threshold:
new_arg = list(key) + [ct]
self._incr_counter(*new_arg)
ct = 0
self._counter_dict[key] = ct
def _flush_batch_incr_counter(self):
"""
Increments any unflushed counter values.
"""
for key, count in self._counter_dict.items():
if count == 0:
continue
args = list(key) + [count]
self._incr_counter(*args)
self._counter_dict[key] = 0
def _incr_counter(self, *args):
"""
Increments a Hadoop counter.
Note that this seems to be a bit slow, ~1 ms
Don't overuse this function by updating very frequently.
"""
if len(args) == 2:
# backwards compatibility with existing hadoop jobs
group_name, count = args
print('reporter:counter:%s,%s' % (group_name, count), file=sys.stderr)
else:
group, name, count = args
print('reporter:counter:%s,%s,%s' % (group, name, count), file=sys.stderr)
def extra_modules(self):
return [] # can be overridden in subclass
def extra_files(self):
"""
Can be overridden in a subclass.
Each element is either a string, or a pair of two strings (src, dst).
* `src` can be a directory (in which case everything will be copied recursively).
* `dst` can include subdirectories (foo/bar/baz.txt etc)
Uses Hadoop's -files option so that the same file is reused across tasks.
"""
return []
def extra_streaming_arguments(self):
"""
Extra arguments to Hadoop command line.
Return here a list of (parameter, value) tuples.
"""
return []
def extra_archives(self):
"""List of paths to archives """
return []
def add_link(self, src, dst):
if not hasattr(self, '_links'):
self._links = []
self._links.append((src, dst))
def _setup_links(self):
if hasattr(self, '_links'):
missing = []
for src, dst in self._links:
d = os.path.dirname(dst)
if d:
try:
os.makedirs(d)
except OSError:
pass
if not os.path.exists(src):
missing.append(src)
continue
if not os.path.exists(dst):
# If the combiner runs, the file might already exist,
# so no reason to create the link again
os.link(src, dst)
if missing:
raise HadoopJobError(
'Missing files for distributed cache: ' +
', '.join(missing))
def dump(self, directory=''):
"""
Dump instance to file.
"""
with self.no_unpicklable_properties():
file_name = os.path.join(directory, 'job-instance.pickle')
if self.__module__ == '__main__':
d = pickle.dumps(self)
module_name = os.path.basename(sys.argv[0]).rsplit('.', 1)[0]
d = d.replace(b'(c__main__', b"(c" + module_name.encode('ascii'))
open(file_name, "wb").write(d)
else:
pickle.dump(self, open(file_name, "wb"))
def _map_input(self, input_stream):
"""
Iterate over input and call the mapper for each item.
If the job has a parser defined, the return values from the parser will
be passed as arguments to the mapper.
If the input is coded output from a previous run,
the arguments will be split into key and value.
"""
for record in self.reader(input_stream):
for output in self.mapper(*record):
yield output
if self.final_mapper != NotImplemented:
for output in self.final_mapper():
yield output
self._flush_batch_incr_counter()
def _reduce_input(self, inputs, reducer, final=NotImplemented):
"""
Iterate over input, collect values with the same key, and call the reducer for each unique key.
"""
for key, values in groupby(inputs, key=lambda x: self.internal_serialize(x[0])):
for output in reducer(self.deserialize(key), (v[1] for v in values)):
yield output
if final != NotImplemented:
for output in final():
yield output
self._flush_batch_incr_counter()
def run_mapper(self, stdin=sys.stdin, stdout=sys.stdout):
"""
Run the mapper on the hadoop node.
"""
self.init_hadoop()
self.init_mapper()
outputs = self._map_input((line[:-1] for line in stdin))
if self.reducer == NotImplemented:
self.writer(outputs, stdout)
else:
self.internal_writer(outputs, stdout)
def run_reducer(self, stdin=sys.stdin, stdout=sys.stdout):
"""
Run the reducer on the hadoop node.
"""
self.init_hadoop()
self.init_reducer()
outputs = self._reduce_input(self.internal_reader((line[:-1] for line in stdin)), self.reducer, self.final_reducer)
self.writer(outputs, stdout)
def run_combiner(self, stdin=sys.stdin, stdout=sys.stdout):
self.init_hadoop()
self.init_combiner()
outputs = self._reduce_input(self.internal_reader((line[:-1] for line in stdin)), self.combiner, self.final_combiner)
self.internal_writer(outputs, stdout)
def internal_reader(self, input_stream):
"""
Reader which deserializes each part of a tab separated string (``eval`` for the default "python" interchange format).
Yields a tuple of python objects.
"""
for input_line in input_stream:
yield list(map(self.deserialize, input_line.split("\t")))
def internal_writer(self, outputs, stdout):
"""
Writer which outputs the python repr for each item.
"""
for output in outputs:
print("\t".join(map(self.internal_serialize, output)), file=stdout)
|
apache-2.0
|
foss-transportationmodeling/rettina-server
|
flask/local/lib/python2.7/site-packages/whoosh/idsets.py
|
52
|
19132
|
"""
An implementation of an object that acts like a collection of on/off bits.
"""
import operator
from array import array
from bisect import bisect_left, bisect_right, insort
from whoosh.compat import integer_types, izip, izip_longest, next, xrange
from whoosh.util.numeric import bytes_for_bits
# Number of '1' bits in each byte (0-255)
_1SPERBYTE = array('B', [0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, 1, 2,
2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4,
3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 1, 2, 2, 3, 2, 3,
3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5,
5, 6, 5, 6, 6, 7, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4,
3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5,
5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 2, 3, 3, 4, 3, 4, 4, 5,
3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 3, 4,
4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7,
6, 7, 7, 8])
class DocIdSet(object):
"""Base class for a set of positive integers, implementing a subset of the
built-in ``set`` type's interface with extra docid-related methods.
This is a superclass for alternative set implementations to the built-in
``set`` which are more memory-efficient and specialized toward storing
sorted lists of positive integers, though they will inevitably be slower
than ``set`` for most operations since they're pure Python.
"""
def __eq__(self, other):
for a, b in izip(self, other):
if a != b:
return False
return True
def __ne__(self, other):
return not self.__eq__(other)
def __len__(self):
raise NotImplementedError
def __iter__(self):
raise NotImplementedError
def __contains__(self, i):
raise NotImplementedError
def __or__(self, other):
return self.union(other)
def __and__(self, other):
return self.intersection(other)
def __sub__(self, other):
return self.difference(other)
def copy(self):
raise NotImplementedError
def add(self, n):
raise NotImplementedError
def discard(self, n):
raise NotImplementedError
def update(self, other):
add = self.add
for i in other:
add(i)
def intersection_update(self, other):
for n in self:
if n not in other:
self.discard(n)
def difference_update(self, other):
for n in other:
self.discard(n)
def invert_update(self, size):
"""Updates the set in-place to contain numbers in the range
``[0, size)`` except numbers that are in this set.
"""
for i in xrange(size):
if i in self:
self.discard(i)
else:
self.add(i)
def intersection(self, other):
c = self.copy()
c.intersection_update(other)
return c
def union(self, other):
c = self.copy()
c.update(other)
return c
def difference(self, other):
c = self.copy()
c.difference_update(other)
return c
def invert(self, size):
c = self.copy()
c.invert_update(size)
return c
def isdisjoint(self, other):
a = self
b = other
if len(other) < len(self):
a, b = other, self
for num in a:
if num in b:
return False
return True
def before(self, i):
"""Returns the previous integer in the set before ``i``, or None.
"""
raise NotImplementedError
def after(self, i):
"""Returns the next integer in the set after ``i``, or None.
"""
raise NotImplementedError
def first(self):
"""Returns the first (lowest) integer in the set.
"""
raise NotImplementedError
def last(self):
"""Returns the last (highest) integer in the set.
"""
raise NotImplementedError
class BaseBitSet(DocIdSet):
# Methods to override
def byte_count(self):
raise NotImplementedError
def _get_byte(self, i):
raise NotImplementedError
def _iter_bytes(self):
raise NotImplementedError
# Base implementations
def __len__(self):
return sum(_1SPERBYTE[b] for b in self._iter_bytes())
def __iter__(self):
base = 0
for byte in self._iter_bytes():
for i in xrange(8):
if byte & (1 << i):
yield base + i
base += 8
def __nonzero__(self):
return any(n for n in self._iter_bytes())
__bool__ = __nonzero__
def __contains__(self, i):
bucket = i // 8
if bucket >= self.byte_count():
return False
return bool(self._get_byte(bucket) & (1 << (i & 7)))
def first(self):
return self.after(-1)
def last(self):
return self.before(self.byte_count() * 8 + 1)
def before(self, i):
_get_byte = self._get_byte
size = self.byte_count() * 8
if i <= 0:
return None
elif i >= size:
i = size - 1
else:
i -= 1
bucket = i // 8
while i >= 0:
byte = _get_byte(bucket)
if not byte:
bucket -= 1
i = bucket * 8 + 7
continue
if byte & (1 << (i & 7)):
return i
if i % 8 == 0:
bucket -= 1
i -= 1
return None
def after(self, i):
_get_byte = self._get_byte
size = self.byte_count() * 8
if i >= size:
return None
elif i < 0:
i = 0
else:
i += 1
bucket = i // 8
while i < size:
byte = _get_byte(bucket)
if not byte:
bucket += 1
i = bucket * 8
continue
if byte & (1 << (i & 7)):
return i
i += 1
if i % 8 == 0:
bucket += 1
return None
class OnDiskBitSet(BaseBitSet):
"""A DocIdSet backed by an array of bits on disk.
>>> st = RamStorage()
>>> f = st.create_file("test.bin")
>>> bs = BitSet([1, 10, 15, 7, 2])
>>> bytecount = bs.to_disk(f)
>>> f.close()
>>> # ...
>>> f = st.open_file("test.bin")
>>> odbs = OnDiskBitSet(f, 0, bytecount)
>>> list(odbs)
[1, 2, 7, 10, 15]
"""
def __init__(self, dbfile, basepos, bytecount):
"""
:param dbfile: a :class:`~whoosh.filedb.structfile.StructFile` object
to read from.
:param basepos: the base position of the bytes in the given file.
:param bytecount: the number of bytes to use for the bit array.
"""
self._dbfile = dbfile
self._basepos = basepos
self._bytecount = bytecount
def __repr__(self):
return "%s(%s, %d, %d)" % (self.__class__.__name__, self.dbfile,
self._basepos, self.bytecount)
def byte_count(self):
return self._bytecount
def _get_byte(self, n):
return self._dbfile.get_byte(self._basepos + n)
def _iter_bytes(self):
dbfile = self._dbfile
dbfile.seek(self._basepos)
for _ in xrange(self._bytecount):
yield dbfile.read_byte()
class BitSet(BaseBitSet):
"""A DocIdSet backed by an array of bits. This can also be useful as a bit
array (e.g. for a Bloom filter). It is much more memory efficient than a
large built-in set of integers, but wastes memory for sparse sets.
"""
def __init__(self, source=None, size=0):
"""
:param maxsize: the maximum size of the bit array.
:param source: an iterable of positive integers to add to this set.
:param bits: an array of unsigned bytes ("B") to use as the underlying
bit array. This is used by some of the object's methods.
"""
# If the source is a list, tuple, or set, we can guess the size
if not size and isinstance(source, (list, tuple, set, frozenset)):
size = max(source)
bytecount = bytes_for_bits(size)
self.bits = array("B", (0 for _ in xrange(bytecount)))
if source:
add = self.add
for num in source:
add(num)
def __repr__(self):
return "%s(%r)" % (self.__class__.__name__, list(self))
def byte_count(self):
return len(self.bits)
def _get_byte(self, n):
return self.bits[n]
def _iter_bytes(self):
return iter(self.bits)
def _trim(self):
bits = self.bits
last = len(self.bits) - 1
while last >= 0 and not bits[last]:
last -= 1
del self.bits[last + 1:]
def _resize(self, tosize):
curlength = len(self.bits)
newlength = bytes_for_bits(tosize)
if newlength > curlength:
self.bits.extend((0,) * (newlength - curlength))
elif newlength < curlength:
del self.bits[newlength + 1:]
def _zero_extra_bits(self, size):
bits = self.bits
spill = size - ((len(bits) - 1) * 8)
if spill:
mask = 2 ** spill - 1
bits[-1] = bits[-1] & mask
def _logic(self, obj, op, other):
objbits = obj.bits
for i, (byte1, byte2) in enumerate(izip_longest(objbits, other.bits,
fillvalue=0)):
value = op(byte1, byte2) & 0xFF
if i >= len(objbits):
objbits.append(value)
else:
objbits[i] = value
obj._trim()
return obj
def to_disk(self, dbfile):
dbfile.write_array(self.bits)
return len(self.bits)
@classmethod
def from_bytes(cls, bs):
b = cls()
b.bits = array("B", bs)
return b
@classmethod
def from_disk(cls, dbfile, bytecount):
return cls.from_bytes(dbfile.read_array("B", bytecount))
def copy(self):
b = self.__class__()
b.bits = array("B", iter(self.bits))
return b
def clear(self):
for i in xrange(len(self.bits)):
self.bits[i] = 0
def add(self, i):
bucket = i >> 3
if bucket >= len(self.bits):
self._resize(i + 1)
self.bits[bucket] |= 1 << (i & 7)
def discard(self, i):
bucket = i >> 3
self.bits[bucket] &= ~(1 << (i & 7))
def _resize_to_other(self, other):
if isinstance(other, (list, tuple, set, frozenset)):
maxbit = max(other)
if maxbit // 8 > len(self.bits):
self._resize(maxbit)
def update(self, iterable):
self._resize_to_other(iterable)
DocIdSet.update(self, iterable)
def intersection_update(self, other):
if isinstance(other, BitSet):
return self._logic(self, operator.__and__, other)
discard = self.discard
for n in self:
if n not in other:
discard(n)
def difference_update(self, other):
if isinstance(other, BitSet):
return self._logic(self, lambda x, y: x & ~y, other)
discard = self.discard
for n in other:
discard(n)
def invert_update(self, size):
bits = self.bits
for i in xrange(len(bits)):
bits[i] = ~bits[i] & 0xFF
self._zero_extra_bits(size)
def union(self, other):
if isinstance(other, BitSet):
return self._logic(self.copy(), operator.__or__, other)
b = self.copy()
b.update(other)
return b
def intersection(self, other):
if isinstance(other, BitSet):
return self._logic(self.copy(), operator.__and__, other)
return BitSet(source=(n for n in self if n in other))
def difference(self, other):
if isinstance(other, BitSet):
return self._logic(self.copy(), lambda x, y: x & ~y, other)
return BitSet(source=(n for n in self if n not in other))
class SortedIntSet(DocIdSet):
"""A DocIdSet backed by a sorted array of integers.
"""
def __init__(self, source=None, typecode="I"):
if source:
self.data = array(typecode, sorted(source))
else:
self.data = array(typecode)
self.typecode = typecode
def copy(self):
sis = SortedIntSet()
sis.data = array(self.typecode, self.data)
return sis
def size(self):
return len(self.data) * self.data.itemsize
def __repr__(self):
return "%s(%r)" % (self.__class__.__name__, self.data)
def __len__(self):
return len(self.data)
def __iter__(self):
return iter(self.data)
def __nonzero__(self):
return bool(self.data)
__bool__ = __nonzero__
def __contains__(self, i):
data = self.data
if not data or i < data[0] or i > data[-1]:
return False
pos = bisect_left(data, i)
if pos == len(data):
return False
return data[pos] == i
def add(self, i):
data = self.data
if not data or i > data[-1]:
data.append(i)
else:
mn = data[0]
mx = data[-1]
if i == mn or i == mx:
return
elif i > mx:
data.append(i)
elif i < mn:
data.insert(0, i)
else:
pos = bisect_left(data, i)
if data[pos] != i:
data.insert(pos, i)
def discard(self, i):
data = self.data
pos = bisect_left(data, i)
if data[pos] == i:
data.pop(pos)
def clear(self):
self.data = array(self.typecode)
def intersection_update(self, other):
self.data = array(self.typecode, (num for num in self if num in other))
def difference_update(self, other):
self.data = array(self.typecode,
(num for num in self if num not in other))
def intersection(self, other):
return SortedIntSet((num for num in self if num in other))
def difference(self, other):
return SortedIntSet((num for num in self if num not in other))
def first(self):
return self.data[0]
def last(self):
return self.data[-1]
def before(self, i):
data = self.data
pos = bisect_left(data, i)
if pos < 1:
return None
else:
return data[pos - 1]
def after(self, i):
data = self.data
if not data or i >= data[-1]:
return None
elif i < data[0]:
return data[0]
pos = bisect_right(data, i)
return data[pos]
class ReverseIdSet(DocIdSet):
"""
Wraps a DocIdSet object and reverses its semantics, so docs in the wrapped
set are not in this set, and vice-versa.
"""
def __init__(self, idset, limit):
"""
:param idset: the DocIdSet object to wrap.
:param limit: the highest possible ID plus one.
"""
self.idset = idset
self.limit = limit
def __len__(self):
return self.limit - len(self.idset)
def __contains__(self, i):
return i not in self.idset
def __iter__(self):
ids = iter(self.idset)
try:
nx = next(ids)
except StopIteration:
nx = -1
for i in xrange(self.limit):
if i == nx:
try:
nx = next(ids)
except StopIteration:
nx = -1
else:
yield i
def add(self, n):
self.idset.discard(n)
def discard(self, n):
self.idset.add(n)
def first(self):
for i in self:
return i
def last(self):
idset = self.idset
maxid = self.limit - 1
if idset.last() < maxid - 1:
return maxid
for i in xrange(maxid, -1, -1):
if i not in idset:
return i
ROARING_CUTOFF = 1 << 12
class RoaringIdSet(DocIdSet):
"""
Separates IDs into ranges of 2^16 bits, and stores each range in the most
efficient type of doc set, either a BitSet (if the range has >= 2^12 IDs)
or a sorted ID set of 16-bit shorts.
"""
cutoff = 2**12
def __init__(self, source=None):
self.idsets = []
if source:
self.update(source)
def __len__(self):
if not self.idsets:
return 0
return sum(len(idset) for idset in self.idsets)
def __contains__(self, n):
bucket = n >> 16
if bucket >= len(self.idsets):
return False
return (n - (bucket << 16)) in self.idsets[bucket]
def __iter__(self):
for i, idset in self.idsets:
floor = i << 16
for n in idset:
yield floor + n
def _find(self, n):
bucket = n >> 16
floor = bucket << 16
if bucket >= len(self.idsets):
self.idsets.extend([SortedIntSet() for _
in xrange(len(self.idsets), bucket + 1)])
idset = self.idsets[bucket]
return bucket, floor, idset
def add(self, n):
bucket, floor, idset = self._find(n)
oldlen = len(idset)
idset.add(n - floor)
if oldlen <= ROARING_CUTOFF < len(idset):
self.idsets[bucket] = BitSet(idset)
def discard(self, n):
bucket, floor, idset = self._find(n)
oldlen = len(idset)
idset.discard(n - floor)
if oldlen > ROARING_CUTOFF >= len(idset):
self.idsets[bucket] = SortedIntSet(idset)
class MultiIdSet(DocIdSet):
"""Wraps multiple SERIAL sub-DocIdSet objects and presents them as an
aggregated, read-only set.
"""
def __init__(self, idsets, offsets):
"""
:param idsets: a list of DocIdSet objects.
:param offsets: a list of offsets corresponding to the DocIdSet objects
in ``idsets``.
"""
assert len(idsets) == len(offsets)
self.idsets = idsets
self.offsets = offsets
def _document_set(self, n):
offsets = self.offsets
return max(bisect_left(offsets, n), len(self.offsets) - 1)
def _set_and_docnum(self, n):
setnum = self._document_set(n)
offset = self.offsets[setnum]
return self.idsets[setnum], n - offset
def __len__(self):
return sum(len(idset) for idset in self.idsets)
def __iter__(self):
for idset, offset in izip(self.idsets, self.offsets):
for docnum in idset:
yield docnum + offset
def __contains__(self, item):
idset, n = self._set_and_docnum(item)
return n in idset
|
apache-2.0
|
foss-transportationmodeling/rettina-server
|
.env/lib/python2.7/site-packages/whoosh/compat.py
|
72
|
5322
|
import array, sys
# Run time aliasing of Python2/3 differences
def htmlescape(s, quote=True):
# this is html.escape reimplemented with cgi.escape,
# so it works for python 2.x, 3.0 and 3.1
import cgi
s = cgi.escape(s, quote)
if quote:
# python 3.2 also replaces the single quotes:
s = s.replace("'", "&#x27;")
return s
if sys.version_info[0] < 3:
PY3 = False
def b(s):
return s
import cStringIO as StringIO
StringIO = BytesIO = StringIO.StringIO
callable = callable
integer_types = (int, long)
iteritems = lambda o: o.iteritems()
itervalues = lambda o: o.itervalues()
iterkeys = lambda o: o.iterkeys()
from itertools import izip
long_type = long
next = lambda o: o.next()
import cPickle as pickle
from cPickle import dumps, loads, dump, load
string_type = basestring
text_type = unicode
bytes_type = str
unichr = unichr
from urllib import urlretrieve
def byte(num):
return chr(num)
def u(s):
return unicode(s, "unicode_escape")
def with_metaclass(meta, base=object):
class _WhooshBase(base):
__metaclass__ = meta
return _WhooshBase
xrange = xrange
zip_ = zip
def memoryview_(source, offset=None, length=None):
if offset or length:
return buffer(source, offset, length)
else:
return buffer(source)
else:
PY3 = True
import collections
def b(s):
return s.encode("latin-1")
import io
BytesIO = io.BytesIO
callable = lambda o: isinstance(o, collections.Callable)
exec_ = eval("exec")
integer_types = (int,)
iteritems = lambda o: o.items()
itervalues = lambda o: o.values()
iterkeys = lambda o: iter(o.keys())
izip = zip
long_type = int
next = next
import pickle
from pickle import dumps, loads, dump, load
StringIO = io.StringIO
string_type = str
text_type = str
bytes_type = bytes
unichr = chr
from urllib.request import urlretrieve
def byte(num):
return bytes((num,))
def u(s):
if isinstance(s, bytes):
return s.decode("ascii")
return s
def with_metaclass(meta, base=object):
ns = dict(base=base, meta=meta)
exec_("""class _WhooshBase(base, metaclass=meta):
pass""", ns)
return ns["_WhooshBase"]
xrange = range
zip_ = lambda * args: list(zip(*args))
def memoryview_(source, offset=None, length=None):
mv = memoryview(source)
if offset or length:
return mv[offset:offset + length]
else:
return mv
try:
# for python >= 3.2, avoid DeprecationWarning for cgi.escape
from html import escape as htmlescape
except ImportError:
pass
if hasattr(array.array, "tobytes"):
def array_tobytes(arry):
return arry.tobytes()
def array_frombytes(arry, bs):
return arry.frombytes(bs)
else:
def array_tobytes(arry):
return arry.tostring()
def array_frombytes(arry, bs):
return arry.fromstring(bs)
# Implementations missing from older versions of Python
try:
from itertools import permutations # @UnusedImport
except ImportError:
# Python 2.5
def permutations(iterable, r=None):
pool = tuple(iterable)
n = len(pool)
r = n if r is None else r
if r > n:
return
indices = range(n)
cycles = range(n, n - r, -1)
yield tuple(pool[i] for i in indices[:r])
while n:
for i in reversed(range(r)):
cycles[i] -= 1
if cycles[i] == 0:
indices[i:] = indices[i + 1:] + indices[i:i + 1]
cycles[i] = n - i
else:
j = cycles[i]
indices[i], indices[-j] = indices[-j], indices[i]
yield tuple(pool[i] for i in indices[:r])
break
else:
return
try:
# Python 2.6-2.7
from itertools import izip_longest # @UnusedImport
except ImportError:
try:
# Python 3.0
from itertools import zip_longest as izip_longest # @UnusedImport
except ImportError:
# Python 2.5
from itertools import chain, izip, repeat
def izip_longest(*args, **kwds):
fillvalue = kwds.get('fillvalue')
def sentinel(counter=([fillvalue] * (len(args) - 1)).pop):
yield counter()
fillers = repeat(fillvalue)
iters = [chain(it, sentinel(), fillers) for it in args]
try:
for tup in izip(*iters):
yield tup
except IndexError:
pass
try:
from operator import methodcaller # @UnusedImport
except ImportError:
# Python 2.5
def methodcaller(name, *args, **kwargs):
def caller(obj):
return getattr(obj, name)(*args, **kwargs)
return caller
try:
from abc import abstractmethod # @UnusedImport
except ImportError:
# Python 2.5
def abstractmethod(funcobj):
"""A decorator indicating abstract methods.
"""
funcobj.__isabstractmethod__ = True
return funcobj
|
apache-2.0
|
vbocan/Voltcraft-Data-Analyzer
|
DataExport.py
|
1
|
12612
|
#!python3
"""
Project: Voltcraft Data Analyzer
Author: Valer Bocan, PhD <[email protected]>
Last updated: September 14th, 2014
Module description: The VoltcraftDataFile module processes data files containing history of voltage, current and power factor,
as generated by the Voltcraft Energy-Logger 4000.
License: This project is placed in the public domain, hoping that it will be useful to people tinkering with Voltcraft products.
Reference: Voltcraft File Format: http://www2.produktinfo.conrad.com/datenblaetter/125000-149999/125323-da-01-en-Datenprotokoll_SD_card_file_Formatv1_2.pdf
"""
import csv
from datetime import timedelta
from datetime import datetime
def WriteInfoData(filename, info, powerdata, blackoutdata):
"""
Write informational data to a text file
"""
try:
with open(filename, "wt") as fout:
fout.write("Voltcraft Data Analyzer v1.2\n")
fout.write("Valer Bocan, PhD <[email protected]>\n")
fout.write("\n")
fout.write("Initial time on device: {0}\n".format(info["InitialDateTime"]))
fout.write("Unit number: {0}\n".format(info["UnitNumber"]))
fout.write("\n")
fout.write("--- DEVICE BASED STATISTICS (may not be accurate)\n")
fout.write("Total power consumed: {0:.3f} kWh\n".format(info["TotalPowerConsumed"]))
fout.write("History:\n")
fout.write(" Today: {0:.3f} kWh\n".format(info["ConsumptionHistory"][0]))
fout.write(" Yesterday: {0:.3f} kWh\n".format(info["ConsumptionHistory"][1]))
fout.write(" 2 days ago: {0:.3f} kWh\n".format(info["ConsumptionHistory"][2]))
fout.write(" 3 days ago: {0:.3f} kWh\n".format(info["ConsumptionHistory"][3]))
fout.write(" 4 days ago: {0:.3f} kWh\n".format(info["ConsumptionHistory"][4]))
fout.write(" 5 days ago: {0:.3f} kWh\n".format(info["ConsumptionHistory"][5]))
fout.write(" 6 days ago: {0:.3f} kWh\n".format(info["ConsumptionHistory"][6]))
fout.write(" 7 days ago: {0:.3f} kWh\n".format(info["ConsumptionHistory"][7]))
fout.write(" 8 days ago: {0:.3f} kWh\n".format(info["ConsumptionHistory"][8]))
fout.write(" 9 days ago: {0:.3f} kWh\n".format(info["ConsumptionHistory"][9]))
fout.write("\n")
fout.write("Total recorded time: {0}\n".format(GetDurationStringFromMinutes(info["TotalRecordedTime"])))
fout.write("History:\n")
fout.write(" Today: {0}\n".format(GetDurationStringFromHours(info["RecordedTimeHistory"][0])))
fout.write(" Yesterday: {0}\n".format(GetDurationStringFromHours(info["RecordedTimeHistory"][1])))
fout.write(" 2 days ago: {0}\n".format(GetDurationStringFromHours(info["RecordedTimeHistory"][2])))
fout.write(" 3 days ago: {0}\n".format(GetDurationStringFromHours(info["RecordedTimeHistory"][3])))
fout.write(" 4 days ago: {0}\n".format(GetDurationStringFromHours(info["RecordedTimeHistory"][4])))
fout.write(" 5 days ago: {0}\n".format(GetDurationStringFromHours(info["RecordedTimeHistory"][5])))
fout.write(" 6 days ago: {0}\n".format(GetDurationStringFromHours(info["RecordedTimeHistory"][6])))
fout.write(" 7 days ago: {0}\n".format(GetDurationStringFromHours(info["RecordedTimeHistory"][7])))
fout.write(" 8 days ago: {0}\n".format(GetDurationStringFromHours(info["RecordedTimeHistory"][8])))
fout.write(" 9 days ago: {0}\n".format(GetDurationStringFromHours(info["RecordedTimeHistory"][9])))
fout.write("\n")
fout.write("Total time with power consumption: {0}\n".format(GetDurationStringFromMinutes(info["TotalOnTime"])))
fout.write("History:\n")
fout.write(" Today: {0}\n".format(GetDurationStringFromHours(info["OnTimeHistory"][0])))
fout.write(" Yesterday: {0}\n".format(GetDurationStringFromHours(info["OnTimeHistory"][1])))
fout.write(" 2 days ago: {0}\n".format(GetDurationStringFromHours(info["OnTimeHistory"][2])))
fout.write(" 3 days ago: {0}\n".format(GetDurationStringFromHours(info["OnTimeHistory"][3])))
fout.write(" 4 days ago: {0}\n".format(GetDurationStringFromHours(info["OnTimeHistory"][4])))
fout.write(" 5 days ago: {0}\n".format(GetDurationStringFromHours(info["OnTimeHistory"][5])))
fout.write(" 6 days ago: {0}\n".format(GetDurationStringFromHours(info["OnTimeHistory"][6])))
fout.write(" 7 days ago: {0}\n".format(GetDurationStringFromHours(info["OnTimeHistory"][7])))
fout.write(" 8 days ago: {0}\n".format(GetDurationStringFromHours(info["OnTimeHistory"][8])))
fout.write(" 9 days ago: {0}\n".format(GetDurationStringFromHours(info["OnTimeHistory"][9])))
fout.write("\n")
fout.write("Tariff 1: {0}\n".format(info["Tariff1"]))
fout.write("Tariff 2: {0}\n".format(info["Tariff2"]))
fout.write("\n")
fout.write("--- PARAMETER HISTORY\n")
for d in powerdata:
fout.write("[{0}] U={1:02}V I={2:.3f}A cosPHI={3:.2f} P={4:.3f}kW S={4:.3f}kVA\n".format(d["Timestamp"].strftime("%Y-%m-%d %H:%M"), d["Voltage"], d["Current"], d["PowerFactor"], d["Power"], d["ApparentPower"]))
stats1 = GetDataStatistics(powerdata)
fout.write("\n")
fout.write("--- VOLTAGE AND POWER\n")
fout.write("Minimum voltage : {0:.1f}V ({1} occurences, first on {2})\n".format(stats1["MinVoltage"], len(stats1["MinVoltageTimestamps"]), stats1["MinVoltageTimestamps"][0].strftime("%Y-%m-%d %H:%M")))
fout.write("Maximum voltage : {0:.1f}V ({1} occurences, first on {2})\n".format(stats1["MaxVoltage"], len(stats1["MaxVoltageTimestamps"]), stats1["MaxVoltageTimestamps"][0].strftime("%Y-%m-%d %H:%M")))
fout.write("Average voltage : {0:.1f}V\n".format(stats1["AvgVoltage"]))
fout.write("Maximum power : {0:.3f}kW ({1} occurences, first on {2})\n".format(stats1["MaxPower"], len(stats1["MaxPowerTimestamps"]), stats1["MaxPowerTimestamps"][0].strftime("%Y-%m-%d %H:%M")))
fout.write("Maximum apparent power : {0:.3f}kVA ({1} occurences, first on {2})\n".format(stats1["MaxApparentPower"], len(stats1["MaxApparentPowerTimestamps"]), stats1["MaxApparentPowerTimestamps"][0].strftime("%Y-%m-%d %H:%M")))
stats2 = GetBlackoutStatistics(blackoutdata)
fout.write("\n")
fout.write("--- BLACKOUTS\n")
fout.write("{0} blackout(s) for a total of {1}\n".format(stats2["Count"], GetDurationString(stats2["TotalDuration"])))
for b in blackoutdata:
fout.write("[{0}] Blackout for {1}\n".format(b["Timestamp"].strftime("%Y-%m-%d %H:%M"), GetDurationString(b["Duration"])))
fout.write("\n")
stats3 = list(GetPowerStatistics(powerdata))
fout.write("--- POWER CONSUMPTION\n")
for c in stats3:
fout.write("[{0}] - {1:.3f}kWh\n".format(c["Day"].strftime("%Y-%m-%d"), c["Consumption"]))
fout.write(" Recorded: {0}\n".format(GetDurationStringFromMinutes(c["TotalMinutes"])))
fout.write(" Power on: {0} ({1:.1f}%)\n".format(GetDurationStringFromMinutes(c["TotalMinutesWithPowerConsumption"]), c["TotalMinutesWithPowerConsumption"] / c["TotalMinutes"] * 100))
TotalPowerConsumption = sum(item['Consumption'] for item in stats3)
fout.write("\nTOTAL CONSUMPTION : {0:.3f}kWh (avg. {1:.3f}kWh/day)\n".format(TotalPowerConsumption, TotalPowerConsumption / len(stats3)))
TotalRecordedTime = len(powerdata) # minutes
TotalTimeWithPowerConsumption = len(tuple(item for item in powerdata if item['Power'] > 0))
fout.write("Total recorded time : {0}\n".format(GetDurationStringFromMinutes(TotalRecordedTime)))
fout.write("Total time with power consumption : {0} ({1:.1f}%)\n".format(GetDurationStringFromMinutes(TotalTimeWithPowerConsumption), TotalTimeWithPowerConsumption / TotalRecordedTime * 100 ))
fout.write("\nFile generated on: {0}\n".format(datetime.now().strftime("%Y-%m-%d %H:%M:%S")))
except IOError:
raise Exception('Could not write out information file.')
def WriteHistoricData(filename, data):
"""
Write historic data to a CSV file
"""
with open(filename, 'w', newline='') as fp:
wr = csv.writer(fp, delimiter=';')
header = [['Timestamp', 'Voltage (V)', 'Current (A)', 'Power (kW)', 'Apparent power (kVA)']]
wr.writerows(header) # Write header
for d in data:
str = [[d["Timestamp"], d["Voltage"], d["Current"], round(d["Power"], 3), round(d["ApparentPower"], 3)]]
wr.writerows(str)
def WriteBlackoutData(filename, data):
"""
Write blackout data to a CSV file
"""
with open(filename, 'w', newline='') as fp:
wr = csv.writer(fp, delimiter=';')
header = [['Timestamp', 'Duration']]
wr.writerows(header) # Write header
for d in data:
str = [[d["Timestamp"], d["Duration"]]]
wr.writerows(str)
def GetDurationString(duration):
"""
    Convert a duration timedelta into a days:hours:minutes string representation
"""
total_days, total_hours, total_minutes = duration.days, duration.seconds // 3600, duration.seconds // 60 % 60
if total_days >= 1:
return "{0:02}d {1:02}h {2:02}m".format(total_days, total_hours, total_minutes)
elif total_hours >= 1:
return "{0:02}h {1:02}m".format(total_hours, total_minutes)
else:
return "{0:02}m".format(total_minutes)
def GetDurationStringFromMinutes(duration):
return GetDurationString(timedelta(minutes=duration))
def GetDurationStringFromHours(duration):
return GetDurationString(timedelta(hours=duration))
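# Illustrative examples (not part of the original module) of the duration helpers above:
#   GetDurationStringFromMinutes(90) -> "01h 30m"
#   GetDurationStringFromHours(30)   -> "01d 06h 00m"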
def GetDataStatistics(data):
    # Compute minimum voltage and its occurrence times
MinVoltage = min(item['Voltage'] for item in data)
MinVoltageTimestamps = tuple(item['Timestamp'] for item in data if item["Voltage"] == MinVoltage)
    # Compute maximum voltage and its occurrence times
MaxVoltage = max(item['Voltage'] for item in data)
MaxVoltageTimestamps = tuple(item['Timestamp'] for item in data if item["Voltage"] == MaxVoltage)
# Compute average voltage
AvgVoltage = sum(item['Voltage'] for item in data) / len(data)
    # Compute maximum power and its occurrence times
MaxPower = max(item['Power'] for item in data)
MaxPowerTimestamps = tuple(item['Timestamp'] for item in data if item["Power"] == MaxPower)
MaxApparentPower = max(item['ApparentPower'] for item in data)
MaxApparentPowerTimestamps = tuple(item['Timestamp'] for item in data if item["ApparentPower"] == MaxApparentPower)
return {
"AvgVoltage": AvgVoltage,
"MinVoltage":MinVoltage,
"MinVoltageTimestamps":MinVoltageTimestamps,
"MaxVoltage":MaxVoltage,
"MaxVoltageTimestamps":MaxVoltageTimestamps,
"MaxPower":MaxPower,
"MaxPowerTimestamps":MaxPowerTimestamps,
"MaxApparentPower":MaxApparentPower,
"MaxApparentPowerTimestamps":MaxApparentPowerTimestamps
}
def GetBlackoutStatistics(data):
Count = len(data)
TotalDuration = sum((item['Duration'] for item in data), timedelta())
return {"Count":Count, "TotalDuration":TotalDuration}
def GetPowerStatistics(data):
# Determine the unique days in the data log
UniqueOrdinalDays = set(item['Timestamp'].toordinal() for item in data)
for day in sorted(UniqueOrdinalDays):
ConsumptionPerDay = sum(item['Power'] * 1 / 60 for item in data if item['Timestamp'].toordinal() == day)
TotalMinutes = len(tuple(item for item in data if item['Timestamp'].toordinal() == day))
TotalMinutesWithPowerConsumption = len(tuple(item for item in data if item['Timestamp'].toordinal() == day and item['Power'] > 0))
Day = datetime.fromordinal(day)
yield { "Day" : Day, "Consumption" : ConsumptionPerDay, "TotalMinutes": TotalMinutes, "TotalMinutesWithPowerConsumption": TotalMinutesWithPowerConsumption }
|
mit
|
PulsePod/evepod
|
lib/python2.7/site-packages/pip/vendor/html5lib/treewalkers/genshistream.py
|
1730
|
2278
|
from __future__ import absolute_import, division, unicode_literals
from genshi.core import QName
from genshi.core import START, END, XML_NAMESPACE, DOCTYPE, TEXT
from genshi.core import START_NS, END_NS, START_CDATA, END_CDATA, PI, COMMENT
from . import _base
from ..constants import voidElements, namespaces
class TreeWalker(_base.TreeWalker):
def __iter__(self):
# Buffer the events so we can pass in the following one
previous = None
for event in self.tree:
if previous is not None:
for token in self.tokens(previous, event):
yield token
previous = event
# Don't forget the final event!
if previous is not None:
for token in self.tokens(previous, None):
yield token
def tokens(self, event, next):
kind, data, pos = event
if kind == START:
tag, attribs = data
name = tag.localname
namespace = tag.namespace
converted_attribs = {}
for k, v in attribs:
if isinstance(k, QName):
converted_attribs[(k.namespace, k.localname)] = v
else:
converted_attribs[(None, k)] = v
if namespace == namespaces["html"] and name in voidElements:
for token in self.emptyTag(namespace, name, converted_attribs,
not next or next[0] != END
or next[1] != tag):
yield token
else:
yield self.startTag(namespace, name, converted_attribs)
elif kind == END:
name = data.localname
namespace = data.namespace
if name not in voidElements:
yield self.endTag(namespace, name)
elif kind == COMMENT:
yield self.comment(data)
elif kind == TEXT:
for token in self.text(data):
yield token
elif kind == DOCTYPE:
yield self.doctype(*data)
elif kind in (XML_NAMESPACE, DOCTYPE, START_NS, END_NS,
START_CDATA, END_CDATA, PI):
pass
else:
yield self.unknown(kind)
|
apache-2.0
|
systers/postorius
|
src/postorius/tests/test_urls.py
|
1
|
1544
|
# -*- coding: utf-8 -*-
# Copyright (C) 2016-2018 by the Free Software Foundation, Inc.
#
# This file is part of Postorius.
#
# Postorius is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free
# Software Foundation, either version 3 of the License, or (at your option)
# any later version.
# Postorius is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
# You should have received a copy of the GNU General Public License along with
# Postorius. If not, see <http://www.gnu.org/licenses/>.
from django.test import TestCase
from django.urls import reverse, NoReverseMatch
class URLTest(TestCase):
def test_email_allows_slash(self):
try:
reverse('list_member_options', kwargs={
'list_id': 'test.example.com',
'email': 'slashed/are/[email protected]',
})
reverse('remove_role', kwargs={
'list_id': 'test.example.com',
'role': 'subscriber',
'address': 'slashed/are/[email protected]',
})
except NoReverseMatch as e:
self.fail(e)
def test_held_message_url_ends_with_slash(self):
url = reverse('rest_held_message', args=('foo', 0))
self.assertEqual(url[-2:], '0/')
|
gpl-3.0
|
maweki/more-collections
|
setup.py
|
1
|
1949
|
from distutils.core import setup
setup(
name="more_collections",
packages = ['more_collections'],
version="0.3.0",
author="Mario Wenzel",
author_email="[email protected]",
url="https://github.com/maweki/more-collections",
description="more_collections is a Python library providing more collections (multisets, orderable multisets, hashable dictionaries, ...).",
license="MIT",
classifiers=[
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.2",
"Programming Language :: Python :: 3.3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Operating System :: OS Independent",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Education",
"License :: OSI Approved :: MIT License",
"Topic :: Software Development :: Libraries :: Python Modules"
],
keywords='collections multiset frozendict development',
long_description=
"""
This package provides some more collections than the standard collections package.
The package currently provides:
- **puredict**/**frozendict** - a functionally **pure** and **immutable dictionary** that is even **hashable**,
if all keys and values are hashable.
- **multiset**/**frozenmultiset** - a multiset implementation
- **orderable_multiset**/**orderable_frozenmultiset** - a multiset implementation for orderable carriers so that
multisets of those elements themselves are orderable, even including **nestable_orderable_frozenmultiset**
which is a multiset-ordering-extension that gives a total ordering for arbitrarily nested multisets over an orderable carrier.
If you want to see any more collections, contact me, open a ticket (I'll happily implement it) or send in a patch.
See https://github.com/maweki/more-collections for a full guide and more information.
"""
)
|
mit
|
Fusion-Rom/android_external_chromium_org
|
tools/telemetry/telemetry/timeline/process.py
|
45
|
2957
|
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import telemetry.timeline.counter as tracing_counter
import telemetry.timeline.event as event_module
import telemetry.timeline.event_container as event_container
import telemetry.timeline.thread as tracing_thread
class Process(event_container.TimelineEventContainer):
''' The Process represents a single userland process in the trace.
'''
def __init__(self, parent, pid):
super(Process, self).__init__('process %s' % pid, parent)
self.pid = pid
self._threads = {}
self._counters = {}
self._trace_buffer_overflow_event = None
@property
def trace_buffer_did_overflow(self):
return self._trace_buffer_overflow_event is not None
@property
def trace_buffer_overflow_event(self):
return self._trace_buffer_overflow_event
@property
def threads(self):
return self._threads
@property
def counters(self):
return self._counters
def IterChildContainers(self):
for thread in self._threads.itervalues():
yield thread
for counter in self._counters.itervalues():
yield counter
def IterEventsInThisContainer(self, event_type_predicate, event_predicate):
if (not self.trace_buffer_did_overflow or
not event_type_predicate(event_module.TimelineEvent) or
not event_predicate(self._trace_buffer_overflow_event)):
return
yield # pylint: disable=W0101
yield self._trace_buffer_overflow_event
def GetOrCreateThread(self, tid):
thread = self.threads.get(tid, None)
if thread:
return thread
thread = tracing_thread.Thread(self, tid)
self._threads[tid] = thread
return thread
def GetCounter(self, category, name):
counter_id = category + '.' + name
if counter_id in self.counters:
return self.counters[counter_id]
raise ValueError(
'Counter %s not found in process with id %s.' % (counter_id,
self.pid))
def GetOrCreateCounter(self, category, name):
try:
return self.GetCounter(category, name)
except ValueError:
ctr = tracing_counter.Counter(self, category, name)
self._counters[ctr.full_name] = ctr
return ctr
def AutoCloseOpenSlices(self, max_timestamp, thread_time_bounds):
for thread in self._threads.itervalues():
thread.AutoCloseOpenSlices(max_timestamp, thread_time_bounds[thread].max)
def SetTraceBufferOverflowTimestamp(self, timestamp):
# TODO: use instant event for trace_buffer_overflow_event
self._trace_buffer_overflow_event = event_module.TimelineEvent(
"TraceBufferInfo", "trace_buffer_overflowed", timestamp, 0)
def FinalizeImport(self):
for thread in self._threads.itervalues():
thread.FinalizeImport()
for counter in self._counters.itervalues():
counter.FinalizeImport()
|
bsd-3-clause
|
brianzelip/militarization
|
css/basscss/node_modules/pygmentize-bundled/vendor/pygments/build-2.7/pygments/formatters/terminal.py
|
50
|
5401
|
# -*- coding: utf-8 -*-
"""
pygments.formatters.terminal
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Formatter for terminal output with ANSI sequences.
:copyright: Copyright 2006-2014 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import sys
from pygments.formatter import Formatter
from pygments.token import Keyword, Name, Comment, String, Error, \
Number, Operator, Generic, Token, Whitespace
from pygments.console import ansiformat
from pygments.util import get_choice_opt
__all__ = ['TerminalFormatter']
#: Map token types to a tuple of color values for light and dark
#: backgrounds.
TERMINAL_COLORS = {
Token: ('', ''),
Whitespace: ('lightgray', 'darkgray'),
Comment: ('lightgray', 'darkgray'),
Comment.Preproc: ('teal', 'turquoise'),
Keyword: ('darkblue', 'blue'),
Keyword.Type: ('teal', 'turquoise'),
Operator.Word: ('purple', 'fuchsia'),
Name.Builtin: ('teal', 'turquoise'),
Name.Function: ('darkgreen', 'green'),
Name.Namespace: ('_teal_', '_turquoise_'),
Name.Class: ('_darkgreen_', '_green_'),
Name.Exception: ('teal', 'turquoise'),
Name.Decorator: ('darkgray', 'lightgray'),
Name.Variable: ('darkred', 'red'),
Name.Constant: ('darkred', 'red'),
Name.Attribute: ('teal', 'turquoise'),
Name.Tag: ('blue', 'blue'),
String: ('brown', 'brown'),
Number: ('darkblue', 'blue'),
Generic.Deleted: ('red', 'red'),
Generic.Inserted: ('darkgreen', 'green'),
Generic.Heading: ('**', '**'),
Generic.Subheading: ('*purple*', '*fuchsia*'),
Generic.Error: ('red', 'red'),
Error: ('_red_', '_red_'),
}
class TerminalFormatter(Formatter):
r"""
Format tokens with ANSI color sequences, for output in a text console.
Color sequences are terminated at newlines, so that paging the output
works correctly.
The `get_style_defs()` method doesn't do anything special since there is
no support for common styles.
Options accepted:
`bg`
Set to ``"light"`` or ``"dark"`` depending on the terminal's background
(default: ``"light"``).
`colorscheme`
A dictionary mapping token types to (lightbg, darkbg) color names or
``None`` (default: ``None`` = use builtin colorscheme).
`linenos`
Set to ``True`` to have line numbers on the terminal output as well
(default: ``False`` = no line numbers).
"""
name = 'Terminal'
aliases = ['terminal', 'console']
filenames = []
def __init__(self, **options):
Formatter.__init__(self, **options)
self.darkbg = get_choice_opt(options, 'bg',
['light', 'dark'], 'light') == 'dark'
self.colorscheme = options.get('colorscheme', None) or TERMINAL_COLORS
self.linenos = options.get('linenos', False)
self._lineno = 0
def format(self, tokensource, outfile):
# hack: if the output is a terminal and has an encoding set,
# use that to avoid unicode encode problems
if not self.encoding and hasattr(outfile, "encoding") and \
hasattr(outfile, "isatty") and outfile.isatty() and \
sys.version_info < (3,):
self.encoding = outfile.encoding
return Formatter.format(self, tokensource, outfile)
def _write_lineno(self, outfile):
self._lineno += 1
outfile.write("\n%04d: " % self._lineno)
def _format_unencoded_with_lineno(self, tokensource, outfile):
self._write_lineno(outfile)
for ttype, value in tokensource:
if value.endswith("\n"):
self._write_lineno(outfile)
value = value[:-1]
color = self.colorscheme.get(ttype)
while color is None:
ttype = ttype[:-1]
color = self.colorscheme.get(ttype)
if color:
color = color[self.darkbg]
spl = value.split('\n')
for line in spl[:-1]:
self._write_lineno(outfile)
if line:
outfile.write(ansiformat(color, line[:-1]))
if spl[-1]:
outfile.write(ansiformat(color, spl[-1]))
else:
outfile.write(value)
outfile.write("\n")
def format_unencoded(self, tokensource, outfile):
if self.linenos:
self._format_unencoded_with_lineno(tokensource, outfile)
return
for ttype, value in tokensource:
color = self.colorscheme.get(ttype)
while color is None:
ttype = ttype[:-1]
color = self.colorscheme.get(ttype)
if color:
color = color[self.darkbg]
spl = value.split('\n')
for line in spl[:-1]:
if line:
outfile.write(ansiformat(color, line))
outfile.write('\n')
if spl[-1]:
outfile.write(ansiformat(color, spl[-1]))
else:
outfile.write(value)
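# Minimal usage sketch (not part of the original module), assuming a standard
# Pygments installation:
#   from pygments import highlight
#   from pygments.lexers import PythonLexer
#   print(highlight('print("hi")', PythonLexer(), TerminalFormatter(bg='dark', linenos=True)))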
|
gpl-2.0
|
sbidoul/odoo
|
addons/account_check_writing/report/__init__.py
|
446
|
1066
|
# -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import check_print
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
|
agpl-3.0
|
Dhivyap/ansible
|
lib/ansible/modules/network/panos/_panos_query_rules.py
|
41
|
18602
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Ansible module to manage PaloAltoNetworks Firewall
# (c) 2016, techbizdev <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# limitations under the License.
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['deprecated'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: panos_query_rules
short_description: PANOS module that allows searching for security rules in PANW NGFW devices.
description: >
- Security policies allow you to enforce rules and take action, and can be as general or specific as needed. The
policy rules are compared against the incoming traffic in sequence, and because the first rule that matches the
traffic is applied, the more specific rules must precede the more general ones.
author: "Bob Hagen (@rnh556)"
version_added: "2.5"
requirements:
- pan-python can be obtained from PyPI U(https://pypi.org/project/pan-python/)
- pandevice can be obtained from PyPI U(https://pypi.org/project/pandevice/)
    - xmltodict can be obtained from PyPI U(https://pypi.org/project/xmltodict/)
deprecated:
alternative: Use U(https://galaxy.ansible.com/PaloAltoNetworks/paloaltonetworks) instead.
removed_in: "2.12"
why: Consolidating code base.
notes:
- Checkmode is not supported.
- Panorama is supported.
options:
ip_address:
description:
- IP address (or hostname) of PAN-OS firewall or Panorama management console being queried.
required: true
username:
description:
- Username credentials to use for authentication.
default: "admin"
password:
description:
- Password credentials to use for authentication.
required: true
api_key:
description:
- API key that can be used instead of I(username)/I(password) credentials.
application:
description:
- Name of the application or application group to be queried.
source_zone:
description:
- Name of the source security zone to be queried.
source_ip:
description:
- The source IP address to be queried.
source_port:
description:
- The source port to be queried.
destination_zone:
description:
- Name of the destination security zone to be queried.
destination_ip:
description:
- The destination IP address to be queried.
destination_port:
description:
- The destination port to be queried.
protocol:
description:
- The protocol used to be queried. Must be either I(tcp) or I(udp).
choices:
- tcp
- udp
tag_name:
description:
- Name of the rule tag to be queried.
devicegroup:
description:
- The Panorama device group in which to conduct the query.
'''
EXAMPLES = '''
- name: search for rules with tcp/3306
panos_query_rules:
ip_address: '{{ ip_address }}'
username: '{{ username }}'
password: '{{ password }}'
source_zone: 'DevNet'
destination_zone: 'DevVPC'
destination_port: '3306'
protocol: 'tcp'
- name: search devicegroup for inbound rules to dmz host
panos_query_rules:
ip_address: '{{ ip_address }}'
api_key: '{{ api_key }}'
destination_zone: 'DMZ'
destination_ip: '10.100.42.18'
    devicegroup: 'DeviceGroupA'
- name: search for rules containing a specified rule tag
panos_query_rules:
ip_address: '{{ ip_address }}'
username: '{{ username }}'
password: '{{ password }}'
tag_name: 'ProjectX'
'''
RETURN = '''
# Default return values
'''
from ansible.module_utils.basic import AnsibleModule
try:
import pan.xapi
from pan.xapi import PanXapiError
import pandevice
from pandevice import base
from pandevice import firewall
from pandevice import panorama
from pandevice import objects
from pandevice import policies
import ipaddress
import xmltodict
import json
HAS_LIB = True
except ImportError:
HAS_LIB = False
def get_devicegroup(device, devicegroup):
dg_list = device.refresh_devices()
for group in dg_list:
if isinstance(group, pandevice.panorama.DeviceGroup):
if group.name == devicegroup:
return group
return False
def get_rulebase(device, devicegroup):
# Build the rulebase
if isinstance(device, firewall.Firewall):
rulebase = policies.Rulebase()
device.add(rulebase)
elif isinstance(device, panorama.Panorama):
dg = panorama.DeviceGroup(devicegroup)
device.add(dg)
rulebase = policies.PreRulebase()
dg.add(rulebase)
else:
return False
policies.SecurityRule.refreshall(rulebase)
return rulebase
def get_object(device, dev_group, obj_name):
# Search global address objects
match = device.find(obj_name, objects.AddressObject)
if match:
return match
# Search global address groups
match = device.find(obj_name, objects.AddressGroup)
if match:
return match
# Search Panorama device group
if isinstance(device, pandevice.panorama.Panorama):
# Search device group address objects
match = dev_group.find(obj_name, objects.AddressObject)
if match:
return match
# Search device group address groups
match = dev_group.find(obj_name, objects.AddressGroup)
if match:
return match
return False
def addr_in_obj(addr, obj):
ip = ipaddress.ip_address(addr)
# Process address objects
if isinstance(obj, objects.AddressObject):
if obj.type == 'ip-netmask':
net = ipaddress.ip_network(obj.value)
if ip in net:
return True
if obj.type == 'ip-range':
ip_range = obj.value.split('-')
lower = ipaddress.ip_address(ip_range[0])
upper = ipaddress.ip_address(ip_range[1])
if lower < ip < upper:
return True
return False
def get_services(device, dev_group, svc_list, obj_list):
for svc in svc_list:
# Search global address objects
global_obj_match = device.find(svc, objects.ServiceObject)
if global_obj_match:
obj_list.append(global_obj_match)
# Search global address groups
global_grp_match = device.find(svc, objects.ServiceGroup)
if global_grp_match:
get_services(device, dev_group, global_grp_match.value, obj_list)
# Search Panorama device group
if isinstance(device, pandevice.panorama.Panorama):
# Search device group address objects
dg_obj_match = dev_group.find(svc, objects.ServiceObject)
if dg_obj_match:
obj_list.append(dg_obj_match)
# Search device group address groups
dg_grp_match = dev_group.find(svc, objects.ServiceGroup)
if dg_grp_match:
get_services(device, dev_group, dg_grp_match.value, obj_list)
return obj_list
def port_in_svc(orientation, port, protocol, obj):
# Process address objects
if orientation == 'source':
for x in obj.source_port.split(','):
if '-' in x:
port_range = x.split('-')
lower = int(port_range[0])
upper = int(port_range[1])
if (lower <= int(port) <= upper) and (obj.protocol == protocol):
return True
else:
if port == x and obj.protocol == protocol:
return True
elif orientation == 'destination':
for x in obj.destination_port.split(','):
if '-' in x:
port_range = x.split('-')
lower = int(port_range[0])
upper = int(port_range[1])
if (lower <= int(port) <= upper) and (obj.protocol == protocol):
return True
else:
if port == x and obj.protocol == protocol:
return True
return False
def get_tag(device, dev_group, tag_name):
# Search global address objects
match = device.find(tag_name, objects.Tag)
if match:
return match
# Search Panorama device group
if isinstance(device, panorama.Panorama):
# Search device group address objects
match = dev_group.find(tag_name, objects.Tag)
if match:
return match
return False
def main():
argument_spec = dict(
ip_address=dict(required=True),
password=dict(no_log=True),
username=dict(default='admin'),
api_key=dict(no_log=True),
application=dict(default=None),
source_zone=dict(default=None),
destination_zone=dict(default=None),
source_ip=dict(default=None),
destination_ip=dict(default=None),
source_port=dict(default=None),
destination_port=dict(default=None),
protocol=dict(default=None, choices=['tcp', 'udp']),
tag_name=dict(default=None),
devicegroup=dict(default=None)
)
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False,
required_one_of=[['api_key', 'password']]
)
if not HAS_LIB:
module.fail_json(msg='Missing required libraries.')
ip_address = module.params["ip_address"]
password = module.params["password"]
username = module.params['username']
api_key = module.params['api_key']
application = module.params['application']
source_zone = module.params['source_zone']
source_ip = module.params['source_ip']
source_port = module.params['source_port']
destination_zone = module.params['destination_zone']
destination_ip = module.params['destination_ip']
destination_port = module.params['destination_port']
protocol = module.params['protocol']
tag_name = module.params['tag_name']
devicegroup = module.params['devicegroup']
# Create the device with the appropriate pandevice type
device = base.PanDevice.create_from_device(ip_address, username, password, api_key=api_key)
# Grab the global objects
objects.AddressObject.refreshall(device)
objects.AddressGroup.refreshall(device)
objects.ServiceObject.refreshall(device)
objects.ServiceGroup.refreshall(device)
objects.Tag.refreshall(device)
# If Panorama, validate the devicegroup and grab the devicegroup objects
dev_group = None
if devicegroup and isinstance(device, panorama.Panorama):
dev_group = get_devicegroup(device, devicegroup)
if dev_group:
device.add(dev_group)
objects.AddressObject.refreshall(dev_group)
objects.AddressGroup.refreshall(dev_group)
objects.ServiceObject.refreshall(dev_group)
objects.ServiceGroup.refreshall(dev_group)
objects.Tag.refreshall(dev_group)
else:
module.fail_json(
failed=1,
msg='\'%s\' device group not found in Panorama. Is the name correct?' % devicegroup
)
# Build the rulebase and produce list
rulebase = get_rulebase(device, dev_group)
rulelist = rulebase.children
hitbase = policies.Rulebase()
loose_match = True
# Process each rule
for rule in rulelist:
hitlist = []
if source_zone:
source_zone_match = False
if loose_match and 'any' in rule.fromzone:
source_zone_match = True
else:
for object_string in rule.fromzone:
if object_string == source_zone:
source_zone_match = True
hitlist.append(source_zone_match)
if destination_zone:
destination_zone_match = False
if loose_match and 'any' in rule.tozone:
destination_zone_match = True
else:
for object_string in rule.tozone:
if object_string == destination_zone:
destination_zone_match = True
hitlist.append(destination_zone_match)
if source_ip:
source_ip_match = False
if loose_match and 'any' in rule.source:
source_ip_match = True
else:
for object_string in rule.source:
# Get a valid AddressObject or AddressGroup
obj = get_object(device, dev_group, object_string)
# Otherwise the object_string is not an object and should be handled differently
if obj is False:
if '-' in object_string:
obj = ipaddress.ip_address(source_ip)
source_range = object_string.split('-')
source_lower = ipaddress.ip_address(source_range[0])
source_upper = ipaddress.ip_address(source_range[1])
if source_lower <= obj <= source_upper:
source_ip_match = True
else:
if source_ip == object_string:
source_ip_match = True
if isinstance(obj, objects.AddressObject) and addr_in_obj(source_ip, obj):
source_ip_match = True
elif isinstance(obj, objects.AddressGroup) and obj.static_value:
for member_string in obj.static_value:
member = get_object(device, dev_group, member_string)
if addr_in_obj(source_ip, member):
source_ip_match = True
hitlist.append(source_ip_match)
if destination_ip:
destination_ip_match = False
if loose_match and 'any' in rule.destination:
destination_ip_match = True
else:
for object_string in rule.destination:
# Get a valid AddressObject or AddressGroup
obj = get_object(device, dev_group, object_string)
# Otherwise the object_string is not an object and should be handled differently
if obj is False:
if '-' in object_string:
obj = ipaddress.ip_address(destination_ip)
destination_range = object_string.split('-')
destination_lower = ipaddress.ip_address(destination_range[0])
destination_upper = ipaddress.ip_address(destination_range[1])
if destination_lower <= obj <= destination_upper:
destination_ip_match = True
else:
if destination_ip == object_string:
destination_ip_match = True
if isinstance(obj, objects.AddressObject) and addr_in_obj(destination_ip, obj):
destination_ip_match = True
elif isinstance(obj, objects.AddressGroup) and obj.static_value:
for member_string in obj.static_value:
member = get_object(device, dev_group, member_string)
if addr_in_obj(destination_ip, member):
destination_ip_match = True
hitlist.append(destination_ip_match)
if source_port:
source_port_match = False
orientation = 'source'
if loose_match and (rule.service[0] == 'any'):
source_port_match = True
elif rule.service[0] == 'application-default':
source_port_match = False # Fix this once apps are supported
else:
service_list = []
service_list = get_services(device, dev_group, rule.service, service_list)
for obj in service_list:
if port_in_svc(orientation, source_port, protocol, obj):
source_port_match = True
break
hitlist.append(source_port_match)
if destination_port:
destination_port_match = False
orientation = 'destination'
if loose_match and (rule.service[0] == 'any'):
destination_port_match = True
elif rule.service[0] == 'application-default':
destination_port_match = False # Fix this once apps are supported
else:
service_list = []
service_list = get_services(device, dev_group, rule.service, service_list)
for obj in service_list:
if port_in_svc(orientation, destination_port, protocol, obj):
destination_port_match = True
break
hitlist.append(destination_port_match)
if tag_name:
tag_match = False
if rule.tag:
for object_string in rule.tag:
obj = get_tag(device, dev_group, object_string)
if obj and (obj.name == tag_name):
tag_match = True
hitlist.append(tag_match)
# Add to hit rulebase
if False not in hitlist:
hitbase.add(rule)
# Dump the hit rulebase
if hitbase.children:
output_string = xmltodict.parse(hitbase.element_str())
module.exit_json(
stdout_lines=json.dumps(output_string, indent=2),
msg='%s of %s rules matched' % (hitbase.children.__len__(), rulebase.children.__len__())
)
else:
module.fail_json(msg='No matching rules found.')
if __name__ == '__main__':
main()
|
gpl-3.0
|
jarrahwu/tornado
|
maint/vm/windows/bootstrap.py
|
99
|
3423
|
r"""Installs files needed for tornado testing on windows.
These instructions are compatible with the VMs provided by http://modern.ie.
The bootstrapping script works on the WinXP/IE6 and Win8/IE10 configurations,
although tornado's tests do not pass on XP.
1) Install virtualbox guest additions (from the device menu in virtualbox)
2) Set up a shared folder to the root of your tornado repo. It must be a
read-write mount to use tox, although the tests can be run directly
in a read-only mount. This will probably assign drive letter E:.
3) Install Python 2.7 from python.org.
4) Run this script by double-clicking it, or running
"c:\python27\python.exe bootstrap.py" in a shell.
To run the tests by hand, cd to e:\ and run
c:\python27\python.exe -m tornado.test.runtests
To run the tests with tox, cd to e:\maint\vm\windows and run
c:\python27\scripts\tox
To run under cygwin (which must be installed separately), run
cd /cygdrive/e; python -m tornado.test.runtests
"""
import os
import subprocess
import sys
import urllib
TMPDIR = r'c:\tornado_bootstrap'
PYTHON_VERSIONS = [
(r'c:\python26\python.exe', 'http://www.python.org/ftp/python/2.6.6/python-2.6.6.msi'),
(r'c:\python27\python.exe', 'http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi'),
(r'c:\python32\python.exe', 'http://www.python.org/ftp/python/3.2.3/python-3.2.3.msi'),
(r'c:\python33\python.exe', 'http://www.python.org/ftp/python/3.3.0/python-3.3.0.msi'),
]
SCRIPTS_DIR = r'c:\python27\scripts'
EASY_INSTALL = os.path.join(SCRIPTS_DIR, 'easy_install.exe')
PY_PACKAGES = ['tox', 'virtualenv', 'pip']
def download_to_cache(url, local_name=None):
if local_name is None:
local_name = url.split('/')[-1]
filename = os.path.join(TMPDIR, local_name)
if not os.path.exists(filename):
data = urllib.urlopen(url).read()
with open(filename, 'wb') as f:
f.write(data)
return filename
def main():
if not os.path.exists(TMPDIR):
os.mkdir(TMPDIR)
os.chdir(TMPDIR)
for exe, url in PYTHON_VERSIONS:
if os.path.exists(exe):
print "%s already exists, skipping" % exe
continue
print "Installing %s" % url
filename = download_to_cache(url)
# http://blog.jaraco.com/2012/01/how-i-install-python-on-windows.html
subprocess.check_call(['msiexec', '/i', filename,
'ALLUSERS=1', '/passive'])
if not os.path.exists(EASY_INSTALL):
filename = download_to_cache('http://python-distribute.org/distribute_setup.py')
subprocess.check_call([sys.executable, filename])
subprocess.check_call([EASY_INSTALL] + PY_PACKAGES)
# cygwin's setup.exe doesn't like being run from a script (looks
# UAC-related). If it did, something like this might install it.
# (install python, python-setuptools, python3, and easy_install
# unittest2 (cygwin's python 2 is 2.6))
#filename = download_to_cache('http://cygwin.com/setup.exe')
#CYGTMPDIR = os.path.join(TMPDIR, 'cygwin')
#if not os.path.exists(CYGTMPDIR):
# os.mkdir(CYGTMPDIR)
## http://www.jbmurphy.com/2011/06/16/powershell-script-to-install-cygwin/
#CYGWIN_ARGS = [filename, '-q', '-l', CYGTMPDIR,
# '-s', 'http://mirror.nyi.net/cygwin/', '-R', r'c:\cygwin']
#subprocess.check_call(CYGWIN_ARGS)
if __name__ == '__main__':
main()
|
apache-2.0
|
akintolga/superdesk-core
|
superdesk/macros/update_to_pass_validation.py
|
1
|
2481
|
# -*- coding: utf-8; -*-
#
# This file is part of Superdesk.
#
# Copyright 2013, 2014 Sourcefabric z.u. and contributors.
#
# For the full copyright and license information, please see the
# AUTHORS and LICENSE files distributed with this source code, or
# at https://www.sourcefabric.org/superdesk/license
from superdesk.locators.locators import find_cities
import logging
from apps.archive.common import format_dateline_to_locmmmddsrc
from superdesk.utc import get_date
import superdesk
from superdesk.metadata.item import CONTENT_TYPE
from apps.publish.content.common import ITEM_PUBLISH
logger = logging.getLogger(__name__)
def update_to_pass_validation(item, **kwargs):
"""
This is a test macro that does what is required to ensure that a text item will pass publication validation.
It is intended to be used to test auto publishing, that is publishing directly from ingest.
At the moment virtually all content received from Reuters fails validation.
:param item:
:param kwargs:
:return:
"""
try:
lookup = {'act': ITEM_PUBLISH, 'type': CONTENT_TYPE.TEXT}
validators = superdesk.get_resource_service('validators').get(req=None, lookup=lookup)
if validators.count():
max_slugline_len = validators[0]['schema']['slugline']['maxlength']
max_headline_len = validators[0]['schema']['headline']['maxlength']
item['slugline'] = item['slugline'][:max_slugline_len] \
if len(item['slugline']) > max_slugline_len else item['slugline']
item['headline'] = item['headline'][:max_headline_len] \
if len(item['headline']) > max_headline_len else item['headline']
if 'dateline' not in item:
cities = find_cities(country_code='AU', state_code='NSW')
located = [c for c in cities if c['city'].lower() == 'sydney']
if located:
item['dateline'] = {'date': item['firstcreated'], 'located': located[0]}
item['dateline']['source'] = item['source']
item['dateline']['text'] = format_dateline_to_locmmmddsrc(located[0], get_date(item['firstcreated']),
source=item['source'])
return item
except:
logging.exception('Test update to pass validation macro exception')
name = 'update to pass validation'
callback = update_to_pass_validation
access_type = 'backend'
action_type = 'direct'
|
agpl-3.0
|
bancek/egradebook
|
src/lib/django/template/__init__.py
|
561
|
3247
|
"""
This is the Django template system.
How it works:
The Lexer.tokenize() function converts a template string (i.e., a string containing
markup with custom template tags) to tokens, which can be either plain text
(TOKEN_TEXT), variables (TOKEN_VAR) or block statements (TOKEN_BLOCK).
The Parser() class takes a list of tokens in its constructor, and its parse()
method returns a compiled template -- which is, under the hood, a list of
Node objects.
Each Node is responsible for creating some sort of output -- e.g. simple text
(TextNode), variable values in a given context (VariableNode), results of basic
logic (IfNode), results of looping (ForNode), or anything else. The core Node
types are TextNode, VariableNode, IfNode and ForNode, but plugin modules can
define their own custom node types.
Each Node has a render() method, which takes a Context and returns a string of
the rendered node. For example, the render() method of a Variable Node returns
the variable's value as a string. The render() method of an IfNode returns the
rendered output of whatever was inside the loop, recursively.
The Template class is a convenient wrapper that takes care of template
compilation and rendering.
Usage:
The only thing you should ever use directly in this file is the Template class.
Create a compiled template object with a template_string, then call render()
with a context. In the compilation stage, the TemplateSyntaxError exception
will be raised if the template doesn't have proper syntax.
Sample code:
>>> from django import template
>>> s = u'<html>{% if test %}<h1>{{ varvalue }}</h1>{% endif %}</html>'
>>> t = template.Template(s)
(t is now a compiled template, and its render() method can be called multiple
times with multiple contexts)
>>> c = template.Context({'test':True, 'varvalue': 'Hello'})
>>> t.render(c)
u'<html><h1>Hello</h1></html>'
>>> c = template.Context({'test':False, 'varvalue': 'Hello'})
>>> t.render(c)
u'<html></html>'
"""
# Template lexing symbols
from django.template.base import (ALLOWED_VARIABLE_CHARS, BLOCK_TAG_END,
BLOCK_TAG_START, COMMENT_TAG_END, COMMENT_TAG_START,
FILTER_ARGUMENT_SEPARATOR, FILTER_SEPARATOR, SINGLE_BRACE_END,
SINGLE_BRACE_START, TOKEN_BLOCK, TOKEN_COMMENT, TOKEN_TEXT, TOKEN_VAR,
TRANSLATOR_COMMENT_MARK, UNKNOWN_SOURCE, VARIABLE_ATTRIBUTE_SEPARATOR,
VARIABLE_TAG_END, VARIABLE_TAG_START, filter_re, tag_re)
# Exceptions
from django.template.base import (ContextPopException, InvalidTemplateLibrary,
TemplateDoesNotExist, TemplateEncodingError, TemplateSyntaxError,
VariableDoesNotExist)
# Template parts
from django.template.base import (Context, FilterExpression, Lexer, Node,
NodeList, Parser, RequestContext, Origin, StringOrigin, Template,
TextNode, Token, TokenParser, Variable, VariableNode, constant_string,
filter_raw_string)
# Compiling templates
from django.template.base import (compile_string, resolve_variable,
unescape_string_literal, generic_tag_compiler)
# Library management
from django.template.base import (Library, add_to_builtins, builtins,
get_library, get_templatetags_modules, get_text_list, import_library,
libraries)
__all__ = ('Template', 'Context', 'RequestContext', 'compile_string')
|
gpl-3.0
|
wyom/sympy
|
sympy/utilities/pkgdata.py
|
109
|
1872
|
"""
pkgdata is a simple, extensible way for a package to acquire data file
resources.
The getResource function is equivalent to the standard idioms, such as
the following minimal implementation::
import sys, os
def getResource(identifier, pkgname=__name__):
pkgpath = os.path.dirname(sys.modules[pkgname].__file__)
path = os.path.join(pkgpath, identifier)
return open(os.path.normpath(path), mode='rb')
When a __loader__ is present on the module given by __name__, it will defer
getResource to its get_data implementation and return it as a file-like
object (such as StringIO).
"""
from __future__ import print_function, division
import sys
import os
from sympy.core.compatibility import cStringIO as StringIO
def get_resource(identifier, pkgname=__name__):
"""
Acquire a readable object for a given package name and identifier.
An IOError will be raised if the resource can not be found.
For example::
mydata = get_resource('mypkgdata.jpg').read()
Note that the package name must be fully qualified, if given, such
that it would be found in sys.modules.
In some cases, getResource will return a real file object. In that
case, it may be useful to use its name attribute to get the path
rather than use it as a file-like object. For example, you may
be handing data off to a C API.
"""
mod = sys.modules[pkgname]
fn = getattr(mod, '__file__', None)
if fn is None:
raise IOError("%r has no __file__!")
path = os.path.join(os.path.dirname(fn), identifier)
loader = getattr(mod, '__loader__', None)
if loader is not None:
try:
data = loader.get_data(path)
        except (IOError, AttributeError):
pass
else:
return StringIO(data.decode('utf-8'))
return open(os.path.normpath(path), 'rb')
|
bsd-3-clause
|