#! /usr/bin/env python3
"""
Module difflib -- helpers for computing deltas between objects.
Function get_close_matches(word, possibilities, n=3, cutoff=0.6):
Use SequenceMatcher to return list of the best "good enough" matches.
Function context_diff(a, b):
For two lists of strings, return a delta in context diff format.
Function ndiff(a, b):
Return a delta: the difference between `a` and `b` (lists of strings).
Function restore(delta, which):
Return one of the two sequences that generated an ndiff delta.
Function unified_diff(a, b):
For two lists of strings, return a delta in unified diff format.
Class SequenceMatcher:
A flexible class for comparing pairs of sequences of any type.
Class Differ:
For producing human-readable deltas from sequences of lines of text.
Class HtmlDiff:
For producing HTML side by side comparison with change highlights.
"""
__all__ = ['get_close_matches', 'ndiff', 'restore', 'SequenceMatcher',
'Differ','IS_CHARACTER_JUNK', 'IS_LINE_JUNK', 'context_diff',
'unified_diff', 'HtmlDiff', 'Match']
import warnings
import heapq
from collections import namedtuple as _namedtuple
Match = _namedtuple('Match', 'a b size')
def _calculate_ratio(matches, length):
if length:
return 2.0 * matches / length
return 1.0
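# Worked example (illustrative, not part of the original source): with T the
# total number of elements in both sequences and M the number of matches, the
# value is 2.0*M/T.  For "abcd" vs "bcde" the longest common run is "bcd",
# so M == 3, T == 8, and the ratio is 2*3/8 == 0.75, matching the doctests
# further down in this module.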
class SequenceMatcher:
"""
SequenceMatcher is a flexible class for comparing pairs of sequences of
any type, so long as the sequence elements are hashable. The basic
algorithm predates, and is a little fancier than, an algorithm
published in the late 1980's by Ratcliff and Obershelp under the
hyperbolic name "gestalt pattern matching". The basic idea is to find
the longest contiguous matching subsequence that contains no "junk"
elements (R-O doesn't address junk). The same idea is then applied
recursively to the pieces of the sequences to the left and to the right
of the matching subsequence. This does not yield minimal edit
sequences, but does tend to yield matches that "look right" to people.
SequenceMatcher tries to compute a "human-friendly diff" between two
sequences. Unlike e.g. UNIX(tm) diff, the fundamental notion is the
longest *contiguous* & junk-free matching subsequence. That's what
catches peoples' eyes. The Windows(tm) windiff has another interesting
notion, pairing up elements that appear uniquely in each sequence.
That, and the method here, appear to yield more intuitive difference
reports than does diff. This method appears to be the least vulnerable
to synching up on blocks of "junk lines", though (like blank lines in
ordinary text files, or maybe "<P>" lines in HTML files). That may be
because this is the only method of the 3 that has a *concept* of
"junk" <wink>.
Example, comparing two strings, and considering blanks to be "junk":
>>> s = SequenceMatcher(lambda x: x == " ",
... "private Thread currentThread;",
... "private volatile Thread currentThread;")
>>>
.ratio() returns a float in [0, 1], measuring the "similarity" of the
sequences. As a rule of thumb, a .ratio() value over 0.6 means the
sequences are close matches:
>>> print(round(s.ratio(), 3))
0.866
>>>
If you're only interested in where the sequences match,
.get_matching_blocks() is handy:
>>> for block in s.get_matching_blocks():
... print("a[%d] and b[%d] match for %d elements" % block)
a[0] and b[0] match for 8 elements
a[8] and b[17] match for 21 elements
a[29] and b[38] match for 0 elements
Note that the last tuple returned by .get_matching_blocks() is always a
dummy, (len(a), len(b), 0), and this is the only case in which the last
tuple element (number of elements matched) is 0.
If you want to know how to change the first sequence into the second,
use .get_opcodes():
>>> for opcode in s.get_opcodes():
... print("%6s a[%d:%d] b[%d:%d]" % opcode)
equal a[0:8] b[0:8]
insert a[8:8] b[8:17]
equal a[8:29] b[17:38]
See the Differ class for a fancy human-friendly file differencer, which
uses SequenceMatcher both to compare sequences of lines, and to compare
sequences of characters within similar (near-matching) lines.
See also function get_close_matches() in this module, which shows how
simple code building on SequenceMatcher can be used to do useful work.
Timing: Basic R-O is cubic time worst case and quadratic time expected
case. SequenceMatcher is quadratic time for the worst case and has
expected-case behavior dependent in a complicated way on how many
elements the sequences have in common; best case time is linear.
Methods:
__init__(isjunk=None, a='', b='')
Construct a SequenceMatcher.
set_seqs(a, b)
Set the two sequences to be compared.
set_seq1(a)
Set the first sequence to be compared.
set_seq2(b)
Set the second sequence to be compared.
find_longest_match(alo, ahi, blo, bhi)
Find longest matching block in a[alo:ahi] and b[blo:bhi].
get_matching_blocks()
Return list of triples describing matching subsequences.
get_opcodes()
Return list of 5-tuples describing how to turn a into b.
ratio()
Return a measure of the sequences' similarity (float in [0,1]).
quick_ratio()
Return an upper bound on .ratio() relatively quickly.
real_quick_ratio()
Return an upper bound on ratio() very quickly.
"""
def __init__(self, isjunk=None, a='', b='', autojunk=True):
"""Construct a SequenceMatcher.
Optional arg isjunk is None (the default), or a one-argument
function that takes a sequence element and returns true iff the
element is junk. None is equivalent to passing "lambda x: 0", i.e.
no elements are considered to be junk. For example, pass
lambda x: x in " \\t"
if you're comparing lines as sequences of characters, and don't
want to synch up on blanks or hard tabs.
Optional arg a is the first of two sequences to be compared. By
default, an empty string. The elements of a must be hashable. See
also .set_seqs() and .set_seq1().
Optional arg b is the second of two sequences to be compared. By
default, an empty string. The elements of b must be hashable. See
also .set_seqs() and .set_seq2().
Optional arg autojunk should be set to False to disable the
"automatic junk heuristic" that treats popular elements as junk
(see module documentation for more information).
"""
# Members:
# a
# first sequence
# b
# second sequence; differences are computed as "what do
# we need to do to 'a' to change it into 'b'?"
# b2j
# for x in b, b2j[x] is a list of the indices (into b)
# at which x appears; junk and popular elements do not appear
# fullbcount
# for x in b, fullbcount[x] == the number of times x
# appears in b; only materialized if really needed (used
# only for computing quick_ratio())
# matching_blocks
# a list of (i, j, k) triples, where a[i:i+k] == b[j:j+k];
# ascending & non-overlapping in i and in j; terminated by
# a dummy (len(a), len(b), 0) sentinel
# opcodes
# a list of (tag, i1, i2, j1, j2) tuples, where tag is
# one of
# 'replace' a[i1:i2] should be replaced by b[j1:j2]
# 'delete' a[i1:i2] should be deleted
# 'insert' b[j1:j2] should be inserted
# 'equal' a[i1:i2] == b[j1:j2]
# isjunk
# a user-supplied function taking a sequence element and
# returning true iff the element is "junk" -- this has
# subtle but helpful effects on the algorithm, which I'll
# get around to writing up someday <0.9 wink>.
# DON'T USE! Only __chain_b uses this. Use "in self.bjunk".
# bjunk
# the items in b for which isjunk is True.
# bpopular
# nonjunk items in b treated as junk by the heuristic (if used).
self.isjunk = isjunk
self.a = self.b = None
self.autojunk = autojunk
self.set_seqs(a, b)
def set_seqs(self, a, b):
"""Set the two sequences to be compared.
>>> s = SequenceMatcher()
>>> s.set_seqs("abcd", "bcde")
>>> s.ratio()
0.75
"""
self.set_seq1(a)
self.set_seq2(b)
def set_seq1(self, a):
"""Set the first sequence to be compared.
The second sequence to be compared is not changed.
>>> s = SequenceMatcher(None, "abcd", "bcde")
>>> s.ratio()
0.75
>>> s.set_seq1("bcde")
>>> s.ratio()
1.0
>>>
SequenceMatcher computes and caches detailed information about the
second sequence, so if you want to compare one sequence S against
many sequences, use .set_seq2(S) once and call .set_seq1(x)
repeatedly for each of the other sequences.
See also set_seqs() and set_seq2().
"""
if a is self.a:
return
self.a = a
self.matching_blocks = self.opcodes = None
def set_seq2(self, b):
"""Set the second sequence to be compared.
The first sequence to be compared is not changed.
>>> s = SequenceMatcher(None, "abcd", "bcde")
>>> s.ratio()
0.75
>>> s.set_seq2("abcd")
>>> s.ratio()
1.0
>>>
SequenceMatcher computes and caches detailed information about the
second sequence, so if you want to compare one sequence S against
many sequences, use .set_seq2(S) once and call .set_seq1(x)
repeatedly for each of the other sequences.
See also set_seqs() and set_seq1().
"""
if b is self.b:
return
self.b = b
self.matching_blocks = self.opcodes = None
self.fullbcount = None
self.__chain_b()
# For each element x in b, set b2j[x] to a list of the indices in
# b where x appears; the indices are in increasing order; note that
# the number of times x appears in b is len(b2j[x]) ...
# when self.isjunk is defined, junk elements don't show up in this
# map at all, which stops the central find_longest_match method
# from starting any matching block at a junk element ...
# b2j also does not contain entries for "popular" elements, meaning
# elements that account for more than 1 + 1% of the total elements, and
# when the sequence is reasonably large (>= 200 elements); this can
# be viewed as an adaptive notion of semi-junk, and yields an enormous
# speedup when, e.g., comparing program files with hundreds of
# instances of "return NULL;" ...
# note that this is only called when b changes; so for cross-product
# kinds of matches, it's best to call set_seq2 once, then set_seq1
# repeatedly
def __chain_b(self):
# Because isjunk is a user-defined (not C) function, and we test
# for junk a LOT, it's important to minimize the number of calls.
# Before the tricks described here, __chain_b was by far the most
# time-consuming routine in the whole module! If anyone sees
# Jim Roskind, thank him again for profile.py -- I never would
# have guessed that.
# The first trick is to build b2j ignoring the possibility
# of junk. I.e., we don't call isjunk at all yet. Throwing
# out the junk later is much cheaper than building b2j "right"
# from the start.
b = self.b
self.b2j = b2j = {}
for i, elt in enumerate(b):
indices = b2j.setdefault(elt, [])
indices.append(i)
# Purge junk elements
self.bjunk = junk = set()
isjunk = self.isjunk
if isjunk:
for elt in b2j.keys():
if isjunk(elt):
junk.add(elt)
for elt in junk: # separate loop avoids separate list of keys
del b2j[elt]
# Purge popular elements that are not junk
self.bpopular = popular = set()
n = len(b)
if self.autojunk and n >= 200:
ntest = n // 100 + 1
for elt, idxs in b2j.items():
if len(idxs) > ntest:
popular.add(elt)
for elt in popular: # ditto; as fast for 1% deletion
del b2j[elt]
def isbjunk(self, item):
"Deprecated; use 'item in SequenceMatcher().bjunk'."
warnings.warn("'SequenceMatcher().isbjunk(item)' is deprecated;\n"
"use 'item in SMinstance.bjunk' instead.",
DeprecationWarning, 2)
return item in self.bjunk
def isbpopular(self, item):
"Deprecated; use 'item in SequenceMatcher().bpopular'."
warnings.warn("'SequenceMatcher().isbpopular(item)' is deprecated;\n"
"use 'item in SMinstance.bpopular' instead.",
DeprecationWarning, 2)
return item in self.bpopular
def find_longest_match(self, alo, ahi, blo, bhi):
"""Find longest matching block in a[alo:ahi] and b[blo:bhi].
If isjunk is not defined:
Return (i,j,k) such that a[i:i+k] is equal to b[j:j+k], where
alo <= i <= i+k <= ahi
blo <= j <= j+k <= bhi
and for all (i',j',k') meeting those conditions,
k >= k'
i <= i'
and if i == i', j <= j'
In other words, of all maximal matching blocks, return one that
starts earliest in a, and of all those maximal matching blocks that
start earliest in a, return the one that starts earliest in b.
>>> s = SequenceMatcher(None, " abcd", "abcd abcd")
>>> s.find_longest_match(0, 5, 0, 9)
Match(a=0, b=4, size=5)
If isjunk is defined, first the longest matching block is
determined as above, but with the additional restriction that no
junk element appears in the block. Then that block is extended as
far as possible by matching (only) junk elements on both sides. So
the resulting block never matches on junk except as identical junk
happens to be adjacent to an "interesting" match.
Here's the same example as before, but considering blanks to be
junk. That prevents " abcd" from matching the " abcd" at the tail
end of the second sequence directly. Instead only the "abcd" can
match, and matches the leftmost "abcd" in the second sequence:
>>> s = SequenceMatcher(lambda x: x==" ", " abcd", "abcd abcd")
>>> s.find_longest_match(0, 5, 0, 9)
Match(a=1, b=0, size=4)
If no blocks match, return (alo, blo, 0).
>>> s = SequenceMatcher(None, "ab", "c")
>>> s.find_longest_match(0, 2, 0, 1)
Match(a=0, b=0, size=0)
"""
# CAUTION: stripping common prefix or suffix would be incorrect.
# E.g.,
# ab
# acab
# Longest matching block is "ab", but if common prefix is
# stripped, it's "a" (tied with "b"). UNIX(tm) diff does so
# strip, so ends up claiming that ab is changed to acab by
# inserting "ca" in the middle. That's minimal but unintuitive:
# "it's obvious" that someone inserted "ac" at the front.
# Windiff ends up at the same place as diff, but by pairing up
# the unique 'b's and then matching the first two 'a's.
a, b, b2j, isbjunk = self.a, self.b, self.b2j, self.bjunk.__contains__
besti, bestj, bestsize = alo, blo, 0
# find longest junk-free match
# during an iteration of the loop, j2len[j] = length of longest
# junk-free match ending with a[i-1] and b[j]
j2len = {}
nothing = []
for i in range(alo, ahi):
# look at all instances of a[i] in b; note that because
# b2j has no junk keys, the loop is skipped if a[i] is junk
j2lenget = j2len.get
newj2len = {}
for j in b2j.get(a[i], nothing):
# a[i] matches b[j]
if j < blo:
continue
if j >= bhi:
break
k = newj2len[j] = j2lenget(j-1, 0) + 1
if k > bestsize:
besti, bestj, bestsize = i-k+1, j-k+1, k
j2len = newj2len
# Extend the best by non-junk elements on each end. In particular,
# "popular" non-junk elements aren't in b2j, which greatly speeds
# the inner loop above, but also means "the best" match so far
# doesn't contain any junk *or* popular non-junk elements.
while besti > alo and bestj > blo and \
not isbjunk(b[bestj-1]) and \
a[besti-1] == b[bestj-1]:
besti, bestj, bestsize = besti-1, bestj-1, bestsize+1
while besti+bestsize < ahi and bestj+bestsize < bhi and \
not isbjunk(b[bestj+bestsize]) and \
a[besti+bestsize] == b[bestj+bestsize]:
bestsize += 1
# Now that we have a wholly interesting match (albeit possibly
# empty!), we may as well suck up the matching junk on each
# side of it too. Can't think of a good reason not to, and it
# saves post-processing the (possibly considerable) expense of
# figuring out what to do with it. In the case of an empty
# interesting match, this is clearly the right thing to do,
# because no other kind of match is possible in the regions.
while besti > alo and bestj > blo and \
isbjunk(b[bestj-1]) and \
a[besti-1] == b[bestj-1]:
besti, bestj, bestsize = besti-1, bestj-1, bestsize+1
while besti+bestsize < ahi and bestj+bestsize < bhi and \
isbjunk(b[bestj+bestsize]) and \
a[besti+bestsize] == b[bestj+bestsize]:
bestsize = bestsize + 1
return Match(besti, bestj, bestsize)
def get_matching_blocks(self):
"""Return list of triples describing matching subsequences.
Each triple is of the form (i, j, n), and means that
a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in
i and in j. New in Python 2.5, it's also guaranteed that if
(i, j, n) and (i', j', n') are adjacent triples in the list, and
the second is not the last triple in the list, then i+n != i' or
j+n != j'. IOW, adjacent triples never describe adjacent equal
blocks.
The last triple is a dummy, (len(a), len(b), 0), and is the only
triple with n==0.
>>> s = SequenceMatcher(None, "abxcd", "abcd")
>>> list(s.get_matching_blocks())
[Match(a=0, b=0, size=2), Match(a=3, b=2, size=2), Match(a=5, b=4, size=0)]
"""
if self.matching_blocks is not None:
return self.matching_blocks
la, lb = len(self.a), len(self.b)
# This is most naturally expressed as a recursive algorithm, but
# at least one user bumped into extreme use cases that exceeded
# the recursion limit on their box. So, now we maintain a list
# (`queue`) of blocks we still need to look at, and append partial
# results to `matching_blocks` in a loop; the matches are sorted
# at the end.
queue = [(0, la, 0, lb)]
matching_blocks = []
while queue:
alo, ahi, blo, bhi = queue.pop()
i, j, k = x = self.find_longest_match(alo, ahi, blo, bhi)
# a[alo:i] vs b[blo:j] unknown
# a[i:i+k] same as b[j:j+k]
# a[i+k:ahi] vs b[j+k:bhi] unknown
if k: # if k is 0, there was no matching block
matching_blocks.append(x)
if alo < i and blo < j:
queue.append((alo, i, blo, j))
if i+k < ahi and j+k < bhi:
queue.append((i+k, ahi, j+k, bhi))
matching_blocks.sort()
# It's possible that we have adjacent equal blocks in the
# matching_blocks list now. Starting with 2.5, this code was added
# to collapse them.
i1 = j1 = k1 = 0
non_adjacent = []
for i2, j2, k2 in matching_blocks:
# Is this block adjacent to i1, j1, k1?
if i1 + k1 == i2 and j1 + k1 == j2:
# Yes, so collapse them -- this just increases the length of
# the first block by the length of the second, and the first
# block so lengthened remains the block to compare against.
k1 += k2
else:
# Not adjacent. Remember the first block (k1==0 means it's
# the dummy we started with), and make the second block the
# new block to compare against.
if k1:
non_adjacent.append((i1, j1, k1))
i1, j1, k1 = i2, j2, k2
if k1:
non_adjacent.append((i1, j1, k1))
non_adjacent.append( (la, lb, 0) )
self.matching_blocks = non_adjacent
return map(Match._make, self.matching_blocks)
def get_opcodes(self):
"""Return list of 5-tuples describing how to turn a into b.
Each tuple is of the form (tag, i1, i2, j1, j2). The first tuple
has i1 == j1 == 0, and remaining tuples have i1 == the i2 from the
tuple preceding it, and likewise for j1 == the previous j2.
The tags are strings, with these meanings:
'replace': a[i1:i2] should be replaced by b[j1:j2]
'delete': a[i1:i2] should be deleted.
Note that j1==j2 in this case.
'insert': b[j1:j2] should be inserted at a[i1:i1].
Note that i1==i2 in this case.
'equal': a[i1:i2] == b[j1:j2]
>>> a = "qabxcd"
>>> b = "abycdf"
>>> s = SequenceMatcher(None, a, b)
>>> for tag, i1, i2, j1, j2 in s.get_opcodes():
... print(("%7s a[%d:%d] (%s) b[%d:%d] (%s)" %
... (tag, i1, i2, a[i1:i2], j1, j2, b[j1:j2])))
delete a[0:1] (q) b[0:0] ()
equal a[1:3] (ab) b[0:2] (ab)
replace a[3:4] (x) b[2:3] (y)
equal a[4:6] (cd) b[3:5] (cd)
insert a[6:6] () b[5:6] (f)
"""
if self.opcodes is not None:
return self.opcodes
i = j = 0
self.opcodes = answer = []
for ai, bj, size in self.get_matching_blocks():
# invariant: we've pumped out correct diffs to change
# a[:i] into b[:j], and the next matching block is
# a[ai:ai+size] == b[bj:bj+size]. So we need to pump
# out a diff to change a[i:ai] into b[j:bj], pump out
# the matching block, and move (i,j) beyond the match
tag = ''
if i < ai and j < bj:
tag = 'replace'
elif i < ai:
tag = 'delete'
elif j < bj:
tag = 'insert'
if tag:
answer.append( (tag, i, ai, j, bj) )
i, j = ai+size, bj+size
# the list of matching blocks is terminated by a
# sentinel with size 0
if size:
answer.append( ('equal', ai, i, bj, j) )
return answer
def get_grouped_opcodes(self, n=3):
""" Isolate change clusters by eliminating ranges with no changes.
Return a generator of groups with up to n lines of context.
Each group is in the same format as returned by get_opcodes().
>>> from pprint import pprint
>>> a = list(map(str, range(1,40)))
>>> b = a[:]
>>> b[8:8] = ['i'] # Make an insertion
>>> b[20] += 'x' # Make a replacement
>>> b[23:28] = [] # Make a deletion
>>> b[30] += 'y' # Make another replacement
>>> pprint(list(SequenceMatcher(None,a,b).get_grouped_opcodes()))
[[('equal', 5, 8, 5, 8), ('insert', 8, 8, 8, 9), ('equal', 8, 11, 9, 12)],
[('equal', 16, 19, 17, 20),
('replace', 19, 20, 20, 21),
('equal', 20, 22, 21, 23),
('delete', 22, 27, 23, 23),
('equal', 27, 30, 23, 26)],
[('equal', 31, 34, 27, 30),
('replace', 34, 35, 30, 31),
('equal', 35, 38, 31, 34)]]
"""
codes = self.get_opcodes()
if not codes:
codes = [("equal", 0, 1, 0, 1)]
# Fixup leading and trailing groups if they show no changes.
if codes[0][0] == 'equal':
tag, i1, i2, j1, j2 = codes[0]
codes[0] = tag, max(i1, i2-n), i2, max(j1, j2-n), j2
if codes[-1][0] == 'equal':
tag, i1, i2, j1, j2 = codes[-1]
codes[-1] = tag, i1, min(i2, i1+n), j1, min(j2, j1+n)
nn = n + n
group = []
for tag, i1, i2, j1, j2 in codes:
# End the current group and start a new one whenever
# there is a large range with no changes.
if tag == 'equal' and i2-i1 > nn:
group.append((tag, i1, min(i2, i1+n), j1, min(j2, j1+n)))
yield group
group = []
i1, j1 = max(i1, i2-n), max(j1, j2-n)
group.append((tag, i1, i2, j1, j2))
if group and not (len(group)==1 and group[0][0] == 'equal'):
yield group
def ratio(self):
"""Return a measure of the sequences' similarity (float in [0,1]).
Where T is the total number of elements in both sequences, and
M is the number of matches, this is 2.0*M / T.
Note that this is 1 if the sequences are identical, and 0 if
they have nothing in common.
.ratio() is expensive to compute if you haven't already computed
.get_matching_blocks() or .get_opcodes(), in which case you may
want to try .quick_ratio() or .real_quick_ratio() first to get an
upper bound.
>>> s = SequenceMatcher(None, "abcd", "bcde")
>>> s.ratio()
0.75
>>> s.quick_ratio()
0.75
>>> s.real_quick_ratio()
1.0
"""
matches = sum(triple[-1] for triple in self.get_matching_blocks())
return _calculate_ratio(matches, len(self.a) + len(self.b))
def quick_ratio(self):
"""Return an upper bound on ratio() relatively quickly.
This isn't defined beyond that it is an upper bound on .ratio(), and
is faster to compute.
"""
# viewing a and b as multisets, set matches to the cardinality
# of their intersection; this counts the number of matches
# without regard to order, so is clearly an upper bound
if self.fullbcount is None:
self.fullbcount = fullbcount = {}
for elt in self.b:
fullbcount[elt] = fullbcount.get(elt, 0) + 1
fullbcount = self.fullbcount
# avail[x] is the number of times x appears in 'b' less the
# number of times we've seen it in 'a' so far ... kinda
avail = {}
availhas, matches = avail.__contains__, 0
for elt in self.a:
if availhas(elt):
numb = avail[elt]
else:
numb = fullbcount.get(elt, 0)
avail[elt] = numb - 1
if numb > 0:
matches = matches + 1
return _calculate_ratio(matches, len(self.a) + len(self.b))
def real_quick_ratio(self):
"""Return an upper bound on ratio() very quickly.
This isn't defined beyond that it is an upper bound on .ratio(), and
is faster to compute than either .ratio() or .quick_ratio().
"""
la, lb = len(self.a), len(self.b)
# can't have more matches than the number of elements in the
# shorter sequence
return _calculate_ratio(min(la, lb), la + lb)
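# Illustrative sketch of the intended calling pattern (hypothetical helper,
# not part of difflib): real_quick_ratio() and quick_ratio() are cheap upper
# bounds on ratio(), so a caller screening many candidates can reject most of
# them before paying for the full computation:
#
#     def passes(matcher, threshold):
#         return (matcher.real_quick_ratio() >= threshold and
#                 matcher.quick_ratio() >= threshold and
#                 matcher.ratio() >= threshold)
#
# get_close_matches() below uses exactly this cascade.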
def get_close_matches(word, possibilities, n=3, cutoff=0.6):
"""Use SequenceMatcher to return list of the best "good enough" matches.
word is a sequence for which close matches are desired (typically a
string).
possibilities is a list of sequences against which to match word
(typically a list of strings).
Optional arg n (default 3) is the maximum number of close matches to
return. n must be > 0.
Optional arg cutoff (default 0.6) is a float in [0, 1]. Possibilities
that don't score at least that similar to word are ignored.
The best (no more than n) matches among the possibilities are returned
in a list, sorted by similarity score, most similar first.
>>> get_close_matches("appel", ["ape", "apple", "peach", "puppy"])
['apple', 'ape']
>>> import keyword as _keyword
>>> get_close_matches("wheel", _keyword.kwlist)
['while']
>>> get_close_matches("Apple", _keyword.kwlist)
[]
>>> get_close_matches("accept", _keyword.kwlist)
['except']
"""
if not n > 0:
raise ValueError("n must be > 0: %r" % (n,))
if not 0.0 <= cutoff <= 1.0:
raise ValueError("cutoff must be in [0.0, 1.0]: %r" % (cutoff,))
result = []
s = SequenceMatcher()
s.set_seq2(word)
for x in possibilities:
s.set_seq1(x)
if s.real_quick_ratio() >= cutoff and \
s.quick_ratio() >= cutoff and \
s.ratio() >= cutoff:
result.append((s.ratio(), x))
# Move the best scorers to head of list
result = heapq.nlargest(n, result)
# Strip scores for the best n matches
return [x for score, x in result]
def _count_leading(line, ch):
"""
Return number of `ch` characters at the start of `line`.
Example:
>>> _count_leading(' abc', ' ')
3
"""
i, n = 0, len(line)
while i < n and line[i] == ch:
i += 1
return i
class Differ:
r"""
Differ is a class for comparing sequences of lines of text, and
producing human-readable differences or deltas. Differ uses
SequenceMatcher both to compare sequences of lines, and to compare
sequences of characters within similar (near-matching) lines.
Each line of a Differ delta begins with a two-letter code:
'- ' line unique to sequence 1
'+ ' line unique to sequence 2
' ' line common to both sequences
'? ' line not present in either input sequence
Lines beginning with '? ' attempt to guide the eye to intraline
differences, and were not present in either input sequence. These lines
can be confusing if the sequences contain tab characters.
Note that Differ makes no claim to produce a *minimal* diff. To the
contrary, minimal diffs are often counter-intuitive, because they synch
up anywhere possible, sometimes accidental matches 100 pages apart.
Restricting synch points to contiguous matches preserves some notion of
locality, at the occasional cost of producing a longer diff.
Example: Comparing two texts.
First we set up the texts, sequences of individual single-line strings
ending with newlines (such sequences can also be obtained from the
`readlines()` method of file-like objects):
>>> text1 = ''' 1. Beautiful is better than ugly.
... 2. Explicit is better than implicit.
... 3. Simple is better than complex.
... 4. Complex is better than complicated.
... '''.splitlines(keepends=True)
>>> len(text1)
4
>>> text1[0][-1]
'\n'
>>> text2 = ''' 1. Beautiful is better than ugly.
... 3. Simple is better than complex.
... 4. Complicated is better than complex.
... 5. Flat is better than nested.
... '''.splitlines(keepends=True)
Next we instantiate a Differ object:
>>> d = Differ()
Note that when instantiating a Differ object we may pass functions to
filter out line and character 'junk'. See Differ.__init__ for details.
Finally, we compare the two:
>>> result = list(d.compare(text1, text2))
'result' is a list of strings, so let's pretty-print it:
>>> from pprint import pprint as _pprint
>>> _pprint(result)
[' 1. Beautiful is better than ugly.\n',
'- 2. Explicit is better than implicit.\n',
'- 3. Simple is better than complex.\n',
'+ 3. Simple is better than complex.\n',
'? ++\n',
'- 4. Complex is better than complicated.\n',
'? ^ ---- ^\n',
'+ 4. Complicated is better than complex.\n',
'? ++++ ^ ^\n',
'+ 5. Flat is better than nested.\n']
As a single multi-line string it looks like this:
>>> print(''.join(result), end="")
1. Beautiful is better than ugly.
- 2. Explicit is better than implicit.
- 3. Simple is better than complex.
+ 3. Simple is better than complex.
? ++
- 4. Complex is better than complicated.
? ^ ---- ^
+ 4. Complicated is better than complex.
? ++++ ^ ^
+ 5. Flat is better than nested.
Methods:
__init__(linejunk=None, charjunk=None)
Construct a text differencer, with optional filters.
compare(a, b)
Compare two sequences of lines; generate the resulting delta.
"""
def __init__(self, linejunk=None, charjunk=None):
"""
Construct a text differencer, with optional filters.
The two optional keyword parameters are for filter functions:
- `linejunk`: A function that should accept a single string argument,
and return true iff the string is junk. The module-level function
`IS_LINE_JUNK` may be used to filter out lines without visible
characters, except for at most one splat ('#'). It is recommended
to leave linejunk None; as of Python 2.3, the underlying
SequenceMatcher class has grown an adaptive notion of "noise" lines
that's better than any static definition the author has ever been
able to craft.
- `charjunk`: A function that should accept a string of length 1. The
module-level function `IS_CHARACTER_JUNK` may be used to filter out
whitespace characters (a blank or tab; **note**: bad idea to include
newline in this!). Use of IS_CHARACTER_JUNK is recommended.
"""
self.linejunk = linejunk
self.charjunk = charjunk
def compare(self, a, b):
r"""
Compare two sequences of lines; generate the resulting delta.
Each sequence must contain individual single-line strings ending with
newlines. Such sequences can be obtained from the `readlines()` method
of file-like objects. The delta generated also consists of newline-
terminated strings, ready to be printed as-is via the writelines()
method of a file-like object.
Example:
>>> print(''.join(Differ().compare('one\ntwo\nthree\n'.splitlines(True),
... 'ore\ntree\nemu\n'.splitlines(True))),
... end="")
- one
? ^
+ ore
? ^
- two
- three
? -
+ tree
+ emu
"""
cruncher = SequenceMatcher(self.linejunk, a, b)
for tag, alo, ahi, blo, bhi in cruncher.get_opcodes():
if tag == 'replace':
g = self._fancy_replace(a, alo, ahi, b, blo, bhi)
elif tag == 'delete':
g = self._dump('-', a, alo, ahi)
elif tag == 'insert':
g = self._dump('+', b, blo, bhi)
elif tag == 'equal':
g = self._dump(' ', a, alo, ahi)
else:
raise ValueError('unknown tag %r' % (tag,))
for line in g:
yield line
def _dump(self, tag, x, lo, hi):
"""Generate comparison results for a same-tagged range."""
for i in range(lo, hi):
yield '%s %s' % (tag, x[i])
def _plain_replace(self, a, alo, ahi, b, blo, bhi):
assert alo < ahi and blo < bhi
# dump the shorter block first -- reduces the burden on short-term
# memory if the blocks are of very different sizes
if bhi - blo < ahi - alo:
first = self._dump('+', b, blo, bhi)
second = self._dump('-', a, alo, ahi)
else:
first = self._dump('-', a, alo, ahi)
second = self._dump('+', b, blo, bhi)
for g in first, second:
for line in g:
yield line
def _fancy_replace(self, a, alo, ahi, b, blo, bhi):
r"""
When replacing one block of lines with another, search the blocks
for *similar* lines; the best-matching pair (if any) is used as a
synch point, and intraline difference marking is done on the
similar pair. Lots of work, but often worth it.
Example:
>>> d = Differ()
>>> results = d._fancy_replace(['abcDefghiJkl\n'], 0, 1,
... ['abcdefGhijkl\n'], 0, 1)
>>> print(''.join(results), end="")
- abcDefghiJkl
? ^ ^ ^
+ abcdefGhijkl
? ^ ^ ^
"""
# don't synch up unless the lines have a similarity score of at
# least cutoff; best_ratio tracks the best score seen so far
best_ratio, cutoff = 0.74, 0.75
cruncher = SequenceMatcher(self.charjunk)
eqi, eqj = None, None # 1st indices of equal lines (if any)
# search for the pair that matches best without being identical
# (identical lines must be junk lines, & we don't want to synch up
# on junk -- unless we have to)
for j in range(blo, bhi):
bj = b[j]
cruncher.set_seq2(bj)
for i in range(alo, ahi):
ai = a[i]
if ai == bj:
if eqi is None:
eqi, eqj = i, j
continue
cruncher.set_seq1(ai)
# computing similarity is expensive, so use the quick
# upper bounds first -- have seen this speed up messy
# compares by a factor of 3.
# note that ratio() is only expensive to compute the first
# time it's called on a sequence pair; the expensive part
# of the computation is cached by cruncher
if cruncher.real_quick_ratio() > best_ratio and \
cruncher.quick_ratio() > best_ratio and \
cruncher.ratio() > best_ratio:
best_ratio, best_i, best_j = cruncher.ratio(), i, j
if best_ratio < cutoff:
# no non-identical "pretty close" pair
if eqi is None:
# no identical pair either -- treat it as a straight replace
for line in self._plain_replace(a, alo, ahi, b, blo, bhi):
yield line
return
# no close pair, but an identical pair -- synch up on that
best_i, best_j, best_ratio = eqi, eqj, 1.0
else:
# there's a close pair, so forget the identical pair (if any)
eqi = None
# a[best_i] very similar to b[best_j]; eqi is None iff they're not
# identical
# pump out diffs from before the synch point
for line in self._fancy_helper(a, alo, best_i, b, blo, best_j):
yield line
# do intraline marking on the synch pair
aelt, belt = a[best_i], b[best_j]
if eqi is None:
# pump out a '-', '?', '+', '?' quad for the synched lines
atags = btags = ""
cruncher.set_seqs(aelt, belt)
for tag, ai1, ai2, bj1, bj2 in cruncher.get_opcodes():
la, lb = ai2 - ai1, bj2 - bj1
if tag == 'replace':
atags += '^' * la
btags += '^' * lb
elif tag == 'delete':
atags += '-' * la
elif tag == 'insert':
btags += '+' * lb
elif tag == 'equal':
atags += ' ' * la
btags += ' ' * lb
else:
raise ValueError('unknown tag %r' % (tag,))
for line in self._qformat(aelt, belt, atags, btags):
yield line
else:
# the synch pair is identical
yield ' ' + aelt
# pump out diffs from after the synch point
for line in self._fancy_helper(a, best_i+1, ahi, b, best_j+1, bhi):
yield line
def _fancy_helper(self, a, alo, ahi, b, blo, bhi):
g = []
if alo < ahi:
if blo < bhi:
g = self._fancy_replace(a, alo, ahi, b, blo, bhi)
else:
g = self._dump('-', a, alo, ahi)
elif blo < bhi:
g = self._dump('+', b, blo, bhi)
for line in g:
yield line
def _qformat(self, aline, bline, atags, btags):
r"""
Format "?" output and deal with leading tabs.
Example:
>>> d = Differ()
>>> results = d._qformat('\tabcDefghiJkl\n', '\tabcdefGhijkl\n',
... ' ^ ^ ^ ', ' ^ ^ ^ ')
>>> for line in results: print(repr(line))
...
'- \tabcDefghiJkl\n'
'? \t ^ ^ ^\n'
'+ \tabcdefGhijkl\n'
'? \t ^ ^ ^\n'
"""
# Can hurt, but will probably help most of the time.
common = min(_count_leading(aline, "\t"),
_count_leading(bline, "\t"))
common = min(common, _count_leading(atags[:common], " "))
common = min(common, _count_leading(btags[:common], " "))
atags = atags[common:].rstrip()
btags = btags[common:].rstrip()
yield "- " + aline
if atags:
yield "? %s%s\n" % ("\t" * common, atags)
yield "+ " + bline
if btags:
yield "? %s%s\n" % ("\t" * common, btags)
# With respect to junk, an earlier version of ndiff simply refused to
# *start* a match with a junk element. The result was cases like this:
# before: private Thread currentThread;
# after: private volatile Thread currentThread;
# If you consider whitespace to be junk, the longest contiguous match
# not starting with junk is "e Thread currentThread". So ndiff reported
# that "e volatil" was inserted between the 't' and the 'e' in "private".
# While an accurate view, to people that's absurd. The current version
# looks for matching blocks that are entirely junk-free, then extends the
# longest one of those as far as possible but only with matching junk.
# So now "currentThread" is matched, then extended to suck up the
# preceding blank; then "private" is matched, and extended to suck up the
# following blank; then "Thread" is matched; and finally ndiff reports
# that "volatile " was inserted before "Thread". The only quibble
# remaining is that perhaps it was really the case that " volatile"
# was inserted after "private". I can live with that <wink>.
import re
def IS_LINE_JUNK(line, pat=re.compile(r"\s*#?\s*$").match):
r"""
Return 1 for ignorable line: iff `line` is blank or contains a single '#'.
Examples:
>>> IS_LINE_JUNK('\n')
True
>>> IS_LINE_JUNK(' # \n')
True
>>> IS_LINE_JUNK('hello\n')
False
"""
return pat(line) is not None
def IS_CHARACTER_JUNK(ch, ws=" \t"):
r"""
Return 1 for ignorable character: iff `ch` is a space or tab.
Examples:
>>> IS_CHARACTER_JUNK(' ')
True
>>> IS_CHARACTER_JUNK('\t')
True
>>> IS_CHARACTER_JUNK('\n')
False
>>> IS_CHARACTER_JUNK('x')
False
"""
return ch in ws
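# Illustrative usage (a sketch; lines_a and lines_b are hypothetical inputs):
# both predicates are meant to be handed to Differ or ndiff as the optional
# junk filters, e.g.
#
#     d = Differ(IS_LINE_JUNK, IS_CHARACTER_JUNK)
#     delta = list(d.compare(lines_a, lines_b))
#
# ndiff() already defaults charjunk to IS_CHARACTER_JUNK; passing a linejunk
# filter is usually unnecessary (see Differ.__init__ above) because
# SequenceMatcher's autojunk heuristic handles noisy lines on its own.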
########################################################################
### Unified Diff
########################################################################
def _format_range_unified(start, stop):
'Convert range to the "ed" format'
# Per the diff spec at http://www.unix.org/single_unix_specification/
beginning = start + 1 # lines start numbering with one
length = stop - start
if length == 1:
return '{}'.format(beginning)
if not length:
beginning -= 1 # empty ranges begin at line just before the range
return '{},{}'.format(beginning, length)
def unified_diff(a, b, fromfile='', tofile='', fromfiledate='',
tofiledate='', n=3, lineterm='\n'):
r"""
Compare two sequences of lines; generate the delta as a unified diff.
Unified diffs are a compact way of showing line changes and a few
lines of context. The number of context lines is set by 'n' which
defaults to three.
By default, the diff control lines (those with ---, +++, or @@) are
created with a trailing newline. This is helpful so that inputs
created from file.readlines() result in diffs that are suitable for
file.writelines() since both the inputs and outputs have trailing
newlines.
For inputs that do not have trailing newlines, set the lineterm
argument to "" so that the output will be uniformly newline free.
The unidiff format normally has a header for filenames and modification
times. Any or all of these may be specified using strings for
'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'.
The modification times are normally expressed in the ISO 8601 format.
Example:
>>> for line in unified_diff('one two three four'.split(),
... 'zero one tree four'.split(), 'Original', 'Current',
... '2005-01-26 23:30:50', '2010-04-02 10:20:52',
... lineterm=''):
... print(line) # doctest: +NORMALIZE_WHITESPACE
--- Original 2005-01-26 23:30:50
+++ Current 2010-04-02 10:20:52
@@ -1,4 +1,4 @@
+zero
one
-two
-three
+tree
four
"""
started = False
for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n):
if not started:
started = True
fromdate = '\t{}'.format(fromfiledate) if fromfiledate else ''
todate = '\t{}'.format(tofiledate) if tofiledate else ''
yield '--- {}{}{}'.format(fromfile, fromdate, lineterm)
yield '+++ {}{}{}'.format(tofile, todate, lineterm)
first, last = group[0], group[-1]
file1_range = _format_range_unified(first[1], last[2])
file2_range = _format_range_unified(first[3], last[4])
yield '@@ -{} +{} @@{}'.format(file1_range, file2_range, lineterm)
for tag, i1, i2, j1, j2 in group:
if tag == 'equal':
for line in a[i1:i2]:
yield ' ' + line
continue
if tag in {'replace', 'delete'}:
for line in a[i1:i2]:
yield '-' + line
if tag in {'replace', 'insert'}:
for line in b[j1:j2]:
yield '+' + line
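# Illustrative file-to-file usage (a sketch; 'old.txt' and 'new.txt' are
# hypothetical paths).  Because readlines() keeps the trailing newlines, the
# generated delta can be written back out unchanged, as noted in the
# docstring above:
#
#     import sys
#     with open('old.txt') as f1, open('new.txt') as f2:
#         diff = unified_diff(f1.readlines(), f2.readlines(),
#                             fromfile='old.txt', tofile='new.txt')
#     sys.stdout.writelines(diff)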
########################################################################
### Context Diff
########################################################################
def _format_range_context(start, stop):
'Convert range to the "ed" format'
# Per the diff spec at http://www.unix.org/single_unix_specification/
beginning = start + 1 # lines start numbering with one
length = stop - start
if not length:
beginning -= 1 # empty ranges begin at line just before the range
if length <= 1:
return '{}'.format(beginning)
return '{},{}'.format(beginning, beginning + length - 1)
# See http://www.unix.org/single_unix_specification/
def context_diff(a, b, fromfile='', tofile='',
fromfiledate='', tofiledate='', n=3, lineterm='\n'):
r"""
Compare two sequences of lines; generate the delta as a context diff.
Context diffs are a compact way of showing line changes and a few
lines of context. The number of context lines is set by 'n' which
defaults to three.
By default, the diff control lines (those with *** or ---) are
created with a trailing newline. This is helpful so that inputs
created from file.readlines() result in diffs that are suitable for
file.writelines() since both the inputs and outputs have trailing
newlines.
For inputs that do not have trailing newlines, set the lineterm
argument to "" so that the output will be uniformly newline free.
The context diff format normally has a header for filenames and
modification times. Any or all of these may be specified using
strings for 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'.
The modification times are normally expressed in the ISO 8601 format.
If not specified, the strings default to blanks.
Example:
>>> print(''.join(context_diff('one\ntwo\nthree\nfour\n'.splitlines(True),
... 'zero\none\ntree\nfour\n'.splitlines(True), 'Original', 'Current')),
... end="")
*** Original
--- Current
***************
*** 1,4 ****
one
! two
! three
four
--- 1,4 ----
+ zero
one
! tree
four
"""
prefix = dict(insert='+ ', delete='- ', replace='! ', equal=' ')
started = False
for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n):
if not started:
started = True
fromdate = '\t{}'.format(fromfiledate) if fromfiledate else ''
todate = '\t{}'.format(tofiledate) if tofiledate else ''
yield '*** {}{}{}'.format(fromfile, fromdate, lineterm)
yield '--- {}{}{}'.format(tofile, todate, lineterm)
first, last = group[0], group[-1]
yield '***************' + lineterm
file1_range = _format_range_context(first[1], last[2])
yield '*** {} ****{}'.format(file1_range, lineterm)
if any(tag in {'replace', 'delete'} for tag, _, _, _, _ in group):
for tag, i1, i2, _, _ in group:
if tag != 'insert':
for line in a[i1:i2]:
yield prefix[tag] + line
file2_range = _format_range_context(first[3], last[4])
yield '--- {} ----{}'.format(file2_range, lineterm)
if any(tag in {'replace', 'insert'} for tag, _, _, _, _ in group):
for tag, _, _, j1, j2 in group:
if tag != 'delete':
for line in b[j1:j2]:
yield prefix[tag] + line
def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK):
r"""
Compare `a` and `b` (lists of strings); return a `Differ`-style delta.
Optional keyword parameters `linejunk` and `charjunk` are for filter
functions (or None):
- linejunk: A function that should accept a single string argument, and
return true iff the string is junk. The default is None, and is
recommended; as of Python 2.3, an adaptive notion of "noise" lines is
used that does a good job on its own.
- charjunk: A function that should accept a string of length 1. The
default is module-level function IS_CHARACTER_JUNK, which filters out
whitespace characters (a blank or tab; note: bad idea to include newline
in this!).
Tools/scripts/ndiff.py is a command-line front-end to this function.
Example:
>>> diff = ndiff('one\ntwo\nthree\n'.splitlines(keepends=True),
... 'ore\ntree\nemu\n'.splitlines(keepends=True))
>>> print(''.join(diff), end="")
- one
? ^
+ ore
? ^
- two
- three
? -
+ tree
+ emu
"""
return Differ(linejunk, charjunk).compare(a, b)
def _mdiff(fromlines, tolines, context=None, linejunk=None,
charjunk=IS_CHARACTER_JUNK):
r"""Returns generator yielding marked up from/to side by side differences.
Arguments:
fromlines -- list of text lines to be compared to tolines
tolines -- list of text lines to be compared to fromlines
context -- number of context lines to display on each side of difference,
if None, all from/to text lines will be generated.
linejunk -- passed on to ndiff (see ndiff documentation)
charjunk -- passed on to ndiff (see ndiff documentation)
This function returns an iterator which returns a tuple:
(from line tuple, to line tuple, boolean flag)
from/to line tuple -- (line num, line text)
line num -- integer or None (to indicate a context separation)
line text -- original line text with following markers inserted:
'\0+' -- marks start of added text
'\0-' -- marks start of deleted text
'\0^' -- marks start of changed text
'\1' -- marks end of added/deleted/changed text
boolean flag -- None indicates context separation, True indicates
either "from" or "to" line contains a change, otherwise False.
This function/iterator was originally developed to generate side by side
file difference for making HTML pages (see HtmlDiff class for example
usage).
Note, this function utilizes the ndiff function to generate the side by
side difference markup. Optional ndiff arguments may be passed to this
function and they in turn will be passed to ndiff.
"""
import re
# regular expression for finding intraline change indices
change_re = re.compile(r'(\++|\-+|\^+)')
# create the difference iterator to generate the differences
diff_lines_iterator = ndiff(fromlines,tolines,linejunk,charjunk)
def _make_line(lines, format_key, side, num_lines=[0,0]):
"""Returns line of text with user's change markup and line formatting.
lines -- list of lines from the ndiff generator to produce a line of
text from. When producing the line of text to return, the
lines used are removed from this list.
format_key -- '+' return first line in list with "add" markup around
the entire line.
'-' return first line in list with "delete" markup around
the entire line.
'?' return first line in list with add/delete/change
intraline markup (indices obtained from second line)
None return first line in list with no markup
side -- index into the num_lines list (0=from,1=to)
num_lines -- from/to current line number. This is NOT intended to be a
passed parameter. It is present as a keyword argument to
maintain memory of the current line numbers between calls
of this function.
Note, this function is purposefully not defined at the module scope so
that data it needs from its parent function (within whose context it
is defined) does not need to be of module scope.
"""
num_lines[side] += 1
# Handle case where no user markup is to be added, just return line of
# text with user's line format to allow for usage of the line number.
if format_key is None:
return (num_lines[side],lines.pop(0)[2:])
# Handle case of intraline changes
if format_key == '?':
text, markers = lines.pop(0), lines.pop(0)
# find intraline changes (store change type and indices in tuples)
sub_info = []
def record_sub_info(match_object,sub_info=sub_info):
sub_info.append([match_object.group(1)[0],match_object.span()])
return match_object.group(1)
change_re.sub(record_sub_info,markers)
# process each tuple inserting our special marks that won't be
# noticed by an xml/html escaper.
for key,(begin,end) in sub_info[::-1]:
text = text[0:begin]+'\0'+key+text[begin:end]+'\1'+text[end:]
text = text[2:]
# Handle case of add/delete entire line
else:
text = lines.pop(0)[2:]
# if line of text is just a newline, insert a space so there is
# something for the user to highlight and see.
if not text:
text = ' '
# insert marks that won't be noticed by an xml/html escaper.
text = '\0' + format_key + text + '\1'
# Return line of text, first allow user's line formatter to do its
# thing (such as adding the line number) then replace the special
# marks with the user's change markup.
return (num_lines[side],text)
def _line_iterator():
"""Yields from/to lines of text with a change indication.
This function is an iterator. It itself pulls lines from a
differencing iterator, processes them and yields them. When it can
it yields both a "from" and a "to" line, otherwise it will yield one
or the other. In addition to yielding the lines of from/to text, a
boolean flag is yielded to indicate if the text line(s) have
differences in them.
Note, this function is purposefully not defined at the module scope so
that data it needs from its parent function (within whose context it
is defined) does not need to be of module scope.
"""
lines = []
num_blanks_pending, num_blanks_to_yield = 0, 0
while True:
# Load up next 4 lines so we can look ahead, create strings which
# are a concatenation of the first character of each of the 4 lines
# so we can do some very readable comparisons.
while len(lines) < 4:
try:
lines.append(next(diff_lines_iterator))
except StopIteration:
lines.append('X')
s = ''.join([line[0] for line in lines])
if s.startswith('X'):
# When no more lines, pump out any remaining blank lines so the
# corresponding add/delete lines get a matching blank line so
# all line pairs get yielded at the next level.
num_blanks_to_yield = num_blanks_pending
elif s.startswith('-?+?'):
# simple intraline change
yield _make_line(lines,'?',0), _make_line(lines,'?',1), True
continue
elif s.startswith('--++'):
# in delete block, add block coming: we do NOT want to get
# caught up on blank lines yet, just process the delete line
num_blanks_pending -= 1
yield _make_line(lines,'-',0), None, True
continue
elif s.startswith(('--?+', '--+', '- ')):
# in delete block and see an intraline change or unchanged line
# coming: yield the delete line and then blanks
from_line,to_line = _make_line(lines,'-',0), None
num_blanks_to_yield,num_blanks_pending = num_blanks_pending-1,0
elif s.startswith('-+?'):
# intraline change
yield _make_line(lines,None,0), _make_line(lines,'?',1), True
continue
elif s.startswith('-?+'):
# intraline change
yield _make_line(lines,'?',0), _make_line(lines,None,1), True
continue
elif s.startswith('-'):
# delete FROM line
num_blanks_pending -= 1
yield _make_line(lines,'-',0), None, True
continue
elif s.startswith('+--'):
# in add block, delete block coming: we do NOT want to get
# caught up on blank lines yet, just process the add line
num_blanks_pending += 1
yield None, _make_line(lines,'+',1), True
continue
elif s.startswith(('+ ', '+-')):
# will be leaving an add block: yield blanks then add line
from_line, to_line = None, _make_line(lines,'+',1)
num_blanks_to_yield,num_blanks_pending = num_blanks_pending+1,0
elif s.startswith('+'):
# inside an add block, yield the add line
num_blanks_pending += 1
yield None, _make_line(lines,'+',1), True
continue
elif s.startswith(' '):
# unchanged text, yield it to both sides
yield _make_line(lines[:],None,0),_make_line(lines,None,1),False
continue
# Catch up on the blank lines so when we yield the next from/to
# pair, they are lined up.
while(num_blanks_to_yield < 0):
num_blanks_to_yield += 1
yield None,('','\n'),True
while(num_blanks_to_yield > 0):
num_blanks_to_yield -= 1
yield ('','\n'),None,True
if s.startswith('X'):
return    # exhausted; raising StopIteration inside a generator is an error under PEP 479
else:
yield from_line,to_line,True
def _line_pair_iterator():
"""Yields from/to lines of text with a change indication.
This function is an iterator. It itself pulls lines from the line
iterator. Its difference from that iterator is that this function
always yields a pair of from/to text lines (with the change
indication). If necessary it will collect single from/to lines
until it has a matching from/to pair to yield.
Note, this function is purposefully not defined at the module scope so
that data it needs from its parent function (within whose context it
is defined) does not need to be of module scope.
"""
line_iterator = _line_iterator()
fromlines,tolines=[],[]
while True:
# Collecting lines of text until we have a from/to pair
while (len(fromlines)==0 or len(tolines)==0):
try:
from_line, to_line, found_diff = next(line_iterator)
except StopIteration:
# underlying iterator exhausted; end quietly instead of leaking
# StopIteration out of this generator (PEP 479)
return
if from_line is not None:
fromlines.append((from_line,found_diff))
if to_line is not None:
tolines.append((to_line,found_diff))
# Once we have a pair, remove them from the collection and yield it
from_line, fromDiff = fromlines.pop(0)
to_line, to_diff = tolines.pop(0)
yield (from_line,to_line,fromDiff or to_diff)
# Handle case where user does not want context differencing, just yield
# them up without doing anything else with them.
line_pair_iterator = _line_pair_iterator()
if context is None:
yield from line_pair_iterator
# Handle case where user wants context differencing. We must do some
# storage of lines until we know for sure that they are to be yielded.
else:
context += 1
lines_to_write = 0
while True:
# Store lines up until we find a difference, note use of a
# circular queue because we only need to keep around what
# we need for context.
index, contextLines = 0, [None]*(context)
found_diff = False
while(found_diff is False):
try:
from_line, to_line, found_diff = next(line_pair_iterator)
except StopIteration:
return
i = index % context
contextLines[i] = (from_line, to_line, found_diff)
index += 1
# Yield lines that we have collected so far, but first yield
# the user's separator.
if index > context:
yield None, None, None
lines_to_write = context
else:
lines_to_write = index
index = 0
while(lines_to_write):
i = index % context
index += 1
yield contextLines[i]
lines_to_write -= 1
# Now yield the context lines after the change
lines_to_write = context-1
while(lines_to_write):
try:
from_line, to_line, found_diff = next(line_pair_iterator)
except StopIteration:
return
# If another change within the context, extend the context
if found_diff:
lines_to_write = context-1
else:
lines_to_write -= 1
yield from_line, to_line, found_diff
_file_template = """
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<meta http-equiv="Content-Type"
content="text/html; charset=ISO-8859-1" />
<title></title>
<style type="text/css">%(styles)s
</style>
</head>
<body>
%(table)s%(legend)s
</body>
</html>"""
_styles = """
table.diff {font-family:Courier; border:medium;}
.diff_header {background-color:#e0e0e0}
td.diff_header {text-align:right}
.diff_next {background-color:#c0c0c0}
.diff_add {background-color:#aaffaa}
.diff_chg {background-color:#ffff77}
.diff_sub {background-color:#ffaaaa}"""
_table_template = """
<table class="diff" id="difflib_chg_%(prefix)s_top"
cellspacing="0" cellpadding="0" rules="groups" >
<colgroup></colgroup> <colgroup></colgroup> <colgroup></colgroup>
<colgroup></colgroup> <colgroup></colgroup> <colgroup></colgroup>
%(header_row)s
<tbody>
%(data_rows)s </tbody>
</table>"""
_legend = """
<table class="diff" summary="Legends">
<tr> <th colspan="2"> Legends </th> </tr>
<tr> <td> <table border="" summary="Colors">
<tr><th> Colors </th> </tr>
<tr><td class="diff_add"> Added </td></tr>
<tr><td class="diff_chg">Changed</td> </tr>
<tr><td class="diff_sub">Deleted</td> </tr>
</table></td>
<td> <table border="" summary="Links">
<tr><th colspan="2"> Links </th> </tr>
<tr><td>(f)irst change</td> </tr>
<tr><td>(n)ext change</td> </tr>
<tr><td>(t)op</td> </tr>
</table></td> </tr>
</table>"""
class HtmlDiff(object):
"""For producing HTML side by side comparison with change highlights.
This class can be used to create an HTML table (or a complete HTML file
containing the table) showing a side by side, line by line comparison
of text with inter-line and intra-line change highlights. The table can
be generated in either full or contextual difference mode.
The following methods are provided for HTML generation:
make_table -- generates HTML for a single side by side table
make_file -- generates complete HTML file with a single side by side table
See tools/scripts/diff.py for an example usage of this class.
"""
_file_template = _file_template
_styles = _styles
_table_template = _table_template
_legend = _legend
_default_prefix = 0
def __init__(self,tabsize=8,wrapcolumn=None,linejunk=None,
charjunk=IS_CHARACTER_JUNK):
"""HtmlDiff instance initializer
Arguments:
tabsize -- tab stop spacing, defaults to 8.
wrapcolumn -- column number where lines are broken and wrapped,
defaults to None where lines are not wrapped.
linejunk,charjunk -- keyword arguments passed into ndiff() (used by
HtmlDiff() to generate the side by side HTML differences). See
ndiff() documentation for argument default values and descriptions.
"""
self._tabsize = tabsize
self._wrapcolumn = wrapcolumn
self._linejunk = linejunk
self._charjunk = charjunk
def make_file(self,fromlines,tolines,fromdesc='',todesc='',context=False,
numlines=5):
"""Returns HTML file of side by side comparison with change highlights
Arguments:
fromlines -- list of "from" lines
tolines -- list of "to" lines
fromdesc -- "from" file column header string
todesc -- "to" file column header string
context -- set to True for contextual differences (defaults to False
which shows full differences).
numlines -- number of context lines. When context is set True,
controls number of lines displayed before and after the change.
When context is False, controls the number of lines to place
the "next" link anchors before the next change (so click of
"next" link jumps to just before the change).
"""
return self._file_template % dict(
styles = self._styles,
legend = self._legend,
table = self.make_table(fromlines,tolines,fromdesc,todesc,
context=context,numlines=numlines))
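# Illustrative usage (a sketch; the file names are hypothetical):
#
#     with open('before.py') as f1, open('after.py') as f2:
#         html = HtmlDiff(tabsize=4, wrapcolumn=72).make_file(
#             f1.readlines(), f2.readlines(), 'before.py', 'after.py',
#             context=True, numlines=3)
#
# The returned string is a complete XHTML page assembled from
# _file_template, _styles, _table_template and _legend above.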
def _tab_newline_replace(self,fromlines,tolines):
"""Returns from/to line lists with tabs expanded and newlines removed.
Instead of tab characters being replaced by the number of spaces
needed to fill in to the next tab stop, this function will fill
the space with tab characters. This is done so that the difference
algorithms can identify changes in a file when tabs are replaced by
spaces and vice versa. At the end of the HTML generation, the tab
characters will be replaced with a nonbreakable space.
"""
def expand_tabs(line):
# hide real spaces
line = line.replace(' ','\0')
# expand tabs into spaces
line = line.expandtabs(self._tabsize)
# replace spaces from expanded tabs back into tab characters
# (we'll replace them with markup after we do differencing)
line = line.replace(' ','\t')
return line.replace('\0',' ').rstrip('\n')
fromlines = [expand_tabs(line) for line in fromlines]
tolines = [expand_tabs(line) for line in tolines]
return fromlines,tolines
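    # Illustrative sketch of what expand_tabs() above does to one line, assuming
    # the default tabsize of 8 (the input string is hypothetical):
    #
    #   'a\tbc d\n'
    #   -> 'a\tbc\0d\n'                 after hiding real spaces as '\0'
    #   -> 'a       bc\0d\n'            after expandtabs(8)
    #   -> 'a\t\t\t\t\t\t\tbc\0d\n'     after turning expanded spaces into '\t'
    #   -> 'a\t\t\t\t\t\t\tbc d'        after restoring '\0' and stripping '\n'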
def _split_line(self,data_list,line_num,text):
"""Builds list of text lines by splitting text lines at wrap point
This function will determine if the input text line needs to be
wrapped (split) into separate lines. If so, the first wrap point
will be determined and the first line appended to the output
text line list. This function is used recursively to handle
the second part of the split line to further split it.
"""
# if blank line or context separator, just add it to the output list
if not line_num:
data_list.append((line_num,text))
return
# if line text doesn't need wrapping, just add it to the output list
size = len(text)
max = self._wrapcolumn
if (size <= max) or ((size -(text.count('\0')*3)) <= max):
data_list.append((line_num,text))
return
# scan text looking for the wrap point, keeping track if the wrap
# point is inside markers
i = 0
n = 0
mark = ''
while n < max and i < size:
if text[i] == '\0':
i += 1
mark = text[i]
i += 1
elif text[i] == '\1':
i += 1
mark = ''
else:
i += 1
n += 1
# wrap point is inside text, break it up into separate lines
line1 = text[:i]
line2 = text[i:]
# if wrap point is inside markers, place end marker at end of first
# line and start marker at beginning of second line because each
# line will have its own table tag markup around it.
if mark:
line1 = line1 + '\1'
line2 = '\0' + mark + line2
# tack on first line onto the output list
data_list.append((line_num,line1))
# use this routine again to wrap the remaining text
self._split_line(data_list,'>',line2)
def _line_wrapper(self,diffs):
"""Returns iterator that splits (wraps) mdiff text lines"""
# pull from/to data and flags from mdiff iterator
for fromdata,todata,flag in diffs:
# check for context separators and pass them through
if flag is None:
yield fromdata,todata,flag
continue
(fromline,fromtext),(toline,totext) = fromdata,todata
# for each from/to line split it at the wrap column to form
# list of text lines.
fromlist,tolist = [],[]
self._split_line(fromlist,fromline,fromtext)
self._split_line(tolist,toline,totext)
# yield from/to line in pairs inserting blank lines as
# necessary when one side has more wrapped lines
while fromlist or tolist:
if fromlist:
fromdata = fromlist.pop(0)
else:
fromdata = ('',' ')
if tolist:
todata = tolist.pop(0)
else:
todata = ('',' ')
yield fromdata,todata,flag
def _collect_lines(self,diffs):
"""Collects mdiff output into separate lists
Before storing the mdiff from/to data into a list, it is converted
into a single line of text with HTML markup.
"""
fromlist,tolist,flaglist = [],[],[]
# pull from/to data and flags from mdiff style iterator
for fromdata,todata,flag in diffs:
try:
# store HTML markup of the lines into the lists
fromlist.append(self._format_line(0,flag,*fromdata))
tolist.append(self._format_line(1,flag,*todata))
except TypeError:
# exceptions occur for lines where context separators go
fromlist.append(None)
tolist.append(None)
flaglist.append(flag)
return fromlist,tolist,flaglist
def _format_line(self,side,flag,linenum,text):
"""Returns HTML markup of "from" / "to" text lines
side -- 0 or 1 indicating "from" or "to" text
flag -- indicates if difference on line
linenum -- line number (used for line number column)
text -- line text to be marked up
"""
try:
linenum = '%d' % linenum
id = ' id="%s%s"' % (self._prefix[side],linenum)
except TypeError:
# handle blank lines where linenum is '>' or ''
id = ''
# replace those things that would get confused with HTML symbols
        text=text.replace("&","&amp;").replace(">","&gt;").replace("<","&lt;")
        # make space non-breakable so they don't get compressed or line wrapped
        text = text.replace(' ','&nbsp;').rstrip()
return '<td class="diff_header"%s>%s</td><td nowrap="nowrap">%s</td>' \
% (id,linenum,text)
def _make_prefix(self):
"""Create unique anchor prefixes"""
# Generate a unique anchor prefix so multiple tables
# can exist on the same HTML page without conflicts.
fromprefix = "from%d_" % HtmlDiff._default_prefix
toprefix = "to%d_" % HtmlDiff._default_prefix
HtmlDiff._default_prefix += 1
# store prefixes so line format method has access
self._prefix = [fromprefix,toprefix]
def _convert_flags(self,fromlist,tolist,flaglist,context,numlines):
"""Makes list of "next" links"""
# all anchor names will be generated using the unique "to" prefix
toprefix = self._prefix[1]
# process change flags, generating middle column of next anchors/links
next_id = ['']*len(flaglist)
next_href = ['']*len(flaglist)
num_chg, in_change = 0, False
last = 0
for i,flag in enumerate(flaglist):
if flag:
if not in_change:
in_change = True
last = i
# at the beginning of a change, drop an anchor a few lines
# (the context lines) before the change for the previous
# link
i = max([0,i-numlines])
next_id[i] = ' id="difflib_chg_%s_%d"' % (toprefix,num_chg)
# at the beginning of a change, drop a link to the next
# change
num_chg += 1
next_href[last] = '<a href="#difflib_chg_%s_%d">n</a>' % (
toprefix,num_chg)
else:
in_change = False
# check for cases where there is no content to avoid exceptions
if not flaglist:
flaglist = [False]
next_id = ['']
next_href = ['']
last = 0
if context:
                fromlist = ['<td></td><td>&nbsp;No Differences Found&nbsp;</td>']
                tolist = fromlist
            else:
                fromlist = tolist = ['<td></td><td>&nbsp;Empty File&nbsp;</td>']
# if not a change on first line, drop a link
if not flaglist[0]:
next_href[0] = '<a href="#difflib_chg_%s_0">f</a>' % toprefix
# redo the last link to link to the top
next_href[last] = '<a href="#difflib_chg_%s_top">t</a>' % (toprefix)
return fromlist,tolist,flaglist,next_href,next_id
def make_table(self,fromlines,tolines,fromdesc='',todesc='',context=False,
numlines=5):
"""Returns HTML table of side by side comparison with change highlights
Arguments:
fromlines -- list of "from" lines
tolines -- list of "to" lines
fromdesc -- "from" file column header string
todesc -- "to" file column header string
context -- set to True for contextual differences (defaults to False
which shows full differences).
numlines -- number of context lines. When context is set True,
controls number of lines displayed before and after the change.
When context is False, controls the number of lines to place
the "next" link anchors before the next change (so click of
"next" link jumps to just before the change).
"""
# make unique anchor prefixes so that multiple tables may exist
# on the same page without conflict.
self._make_prefix()
# change tabs to spaces before it gets more difficult after we insert
# markup
fromlines,tolines = self._tab_newline_replace(fromlines,tolines)
# create diffs iterator which generates side by side from/to data
if context:
context_lines = numlines
else:
context_lines = None
diffs = _mdiff(fromlines,tolines,context_lines,linejunk=self._linejunk,
charjunk=self._charjunk)
# set up iterator to wrap lines that exceed desired width
if self._wrapcolumn:
diffs = self._line_wrapper(diffs)
# collect up from/to lines and flags into lists (also format the lines)
fromlist,tolist,flaglist = self._collect_lines(diffs)
# process change flags, generating middle column of next anchors/links
fromlist,tolist,flaglist,next_href,next_id = self._convert_flags(
fromlist,tolist,flaglist,context,numlines)
s = []
fmt = ' <tr><td class="diff_next"%s>%s</td>%s' + \
'<td class="diff_next">%s</td>%s</tr>\n'
for i in range(len(flaglist)):
if flaglist[i] is None:
                # mdiff yields None on separator lines; skip the bogus ones
# generated for the first line
if i > 0:
s.append(' </tbody> \n <tbody>\n')
else:
s.append( fmt % (next_id[i],next_href[i],fromlist[i],
next_href[i],tolist[i]))
if fromdesc or todesc:
header_row = '<thead><tr>%s%s%s%s</tr></thead>' % (
'<th class="diff_next"><br /></th>',
'<th colspan="2" class="diff_header">%s</th>' % fromdesc,
'<th class="diff_next"><br /></th>',
'<th colspan="2" class="diff_header">%s</th>' % todesc)
else:
header_row = ''
table = self._table_template % dict(
data_rows=''.join(s),
header_row=header_row,
prefix=self._prefix[1])
return table.replace('\0+','<span class="diff_add">'). \
replace('\0-','<span class="diff_sub">'). \
replace('\0^','<span class="diff_chg">'). \
replace('\1','</span>'). \
                     replace('\t','&nbsp;')
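# Illustrative usage sketch for HtmlDiff; the line lists below are hypothetical:
#
#   >>> d = HtmlDiff(wrapcolumn=70)
#   >>> page = d.make_file(['one\n', 'two\n'], ['one\n', 'too\n'], 'before', 'after')
#   >>> table = d.make_table(['one\n', 'two\n'], ['one\n', 'too\n'],
#   ...                      context=True, numlines=2)
#
# `page` is a complete HTML document (styles and legend included), while `table`
# is just the side by side <table> fragment for embedding in an existing page.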
del re
def restore(delta, which):
r"""
Generate one of the two sequences that generated a delta.
Given a `delta` produced by `Differ.compare()` or `ndiff()`, extract
lines originating from file 1 or 2 (parameter `which`), stripping off line
prefixes.
Examples:
>>> diff = ndiff('one\ntwo\nthree\n'.splitlines(keepends=True),
... 'ore\ntree\nemu\n'.splitlines(keepends=True))
>>> diff = list(diff)
>>> print(''.join(restore(diff, 1)), end="")
one
two
three
>>> print(''.join(restore(diff, 2)), end="")
ore
tree
emu
"""
try:
tag = {1: "- ", 2: "+ "}[int(which)]
except KeyError:
raise ValueError('unknown delta choice (must be 1 or 2): %r'
% which)
prefixes = (" ", tag)
for line in delta:
if line[:2] in prefixes:
yield line[2:]
def _test():
import doctest, difflib
return doctest.testmod(difflib)
if __name__ == "__main__":
_test()
| gpl-3.0 |
luxus/home-assistant | tests/helpers/test_entity.py | 7 | 1761 | """Test the entity helper."""
# pylint: disable=protected-access,too-many-public-methods
import unittest
import homeassistant.helpers.entity as entity
from homeassistant.const import ATTR_HIDDEN
from tests.common import get_test_home_assistant
class TestHelpersEntity(unittest.TestCase):
"""Test homeassistant.helpers.entity module."""
def setUp(self): # pylint: disable=invalid-name
"""Setup things to be run when tests are started."""
self.entity = entity.Entity()
self.entity.entity_id = 'test.overwrite_hidden_true'
self.hass = self.entity.hass = get_test_home_assistant()
self.entity.update_ha_state()
def tearDown(self): # pylint: disable=invalid-name
"""Stop everything that was started."""
self.hass.stop()
entity.Entity.overwrite_attribute(self.entity.entity_id,
[ATTR_HIDDEN], [None])
def test_default_hidden_not_in_attributes(self):
"""Test that the default hidden property is set to False."""
self.assertNotIn(
ATTR_HIDDEN,
self.hass.states.get(self.entity.entity_id).attributes)
def test_overwriting_hidden_property_to_true(self):
"""Test we can overwrite hidden property to True."""
entity.Entity.overwrite_attribute(self.entity.entity_id,
[ATTR_HIDDEN], [True])
self.entity.update_ha_state()
state = self.hass.states.get(self.entity.entity_id)
self.assertTrue(state.attributes.get(ATTR_HIDDEN))
def test_split_entity_id(self):
"""Test split_entity_id."""
self.assertEqual(['domain', 'object_id'],
entity.split_entity_id('domain.object_id'))
| mit |
wuzhihui1123/django-cms | cms/south_migrations/0024_added_placeholder_model.py | 1680 | 20032 | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
try:
from django.contrib.auth import get_user_model
except ImportError: # django < 1.5
from django.contrib.auth.models import User
else:
User = get_user_model()
user_orm_label = '%s.%s' % (User._meta.app_label, User._meta.object_name)
user_model_label = '%s.%s' % (User._meta.app_label, User._meta.model_name)
user_ptr_name = '%s_ptr' % User._meta.object_name.lower()
class Migration(SchemaMigration):
def forwards(self, orm):
# Dummy migration
pass
def backwards(self, orm):
# Dummy migration
pass
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': (
'django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [],
{'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [],
{'to': "orm['auth.Permission']", 'symmetrical': 'False',
'blank': 'True'})
},
'auth.permission': {
'Meta': {
'ordering': "('content_type__app_label', 'content_type__model', 'codename')",
'unique_together': "(('content_type', 'codename'),)",
'object_name': 'Permission'},
'codename': (
'django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [],
{'to': "orm['contenttypes.ContentType']"}),
'id': (
'django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
user_model_label: {
'Meta': {'object_name': User.__name__, 'db_table': "'%s'" % User._meta.db_table},
'date_joined': ('django.db.models.fields.DateTimeField', [],
{'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [],
{'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [],
{'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [],
{'to': "orm['auth.Group']", 'symmetrical': 'False',
'blank': 'True'}),
'id': (
'django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': (
'django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': (
'django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [],
{'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [],
{'max_length': '30', 'blank': 'True'}),
'password': (
'django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': (
'django.db.models.fields.related.ManyToManyField', [],
{'to': "orm['auth.Permission']", 'symmetrical': 'False',
'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [],
{'unique': 'True', 'max_length': '30'})
},
'cms.cmsplugin': {
'Meta': {'object_name': 'CMSPlugin'},
'changed_date': ('django.db.models.fields.DateTimeField', [],
{'auto_now': 'True', 'blank': 'True'}),
'creation_date': ('django.db.models.fields.DateTimeField', [],
{'default': 'datetime.datetime.now'}),
'id': (
'django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'language': ('django.db.models.fields.CharField', [],
{'max_length': '15', 'db_index': 'True'}),
'level': ('django.db.models.fields.PositiveIntegerField', [],
{'db_index': 'True'}),
'lft': ('django.db.models.fields.PositiveIntegerField', [],
{'db_index': 'True'}),
'parent': ('django.db.models.fields.related.ForeignKey', [],
{'to': "orm['cms.CMSPlugin']", 'null': 'True',
'blank': 'True'}),
'placeholder': ('django.db.models.fields.related.ForeignKey', [],
{'to': "orm['cms.Placeholder']", 'null': 'True'}),
'plugin_type': ('django.db.models.fields.CharField', [],
{'max_length': '50', 'db_index': 'True'}),
'position': ('django.db.models.fields.PositiveSmallIntegerField', [],
{'null': 'True', 'blank': 'True'}),
'rght': ('django.db.models.fields.PositiveIntegerField', [],
{'db_index': 'True'}),
'tree_id': ('django.db.models.fields.PositiveIntegerField', [],
{'db_index': 'True'})
},
'cms.globalpagepermission': {
'Meta': {'object_name': 'GlobalPagePermission'},
'can_add': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_change': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_change_advanced_settings': (
'django.db.models.fields.BooleanField', [], {'default': 'False'}),
'can_change_permissions': (
'django.db.models.fields.BooleanField', [], {'default': 'False'}),
'can_delete': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_moderate': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_move_page': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_publish': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_recover_page': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_view': (
'django.db.models.fields.BooleanField', [], {'default': 'False'}),
'group': ('django.db.models.fields.related.ForeignKey', [],
{'to': "orm['auth.Group']", 'null': 'True', 'blank': 'True'}),
'id': (
'django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'sites': ('django.db.models.fields.related.ManyToManyField', [],
{'symmetrical': 'False', 'to': "orm['sites.Site']",
'null': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [],
{'to': "orm['%s']" % user_orm_label, 'null': 'True', 'blank': 'True'})
},
'cms.page': {
'Meta': {'ordering': "('site', 'tree_id', 'lft')",
'object_name': 'Page'},
'changed_by': (
'django.db.models.fields.CharField', [], {'max_length': '70'}),
'changed_date': ('django.db.models.fields.DateTimeField', [],
{'auto_now': 'True', 'blank': 'True'}),
'created_by': (
'django.db.models.fields.CharField', [], {'max_length': '70'}),
'creation_date': ('django.db.models.fields.DateTimeField', [],
{'auto_now_add': 'True', 'blank': 'True'}),
'id': (
'django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'in_navigation': ('django.db.models.fields.BooleanField', [],
{'default': 'True', 'db_index': 'True'}),
'level': ('django.db.models.fields.PositiveIntegerField', [],
{'db_index': 'True'}),
'lft': ('django.db.models.fields.PositiveIntegerField', [],
{'db_index': 'True'}),
'limit_visibility_in_menu': (
'django.db.models.fields.SmallIntegerField', [],
{'default': 'None', 'null': 'True', 'db_index': 'True',
'blank': 'True'}),
'login_required': (
'django.db.models.fields.BooleanField', [], {'default': 'False'}),
'moderator_state': ('django.db.models.fields.SmallIntegerField', [],
{'default': '1', 'blank': 'True'}),
'navigation_extenders': ('django.db.models.fields.CharField', [],
{'db_index': 'True', 'max_length': '80',
'null': 'True', 'blank': 'True'}),
'parent': ('django.db.models.fields.related.ForeignKey', [],
{'blank': 'True', 'related_name': "'children'",
'null': 'True', 'to': "orm['cms.Page']"}),
'placeholders': ('django.db.models.fields.related.ManyToManyField', [],
{'to': "orm['cms.Placeholder']",
'symmetrical': 'False'}),
'publication_date': ('django.db.models.fields.DateTimeField', [],
{'db_index': 'True', 'null': 'True',
'blank': 'True'}),
'publication_end_date': ('django.db.models.fields.DateTimeField', [],
{'db_index': 'True', 'null': 'True',
'blank': 'True'}),
'published': (
'django.db.models.fields.BooleanField', [], {'default': 'False'}),
'publisher_is_draft': ('django.db.models.fields.BooleanField', [],
{'default': 'True', 'db_index': 'True'}),
'publisher_public': (
'django.db.models.fields.related.OneToOneField', [],
{'related_name': "'publisher_draft'", 'unique': 'True', 'null': 'True',
'to': "orm['cms.Page']"}),
'publisher_state': ('django.db.models.fields.SmallIntegerField', [],
{'default': '0', 'db_index': 'True'}),
'reverse_id': ('django.db.models.fields.CharField', [],
{'db_index': 'True', 'max_length': '40', 'null': 'True',
'blank': 'True'}),
'rght': ('django.db.models.fields.PositiveIntegerField', [],
{'db_index': 'True'}),
'site': ('django.db.models.fields.related.ForeignKey', [],
{'to': "orm['sites.Site']"}),
'soft_root': ('django.db.models.fields.BooleanField', [],
{'default': 'False', 'db_index': 'True'}),
'template': (
'django.db.models.fields.CharField', [], {'max_length': '100'}),
'tree_id': ('django.db.models.fields.PositiveIntegerField', [],
{'db_index': 'True'})
},
'cms.pagemoderator': {
'Meta': {'object_name': 'PageModerator'},
'id': (
'django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'moderate_children': (
'django.db.models.fields.BooleanField', [], {'default': 'False'}),
'moderate_descendants': (
'django.db.models.fields.BooleanField', [], {'default': 'False'}),
'moderate_page': (
'django.db.models.fields.BooleanField', [], {'default': 'False'}),
'page': ('django.db.models.fields.related.ForeignKey', [],
{'to': "orm['cms.Page']"}),
'user': ('django.db.models.fields.related.ForeignKey', [],
{'to': "orm['%s']" % user_orm_label})
},
'cms.pagemoderatorstate': {
'Meta': {'ordering': "('page', 'action', '-created')",
'object_name': 'PageModeratorState'},
'action': ('django.db.models.fields.CharField', [],
{'max_length': '3', 'null': 'True', 'blank': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [],
{'auto_now_add': 'True', 'blank': 'True'}),
'id': (
'django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'message': ('django.db.models.fields.TextField', [],
{'default': "''", 'max_length': '1000', 'blank': 'True'}),
'page': ('django.db.models.fields.related.ForeignKey', [],
{'to': "orm['cms.Page']"}),
'user': ('django.db.models.fields.related.ForeignKey', [],
{'to': "orm['%s']" % user_orm_label, 'null': 'True'})
},
'cms.pagepermission': {
'Meta': {'object_name': 'PagePermission'},
'can_add': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_change': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_change_advanced_settings': (
'django.db.models.fields.BooleanField', [], {'default': 'False'}),
'can_change_permissions': (
'django.db.models.fields.BooleanField', [], {'default': 'False'}),
'can_delete': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_moderate': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_move_page': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_publish': (
'django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_view': (
'django.db.models.fields.BooleanField', [], {'default': 'False'}),
'grant_on': (
'django.db.models.fields.IntegerField', [], {'default': '5'}),
'group': ('django.db.models.fields.related.ForeignKey', [],
{'to': "orm['auth.Group']", 'null': 'True', 'blank': 'True'}),
'id': (
'django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'page': ('django.db.models.fields.related.ForeignKey', [],
{'to': "orm['cms.Page']", 'null': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [],
{'to': "orm['%s']" % user_orm_label, 'null': 'True', 'blank': 'True'})
},
'cms.pageuser': {
'Meta': {'object_name': 'PageUser', '_ormbases': [user_orm_label]},
'created_by': ('django.db.models.fields.related.ForeignKey', [],
{'related_name': "'created_users'",
'to': "orm['%s']" % user_orm_label}),
'user_ptr': ('django.db.models.fields.related.OneToOneField', [],
{'to': "orm['%s']" % user_orm_label, 'unique': 'True',
'primary_key': 'True'})
},
'cms.pageusergroup': {
'Meta': {'object_name': 'PageUserGroup', '_ormbases': ['auth.Group']},
'created_by': ('django.db.models.fields.related.ForeignKey', [],
{'related_name': "'created_usergroups'",
'to': "orm['%s']" % user_orm_label}),
'group_ptr': ('django.db.models.fields.related.OneToOneField', [],
{'to': "orm['auth.Group']", 'unique': 'True',
'primary_key': 'True'})
},
'cms.placeholder': {
'Meta': {'object_name': 'Placeholder'},
'default_width': (
'django.db.models.fields.PositiveSmallIntegerField', [],
{'null': 'True'}),
'id': (
'django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'slot': ('django.db.models.fields.CharField', [],
{'max_length': '50', 'db_index': 'True'})
},
'cms.title': {
'Meta': {'unique_together': "(('language', 'page'),)",
'object_name': 'Title'},
'application_urls': ('django.db.models.fields.CharField', [],
{'db_index': 'True', 'max_length': '200',
'null': 'True', 'blank': 'True'}),
'creation_date': ('django.db.models.fields.DateTimeField', [],
{'default': 'datetime.datetime.now'}),
'has_url_overwrite': ('django.db.models.fields.BooleanField', [],
{'default': 'False', 'db_index': 'True'}),
'id': (
'django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'language': ('django.db.models.fields.CharField', [],
{'max_length': '15', 'db_index': 'True'}),
'menu_title': ('django.db.models.fields.CharField', [],
{'max_length': '255', 'null': 'True', 'blank': 'True'}),
'meta_description': ('django.db.models.fields.TextField', [],
{'max_length': '255', 'null': 'True',
'blank': 'True'}),
'meta_keywords': ('django.db.models.fields.CharField', [],
{'max_length': '255', 'null': 'True',
'blank': 'True'}),
'page': ('django.db.models.fields.related.ForeignKey', [],
{'related_name': "'title_set'", 'to': "orm['cms.Page']"}),
'page_title': ('django.db.models.fields.CharField', [],
{'max_length': '255', 'null': 'True', 'blank': 'True'}),
'path': ('django.db.models.fields.CharField', [],
{'max_length': '255', 'db_index': 'True'}),
'redirect': ('django.db.models.fields.CharField', [],
{'max_length': '255', 'null': 'True', 'blank': 'True'}),
'slug': (
'django.db.models.fields.SlugField', [], {'max_length': '255'}),
'title': (
'django.db.models.fields.CharField', [], {'max_length': '255'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)",
'unique_together': "(('app_label', 'model'),)",
'object_name': 'ContentType',
'db_table': "'django_content_type'"},
'app_label': (
'django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': (
'django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': (
'django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'sites.site': {
'Meta': {'ordering': "('domain',)", 'object_name': 'Site',
'db_table': "'django_site'"},
'domain': (
'django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': (
'django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
}
}
complete_apps = ['cms']
| bsd-3-clause |
Govexec/django-categories | categories/editor/utils.py | 13 | 4618 | """
Provides compatibility with Django 1.1
Copied from django.contrib.admin.util
"""
from django.forms.forms import pretty_name
from django.db import models
from django.db.models.related import RelatedObject
from django.utils.encoding import force_unicode, smart_unicode, smart_str
from django.utils.translation import get_date_formats
from django.utils.text import capfirst
from django.utils import dateformat
from django.utils.html import escape
def lookup_field(name, obj, model_admin=None):
opts = obj._meta
try:
f = opts.get_field(name)
except models.FieldDoesNotExist:
# For non-field values, the value is either a method, property or
# returned via a callable.
if callable(name):
attr = name
value = attr(obj)
elif (model_admin is not None and hasattr(model_admin, name) and
not name == '__str__' and not name == '__unicode__'):
attr = getattr(model_admin, name)
value = attr(obj)
else:
attr = getattr(obj, name)
if callable(attr):
value = attr()
else:
value = attr
f = None
else:
attr = None
value = getattr(obj, name)
return f, attr, value
def label_for_field(name, model, model_admin=None, return_attr=False):
"""
    Returns a sensible label for a field name. The name can be a callable or the
    name of an object attribute, as well as a genuine field. If return_attr is
True, the resolved attribute (which could be a callable) is also returned.
This will be None if (and only if) the name refers to a field.
"""
attr = None
try:
field = model._meta.get_field_by_name(name)[0]
if isinstance(field, RelatedObject):
label = field.opts.verbose_name
else:
label = field.verbose_name
except models.FieldDoesNotExist:
if name == "__unicode__":
label = force_unicode(model._meta.verbose_name)
attr = unicode
elif name == "__str__":
label = smart_str(model._meta.verbose_name)
attr = str
else:
if callable(name):
attr = name
elif model_admin is not None and hasattr(model_admin, name):
attr = getattr(model_admin, name)
elif hasattr(model, name):
attr = getattr(model, name)
else:
message = "Unable to lookup '%s' on %s" % (name, model._meta.object_name)
if model_admin:
message += " or %s" % (model_admin.__class__.__name__,)
raise AttributeError(message)
if hasattr(attr, "short_description"):
label = attr.short_description
elif callable(attr):
if attr.__name__ == "<lambda>":
label = "--"
else:
label = pretty_name(attr.__name__)
else:
label = pretty_name(name)
if return_attr:
return (label, attr)
else:
return label
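# Illustrative sketch of how label_for_field() resolves labels; SomeModel is a
# hypothetical model class:
#
#   label_for_field('title', SomeModel)             # verbose_name of the field
#   label_for_field('__unicode__', SomeModel)       # the model's verbose_name
#   label_for_field(lambda obj: obj.pk, SomeModel)  # unnamed lambdas label as '--'
#   label_for_field('missing', SomeModel)           # raises AttributeError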
def display_for_field(value, field):
from django.contrib.admin.templatetags.admin_list import _boolean_icon
from django.contrib.admin.views.main import EMPTY_CHANGELIST_VALUE
if field.flatchoices:
return dict(field.flatchoices).get(value, EMPTY_CHANGELIST_VALUE)
# NullBooleanField needs special-case null-handling, so it comes
# before the general null test.
elif isinstance(field, models.BooleanField) or isinstance(field, models.NullBooleanField):
return _boolean_icon(value)
elif value is None:
return EMPTY_CHANGELIST_VALUE
elif isinstance(field, models.DateField) or isinstance(field, models.TimeField):
if value:
(date_format, datetime_format, time_format) = get_date_formats()
if isinstance(field, models.DateTimeField):
return capfirst(dateformat.format(value, datetime_format))
elif isinstance(field, models.TimeField):
return capfirst(dateformat.time_format(value, time_format))
else:
return capfirst(dateformat.format(value, date_format))
else:
return EMPTY_CHANGELIST_VALUE
elif isinstance(field, models.DecimalField):
if value is not None:
return ('%%.%sf' % field.decimal_places) % value
else:
return EMPTY_CHANGELIST_VALUE
elif isinstance(field, models.FloatField):
return escape(value)
else:
return smart_unicode(value)
| apache-2.0 |
HybridF5/jacket | jacket/api/compute/openstack/compute/legacy_v2/contrib/agents.py | 1 | 7972 | # Copyright 2012 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import webob.exc
from jacket.api.compute.openstack import extensions
from jacket import context as nova_context
from jacket.compute import exception
from jacket.i18n import _
from jacket.objects import compute as objects
from jacket.compute import utils
authorize = extensions.extension_authorizer('compute', 'agents')
class AgentController(object):
"""The agent is talking about guest agent.The host can use this for
things like accessing files on the disk, configuring networking,
or running other applications/scripts in the guest while it is
running. Typically this uses some hypervisor-specific transport
to avoid being dependent on a working network configuration.
    Xen, VMware, and VirtualBox have guest agents, although the Xen
driver is the only one with an implementation for managing them
in openstack. KVM doesn't really have a concept of a guest agent
(although one could be written).
You can find the design of agent update in this link:
http://wiki.openstack.org/AgentUpdate
and find the code in compute.virt.xenapi.vmops.VMOps._boot_new_instance.
    In this design we need to update the agent in the guest from the host,
    so we need some interfaces to update the agent info on the host.
You can find more information about the design of the GuestAgent in
the following link:
http://wiki.openstack.org/GuestAgent
http://wiki.openstack.org/GuestAgentXenStoreCommunication
"""
def index(self, req):
"""Return a list of all agent builds. Filter by hypervisor."""
context = req.environ['compute.context']
authorize(context)
# NOTE(alex_xu): back-compatible with db layer hard-code admin
# permission checks.
nova_context.require_admin_context(context)
hypervisor = None
agents = []
if 'hypervisor' in req.GET:
hypervisor = req.GET['hypervisor']
builds = objects.AgentList.get_all(context, hypervisor=hypervisor)
for agent_build in builds:
agents.append({'hypervisor': agent_build.hypervisor,
'os': agent_build.os,
'architecture': agent_build.architecture,
'version': agent_build.version,
'md5hash': agent_build.md5hash,
'agent_id': agent_build.id,
'url': agent_build.url})
return {'agents': agents}
def update(self, req, id, body):
"""Update an existing agent build."""
context = req.environ['compute.context']
authorize(context)
# NOTE(alex_xu): back-compatible with db layer hard-code admin
# permission checks.
nova_context.require_admin_context(context)
try:
para = body['para']
url = para['url']
md5hash = para['md5hash']
version = para['version']
except (TypeError, KeyError) as ex:
msg = _("Invalid request body: %s") % ex
raise webob.exc.HTTPBadRequest(explanation=msg)
try:
utils.validate_integer(id, 'id')
utils.check_string_length(url, 'url', max_length=255)
utils.check_string_length(md5hash, 'md5hash', max_length=255)
utils.check_string_length(version, 'version', max_length=255)
except exception.InvalidInput as exc:
raise webob.exc.HTTPBadRequest(explanation=exc.format_message())
try:
agent = objects.Agent(context=context, id=id)
agent.obj_reset_changes()
agent.version = version
agent.url = url
agent.md5hash = md5hash
agent.save()
except exception.AgentBuildNotFound as ex:
raise webob.exc.HTTPNotFound(explanation=ex.format_message())
# NOTE(alex_xu): The agent_id should be integer that consistent with
# create/index actions. But parameter 'id' is string type that parsed
# from url. This is a bug, but because back-compatibility, it can't be
# fixed for v2 API. This will be fixed in v2.1 API by Microversions in
# the future. lp bug #1333494
return {"agent": {'agent_id': id, 'version': version,
'url': url, 'md5hash': md5hash}}
def delete(self, req, id):
"""Deletes an existing agent build."""
context = req.environ['compute.context']
authorize(context)
# NOTE(alex_xu): back-compatible with db layer hard-code admin
# permission checks.
nova_context.require_admin_context(context)
try:
utils.validate_integer(id, 'id')
except exception.InvalidInput as exc:
raise webob.exc.HTTPBadRequest(explanation=exc.format_message())
try:
agent = objects.Agent(context=context, id=id)
agent.destroy()
except exception.AgentBuildNotFound as ex:
raise webob.exc.HTTPNotFound(explanation=ex.format_message())
def create(self, req, body):
"""Creates a new agent build."""
context = req.environ['compute.context']
authorize(context)
# NOTE(alex_xu): back-compatible with db layer hard-code admin
# permission checks.
nova_context.require_admin_context(context)
try:
agent = body['agent']
hypervisor = agent['hypervisor']
os = agent['os']
architecture = agent['architecture']
version = agent['version']
url = agent['url']
md5hash = agent['md5hash']
except (TypeError, KeyError) as ex:
msg = _("Invalid request body: %s") % ex
raise webob.exc.HTTPBadRequest(explanation=msg)
try:
utils.check_string_length(hypervisor, 'hypervisor', max_length=255)
utils.check_string_length(os, 'os', max_length=255)
utils.check_string_length(architecture, 'architecture',
max_length=255)
utils.check_string_length(version, 'version', max_length=255)
utils.check_string_length(url, 'url', max_length=255)
utils.check_string_length(md5hash, 'md5hash', max_length=255)
except exception.InvalidInput as exc:
raise webob.exc.HTTPBadRequest(explanation=exc.format_message())
try:
agent_obj = objects.Agent(context=context)
agent_obj.hypervisor = hypervisor
agent_obj.os = os
agent_obj.architecture = architecture
agent_obj.version = version
agent_obj.url = url
agent_obj.md5hash = md5hash
agent_obj.create()
agent['agent_id'] = agent_obj.id
except exception.AgentBuildExists as ex:
raise webob.exc.HTTPConflict(explanation=ex.format_message())
return {'agent': agent}
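# Illustrative sketch of the JSON body that create() above expects; all values
# are hypothetical:
#
#   {"agent": {"hypervisor": "xen",
#              "os": "linux",
#              "architecture": "x86_64",
#              "version": "1.0",
#              "url": "http://example.com/agent-1.0.tar.gz",
#              "md5hash": "d41d8cd98f00b204e9800998ecf8427e"}}
#
# update() takes the same url/md5hash/version fields nested under "para" instead.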
class Agents(extensions.ExtensionDescriptor):
"""Agents support."""
name = "Agents"
alias = "os-agents"
namespace = "http://docs.openstack.org/compute/ext/agents/api/v2"
updated = "2012-10-28T00:00:00Z"
def get_resources(self):
resources = []
resource = extensions.ResourceExtension('os-agents',
AgentController())
resources.append(resource)
return resources
| apache-2.0 |
openshift/openshift-tools | openshift_tools/monitoring/gcputil.py | 13 | 3808 | #!/usr/bin/env python
# vim: expandtab:tabstop=4:shiftwidth=4
'''
Interface to gcloud
'''
#
# Copyright 2015 Red Hat Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Disabling invalid-name because pylint doesn't like the naming conention we have.
# pylint: disable=invalid-name
import json
import shlex
import subprocess
# Used for the low level google-api-python-client
# pylint: disable=import-error,unused-import
from apiclient.discovery import build
from oauth2client.client import GoogleCredentials
class GcloudUtil(object):
''' Routines for interacting with gcloud'''
def __init__(self, gcp_creds_path='/root/.gce/creds.json', verbose=False):
''' Save auth details for later usage '''
if not gcp_creds_path:
credentials = GoogleCredentials.get_application_default()
else:
credentials = GoogleCredentials.from_stream(gcp_creds_path)
self._credentials = credentials
self.creds_path = gcp_creds_path
self.verbose = verbose
self.auth_activate_svc_account()
def _run_cmd(self, cmd, out_format=None):
        ''' Actually call out to the gcloud/gsutil command line tool '''
cmd = shlex.split(cmd)
if self.verbose:
print "Running command: " + str(cmd)
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()
rval = stdout
if proc.returncode != 0:
raise Exception("Non-zero return on command: {}. Error: [{}]".format(str(cmd), stderr))
if out_format == 'json':
try:
rval = json.loads(rval)
except ValueError as _:
# Error parsing output
pass
return rval
def auth_activate_svc_account(self):
'''activate a svc account'''
cmd = 'gcloud auth activate-service-account --key-file %s' % self.creds_path
self._run_cmd(cmd)
def get_bucket_list(self):
        ''' Get list of all GCS buckets visible to the service account '''
gcs_cmd = "gsutil ls gs://"
buckets = self._run_cmd(gcs_cmd)
# strip the gs:// and trailing / on each bucket
buckets = [bucket[5:-1] for bucket in buckets.strip().split('\n')]
if self.verbose:
print "Buckets found: " + str(buckets)
return buckets
def get_bucket_info(self, bucket):
''' Get size (in GB) and object count for bucket
Currently object is unsupported.
gsutil does not have an easy way to return an object count that
returns in a timely manner.
gsutil ls -lR gs://bucket | tail -n 1
gsutil du gs://bucket | wc -l
'''
cmd = "gsutil du -s gs://{}".format(bucket)
output = self._run_cmd(cmd)
if self.verbose:
print "cmd: %s. Results: [%s]" % (cmd, output)
# First check whether the bucket is completely empty
if output == "":
# Empty bucket, so just return size 0, object count 0
return [0, 0]
size = int(output.split()[0])
if self.verbose:
print "Bucket: {} Size: {} Objects: {}".format(bucket, size, 0)
size_gb = float(size) / 1024 / 1024 / 1024
return [size_gb, 0]
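# Illustrative usage sketch; the credentials path and output are hypothetical:
#
#   gcp = GcloudUtil(gcp_creds_path='/root/.gce/creds.json', verbose=True)
#   for bucket in gcp.get_bucket_list():
#       size_gb, _ = gcp.get_bucket_info(bucket)
#       print "%s: %.2f GB" % (bucket, size_gb)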
| apache-2.0 |
elkingtonmcb/rethinkdb | external/re2_20140111/re2/make_unicode_casefold.py | 105 | 3603 | #!/usr/bin/python
# coding=utf-8
#
# Copyright 2008 The RE2 Authors. All Rights Reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.
# See unicode_casefold.h for description of case folding tables.
"""Generate C++ table for Unicode case folding."""
import unicode, sys
_header = """
// GENERATED BY make_unicode_casefold.py; DO NOT EDIT.
// make_unicode_casefold.py >unicode_casefold.cc
#include "re2/unicode_casefold.h"
namespace re2 {
"""
_trailer = """
} // namespace re2
"""
def _Delta(a, b):
"""Compute the delta for b - a. Even/odd and odd/even
are handled specially, as described above."""
if a+1 == b:
if a%2 == 0:
return 'EvenOdd'
else:
return 'OddEven'
if a == b+1:
if a%2 == 0:
return 'OddEven'
else:
return 'EvenOdd'
return b - a
def _AddDelta(a, delta):
"""Return a + delta, handling EvenOdd and OddEven specially."""
if type(delta) == int:
return a+delta
if delta == 'EvenOdd':
if a%2 == 0:
return a+1
else:
return a-1
if delta == 'OddEven':
if a%2 == 1:
return a+1
else:
return a-1
print >>sys.stderr, "Bad Delta: ", delta
raise "Bad Delta"
def _MakeRanges(pairs):
"""Turn a list like [(65,97), (66, 98), ..., (90,122)]
into [(65, 90, +32)]."""
ranges = []
last = -100
def evenodd(last, a, b, r):
if a != last+1 or b != _AddDelta(a, r[2]):
return False
r[1] = a
return True
def evenoddpair(last, a, b, r):
if a != last+2:
return False
delta = r[2]
d = delta
if type(delta) is not str:
return False
if delta.endswith('Skip'):
d = delta[:-4]
else:
delta = d + 'Skip'
if b != _AddDelta(a, d):
return False
r[1] = a
r[2] = delta
return True
for a, b in pairs:
if ranges and evenodd(last, a, b, ranges[-1]):
pass
elif ranges and evenoddpair(last, a, b, ranges[-1]):
pass
else:
ranges.append([a, a, _Delta(a, b)])
last = a
return ranges
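# Worked examples for _Delta()/_MakeRanges() above (inputs mirror the docstring):
#
#   _MakeRanges([(65, 97), (66, 98), (67, 99)])  ->  [[65, 67, 32]]
#   _MakeRanges([(97, 65), (98, 66)])            ->  [[97, 98, -32]]
#   _Delta(452, 453)                             ->  'EvenOdd'  (adjacent, even first)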
# The maximum size of a case-folding group.
# Case folding is implemented in parse.cc by a recursive process
# with a recursion depth equal to the size of the largest
# case-folding group, so it is important that this bound be small.
# The current tables have no group bigger than 4.
# If there are ever groups bigger than 10 or so, it will be
# time to rework the code in parse.cc.
MaxCasefoldGroup = 4
def main():
lowergroups, casegroups = unicode.CaseGroups()
foldpairs = []
seen = {}
for c in casegroups:
if len(c) > MaxCasefoldGroup:
raise unicode.Error("casefold group too long: %s" % (c,))
for i in range(len(c)):
if c[i-1] in seen:
raise unicode.Error("bad casegroups %d -> %d" % (c[i-1], c[i]))
seen[c[i-1]] = True
foldpairs.append([c[i-1], c[i]])
lowerpairs = []
for lower, group in lowergroups.iteritems():
for g in group:
if g != lower:
lowerpairs.append([g, lower])
def printpairs(name, foldpairs):
foldpairs.sort()
foldranges = _MakeRanges(foldpairs)
print "// %d groups, %d pairs, %d ranges" % (len(casegroups), len(foldpairs), len(foldranges))
print "const CaseFold unicode_%s[] = {" % (name,)
for lo, hi, delta in foldranges:
print "\t{ %d, %d, %s }," % (lo, hi, delta)
print "};"
print "const int num_unicode_%s = %d;" % (name, len(foldranges),)
print ""
print _header
printpairs("casefold", foldpairs)
printpairs("tolower", lowerpairs)
print _trailer
if __name__ == '__main__':
main()
| agpl-3.0 |
yongshengwang/hue | build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/contrib/gis/maps/google/zoom.py | 224 | 6622 | from django.contrib.gis.geos import GEOSGeometry, LinearRing, Polygon, Point
from django.contrib.gis.maps.google.gmap import GoogleMapException
from django.utils.six.moves import xrange
from math import pi, sin, log, exp, atan
# Constants used for degree to radian conversion, and vice-versa.
DTOR = pi / 180.
RTOD = 180. / pi
class GoogleZoom(object):
"""
GoogleZoom is a utility for performing operations related to the zoom
levels on Google Maps.
This class is inspired by the OpenStreetMap Mapnik tile generation routine
`generate_tiles.py`, and the article "How Big Is the World" (Hack #16) in
"Google Maps Hacks" by Rich Gibson and Schuyler Erle.
`generate_tiles.py` may be found at:
http://trac.openstreetmap.org/browser/applications/rendering/mapnik/generate_tiles.py
"Google Maps Hacks" may be found at http://safari.oreilly.com/0596101619
"""
def __init__(self, num_zoom=19, tilesize=256):
"Initializes the Google Zoom object."
# Google's tilesize is 256x256, square tiles are assumed.
self._tilesize = tilesize
# The number of zoom levels
self._nzoom = num_zoom
# Initializing arrays to hold the parameters for each one of the
# zoom levels.
self._degpp = [] # Degrees per pixel
self._radpp = [] # Radians per pixel
self._npix = [] # 1/2 the number of pixels for a tile at the given zoom level
# Incrementing through the zoom levels and populating the parameter arrays.
z = tilesize # The number of pixels per zoom level.
for i in xrange(num_zoom):
            # Getting the degrees and radians per pixel, and 1/2 the number of
            # pixels for every zoom level.
self._degpp.append(z / 360.) # degrees per pixel
self._radpp.append(z / (2 * pi)) # radians per pixel
self._npix.append(z / 2) # number of pixels to center of tile
# Multiplying `z` by 2 for the next iteration.
z *= 2
def __len__(self):
"Returns the number of zoom levels."
return self._nzoom
def get_lon_lat(self, lonlat):
"Unpacks longitude, latitude from GEOS Points and 2-tuples."
if isinstance(lonlat, Point):
lon, lat = lonlat.coords
else:
lon, lat = lonlat
return lon, lat
def lonlat_to_pixel(self, lonlat, zoom):
"Converts a longitude, latitude coordinate pair for the given zoom level."
# Setting up, unpacking the longitude, latitude values and getting the
# number of pixels for the given zoom level.
lon, lat = self.get_lon_lat(lonlat)
npix = self._npix[zoom]
# Calculating the pixel x coordinate by multiplying the longitude value
        # with the number of degrees/pixel at the given zoom level.
px_x = round(npix + (lon * self._degpp[zoom]))
# Creating the factor, and ensuring that 1 or -1 is not passed in as the
# base to the logarithm. Here's why:
# if fac = -1, we'll get log(0) which is undefined;
# if fac = 1, our logarithm base will be divided by 0, also undefined.
fac = min(max(sin(DTOR * lat), -0.9999), 0.9999)
# Calculating the pixel y coordinate.
px_y = round(npix + (0.5 * log((1 + fac)/(1 - fac)) * (-1.0 * self._radpp[zoom])))
# Returning the pixel x, y to the caller of the function.
return (px_x, px_y)
def pixel_to_lonlat(self, px, zoom):
"Converts a pixel to a longitude, latitude pair at the given zoom level."
if len(px) != 2:
raise TypeError('Pixel should be a sequence of two elements.')
# Getting the number of pixels for the given zoom level.
npix = self._npix[zoom]
# Calculating the longitude value, using the degrees per pixel.
lon = (px[0] - npix) / self._degpp[zoom]
# Calculating the latitude value.
lat = RTOD * ( 2 * atan(exp((px[1] - npix)/ (-1.0 * self._radpp[zoom]))) - 0.5 * pi)
# Returning the longitude, latitude coordinate pair.
return (lon, lat)
def tile(self, lonlat, zoom):
"""
Returns a Polygon corresponding to the region represented by a fictional
Google Tile for the given longitude/latitude pair and zoom level. This
tile is used to determine the size of a tile at the given point.
"""
# The given lonlat is the center of the tile.
delta = self._tilesize / 2
# Getting the pixel coordinates corresponding to the
# the longitude/latitude.
px = self.lonlat_to_pixel(lonlat, zoom)
# Getting the lower-left and upper-right lat/lon coordinates
# for the bounding box of the tile.
ll = self.pixel_to_lonlat((px[0]-delta, px[1]-delta), zoom)
ur = self.pixel_to_lonlat((px[0]+delta, px[1]+delta), zoom)
# Constructing the Polygon, representing the tile and returning.
return Polygon(LinearRing(ll, (ll[0], ur[1]), ur, (ur[0], ll[1]), ll), srid=4326)
def get_zoom(self, geom):
"Returns the optimal Zoom level for the given geometry."
# Checking the input type.
if not isinstance(geom, GEOSGeometry) or geom.srid != 4326:
raise TypeError('get_zoom() expects a GEOS Geometry with an SRID of 4326.')
# Getting the envelope for the geometry, and its associated width, height
# and centroid.
env = geom.envelope
env_w, env_h = self.get_width_height(env.extent)
center = env.centroid
for z in xrange(self._nzoom):
# Getting the tile at the zoom level.
tile_w, tile_h = self.get_width_height(self.tile(center, z).extent)
# When we span more than one tile, this is an approximately good
# zoom level.
if (env_w > tile_w) or (env_h > tile_h):
if z == 0:
raise GoogleMapException('Geometry width and height should not exceed that of the Earth.')
return z-1
# Otherwise, we've zoomed in to the max.
return self._nzoom-1
def get_width_height(self, extent):
"""
Returns the width and height for the given extent.
"""
# Getting the lower-left, upper-left, and upper-right
# coordinates from the extent.
ll = Point(extent[:2])
ul = Point(extent[0], extent[3])
ur = Point(extent[2:])
# Calculating the width and height.
height = ll.distance(ul)
width = ul.distance(ur)
return width, height
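# Illustrative usage sketch; the coordinates are hypothetical, and plain 2-tuples
# work because get_lon_lat() accepts them as well as GEOS Points:
#
#   gz = GoogleZoom()
#   px = gz.lonlat_to_pixel((-95.0, 29.0), 10)    # lon/lat -> pixel pair at zoom 10
#   lonlat = gz.pixel_to_lonlat(px, 10)           # roughly (-95.0, 29.0) again
#   bbox = gz.tile((-95.0, 29.0), 10)             # SRID 4326 Polygon for that tile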
| apache-2.0 |
cwyark/micropython | py/makeqstrdata.py | 29 | 5496 | """
Process raw qstr file and output qstr data with length, hash and data bytes.
This script works with Python 2.6, 2.7, 3.3 and 3.4.
"""
from __future__ import print_function
import re
import sys
# Python 2/3 compatibility:
# - iterating through bytes is different
# - codepoint2name lives in a different module
import platform
if platform.python_version_tuple()[0] == '2':
bytes_cons = lambda val, enc=None: bytearray(val)
from htmlentitydefs import codepoint2name
elif platform.python_version_tuple()[0] == '3':
bytes_cons = bytes
from html.entities import codepoint2name
# end compatibility code
codepoint2name[ord('-')] = 'hyphen';
# add some custom names to map characters that aren't in HTML
codepoint2name[ord(' ')] = 'space'
codepoint2name[ord('\'')] = 'squot'
codepoint2name[ord(',')] = 'comma'
codepoint2name[ord('.')] = 'dot'
codepoint2name[ord(':')] = 'colon'
codepoint2name[ord(';')] = 'semicolon'
codepoint2name[ord('/')] = 'slash'
codepoint2name[ord('%')] = 'percent'
codepoint2name[ord('#')] = 'hash'
codepoint2name[ord('(')] = 'paren_open'
codepoint2name[ord(')')] = 'paren_close'
codepoint2name[ord('[')] = 'bracket_open'
codepoint2name[ord(']')] = 'bracket_close'
codepoint2name[ord('{')] = 'brace_open'
codepoint2name[ord('}')] = 'brace_close'
codepoint2name[ord('*')] = 'star'
codepoint2name[ord('!')] = 'bang'
codepoint2name[ord('\\')] = 'backslash'
codepoint2name[ord('+')] = 'plus'
codepoint2name[ord('$')] = 'dollar'
codepoint2name[ord('=')] = 'equals'
codepoint2name[ord('?')] = 'question'
codepoint2name[ord('@')] = 'at_sign'
codepoint2name[ord('^')] = 'caret'
codepoint2name[ord('|')] = 'pipe'
codepoint2name[ord('~')] = 'tilde'
# this must match the equivalent function in qstr.c
def compute_hash(qstr, bytes_hash):
hash = 5381
for b in qstr:
hash = (hash * 33) ^ b
# Make sure that valid hash is never zero, zero means "hash not computed"
return (hash & ((1 << (8 * bytes_hash)) - 1)) or 1
def qstr_escape(qst):
def esc_char(m):
c = ord(m.group(0))
try:
name = codepoint2name[c]
except KeyError:
name = '0x%02x' % c
return "_" + name + '_'
return re.sub(r'[^A-Za-z0-9_]', esc_char, qst)
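# Illustrative sketch of the helpers above; outputs follow from the escape table
# and the djb2-style hash (masked to the configured width, never zero):
#
#   qstr_escape('foo/bar.py')   ->  'foo_slash_bar_dot_py'
#   qstr_escape('__init__')     ->  '__init__'
#   compute_hash(b'print', 1)   ->  an int in 1..255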
def parse_input_headers(infiles):
# read the qstrs in from the input files
qcfgs = {}
qstrs = {}
for infile in infiles:
with open(infile, 'rt') as f:
for line in f:
line = line.strip()
# is this a config line?
match = re.match(r'^QCFG\((.+), (.+)\)', line)
if match:
value = match.group(2)
if value[0] == '(' and value[-1] == ')':
# strip parenthesis from config value
value = value[1:-1]
qcfgs[match.group(1)] = value
continue
# is this a QSTR line?
match = re.match(r'^Q\((.*)\)$', line)
if not match:
continue
# get the qstr value
qstr = match.group(1)
# special case to specify control characters
if qstr == '\\n':
qstr = '\n'
# work out the corresponding qstr name
ident = qstr_escape(qstr)
# don't add duplicates
if ident in qstrs:
continue
# add the qstr to the list, with order number to retain original order in file
qstrs[ident] = (len(qstrs), ident, qstr)
if not qcfgs:
sys.stderr.write("ERROR: Empty preprocessor output - check for errors above\n")
sys.exit(1)
return qcfgs, qstrs
def make_bytes(cfg_bytes_len, cfg_bytes_hash, qstr):
qbytes = bytes_cons(qstr, 'utf8')
qlen = len(qbytes)
qhash = compute_hash(qbytes, cfg_bytes_hash)
if all(32 <= ord(c) <= 126 and c != '\\' and c != '"' for c in qstr):
# qstr is all printable ASCII so render it as-is (for easier debugging)
qdata = qstr
else:
# qstr contains non-printable codes so render entire thing as hex pairs
qdata = ''.join(('\\x%02x' % b) for b in qbytes)
if qlen >= (1 << (8 * cfg_bytes_len)):
print('qstr is too long:', qstr)
assert False
qlen_str = ('\\x%02x' * cfg_bytes_len) % tuple(((qlen >> (8 * i)) & 0xff) for i in range(cfg_bytes_len))
qhash_str = ('\\x%02x' * cfg_bytes_hash) % tuple(((qhash >> (8 * i)) & 0xff) for i in range(cfg_bytes_hash))
return '(const byte*)"%s%s" "%s"' % (qhash_str, qlen_str, qdata)
def print_qstr_data(qcfgs, qstrs):
# get config variables
cfg_bytes_len = int(qcfgs['BYTES_IN_LEN'])
cfg_bytes_hash = int(qcfgs['BYTES_IN_HASH'])
# print out the starter of the generated C header file
print('// This file was automatically generated by makeqstrdata.py')
print('')
# add NULL qstr with no hash or data
print('QDEF(MP_QSTR_NULL, (const byte*)"%s%s" "")' % ('\\x00' * cfg_bytes_hash, '\\x00' * cfg_bytes_len))
# go through each qstr and print it out
for order, ident, qstr in sorted(qstrs.values(), key=lambda x: x[0]):
qbytes = make_bytes(cfg_bytes_len, cfg_bytes_hash, qstr)
print('QDEF(MP_QSTR_%s, %s)' % (ident, qbytes))
def do_work(infiles):
qcfgs, qstrs = parse_input_headers(infiles)
print_qstr_data(qcfgs, qstrs)
if __name__ == "__main__":
do_work(sys.argv[1:])
| mit |
hobson/aima | aima/games.py | 3 | 10183 | """Games, or Adversarial Search. (Chapter 5)
"""
from utils import *
import random
#______________________________________________________________________________
# Minimax Search
def minimax_decision(state, game):
"""Given a state in a game, calculate the best move by searching
forward all the way to the terminal states. [Fig. 5.3]"""
player = game.to_move(state)
def max_value(state):
if game.terminal_test(state):
return game.utility(state, player)
v = -infinity
for a in game.actions(state):
v = max(v, min_value(game.result(state, a)))
return v
def min_value(state):
if game.terminal_test(state):
return game.utility(state, player)
v = infinity
for a in game.actions(state):
v = min(v, max_value(game.result(state, a)))
return v
# Body of minimax_decision:
return argmax(game.actions(state),
lambda a: min_value(game.result(state, a)))
#______________________________________________________________________________
def alphabeta_full_search(state, game):
"""Search game to determine best action; use alpha-beta pruning.
As in [Fig. 5.7], this version searches all the way to the leaves."""
player = game.to_move(state)
def max_value(state, alpha, beta):
if game.terminal_test(state):
return game.utility(state, player)
v = -infinity
for a in game.actions(state):
v = max(v, min_value(game.result(state, a), alpha, beta))
if v >= beta:
return v
alpha = max(alpha, v)
return v
def min_value(state, alpha, beta):
if game.terminal_test(state):
return game.utility(state, player)
v = infinity
for a in game.actions(state):
v = min(v, max_value(game.result(state, a), alpha, beta))
if v <= alpha:
return v
beta = min(beta, v)
return v
# Body of alphabeta_search:
return argmax(game.actions(state),
lambda a: min_value(game.result(state, a),
-infinity, infinity))
def alphabeta_search(state, game, d=4, cutoff_test=None, eval_fn=None):
"""Search game to determine best action; use alpha-beta pruning.
This version cuts off search and uses an evaluation function."""
player = game.to_move(state)
def max_value(state, alpha, beta, depth):
if cutoff_test(state, depth):
return eval_fn(state)
v = -infinity
for a in game.actions(state):
v = max(v, min_value(game.result(state, a),
alpha, beta, depth+1))
if v >= beta:
return v
alpha = max(alpha, v)
return v
def min_value(state, alpha, beta, depth):
if cutoff_test(state, depth):
return eval_fn(state)
v = infinity
for a in game.actions(state):
v = min(v, max_value(game.result(state, a),
alpha, beta, depth+1))
if v <= alpha:
return v
beta = min(beta, v)
return v
# Body of alphabeta_search starts here:
# The default test cuts off at depth d or at a terminal state
cutoff_test = (cutoff_test or
(lambda state,depth: depth>d or game.terminal_test(state)))
eval_fn = eval_fn or (lambda state: game.utility(state, player))
return argmax(game.actions(state),
lambda a: min_value(game.result(state, a),
-infinity, infinity, 0))
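# Illustrative sketch: cutting off alphabeta_search at a fixed depth with a
# custom evaluation function (the lambda below is a trivial placeholder, not a
# real heuristic):
#
#   game = TicTacToe()
#   move = alphabeta_search(game.initial, game, d=2,
#                           eval_fn=lambda state: state.utility)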
#______________________________________________________________________________
# Players for Games
def query_player(game, state):
"Make a move by querying standard input."
game.display(state)
return num_or_str(raw_input('Your move? '))
def random_player(game, state):
"A player that chooses a legal move at random."
return random.choice(game.actions(state))
def alphabeta_player(game, state):
return alphabeta_search(state, game)
def play_game(game, *players):
"""Play an n-person, move-alternating game.
>>> play_game(Fig52Game(), alphabeta_player, alphabeta_player)
3
"""
state = game.initial
while True:
for player in players:
move = player(game, state)
state = game.result(state, move)
if game.terminal_test(state):
return game.utility(state, game.to_move(game.initial))
#______________________________________________________________________________
# Some Sample Games
class Game:
"""A game is similar to a problem, but it has a utility for each
state and a terminal test instead of a path cost and a goal
test. To create a game, subclass this class and implement actions,
result, utility, and terminal_test. You may override display and
successors or you can inherit their default methods. You will also
need to set the .initial attribute to the initial state; this can
be done in the constructor."""
def actions(self, state):
"Return a list of the allowable moves at this point."
abstract
def result(self, state, move):
"Return the state that results from making a move from a state."
abstract
def utility(self, state, player):
"Return the value of this final state to player."
abstract
def terminal_test(self, state):
"Return True if this is a final state for the game."
return not self.actions(state)
def to_move(self, state):
"Return the player whose move it is in this state."
return state.to_move
def display(self, state):
"Print or otherwise display the state."
print state
def __repr__(self):
return '<%s>' % self.__class__.__name__
class Fig52Game(Game):
"""The game represented in [Fig. 5.2]. Serves as a simple test case.
>>> g = Fig52Game()
>>> minimax_decision('A', g)
'a1'
>>> alphabeta_full_search('A', g)
'a1'
>>> alphabeta_search('A', g)
'a1'
"""
succs = dict(A=dict(a1='B', a2='C', a3='D'),
B=dict(b1='B1', b2='B2', b3='B3'),
C=dict(c1='C1', c2='C2', c3='C3'),
D=dict(d1='D1', d2='D2', d3='D3'))
utils = Dict(B1=3, B2=12, B3=8, C1=2, C2=4, C3=6, D1=14, D2=5, D3=2)
initial = 'A'
def actions(self, state):
return self.succs.get(state, {}).keys()
def result(self, state, move):
return self.succs[state][move]
def utility(self, state, player):
if player == 'MAX':
return self.utils[state]
else:
return -self.utils[state]
def terminal_test(self, state):
return state not in ('A', 'B', 'C', 'D')
def to_move(self, state):
return if_(state in 'BCD', 'MIN', 'MAX')
class TicTacToe(Game):
"""Play TicTacToe on an h x v board, with Max (first player) playing 'X'.
A state has the player to move, a cached utility, a list of moves in
the form of a list of (x, y) positions, and a board, in the form of
a dict of {(x, y): Player} entries, where Player is 'X' or 'O'."""
def __init__(self, h=3, v=3, k=3):
update(self, h=h, v=v, k=k)
moves = [(x, y) for x in range(1, h+1)
for y in range(1, v+1)]
self.initial = Struct(to_move='X', utility=0, board={}, moves=moves)
def actions(self, state):
"Legal moves are any square not yet taken."
return state.moves
def result(self, state, move):
if move not in state.moves:
return state # Illegal move has no effect
board = state.board.copy(); board[move] = state.to_move
moves = list(state.moves); moves.remove(move)
return Struct(to_move=if_(state.to_move == 'X', 'O', 'X'),
utility=self.compute_utility(board, move, state.to_move),
board=board, moves=moves)
def utility(self, state, player):
"Return the value to player; 1 for win, -1 for loss, 0 otherwise."
return if_(player == 'X', state.utility, -state.utility)
def terminal_test(self, state):
"A state is terminal if it is won or there are no empty squares."
return state.utility != 0 or len(state.moves) == 0
def display(self, state):
board = state.board
for x in range(1, self.h+1):
for y in range(1, self.v+1):
print board.get((x, y), '.'),
print
def compute_utility(self, board, move, player):
"If X wins with this move, return 1; if O return -1; else return 0."
if (self.k_in_row(board, move, player, (0, 1)) or
self.k_in_row(board, move, player, (1, 0)) or
self.k_in_row(board, move, player, (1, -1)) or
self.k_in_row(board, move, player, (1, 1))):
return if_(player == 'X', +1, -1)
else:
return 0
def k_in_row(self, board, move, player, (delta_x, delta_y)):
"Return true if there is a line through move on board for player."
x, y = move
n = 0 # n is number of moves in row
while board.get((x, y)) == player:
n += 1
x, y = x + delta_x, y + delta_y
x, y = move
while board.get((x, y)) == player:
n += 1
x, y = x - delta_x, y - delta_y
n -= 1 # Because we counted move itself twice
return n >= self.k
class ConnectFour(TicTacToe):
"""A TicTacToe-like game in which you can only make a move on the bottom
row, or in a square directly above an occupied square. Traditionally
played on a 7x6 board and requiring 4 in a row."""
def __init__(self, h=7, v=6, k=4):
TicTacToe.__init__(self, h, v, k)
def actions(self, state):
return [(x, y) for (x, y) in state.moves
if y == 0 or (x, y-1) in state.board]
__doc__ += random_tests("""
>>> play_game(Fig52Game(), random_player, random_player)
6
>>> play_game(TicTacToe(), random_player, random_player)
0
""")
| mit |
jamiefolsom/edx-platform | common/lib/capa/capa/tests/__init__.py | 129 | 2700 | """Tools for helping with testing capa."""
import gettext
import os
import os.path
import fs.osfs
from capa.capa_problem import LoncapaProblem, LoncapaSystem
from capa.inputtypes import Status
from mock import Mock, MagicMock
import xml.sax.saxutils as saxutils
TEST_DIR = os.path.dirname(os.path.realpath(__file__))
def tst_render_template(template, context):
"""
A test version of render to template. Renders to the repr of the context, completely ignoring
the template name. To make the output valid xml, quotes the content, and wraps it in a <div>
"""
return '<div>{0}</div>'.format(saxutils.escape(repr(context)))
def calledback_url(dispatch='score_update'):
return dispatch
xqueue_interface = MagicMock()
xqueue_interface.send_to_queue.return_value = (0, 'Success!')
def test_capa_system():
"""
Construct a mock LoncapaSystem instance.
"""
the_system = Mock(
spec=LoncapaSystem,
ajax_url='/dummy-ajax-url',
anonymous_student_id='student',
cache=None,
can_execute_unsafe_code=lambda: False,
get_python_lib_zip=lambda: None,
DEBUG=True,
filestore=fs.osfs.OSFS(os.path.join(TEST_DIR, "test_files")),
i18n=gettext.NullTranslations(),
node_path=os.environ.get("NODE_PATH", "/usr/local/lib/node_modules"),
render_template=tst_render_template,
seed=0,
STATIC_URL='/dummy-static/',
STATUS_CLASS=Status,
xqueue={'interface': xqueue_interface, 'construct_callback': calledback_url, 'default_queuename': 'testqueue', 'waittime': 10},
)
return the_system
def mock_capa_module():
"""
    capa response types need just two things from the capa_module: location and track_function.
"""
capa_module = Mock()
capa_module.location.to_deprecated_string.return_value = 'i4x://Foo/bar/mock/abc'
# The following comes into existence by virtue of being called
# capa_module.runtime.track_function
return capa_module
def new_loncapa_problem(xml, capa_system=None, seed=723):
"""Construct a `LoncapaProblem` suitable for unit tests."""
return LoncapaProblem(xml, id='1', seed=seed, capa_system=capa_system or test_capa_system(),
capa_module=mock_capa_module())
def load_fixture(relpath):
"""
Return a `unicode` object representing the contents
of the fixture file at the given path within a test_files directory
in the same directory as the test file.
"""
abspath = os.path.join(os.path.dirname(__file__), 'test_files', relpath)
with open(abspath) as fixture_file:
contents = fixture_file.read()
return contents.decode('utf8')
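# Illustrative sketch (not part of the original module): a typical unit test
# builds a problem from an XML string with the helpers above; the XML and the
# answer-id below are made-up minimal examples following the usual capa
# conventions.
#
#     xml = ('<problem><stringresponse answer="right">'
#            '<textline/></stringresponse></problem>')
#     problem = new_loncapa_problem(xml)
#     correct_map = problem.grade_answers({'1_2_1': 'right'})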
| agpl-3.0 |
yangminz/Semi-supervised_Embedding | src/utils/mnist_util.py | 1 | 3928 | # -*- coding: utf-8 -*-
"""
Created on Thu Feb 25 14:40:06 2016
load MNIST dataset
@author: liudiwei
"""
import numpy as np
import struct
import matplotlib.pyplot as plt
import os
class DataUtils(object):
"""MNIST数据集加载
输出格式为:numpy.array()
使用方法如下
from data_util import DataUtils
def main():
trainfile_X = '../dataset/MNIST/train-images.idx3-ubyte'
trainfile_y = '../dataset/MNIST/train-labels.idx1-ubyte'
testfile_X = '../dataset/MNIST/t10k-images.idx3-ubyte'
testfile_y = '../dataset/MNIST/t10k-labels.idx1-ubyte'
train_X = DataUtils(filename=trainfile_X).getImage()
train_y = DataUtils(filename=trainfile_y).getLabel()
test_X = DataUtils(testfile_X).getImage()
test_y = DataUtils(testfile_y).getLabel()
        # The following saves the images to local files
#path_trainset = "../dataset/MNIST/imgs_train"
#path_testset = "../dataset/MNIST/imgs_test"
#if not os.path.exists(path_trainset):
# os.mkdir(path_trainset)
#if not os.path.exists(path_testset):
# os.mkdir(path_testset)
#DataUtils(outpath=path_trainset).outImg(train_X, train_y)
#DataUtils(outpath=path_testset).outImg(test_X, test_y)
return train_X, train_y, test_X, test_y
"""
def __init__(self, filename=None, outpath=None):
self._filename = filename
self._outpath = outpath
self._tag = '>'
self._twoBytes = 'II'
self._fourBytes = 'IIII'
self._pictureBytes = '784B'
self._labelByte = '1B'
self._twoBytes2 = self._tag + self._twoBytes
self._fourBytes2 = self._tag + self._fourBytes
self._pictureBytes2 = self._tag + self._pictureBytes
self._labelByte2 = self._tag + self._labelByte
def getImage(self):
"""
将MNIST的二进制文件转换成像素特征数据
"""
        binfile = open(self._filename, 'rb')  # open the file in binary mode
buf = binfile.read()
binfile.close()
index = 0
        numMagic, numImgs, numRows, numCols = struct.unpack_from(
            self._fourBytes2, buf, index)
index += struct.calcsize(self._fourBytes)
images = []
for i in range(numImgs):
imgVal = struct.unpack_from(self._pictureBytes2, buf, index)
index += struct.calcsize(self._pictureBytes2)
imgVal = list(imgVal)
for j in range(len(imgVal)):
if imgVal[j] > 1:
imgVal[j] = 1
images.append(imgVal)
return np.array(images)
def getLabel(self):
"""
将MNIST中label二进制文件转换成对应的label数字特征
"""
binFile = open(self._filename,'rb')
buf = binFile.read()
binFile.close()
index = 0
magic, numItems= struct.unpack_from(self._twoBytes2, buf,index)
index += struct.calcsize(self._twoBytes2)
        labels = []
for x in range(numItems):
im = struct.unpack_from(self._labelByte2,buf,index)
index += struct.calcsize(self._labelByte2)
labels.append(im[0])
return np.array(labels)
def outImg(self, arrX, arrY):
"""
根据生成的特征和数字标号,输出png的图像
"""
m, n = np.shape(arrX)
        # each image is 28*28 = 784 bytes
for i in range(1):
img = np.array(arrX[i])
img = img.reshape(28,28)
outfile = str(i) + "_" + str(arrY[i]) + ".png"
plt.figure()
            plt.imshow(img, cmap='binary')  # display the image in black and white
plt.savefig(self._outpath + "/" + outfile)
| gpl-3.0 |
wuyong2k/Stino | stino/pyarduino/arduino_project.py | 14 | 1725 | #!/usr/bin/env python
#-*- coding: utf-8 -*-
# 1. Copyright
# 2. License
# 3. Author
"""
Documents
"""
from __future__ import absolute_import
from __future__ import print_function
from __future__ import division
from __future__ import unicode_literals
import os
from . import base
from . import arduino_src
class Project(base.abs_file.Dir):
def __init__(self, path):
super(Project, self).__init__(path)
primary_file_name = self.name + '.ino'
primary_file_path = os.path.join(self.path, primary_file_name)
self.primary_file = base.abs_file.File(primary_file_path)
def list_ino_files(self):
files = self.list_files_of_extensions(arduino_src.INO_EXTS)
if files and self.primary_file.is_file():
files = [f for f in files if f.name.lower() !=
self.primary_file.name.lower()]
files.insert(0, self.primary_file)
return files
def list_cpp_files(self, is_big_project=False):
if is_big_project:
cpp_files = self.recursive_list_files(arduino_src.CPP_EXTS)
else:
cpp_files = self.list_files_of_extensions(arduino_src.CPP_EXTS)
primary_file_path = self.primary_file.get_path()
for cpp_file in cpp_files:
cpp_file_path = cpp_file.get_path()
if cpp_file_path.startswith(primary_file_path):
cpp_files.remove(cpp_file)
break
return cpp_files
def list_h_files(self, is_big_project=False):
if is_big_project:
            files = self.recursive_list_files(arduino_src.H_EXTS)
else:
files = self.list_files_of_extensions(arduino_src.H_EXTS)
return files
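# Illustrative sketch (not part of the original module): Project is normally
# instantiated with the path of a sketch folder; the path below is a
# placeholder.
#
#     project = Project('/path/to/MySketch')
#     sources = project.list_ino_files() + project.list_cpp_files()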
| mit |
vvv1559/intellij-community | python/lib/Lib/site-packages/django/http/__init__.py | 73 | 23818 | import datetime
import os
import re
import time
from pprint import pformat
from urllib import urlencode, quote
from urlparse import urljoin
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
try:
# The mod_python version is more efficient, so try importing it first.
from mod_python.util import parse_qsl
except ImportError:
try:
# Python 2.6 and greater
from urlparse import parse_qsl
except ImportError:
# Python 2.5, 2.4. Works on Python 2.6 but raises
# PendingDeprecationWarning
from cgi import parse_qsl
# httponly support exists in Python 2.6's Cookie library,
# but not in Python 2.4 or 2.5.
import Cookie
if Cookie.Morsel._reserved.has_key('httponly'):
SimpleCookie = Cookie.SimpleCookie
else:
class Morsel(Cookie.Morsel):
def __setitem__(self, K, V):
K = K.lower()
if K == "httponly":
if V:
# The superclass rejects httponly as a key,
# so we jump to the grandparent.
super(Cookie.Morsel, self).__setitem__(K, V)
else:
super(Morsel, self).__setitem__(K, V)
def OutputString(self, attrs=None):
output = super(Morsel, self).OutputString(attrs)
if "httponly" in self:
output += "; httponly"
return output
class SimpleCookie(Cookie.SimpleCookie):
def __set(self, key, real_value, coded_value):
M = self.get(key, Morsel())
M.set(key, real_value, coded_value)
dict.__setitem__(self, key, M)
def __setitem__(self, key, value):
rval, cval = self.value_encode(value)
self.__set(key, rval, cval)
from django.utils.datastructures import MultiValueDict, ImmutableList
from django.utils.encoding import smart_str, iri_to_uri, force_unicode
from django.utils.http import cookie_date
from django.http.multipartparser import MultiPartParser
from django.conf import settings
from django.core.files import uploadhandler
from utils import *
RESERVED_CHARS="!*'();:@&=+$,/?%#[]"
absolute_http_url_re = re.compile(r"^https?://", re.I)
class Http404(Exception):
pass
class HttpRequest(object):
"""A basic HTTP request."""
# The encoding used in GET/POST dicts. None means use default setting.
_encoding = None
_upload_handlers = []
def __init__(self):
self.GET, self.POST, self.COOKIES, self.META, self.FILES = {}, {}, {}, {}, {}
self.path = ''
self.path_info = ''
self.method = None
def __repr__(self):
return '<HttpRequest\nGET:%s,\nPOST:%s,\nCOOKIES:%s,\nMETA:%s>' % \
(pformat(self.GET), pformat(self.POST), pformat(self.COOKIES),
pformat(self.META))
def get_host(self):
"""Returns the HTTP host using the environment or request headers."""
# We try three options, in order of decreasing preference.
if 'HTTP_X_FORWARDED_HOST' in self.META:
host = self.META['HTTP_X_FORWARDED_HOST']
elif 'HTTP_HOST' in self.META:
host = self.META['HTTP_HOST']
else:
# Reconstruct the host using the algorithm from PEP 333.
host = self.META['SERVER_NAME']
server_port = str(self.META['SERVER_PORT'])
if server_port != (self.is_secure() and '443' or '80'):
host = '%s:%s' % (host, server_port)
return host
def get_full_path(self):
return ''
def build_absolute_uri(self, location=None):
"""
Builds an absolute URI from the location and the variables available in
this request. If no location is specified, the absolute URI is built on
``request.get_full_path()``.
"""
if not location:
location = self.get_full_path()
if not absolute_http_url_re.match(location):
current_uri = '%s://%s%s' % (self.is_secure() and 'https' or 'http',
self.get_host(), self.path)
location = urljoin(current_uri, location)
return iri_to_uri(location)
def is_secure(self):
return os.environ.get("HTTPS") == "on"
def is_ajax(self):
return self.META.get('HTTP_X_REQUESTED_WITH') == 'XMLHttpRequest'
def _set_encoding(self, val):
"""
Sets the encoding used for GET/POST accesses. If the GET or POST
dictionary has already been created, it is removed and recreated on the
next access (so that it is decoded correctly).
"""
self._encoding = val
if hasattr(self, '_get'):
del self._get
if hasattr(self, '_post'):
del self._post
def _get_encoding(self):
return self._encoding
encoding = property(_get_encoding, _set_encoding)
def _initialize_handlers(self):
self._upload_handlers = [uploadhandler.load_handler(handler, self)
for handler in settings.FILE_UPLOAD_HANDLERS]
def _set_upload_handlers(self, upload_handlers):
if hasattr(self, '_files'):
raise AttributeError("You cannot set the upload handlers after the upload has been processed.")
self._upload_handlers = upload_handlers
def _get_upload_handlers(self):
if not self._upload_handlers:
            # If there are no upload handlers defined, initialize them from settings.
self._initialize_handlers()
return self._upload_handlers
upload_handlers = property(_get_upload_handlers, _set_upload_handlers)
def parse_file_upload(self, META, post_data):
"""Returns a tuple of (POST QueryDict, FILES MultiValueDict)."""
self.upload_handlers = ImmutableList(
self.upload_handlers,
warning = "You cannot alter upload handlers after the upload has been processed."
)
parser = MultiPartParser(META, post_data, self.upload_handlers, self.encoding)
return parser.parse()
def _get_raw_post_data(self):
if not hasattr(self, '_raw_post_data'):
if self._read_started:
raise Exception("You cannot access raw_post_data after reading from request's data stream")
try:
content_length = int(self.META.get('CONTENT_LENGTH', 0))
except (ValueError, TypeError):
# If CONTENT_LENGTH was empty string or not an integer, don't
# error out. We've also seen None passed in here (against all
# specs, but see ticket #8259), so we handle TypeError as well.
content_length = 0
if content_length:
self._raw_post_data = self.read(content_length)
else:
self._raw_post_data = self.read()
self._stream = StringIO(self._raw_post_data)
return self._raw_post_data
raw_post_data = property(_get_raw_post_data)
def _mark_post_parse_error(self):
self._post = QueryDict('')
self._files = MultiValueDict()
self._post_parse_error = True
def _load_post_and_files(self):
# Populates self._post and self._files
if self.method != 'POST':
self._post, self._files = QueryDict('', encoding=self._encoding), MultiValueDict()
return
if self._read_started:
self._mark_post_parse_error()
return
if self.META.get('CONTENT_TYPE', '').startswith('multipart'):
self._raw_post_data = ''
try:
self._post, self._files = self.parse_file_upload(self.META, self)
except:
                # An error occurred while parsing POST data. Since when
                # formatting the error the request handler might access
                # self.POST, set self._post and self._files to prevent
                # attempts to parse POST data again.
                # Mark that an error occurred. This allows self.__repr__ to
                # be explicit about it instead of simply representing an
                # empty POST.
self._mark_post_parse_error()
raise
else:
self._post, self._files = QueryDict(self.raw_post_data, encoding=self._encoding), MultiValueDict()
## File-like and iterator interface.
##
## Expects self._stream to be set to an appropriate source of bytes by
## a corresponding request subclass (WSGIRequest or ModPythonRequest).
## Also when request data has already been read by request.POST or
## request.raw_post_data, self._stream points to a StringIO instance
## containing that data.
def read(self, *args, **kwargs):
self._read_started = True
return self._stream.read(*args, **kwargs)
def readline(self, *args, **kwargs):
self._read_started = True
return self._stream.readline(*args, **kwargs)
def xreadlines(self):
while True:
buf = self.readline()
if not buf:
break
yield buf
__iter__ = xreadlines
def readlines(self):
return list(iter(self))
class QueryDict(MultiValueDict):
"""
A specialized MultiValueDict that takes a query string when initialized.
This is immutable unless you create a copy of it.
Values retrieved from this class are converted from the given encoding
(DEFAULT_CHARSET by default) to unicode.
"""
# These are both reset in __init__, but is specified here at the class
# level so that unpickling will have valid values
_mutable = True
_encoding = None
def __init__(self, query_string, mutable=False, encoding=None):
MultiValueDict.__init__(self)
if not encoding:
# *Important*: do not import settings any earlier because of note
# in core.handlers.modpython.
from django.conf import settings
encoding = settings.DEFAULT_CHARSET
self.encoding = encoding
for key, value in parse_qsl((query_string or ''), True): # keep_blank_values=True
self.appendlist(force_unicode(key, encoding, errors='replace'),
force_unicode(value, encoding, errors='replace'))
self._mutable = mutable
def _get_encoding(self):
if self._encoding is None:
# *Important*: do not import settings at the module level because
# of the note in core.handlers.modpython.
from django.conf import settings
self._encoding = settings.DEFAULT_CHARSET
return self._encoding
def _set_encoding(self, value):
self._encoding = value
encoding = property(_get_encoding, _set_encoding)
def _assert_mutable(self):
if not self._mutable:
raise AttributeError("This QueryDict instance is immutable")
def __setitem__(self, key, value):
self._assert_mutable()
key = str_to_unicode(key, self.encoding)
value = str_to_unicode(value, self.encoding)
MultiValueDict.__setitem__(self, key, value)
def __delitem__(self, key):
self._assert_mutable()
super(QueryDict, self).__delitem__(key)
def __copy__(self):
result = self.__class__('', mutable=True, encoding=self.encoding)
for key, value in dict.items(self):
dict.__setitem__(result, key, value)
return result
def __deepcopy__(self, memo):
import django.utils.copycompat as copy
result = self.__class__('', mutable=True, encoding=self.encoding)
memo[id(self)] = result
for key, value in dict.items(self):
dict.__setitem__(result, copy.deepcopy(key, memo), copy.deepcopy(value, memo))
return result
def setlist(self, key, list_):
self._assert_mutable()
key = str_to_unicode(key, self.encoding)
list_ = [str_to_unicode(elt, self.encoding) for elt in list_]
MultiValueDict.setlist(self, key, list_)
def setlistdefault(self, key, default_list=()):
self._assert_mutable()
if key not in self:
self.setlist(key, default_list)
return MultiValueDict.getlist(self, key)
def appendlist(self, key, value):
self._assert_mutable()
key = str_to_unicode(key, self.encoding)
value = str_to_unicode(value, self.encoding)
MultiValueDict.appendlist(self, key, value)
def update(self, other_dict):
self._assert_mutable()
f = lambda s: str_to_unicode(s, self.encoding)
if hasattr(other_dict, 'lists'):
for key, valuelist in other_dict.lists():
for value in valuelist:
MultiValueDict.update(self, {f(key): f(value)})
else:
d = dict([(f(k), f(v)) for k, v in other_dict.items()])
MultiValueDict.update(self, d)
def pop(self, key, *args):
self._assert_mutable()
return MultiValueDict.pop(self, key, *args)
def popitem(self):
self._assert_mutable()
return MultiValueDict.popitem(self)
def clear(self):
self._assert_mutable()
MultiValueDict.clear(self)
def setdefault(self, key, default=None):
self._assert_mutable()
key = str_to_unicode(key, self.encoding)
default = str_to_unicode(default, self.encoding)
return MultiValueDict.setdefault(self, key, default)
def copy(self):
"""Returns a mutable copy of this object."""
return self.__deepcopy__({})
def urlencode(self, safe=None):
"""
Returns an encoded string of all query string arguments.
:arg safe: Used to specify characters which do not require quoting, for
example::
>>> q = QueryDict('', mutable=True)
>>> q['next'] = '/a&b/'
>>> q.urlencode()
'next=%2Fa%26b%2F'
>>> q.urlencode(safe='/')
'next=/a%26b/'
"""
output = []
if safe:
encode = lambda k, v: '%s=%s' % ((quote(k, safe), quote(v, safe)))
else:
encode = lambda k, v: urlencode({k: v})
for k, list_ in self.lists():
k = smart_str(k, self.encoding)
output.extend([encode(k, smart_str(v, self.encoding))
for v in list_])
return '&'.join(output)
class CompatCookie(SimpleCookie):
"""
Cookie class that handles some issues with browser compatibility.
"""
def value_encode(self, val):
# Some browsers do not support quoted-string from RFC 2109,
# including some versions of Safari and Internet Explorer.
# These browsers split on ';', and some versions of Safari
# are known to split on ', '. Therefore, we encode ';' and ','
# SimpleCookie already does the hard work of encoding and decoding.
# It uses octal sequences like '\\012' for newline etc.
# and non-ASCII chars. We just make use of this mechanism, to
# avoid introducing two encoding schemes which would be confusing
# and especially awkward for javascript.
# NB, contrary to Python docs, value_encode returns a tuple containing
# (real val, encoded_val)
val, encoded = super(CompatCookie, self).value_encode(val)
encoded = encoded.replace(";", "\\073").replace(",","\\054")
# If encoded now contains any quoted chars, we need double quotes
# around the whole string.
if "\\" in encoded and not encoded.startswith('"'):
encoded = '"' + encoded + '"'
return val, encoded
def parse_cookie(cookie):
if cookie == '':
return {}
if not isinstance(cookie, Cookie.BaseCookie):
try:
c = CompatCookie()
c.load(cookie)
except Cookie.CookieError:
# Invalid cookie
return {}
else:
c = cookie
cookiedict = {}
for key in c.keys():
cookiedict[key] = c.get(key).value
return cookiedict
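# Illustrative sketch (not part of the original module): parse_cookie turns a
# raw Cookie header value into a plain dict of cookie values.
#
#     cookies = parse_cookie('sessionid=abc123; csrftoken=xyz')
#     # -> {'sessionid': 'abc123', 'csrftoken': 'xyz'}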
class BadHeaderError(ValueError):
pass
class HttpResponse(object):
"""A basic HTTP response, with content and dictionary-accessed headers."""
status_code = 200
def __init__(self, content='', mimetype=None, status=None,
content_type=None):
# _headers is a mapping of the lower-case name to the original case of
# the header (required for working with legacy systems) and the header
# value. Both the name of the header and its value are ASCII strings.
self._headers = {}
self._charset = settings.DEFAULT_CHARSET
if mimetype:
content_type = mimetype # For backwards compatibility
if not content_type:
content_type = "%s; charset=%s" % (settings.DEFAULT_CONTENT_TYPE,
self._charset)
if not isinstance(content, basestring) and hasattr(content, '__iter__'):
self._container = content
self._is_string = False
else:
self._container = [content]
self._is_string = True
self.cookies = CompatCookie()
if status:
self.status_code = status
self['Content-Type'] = content_type
def __str__(self):
"""Full HTTP message, including headers."""
return '\n'.join(['%s: %s' % (key, value)
for key, value in self._headers.values()]) \
+ '\n\n' + self.content
def _convert_to_ascii(self, *values):
"""Converts all values to ascii strings."""
for value in values:
if isinstance(value, unicode):
try:
value = value.encode('us-ascii')
except UnicodeError, e:
e.reason += ', HTTP response headers must be in US-ASCII format'
raise
else:
value = str(value)
if '\n' in value or '\r' in value:
raise BadHeaderError("Header values can't contain newlines (got %r)" % (value))
yield value
def __setitem__(self, header, value):
header, value = self._convert_to_ascii(header, value)
self._headers[header.lower()] = (header, value)
def __delitem__(self, header):
try:
del self._headers[header.lower()]
except KeyError:
pass
def __getitem__(self, header):
return self._headers[header.lower()][1]
def has_header(self, header):
"""Case-insensitive check for a header."""
return self._headers.has_key(header.lower())
__contains__ = has_header
def items(self):
return self._headers.values()
def get(self, header, alternate):
return self._headers.get(header.lower(), (None, alternate))[1]
def set_cookie(self, key, value='', max_age=None, expires=None, path='/',
domain=None, secure=False, httponly=False):
"""
Sets a cookie.
``expires`` can be a string in the correct format or a
``datetime.datetime`` object in UTC. If ``expires`` is a datetime
object then ``max_age`` will be calculated.
"""
self.cookies[key] = value
if expires is not None:
if isinstance(expires, datetime.datetime):
delta = expires - expires.utcnow()
# Add one second so the date matches exactly (a fraction of
# time gets lost between converting to a timedelta and
# then the date string).
delta = delta + datetime.timedelta(seconds=1)
# Just set max_age - the max_age logic will set expires.
expires = None
max_age = max(0, delta.days * 86400 + delta.seconds)
else:
self.cookies[key]['expires'] = expires
if max_age is not None:
self.cookies[key]['max-age'] = max_age
# IE requires expires, so set it if hasn't been already.
if not expires:
self.cookies[key]['expires'] = cookie_date(time.time() +
max_age)
if path is not None:
self.cookies[key]['path'] = path
if domain is not None:
self.cookies[key]['domain'] = domain
if secure:
self.cookies[key]['secure'] = True
if httponly:
self.cookies[key]['httponly'] = True
def delete_cookie(self, key, path='/', domain=None):
self.set_cookie(key, max_age=0, path=path, domain=domain,
expires='Thu, 01-Jan-1970 00:00:00 GMT')
def _get_content(self):
if self.has_header('Content-Encoding'):
return ''.join(self._container)
return smart_str(''.join(self._container), self._charset)
def _set_content(self, value):
self._container = [value]
self._is_string = True
content = property(_get_content, _set_content)
def __iter__(self):
self._iterator = iter(self._container)
return self
def next(self):
chunk = self._iterator.next()
if isinstance(chunk, unicode):
chunk = chunk.encode(self._charset)
return str(chunk)
def close(self):
if hasattr(self._container, 'close'):
self._container.close()
# The remaining methods partially implement the file-like object interface.
# See http://docs.python.org/lib/bltin-file-objects.html
def write(self, content):
if not self._is_string:
raise Exception("This %s instance is not writable" % self.__class__)
self._container.append(content)
def flush(self):
pass
def tell(self):
if not self._is_string:
raise Exception("This %s instance cannot tell its position" % self.__class__)
return sum([len(chunk) for chunk in self._container])
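# Illustrative sketch (not part of the original module): building a response
# and attaching a cookie and a header with the API defined above; the values
# are placeholders.
#
#     response = HttpResponse('ok', content_type='text/plain')
#     response.set_cookie('beta', '1', max_age=3600, httponly=True)
#     response['X-Custom'] = 'value'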
class HttpResponseRedirect(HttpResponse):
status_code = 302
def __init__(self, redirect_to):
HttpResponse.__init__(self)
self['Location'] = iri_to_uri(redirect_to)
class HttpResponsePermanentRedirect(HttpResponse):
status_code = 301
def __init__(self, redirect_to):
HttpResponse.__init__(self)
self['Location'] = iri_to_uri(redirect_to)
class HttpResponseNotModified(HttpResponse):
status_code = 304
class HttpResponseBadRequest(HttpResponse):
status_code = 400
class HttpResponseNotFound(HttpResponse):
status_code = 404
class HttpResponseForbidden(HttpResponse):
status_code = 403
class HttpResponseNotAllowed(HttpResponse):
status_code = 405
def __init__(self, permitted_methods):
HttpResponse.__init__(self)
self['Allow'] = ', '.join(permitted_methods)
class HttpResponseGone(HttpResponse):
status_code = 410
def __init__(self, *args, **kwargs):
HttpResponse.__init__(self, *args, **kwargs)
class HttpResponseServerError(HttpResponse):
status_code = 500
def __init__(self, *args, **kwargs):
HttpResponse.__init__(self, *args, **kwargs)
# A backwards compatible alias for HttpRequest.get_host.
def get_host(request):
return request.get_host()
# It's neither necessary nor appropriate to use
# django.utils.encoding.smart_unicode for parsing URLs and form inputs. Thus,
# this slightly more restricted function.
def str_to_unicode(s, encoding):
"""
Converts basestring objects to unicode, using the given encoding. Illegally
encoded input characters are replaced with Unicode "unknown" codepoint
(\ufffd).
Returns any non-basestring objects without change.
"""
if isinstance(s, str):
return unicode(s, encoding, 'replace')
else:
return s
| apache-2.0 |
cpcloud/blaze | blaze/compute/pydatetime.py | 16 | 6133 | from __future__ import absolute_import, division, print_function
from datetime import datetime, date, timedelta
import sys
def identity(x):
return x
def asday(dt):
if isinstance(dt, datetime):
return dt.date()
else:
return dt
def asweek(dt):
if isinstance(dt, datetime):
dt = dt.date()
return dt - timedelta(days=dt.isoweekday() - 1)
def ashour(dt):
return datetime(dt.year, dt.month, dt.day, dt.hour, tzinfo=dt.tzinfo)
def asminute(dt):
return datetime(dt.year, dt.month, dt.day, dt.hour, dt.minute,
tzinfo=dt.tzinfo)
def assecond(dt):
return datetime(dt.year, dt.month, dt.day, dt.hour, dt.minute,
dt.second, tzinfo=dt.tzinfo)
def asmillisecond(dt):
return datetime(dt.year, dt.month, dt.day, dt.hour, dt.minute,
dt.second, dt.microsecond // 1000, tzinfo=dt.tzinfo)
if sys.version_info < (2, 7):
def total_seconds(td):
""" Total seconds of a timedelta
For Python 2.6 compatibility
"""
return (td.microseconds + 1e6 * (td.seconds + 24 * 3600 * td.days)) / 1e6
else:
total_seconds = timedelta.total_seconds
unit_map = {'year': 'asyear',
'month': 'asmonth',
'week': 'asweek',
'day': 'asday',
'hour': 'ashour',
'minute': 'asminute',
'second': 'assecond',
'millisecond': 'asmillisecond',
'microsecond': identity}
def truncate_year(dt, measure):
"""
Truncate by years
>>> dt = datetime(2003, 6, 25, 12, 30, 0)
>>> truncate_year(dt, 1)
datetime.date(2003, 1, 1)
>>> truncate_year(dt, 5)
datetime.date(2000, 1, 1)
"""
return date(dt.year // measure * measure, 1, 1)
def truncate_month(dt, measure):
"""
Truncate by months
>>> dt = datetime(2000, 10, 25, 12, 30, 0)
>>> truncate_month(dt, 1)
datetime.date(2000, 10, 1)
>>> truncate_month(dt, 4)
datetime.date(2000, 8, 1)
"""
months = dt.year * 12 + dt.month
months = months // measure * measure
return date((months - 1) // 12, (months - 1) % 12 + 1, 1)
def truncate_day(dt, measure):
"""
Truncate by days
>>> dt = datetime(2000, 6, 27, 12, 30, 0)
>>> truncate_day(dt, 1)
datetime.date(2000, 6, 27)
>>> truncate_day(dt, 3)
datetime.date(2000, 6, 25)
"""
days = dt.toordinal()
days = days // measure * measure
return date.fromordinal(days)
oneday = timedelta(days=1)
def truncate_week(dt, measure):
"""
Truncate by weeks
>>> dt = datetime(2000, 6, 22, 12, 30, 0)
>>> truncate_week(dt, 1)
datetime.date(2000, 6, 18)
>>> truncate_week(dt, 3)
datetime.date(2000, 6, 4)
Weeks are defined by having isoweekday == 7 (Sunday)
>>> truncate_week(dt, 1).isoweekday()
7
"""
return truncate_day(dt, measure * 7)
epoch = datetime.utcfromtimestamp(0)
def utctotimestamp(dt):
"""
Convert a timestamp to seconds
>>> dt = datetime(2000, 1, 1)
>>> utctotimestamp(dt)
946684800.0
>>> datetime.utcfromtimestamp(946684800)
datetime.datetime(2000, 1, 1, 0, 0)
"""
return total_seconds(dt - epoch)
def truncate_minute(dt, measure):
"""
Truncate by minute
>>> dt = datetime(2000, 1, 1, 12, 30, 38)
>>> truncate_minute(dt, 1)
datetime.datetime(2000, 1, 1, 12, 30)
>>> truncate_minute(dt, 12)
datetime.datetime(2000, 1, 1, 12, 24)
"""
return asminute(truncate_second(dt, measure * 60))
def truncate_hour(dt, measure):
"""
Truncate by hour
>>> dt = datetime(2000, 1, 1, 12, 30, 38)
>>> truncate_hour(dt, 1)
datetime.datetime(2000, 1, 1, 12, 0)
>>> truncate_hour(dt, 5)
datetime.datetime(2000, 1, 1, 10, 0)
"""
return ashour(truncate_second(dt, measure * 3600))
def truncate_second(dt, measure):
"""
Truncate by second
>>> dt = datetime(2000, 1, 1, 12, 30, 38)
>>> truncate_second(dt, 15)
datetime.datetime(2000, 1, 1, 12, 30, 30)
"""
d = datetime(
dt.year, dt.month, dt.day, tzinfo=dt.tzinfo) # local zero for seconds
seconds = total_seconds(dt - d) // measure * measure
return dt.utcfromtimestamp(seconds + utctotimestamp(d))
def truncate_millisecond(dt, measure):
"""
Truncate by millisecond
>>> dt = datetime(2000, 1, 1, 12, 30, 38, 12345)
>>> truncate_millisecond(dt, 5)
datetime.datetime(2000, 1, 1, 12, 30, 38, 10000)
"""
d = datetime(
dt.year, dt.month, dt.day, tzinfo=dt.tzinfo) # local zero for seconds
seconds = total_seconds(dt - d) * 1000 // measure * measure / 1000. + 1e-7
return dt.utcfromtimestamp(seconds + utctotimestamp(d))
def truncate_microsecond(dt, measure):
"""
Truncate by microsecond
>>> dt = datetime(2000, 1, 1, 12, 30, 38, 12345)
>>> truncate_microsecond(dt, 100)
datetime.datetime(2000, 1, 1, 12, 30, 38, 12300)
"""
d = datetime(
dt.year, dt.month, dt.day, tzinfo=dt.tzinfo) # local zero for seconds
seconds = total_seconds(dt - d) * 1000000 // measure * measure / 1000000
return dt.utcfromtimestamp(seconds + utctotimestamp(d))
truncate_functions = {'year': truncate_year,
'month': truncate_month,
'week': truncate_week,
'day': truncate_day,
'hour': truncate_hour,
'minute': truncate_minute,
'second': truncate_second,
'millisecond': truncate_millisecond,
'microsecond': truncate_microsecond}
def truncate(dt, measure, unit):
""" Truncate datetimes
Examples
--------
>>> dt = datetime(2003, 6, 25, 12, 30, 0)
>>> truncate(dt, 1, 'day')
datetime.date(2003, 6, 25)
>>> truncate(dt, 5, 'hours')
datetime.datetime(2003, 6, 25, 10, 0)
>>> truncate(dt, 3, 'months')
datetime.date(2003, 6, 1)
"""
from blaze.expr.datetime import normalize_time_unit
unit = normalize_time_unit(unit)
return truncate_functions[unit](dt, measure)
| bsd-3-clause |
aronsky/home-assistant | homeassistant/components/feedreader.py | 5 | 8606 | """
Support for RSS/Atom feeds.
For more details about this component, please refer to the documentation at
https://home-assistant.io/components/feedreader/
"""
from datetime import datetime, timedelta
from logging import getLogger
from os.path import exists
from threading import Lock
import pickle
import voluptuous as vol
from homeassistant.const import EVENT_HOMEASSISTANT_START, CONF_SCAN_INTERVAL
from homeassistant.helpers.event import track_time_interval
import homeassistant.helpers.config_validation as cv
REQUIREMENTS = ['feedparser==5.2.1']
_LOGGER = getLogger(__name__)
CONF_URLS = 'urls'
CONF_MAX_ENTRIES = 'max_entries'
DEFAULT_MAX_ENTRIES = 20
DEFAULT_SCAN_INTERVAL = timedelta(hours=1)
DOMAIN = 'feedreader'
EVENT_FEEDREADER = 'feedreader'
CONFIG_SCHEMA = vol.Schema({
DOMAIN: {
vol.Required(CONF_URLS): vol.All(cv.ensure_list, [cv.url]),
vol.Optional(CONF_SCAN_INTERVAL, default=DEFAULT_SCAN_INTERVAL):
cv.time_period,
vol.Optional(CONF_MAX_ENTRIES, default=DEFAULT_MAX_ENTRIES):
cv.positive_int
}
}, extra=vol.ALLOW_EXTRA)
def setup(hass, config):
"""Set up the Feedreader component."""
urls = config.get(DOMAIN)[CONF_URLS]
scan_interval = config.get(DOMAIN).get(CONF_SCAN_INTERVAL)
max_entries = config.get(DOMAIN).get(CONF_MAX_ENTRIES)
data_file = hass.config.path("{}.pickle".format(DOMAIN))
storage = StoredData(data_file)
feeds = [FeedManager(url, scan_interval, max_entries, hass, storage) for
url in urls]
return len(feeds) > 0
class FeedManager:
"""Abstraction over Feedparser module."""
def __init__(self, url, scan_interval, max_entries, hass, storage):
"""Initialize the FeedManager object, poll as per scan interval."""
self._url = url
self._scan_interval = scan_interval
self._max_entries = max_entries
self._feed = None
self._hass = hass
self._firstrun = True
self._storage = storage
self._last_entry_timestamp = None
self._last_update_successful = False
self._has_published_parsed = False
self._event_type = EVENT_FEEDREADER
self._feed_id = url
hass.bus.listen_once(
EVENT_HOMEASSISTANT_START, lambda _: self._update())
self._init_regular_updates(hass)
def _log_no_entries(self):
"""Send no entries log at debug level."""
_LOGGER.debug("No new entries to be published in feed %s", self._url)
def _init_regular_updates(self, hass):
"""Schedule regular updates at the top of the clock."""
track_time_interval(hass, lambda now: self._update(),
self._scan_interval)
@property
def last_update_successful(self):
"""Return True if the last feed update was successful."""
return self._last_update_successful
def _update(self):
"""Update the feed and publish new entries to the event bus."""
import feedparser
_LOGGER.info("Fetching new data from feed %s", self._url)
self._feed = feedparser.parse(self._url,
etag=None if not self._feed
else self._feed.get('etag'),
modified=None if not self._feed
else self._feed.get('modified'))
if not self._feed:
_LOGGER.error("Error fetching feed data from %s", self._url)
self._last_update_successful = False
else:
# The 'bozo' flag really only indicates that there was an issue
# during the initial parsing of the XML, but it doesn't indicate
# whether this is an unrecoverable error. In this case the
# feedparser lib is trying a less strict parsing approach.
# If an error is detected here, log error message but continue
# processing the feed entries if present.
if self._feed.bozo != 0:
_LOGGER.error("Error parsing feed %s: %s", self._url,
self._feed.bozo_exception)
# Using etag and modified, if there's no new data available,
# the entries list will be empty
if self._feed.entries:
_LOGGER.debug("%s entri(es) available in feed %s",
len(self._feed.entries), self._url)
self._filter_entries()
self._publish_new_entries()
if self._has_published_parsed:
self._storage.put_timestamp(
self._feed_id, self._last_entry_timestamp)
else:
self._log_no_entries()
self._last_update_successful = True
_LOGGER.info("Fetch from feed %s completed", self._url)
def _filter_entries(self):
"""Filter the entries provided and return the ones to keep."""
if len(self._feed.entries) > self._max_entries:
_LOGGER.debug("Processing only the first %s entries "
"in feed %s", self._max_entries, self._url)
self._feed.entries = self._feed.entries[0:self._max_entries]
def _update_and_fire_entry(self, entry):
"""Update last_entry_timestamp and fire entry."""
# We are lucky, `published_parsed` data available, let's make use of
# it to publish only new available entries since the last run
if 'published_parsed' in entry.keys():
self._has_published_parsed = True
self._last_entry_timestamp = max(
entry.published_parsed, self._last_entry_timestamp)
else:
self._has_published_parsed = False
_LOGGER.debug("No published_parsed info available for entry %s",
entry)
entry.update({'feed_url': self._url})
self._hass.bus.fire(self._event_type, entry)
def _publish_new_entries(self):
"""Publish new entries to the event bus."""
new_entries = False
self._last_entry_timestamp = self._storage.get_timestamp(self._feed_id)
if self._last_entry_timestamp:
self._firstrun = False
else:
# Set last entry timestamp as epoch time if not available
self._last_entry_timestamp = \
datetime.utcfromtimestamp(0).timetuple()
for entry in self._feed.entries:
if self._firstrun or (
'published_parsed' in entry.keys() and
entry.published_parsed > self._last_entry_timestamp):
self._update_and_fire_entry(entry)
new_entries = True
else:
_LOGGER.debug("Entry %s already processed", entry)
if not new_entries:
self._log_no_entries()
self._firstrun = False
class StoredData:
"""Abstraction over pickle data storage."""
def __init__(self, data_file):
"""Initialize pickle data storage."""
self._data_file = data_file
self._lock = Lock()
self._cache_outdated = True
self._data = {}
self._fetch_data()
def _fetch_data(self):
"""Fetch data stored into pickle file."""
if self._cache_outdated and exists(self._data_file):
try:
_LOGGER.debug("Fetching data from file %s", self._data_file)
with self._lock, open(self._data_file, 'rb') as myfile:
self._data = pickle.load(myfile) or {}
self._cache_outdated = False
except: # noqa: E722 pylint: disable=bare-except
_LOGGER.error("Error loading data from pickled file %s",
self._data_file)
def get_timestamp(self, feed_id):
"""Return stored timestamp for given feed id (usually the url)."""
self._fetch_data()
return self._data.get(feed_id)
def put_timestamp(self, feed_id, timestamp):
"""Update timestamp for given feed id (usually the url)."""
self._fetch_data()
with self._lock, open(self._data_file, 'wb') as myfile:
self._data.update({feed_id: timestamp})
_LOGGER.debug("Overwriting feed %s timestamp in storage file %s",
feed_id, self._data_file)
try:
pickle.dump(self._data, myfile)
except: # noqa: E722 pylint: disable=bare-except
_LOGGER.error(
"Error saving pickled data to %s", self._data_file)
self._cache_outdated = True
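# Illustrative sketch (not part of the original component): the configuration
# dict that setup() expects, mirroring CONFIG_SCHEMA above; the feed URL is a
# placeholder and `hass` is assumed to be a running Home Assistant core object.
#
#     config = {
#         'feedreader': {
#             'urls': ['https://example.com/feed.xml'],
#             'max_entries': 5,
#         }
#     }
#     setup(hass, config)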
| apache-2.0 |
metadave/erln8.sublime | Erln8.py | 1 | 2779 | # ------------------------------------------------------------
# erln8.sublime: erln8 support for SublimeText2
#
# Copyright (c) 2015 Dave Parfitt
#
# This file is provided to you under the Apache License,
# Version 2.0 (the "License"); you may not use this file
# except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# ------------------------------------------------------------
#
import os
from os.path import dirname
import subprocess
from subprocess import Popen
import sublime, sublime_plugin
def doit(listindex):
return True
def e8(cwd, erln8, cmd_list):
sublime.status_message(cwd)
try:
p = subprocess.Popen([erln8] + cmd_list, cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out,err=p.communicate()
if err != "":
sublime.error_message("Erln8 error: " + err)
return "error"
else:
cmd_output = out.strip()
return cmd_output
except Exception as e:
s = str(e)
sublime.error_message("Erln8 error: %s" % s)
return "error"
class Erln8buildableCommand(sublime_plugin.WindowCommand):
def run(self):
erln8_exe=self.window.active_view().settings().get('erln8_path','/usr/local/bin/erln8')
output = e8("/", erln8_exe, ["--buildable"])
items = output.split("\n")
#sublime.message_dialog(output)
self.window.show_quick_panel(items, doit)
def description(self):
return "Erln8Buildable"
class Erln8listCommand(sublime_plugin.WindowCommand):
def run(self):
erln8_exe=self.window.active_view().settings().get('erln8_path','/usr/local/bin/erln8')
output = e8("/", erln8_exe, ["--list"])
items = output.split("\n")
self.window.show_quick_panel(items, doit)
class Erln8Listener(sublime_plugin.EventListener):
def on_post_save(self, view):
erln8_exe=view.settings().get('erln8_path','/usr/local/bin/erln8')
p = view.file_name()
d = dirname(p)
erl_version = self.erln8_exec(d, erln8_exe, ["--show"])
view.set_status("erln8", "erln8: " + erl_version)
def erln8_exec(self, cwd, erln8, cmd_list):
sublime.status_message(cwd)
try:
p = subprocess.Popen([erln8] + cmd_list, cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out,err=p.communicate()
if err != "":
sublime.error_message("Erln8 error: " + err)
return "error"
else:
cmd_output = out.strip()
return cmd_output
except Exception as e:
s = str(e)
sublime.error_message("Erln8 error: %s" % s)
return "error"
| apache-2.0 |
MadsackMediaStore/connector-magento | magentoerpconnect_pricing/product.py | 5 | 6776 | # -*- coding: utf-8 -*-
##############################################################################
#
# Author: Guewen Baconnier
# Copyright 2013 Camptocamp SA
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.tools.translate import _
from openerp.addons.connector.queue.job import job, related_action
from openerp.addons.connector.exception import FailedJobError
from openerp.addons.connector.unit.mapper import (mapping,
only_create
)
from openerp.addons.connector_ecommerce.event import on_product_price_changed
from openerp.addons.magentoerpconnect.unit.export_synchronizer import (
MagentoBaseExporter)
from openerp.addons.magentoerpconnect.backend import magento
from openerp.addons.magentoerpconnect import product
from openerp.addons.magentoerpconnect.connector import get_environment
from openerp.addons.magentoerpconnect.related_action import (
unwrap_binding,
)
# TODO: replace a price mapper only, not the full mapper
@magento(replacing=product.ProductImportMapper)
class ProductImportMapper(product.ProductImportMapper):
_model_name = 'magento.product.product'
@only_create
@mapping
def price(self, record):
""" The price is imported at the creation of
the product, then it is only modified and exported
from OpenERP """
return super(ProductImportMapper, self).price(record)
@magento
class ProductPriceExporter(MagentoBaseExporter):
""" Export the price of a product.
Use the pricelist configured on the backend for the
default price in Magento.
If different pricelists have been configured on the websites,
update the prices on the different websites.
"""
_model_name = ['magento.product.product']
def _get_price(self, pricelist_id):
""" Return the raw OpenERP data for ``self.binding_id`` """
if pricelist_id is None:
# a False value will set the 'Use default value' in Magento
return False
with self.session.change_context({'pricelist': pricelist_id}):
return self.session.read(self.model._name,
self.binding_id,
['price'])['price']
def _update(self, data, storeview_id=None):
self.backend_adapter.write(self.magento_id, data,
storeview_id=storeview_id)
def _run(self, website_id=None):
""" Export the product inventory to Magento
:param website_id: if None, export on all websites,
or OpenERP ID for the website to update
"""
        # Creating products in Magento is not implemented, so we simply raise
        # if the binding is missing; if export were implemented, we would
        # export the product here instead.
assert self.magento_id, "Record has been deleted in Magento"
pricelist = self.backend_record.pricelist_id
if not pricelist:
name = self.backend_record.name
raise FailedJobError(
'Configuration Error:\n'
'No pricelist configured on the backend %s.\n\n'
'Resolution:\n'
'Go to Connectors > Backends > %s.\n'
'Choose a pricelist.' % (name, name))
pricelist_id = pricelist.id
# export the price for websites if they have a different
# pricelist
storeview_binder = self.get_binder_for_model('magento.storeview')
for website in self.backend_record.website_ids:
if website_id is not None and website.id != website_id:
continue
# 0 is the admin website, the update on this website
# set the default values in Magento, we use the default
# pricelist
site_pricelist_id = None
if website.magento_id == '0':
site_pricelist_id = pricelist_id
elif website.pricelist_id:
site_pricelist_id = website.pricelist_id.id
# The update of the prices in Magento is very weird:
# - The price is different per website (if the option
# is active in the config), but is shared between
# the store views of a website.
# - BUT the Magento API expects a storeview id to modify
# a price on a website (and not a website id...)
# So we take the first storeview of the website to update.
storeview_ids = self.session.search(
'magento.storeview',
[('store_id.website_id', '=', website.id)])
if not storeview_ids:
continue
magento_storeview = storeview_binder.to_backend(storeview_ids[0])
price = self._get_price(site_pricelist_id)
self._update({'price': price}, storeview_id=magento_storeview)
self.binder.bind(self.magento_id, self.binding_id)
return _('Prices have been updated.')
@on_product_price_changed
def product_price_changed(session, model_name, record_id, fields=None):
""" When a product.product price has been changed """
if session.context.get('connector_no_export'):
return
model = session.pool.get(model_name)
record = model.browse(session.cr, session.uid,
record_id, context=session.context)
for binding in record.magento_bind_ids:
export_product_price.delay(session,
binding._model._name,
binding.id,
priority=5)
@job
@related_action(action=unwrap_binding)
def export_product_price(session, model_name, record_id, website_id=None):
""" Export the price of a product. """
product_bind = session.browse(model_name, record_id)
backend_id = product_bind.backend_id.id
env = get_environment(session, model_name, backend_id)
price_exporter = env.get_connector_unit(ProductPriceExporter)
return price_exporter.run(record_id, website_id=website_id)
| agpl-3.0 |
huiyiqun/check_mk | cmk_base/config.py | 1 | 53500 | #!/usr/bin/env python
# -*- encoding: utf-8; py-indent-offset: 4 -*-
# +------------------------------------------------------------------+
# | ____ _ _ __ __ _ __ |
# | / ___| |__ ___ ___| | __ | \/ | |/ / |
# | | | | '_ \ / _ \/ __| |/ / | |\/| | ' / |
# | | |___| | | | __/ (__| < | | | | . \ |
# | \____|_| |_|\___|\___|_|\_\___|_| |_|_|\_\ |
# | |
# | Copyright Mathias Kettner 2014 [email protected] |
# +------------------------------------------------------------------+
#
# This file is part of Check_MK.
# The official homepage is at http://mathias-kettner.de/check_mk.
#
# check_mk is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation in version 2. check_mk is distributed
# in the hope that it will be useful, but WITHOUT ANY WARRANTY; with-
# out even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE. See the GNU General Public License for more de-
# tails. You should have received a copy of the GNU General Public
# License along with GNU Make; see the file COPYING. If not, write
# to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
# Boston, MA 02110-1301 USA.
import os
import sys
import copy
import marshal
import cmk.paths
import cmk.translations
import cmk.store as store
from cmk.exceptions import MKGeneralException
import cmk_base
import cmk_base.console as console
import cmk_base.default_config as default_config
import cmk_base.rulesets as rulesets
# This is mainly needed for pylint to detect all available
# configuration options during static analysis. The defaults
# are loaded later with load_default_config() again.
from cmk_base.default_config import *
def get_variable_names():
"""Provides the list of all known configuration variables."""
return [ k for k in default_config.__dict__.keys() if k[0] != "_" ]
def get_default_config():
"""Provides a dictionary containing the Check_MK default configuration"""
cfg = {}
for key in get_variable_names():
value = getattr(default_config, key)
if isinstance(value, dict) or isinstance(value, list):
value = copy.deepcopy(value)
cfg[key] = value
return cfg
def load_default_config():
globals().update(get_default_config())
def register(name, default_value):
"""Register a new configuration variable within Check_MK base."""
setattr(default_config, name, default_value)
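# Illustrative sketch (not part of the original module): a plugin can make its
# own configuration variable known to the config system via register(); the
# variable name and default below are made up.
#
#     register("my_plugin_default_levels", {"warn": 80.0, "crit": 90.0})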
def _add_check_variables_to_default_config():
"""Add configuration variables registered by checks to config module"""
import cmk_base.checks
default_config.__dict__.update(cmk_base.checks.get_check_variable_defaults())
def _clear_check_variables_from_default_config(check_variable_names):
"""Remove previously registered check variables from the config module"""
for varname in check_variable_names:
try:
delattr(default_config, varname)
except AttributeError:
pass
# Load user configured values of check related configuration variables
# into the check module to make them available during checking.
#
# In the same step we remove the check related configuration settings from the
# config module because they are not needed there anymore.
#
# They are also removed from the default config (in case they were present).
def set_check_variables_for_checks():
import cmk_base.checks
global_dict = globals()
check_variable_names = cmk_base.checks.check_variable_names()
check_variables = {}
for varname in check_variable_names:
check_variables[varname] = global_dict.pop(varname)
cmk_base.checks.set_check_variables(check_variables)
_clear_check_variables_from_default_config(check_variable_names)
#.
# .--Read Config---------------------------------------------------------.
# | ____ _ ____ __ _ |
# | | _ \ ___ __ _ __| | / ___|___ _ __ / _(_) __ _ |
# | | |_) / _ \/ _` |/ _` | | | / _ \| '_ \| |_| |/ _` | |
# | | _ < __/ (_| | (_| | | |__| (_) | | | | _| | (_| | |
# | |_| \_\___|\__,_|\__,_| \____\___/|_| |_|_| |_|\__, | |
# | |___/ |
# +----------------------------------------------------------------------+
# | Code for reading the configuration files. |
# '----------------------------------------------------------------------'
def load(with_conf_d=True, validate_hosts=True, exclude_parents_mk=False):
_initialize_config()
vars_before_config = all_nonfunction_vars()
_load_config(with_conf_d, exclude_parents_mk)
_transform_mgmt_config_vars_from_140_to_150()
_initialize_derived_config_variables()
_perform_post_config_loading_actions()
if validate_hosts:
_verify_non_duplicate_hosts()
# Such validation only makes sense when all checks have been loaded
if cmk_base.checks.all_checks_loaded():
verify_non_invalid_variables(vars_before_config)
verify_snmp_communities_type()
def load_packed_config():
"""Load the configuration for the CMK helpers of CMC
These files are written by PackedConfig().
Should have a result similar to the load() above. With the exception that the
check helpers would only need check related config variables.
The validations which are performed during load() also don't need to be performed.
"""
PackedConfig().load()
def _initialize_config():
_add_check_variables_to_default_config()
load_default_config()
def _perform_post_config_loading_actions():
"""These tasks must be performed after loading the Check_MK base configuration"""
# First cleanup things (needed for e.g. reloading the config)
cmk_base.config_cache.clear_all()
initialize_config_caches()
# In case the checks are not loaded yet it seems the current mode
# is not working with the checks. In this case also don't load the
# static checks into the configuration.
if cmk_base.checks.all_checks_loaded():
add_wato_static_checks_to_checks()
initialize_check_caches()
set_check_variables_for_checks()
def _load_config(with_conf_d, exclude_parents_mk):
import cmk_base.checks
helper_vars = {
"FILE_PATH" : None,
"FOLDER_PATH" : None,
"PHYSICAL_HOSTS" : rulesets.PHYSICAL_HOSTS,
"CLUSTER_HOSTS" : rulesets.CLUSTER_HOSTS,
"ALL_HOSTS" : rulesets.ALL_HOSTS,
"ALL_SERVICES" : rulesets.ALL_SERVICES,
"NEGATE" : rulesets.NEGATE,
}
global_dict = globals()
global_dict.update(helper_vars)
for _f in _get_config_file_paths(with_conf_d):
# During parent scan mode we must not read in old version of parents.mk!
if exclude_parents_mk and _f.endswith("/parents.mk"):
continue
try:
_hosts_before = set(all_hosts)
_clusters_before = set(clusters.keys())
# Make the config path available as a global variable to
# be used within the configuration file
if _f.startswith(cmk.paths.check_mk_config_dir + "/"):
_file_path = _f[len(cmk.paths.check_mk_config_dir) + 1:]
global_dict.update({
"FILE_PATH" : _file_path,
"FOLDER_PATH" : os.path.dirname(_file_path),
})
else:
global_dict.update({
"FILE_PATH" : None,
"FOLDER_PATH" : None,
})
execfile(_f, global_dict, global_dict)
_new_hosts = set(all_hosts).difference(_hosts_before)
_new_clusters = set(clusters.keys()).difference(_clusters_before)
set_folder_paths(_new_hosts.union(_new_clusters), _f)
except Exception, e:
if cmk.debug.enabled():
raise
elif sys.stderr.isatty():
console.error("Cannot read in configuration file %s: %s\n", _f, e)
sys.exit(1)
# Cleanup global helper vars
for helper_var in helper_vars.keys():
del global_dict[helper_var]
def _transform_mgmt_config_vars_from_140_to_150():
#FIXME We have to transform some configuration variables from host attributes
# to cmk_base configuration variables because during the migration step from
# 1.4.0 to 1.5.0 some config variables are not known in cmk_base. These variables
# are 'management_protocol' and 'management_snmp_community'.
# Clean this up one day!
for hostname, attributes in host_attributes.iteritems():
for name, var in [
('management_protocol', management_protocol),
('management_snmp_community', management_snmp_credentials),
]:
if attributes.get(name):
var.setdefault(hostname, attributes[name])
# Create list of all files to be included during configuration loading
def _get_config_file_paths(with_conf_d):
if with_conf_d:
list_of_files = sorted(
reduce(lambda a,b: a+b, [ [ "%s/%s" % (d, f) for f in fs if f.endswith(".mk")]
for d, _unused_sb, fs in os.walk(cmk.paths.check_mk_config_dir) ], []),
cmp=_cmp_config_paths
)
list_of_files = [ cmk.paths.main_config_file ] + list_of_files
else:
list_of_files = [ cmk.paths.main_config_file ]
for path in [ cmk.paths.final_config_file, cmk.paths.local_config_file ]:
if os.path.exists(path):
list_of_files.append(path)
return list_of_files
def initialize_config_caches():
collect_hosttags()
def _initialize_derived_config_variables():
global service_service_levels, host_service_levels
service_service_levels = extra_service_conf.get("_ec_sl", [])
host_service_levels = extra_host_conf.get("_ec_sl", [])
def get_derived_config_variable_names():
"""These variables are computed from other configuration variables and not configured directly.
The origin variable (extra_service_conf) should not be exported to the helper config. Only
the service levels are needed."""
return set([ "service_service_levels", "host_service_levels" ])
def _verify_non_duplicate_hosts():
duplicates = duplicate_hosts()
if duplicates:
# TODO: Raise an exception
console.error("Error in configuration: duplicate hosts: %s\n",
", ".join(duplicates))
sys.exit(3)
# Add WATO-configured explicit checks to (possibly empty) checks
# statically defined in checks.
def add_wato_static_checks_to_checks():
global checks
static = []
for entries in static_checks.values():
for entry in entries:
entry, rule_options = rulesets.get_rule_options(entry)
if rule_options.get("disabled"):
continue
# Parameters are optional
if len(entry[0]) == 2:
checktype, item = entry[0]
params = None
else:
checktype, item, params = entry[0]
if len(entry) == 3:
taglist, hostlist = entry[1:3]
else:
hostlist = entry[1]
taglist = []
# Make sure, that for dictionary based checks
# at least those keys defined in the factory
# settings are present in the parameters
if type(params) == dict:
def_levels_varname = cmk_base.checks.check_info[checktype].get("default_levels_variable")
if def_levels_varname:
for key, value in cmk_base.checks.factory_settings.get(def_levels_varname, {}).items():
if key not in params:
params[key] = value
static.append((taglist, hostlist, checktype, item, params))
# Note: We need to reverse the order of the static_checks. This is because
# users assume that earlier rules have precedence over later ones. For static
# checks that is important if there are two rules for a host with the same
# combination of check type and item. When the variable 'checks' is evaluated,
# *later* rules have precedence. This is not consistent with the rest, but a
# result of this "historic implementation".
static.reverse()
# Now prepend them to the existing checks. This way the manually configured
# 'checks' entries keep precedence over the WATO defined static checks.
checks = static + checks
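# Illustration (hypothetical rules): assume WATO produced two static check
# rules for the same host, check type and item, e.g.
#
#   static_checks["filesystem"] = [
#       (("df", "/", {"levels": (80.0, 90.0)}), [], ALL_HOSTS),   # WATO rule 1
#       (("df", "/", {"levels": (85.0, 95.0)}), [], ALL_HOSTS),   # WATO rule 2
#   ]
#
# After static.reverse() rule 1 ends up *after* rule 2 in 'static'. Since later
# entries win when 'checks' is evaluated, rule 1 - the earlier WATO rule - takes
# precedence, and entries already present in 'checks' (from main.mk) win over both.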
def initialize_check_caches():
single_host_checks = cmk_base.config_cache.get_dict("single_host_checks")
multi_host_checks = cmk_base.config_cache.get_list("multi_host_checks")
for entry in checks:
if len(entry) == 4 and type(entry[0]) == str:
single_host_checks.setdefault(entry[0], []).append(entry)
else:
multi_host_checks.append(entry)
def set_folder_paths(new_hosts, filename):
if not filename.startswith(cmk.paths.check_mk_config_dir):
return
path = filename[len(cmk.paths.check_mk_config_dir):]
for hostname in strip_tags(new_hosts):
host_paths[hostname] = path
def verify_non_invalid_variables(vars_before_config):
# Check for invalid configuration variables
vars_after_config = all_nonfunction_vars()
ignored_variables = set(['vars_before_config', 'parts',
'seen_hostnames',
'taggedhost' ,'hostname',
'service_service_levels',
'host_service_levels'])
found_invalid = 0
for name in vars_after_config:
if name not in ignored_variables and name not in vars_before_config:
console.error("Invalid configuration variable '%s'\n", name)
found_invalid += 1
if found_invalid:
console.error("--> Found %d invalid variables\n" % found_invalid)
console.error("If you use own helper variables, please prefix them with _.\n")
sys.exit(1)
def verify_snmp_communities_type():
# Special handling for certain deprecated variables
if type(snmp_communities) == dict:
console.error("ERROR: snmp_communities cannot be a dict any more.\n")
sys.exit(1)
def all_nonfunction_vars():
return set([ name for name,value in globals().items()
if name[0] != '_' and type(value) != type(lambda:0) ])
# Helper function that determines the sort order of the
# configuration files. The following three rules are implemented:
# 1. *.mk files in the same directory will be read
# according to their lexical order.
# 2. subdirectories in the same directory will be
# scanned according to their lexical order.
# 3. subdirectories of a directory will always be read *after*
# the *.mk files in that directory.
def _cmp_config_paths(a, b):
pa = a.split('/')
pb = b.split('/')
return cmp(pa[:-1], pb[:-1]) or \
cmp(len(pa), len(pb)) or \
cmp(pa, pb)
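# Example (hypothetical paths): sorted with _cmp_config_paths, files directly
# in a folder come before files in its subfolders, and siblings sort lexically:
#
#   conf.d/aaa.mk
#   conf.d/zzz.mk
#   conf.d/wato/global.mk
#   conf.d/wato/linux/hosts.mk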
class PackedConfig(object):
"""The precompiled host checks and the CMC Check_MK helpers use a
"precompiled" part of the Check_MK configuration during runtime.
a) They must not use the live config from etc/check_mk during
startup. They are only allowed to load the config activated by
the user.
b) They must not load the whole Check_MK config, because they only
need the options needed for checking.
"""
# These variables are part of the Check_MK configuration, but are not needed
# by the Check_MK keepalive mode, so exclude them from the packed config
_skipped_config_variable_names = [
"define_contactgroups",
"define_hostgroups",
"define_servicegroups",
"service_contactgroups",
"host_contactgroups",
"service_groups",
"host_groups",
"contacts",
"host_paths",
"timeperiods",
"extra_service_conf",
"extra_host_conf",
"extra_nagios_conf",
]
def __init__(self):
super(PackedConfig, self).__init__()
self._path = os.path.join(cmk.paths.var_dir, "base", "precompiled_check_config.mk")
def save(self):
self._write(self._pack())
def _pack(self):
import cmk_base.checks
helper_config = (
"#!/usr/bin/env python\n"
"# encoding: utf-8\n"
"# Created by Check_MK. Dump of the currently active configuration\n\n"
)
# The purpose of these functions is to filter out hosts which are monitored on other sites
active_hosts = all_active_hosts()
active_clusters = all_active_clusters()
def filter_all_hosts(all_hosts):
all_hosts_red = []
for host_entry in all_hosts:
hostname = host_entry.split("|", 1)[0]
if hostname in active_hosts:
all_hosts_red.append(host_entry)
return all_hosts_red
def filter_clusters(clusters):
clusters_red = {}
for cluster_entry, cluster_nodes in clusters.items():
clustername = cluster_entry.split("|", 1)[0]
if clustername in active_clusters:
clusters_red[cluster_entry] = cluster_nodes
return clusters_red
def filter_hostname_in_dict(values):
values_red = {}
for hostname, attributes in values.items():
if hostname in active_hosts:
values_red[hostname] = attributes
return values_red
filter_var_functions = {
"all_hosts" : filter_all_hosts,
"clusters" : filter_clusters,
"host_attributes" : filter_hostname_in_dict,
"ipaddresses" : filter_hostname_in_dict,
"ipv6addresses" : filter_hostname_in_dict,
"explicit_snmp_communities": filter_hostname_in_dict,
"hosttags" : filter_hostname_in_dict
}
#
# Add modified Check_MK base settings
#
variable_defaults = get_default_config()
derived_config_variable_names = get_derived_config_variable_names()
global_variables = globals()
for varname in get_variable_names() + list(derived_config_variable_names):
if varname in self._skipped_config_variable_names:
continue
val = global_variables[varname]
if varname not in derived_config_variable_names and val == variable_defaults[varname]:
continue
if not self._packable(varname, val):
continue
if varname in filter_var_functions:
val = filter_var_functions[varname](val)
helper_config += "\n%s = %r\n" % (varname, val)
#
# Add modified check specific Check_MK base settings
#
check_variable_defaults = cmk_base.checks.get_check_variable_defaults()
for varname, val in cmk_base.checks.get_check_variables().items():
if val == check_variable_defaults[varname]:
continue
if not self._packable(varname, val):
continue
helper_config += "\n%s = %r\n" % (varname, val)
return helper_config
def _packable(self, varname, val):
"""Checks whether or not a variable can be written to the config.mk
and read again from it."""
if type(val) in [ int, str, unicode, bool ] or not val:
return True
try:
eval(repr(val))
return True
except:
return False
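# Rough examples of what _packable() accepts - the value must survive a
# repr()/eval() round trip:
#
#   self._packable("agent_port", 6556)              -> True
#   self._packable("snmp_timing", {"timeout": 5})   -> True
#   self._packable("foo", lambda h: h)              -> False (repr() not evaluable)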
def _write(self, helper_config):
store.makedirs(os.path.dirname(self._path))
store.save_file(self._path + ".orig", helper_config + "\n")
import marshal
code = compile(helper_config, '<string>', 'exec')
with open(self._path + ".compiled", "w") as compiled_file:
marshal.dump(code, compiled_file)
os.rename(self._path + ".compiled", self._path)
def load(self):
_initialize_config()
exec(marshal.load(open(self._path)), globals())
_perform_post_config_loading_actions()
#.
# .--Host tags-----------------------------------------------------------.
# | _ _ _ _ |
# | | | | | ___ ___| |_ | |_ __ _ __ _ ___ |
# | | |_| |/ _ \/ __| __| | __/ _` |/ _` / __| |
# | | _ | (_) \__ \ |_ | || (_| | (_| \__ \ |
# | |_| |_|\___/|___/\__| \__\__,_|\__, |___/ |
# | |___/ |
# +----------------------------------------------------------------------+
# | Helper functions for dealing with host tags |
# '----------------------------------------------------------------------'
def strip_tags(tagged_hostlist):
cache = cmk_base.config_cache.get_dict("strip_tags")
cache_id = tuple(tagged_hostlist)
try:
return cache[cache_id]
except KeyError:
result = map(lambda h: h.split('|', 1)[0], tagged_hostlist)
cache[cache_id] = result
return result
def tags_of_host(hostname):
"""Returns the list of all configured tags of a host. In case
a host has no tags configured or is not known, it returns an
empty list."""
hosttags = cmk_base.config_cache.get_dict("hosttags")
try:
return hosttags[hostname]
except KeyError:
return []
def collect_hosttags():
hosttags = cmk_base.config_cache.get_dict("hosttags")
for tagged_host in all_hosts + clusters.keys():
parts = tagged_host.split("|")
hosttags[parts[0]] = sorted(parts[1:])
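# Example (hypothetical entry): an all_hosts entry carries its tags separated
# by "|", e.g. "myserver|linux|prod|site:central". collect_hosttags() caches
# {"myserver": ["linux", "prod", "site:central"]} (sorted), which is exactly
# what tags_of_host("myserver") returns afterwards.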
#.
# .--HostCollections-----------------------------------------------------.
# | _ _ _ ____ _ _ _ _ |
# || | | | ___ ___| |_ / ___|___ | | | ___ ___| |_(_) ___ _ __ ___ |
# || |_| |/ _ \/ __| __| | / _ \| | |/ _ \/ __| __| |/ _ \| '_ \/ __| |
# || _ | (_) \__ \ |_| |__| (_) | | | __/ (__| |_| | (_) | | | \__ \ |
# ||_| |_|\___/|___/\__|\____\___/|_|_|\___|\___|\__|_|\___/|_| |_|___/ |
# | |
# +----------------------------------------------------------------------+
# | |
# '----------------------------------------------------------------------'
# Returns a set of all active hosts
def all_active_hosts():
cache = cmk_base.config_cache.get_set("all_active_hosts")
if not cache.is_populated():
cache.update(all_active_realhosts(), all_active_clusters())
cache.set_populated()
return cache
# Returns a set of all host names to be handled by this site;
# hosts of other sites or disabled hosts are excluded
def all_active_realhosts():
active_realhosts = cmk_base.config_cache.get_set("active_realhosts")
if not active_realhosts.is_populated():
active_realhosts.update(filter_active_hosts(all_configured_realhosts()))
active_realhosts.set_populated()
return active_realhosts
# Returns a set of all cluster host names to be handled by this site;
# hosts of other sites or disabled hosts are excluded
def all_active_clusters():
active_clusters = cmk_base.config_cache.get_set("active_clusters")
if not active_clusters.is_populated():
active_clusters.update(filter_active_hosts(all_configured_clusters()))
active_clusters.set_populated()
return active_clusters
# Returns a set of all hosts, regardless of whether they are currently
# disabled or monitored on a remote site.
def all_configured_hosts():
cache = cmk_base.config_cache.get_set("all_configured_hosts")
if not cache.is_populated():
cache.update(all_configured_realhosts(), all_configured_clusters())
cache.set_populated()
return cache
# Returns a set of all host names, regardless of whether they are
# currently disabled or monitored on a remote site. Does not return
# cluster hosts.
def all_configured_realhosts():
cache = cmk_base.config_cache.get_set("all_configured_realhosts")
if not cache.is_populated():
cache.update(strip_tags(all_hosts))
cache.set_populated()
return cache
# Returns a set of all cluster names, regardless of whether they are
# currently disabled or monitored on a remote site. Does not return
# normal hosts.
def all_configured_clusters():
cache = cmk_base.config_cache.get_set("all_configured_clusters")
if not cache.is_populated():
cache.update(strip_tags(clusters.keys()))
cache.set_populated()
return cache
# This function should only be used during duplicate host check! It has to work like
# all_active_hosts() but with the difference that duplicates are not removed.
def all_active_hosts_with_duplicates():
# Only available with CEE
if "shadow_hosts" in globals():
shadow_host_entries = shadow_hosts.keys()
else:
shadow_host_entries = []
return filter_active_hosts(strip_tags(all_hosts) \
+ strip_tags(clusters.keys()) \
+ strip_tags(shadow_host_entries), keep_duplicates=True)
# Returns a set of active hosts for this site
def filter_active_hosts(hostlist, keep_offline_hosts=False, keep_duplicates=False):
if only_hosts == None and distributed_wato_site == None:
active_hosts = hostlist
elif only_hosts == None:
active_hosts = [ hostname for hostname in hostlist
if host_is_member_of_site(hostname, distributed_wato_site) ]
elif distributed_wato_site == None:
if keep_offline_hosts:
active_hosts = hostlist
else:
active_hosts = [ hostname for hostname in hostlist
if rulesets.in_binary_hostlist(hostname, only_hosts) ]
else:
active_hosts = [ hostname for hostname in hostlist
if (keep_offline_hosts or rulesets.in_binary_hostlist(hostname, only_hosts))
and host_is_member_of_site(hostname, distributed_wato_site) ]
if keep_duplicates:
return active_hosts
else:
return set(active_hosts)
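# Sketch of the resulting behaviour (hypothetical setup): assume only_hosts is
# configured to exclude hosts tagged "offline" and distributed_wato_site is
# "central". A host tagged "offline" or assigned to "site:remote1" is then
# filtered out, while a host without any "site:" tag is kept, because
# host_is_member_of_site() treats such hosts as members of every site.
# keep_offline_hosts only bypasses the only_hosts filter, never the site filter.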
def duplicate_hosts():
seen_hostnames = set([])
duplicates = set([])
for hostname in all_active_hosts_with_duplicates():
if hostname in seen_hostnames:
duplicates.add(hostname)
else:
seen_hostnames.add(hostname)
return sorted(list(duplicates))
# Returns a list of all hosts which are associated with this site,
# but have been removed by the "only_hosts" rule. Normally these
# are the hosts which have the tag "offline".
#
# This is not optimized for performance, so use in specific situations.
def all_offline_hosts():
hostlist = filter_active_hosts(all_configured_realhosts().union(all_configured_clusters()),
keep_offline_hosts=True)
return [ hostname for hostname in hostlist
if not rulesets.in_binary_hostlist(hostname, only_hosts) ]
def all_configured_offline_hosts():
hostlist = all_configured_realhosts().union(all_configured_clusters())
return set([ hostname for hostname in hostlist
if not rulesets.in_binary_hostlist(hostname, only_hosts) ])
#.
# .--Hosts---------------------------------------------------------------.
# | _ _ _ |
# | | | | | ___ ___| |_ ___ |
# | | |_| |/ _ \/ __| __/ __| |
# | | _ | (_) \__ \ |_\__ \ |
# | |_| |_|\___/|___/\__|___/ |
# | |
# +----------------------------------------------------------------------+
# | Helper functions for dealing with hosts. |
# '----------------------------------------------------------------------'
def host_is_member_of_site(hostname, site):
for tag in tags_of_host(hostname):
if tag.startswith("site:"):
return site == tag[5:]
# hosts without a site: tag belong to all sites
return True
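# Example (hypothetical hosts): a host tagged ["linux", "site:remote1"] is a
# member of site "remote1" only, while a host without any "site:" tag counts
# as a member of every site:
#
#   host_is_member_of_site("tagged_host", "central")   -> False
#   host_is_member_of_site("untagged_host", "central") -> True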
def alias_of(hostname, fallback):
aliases = rulesets.host_extra_conf(hostname, extra_host_conf.get("alias", []))
if len(aliases) == 0:
if fallback:
return fallback
else:
return hostname
else:
return aliases[0]
def get_additional_ipaddresses_of(hostname):
#TODO Regarding the following configuration variables from WATO
# there's no inheritance, thus we use 'host_attributes'.
# Better would be to use cmk_base configuration variables,
# eg. like 'management_protocol'.
return (host_attributes.get(hostname, {}).get("additional_ipv4addresses", []),
host_attributes.get(hostname, {}).get("additional_ipv6addresses", []))
def parents_of(hostname):
par = rulesets.host_extra_conf(hostname, parents)
# Use only those parents which are defined and active in
# all_hosts.
used_parents = []
for p in par:
ps = p.split(",")
for pss in ps:
if pss in all_active_realhosts():
used_parents.append(pss)
return used_parents
# If host is node of one or more clusters, return a list of the cluster host names.
# If not, return an empty list.
def clusters_of(hostname):
cache = cmk_base.config_cache.get_dict("clusters_of")
if not cache.is_populated():
for cluster, hosts in clusters.items():
clustername = cluster.split('|', 1)[0]
for name in hosts:
cache.setdefault(name, []).append(clustername)
cache.set_populated()
return cache.get(hostname, [])
#
# Agent type
#
def is_tcp_host(hostname):
return rulesets.in_binary_hostlist(hostname, tcp_hosts)
def is_snmp_host(hostname):
return rulesets.in_binary_hostlist(hostname, snmp_hosts)
def is_ping_host(hostname):
import cmk_base.piggyback as piggyback
return not is_snmp_host(hostname) \
and not is_tcp_host(hostname) \
and not piggyback.has_piggyback_raw_data(hostname) \
and not has_management_board(hostname)
def is_dual_host(hostname):
return is_tcp_host(hostname) and is_snmp_host(hostname)
def is_all_agents_host(hostname):
return "all-agents" in tags_of_host(hostname)
def is_all_special_agents_host(hostname):
return "all-agents" in tags_of_host(hostname)
#
# IPv4/IPv6
#
def is_ipv6_primary(hostname):
"""Whether or not the given host is configured to be monitored
primarily via IPv6."""
dual_stack_host = is_ipv4v6_host(hostname)
return (not dual_stack_host and is_ipv6_host(hostname)) \
or (dual_stack_host and _primary_ip_address_family_of(hostname) == "ipv6")
def _primary_ip_address_family_of(hostname):
rules = rulesets.host_extra_conf(hostname, primary_address_family)
if rules:
return rules[0]
return "ipv4"
def is_ipv4v6_host(hostname):
tags = tags_of_host(hostname)
return "ip-v6" in tags and "ip-v4" in tags
def is_ipv6_host(hostname):
return "ip-v6" in tags_of_host(hostname)
def is_ipv4_host(hostname):
"""Whether or not the given host is configured to be monitored via IPv4.
This is the case when it is set to be explicit IPv4 or implicit
(when host is not an IPv6 host and not a "No IP" host)"""
tags = tags_of_host(hostname)
if "ip-v4" in tags:
return True
return "ip-v6" not in tags and "no-ip" not in tags
def is_no_ip_host(hostname):
"""Whether or not the given host is configured not to be monitored via IP"""
return "no-ip" in tags_of_host(hostname)
#
# Management board
#
def has_management_board(hostname):
return management_protocol_of(hostname) is not None
def management_address_of(hostname):
attributes_of_host = host_attributes.get(hostname, {})
if attributes_of_host.get("management_address"):
return attributes_of_host["management_address"]
else:
return ipaddresses.get(hostname)
def management_protocol_of(hostname):
return management_protocol.get(hostname)
def management_credentials_of(hostname):
protocol = management_protocol_of(hostname)
if protocol == "snmp":
credentials_variable, default_value = management_snmp_credentials, snmp_default_community
elif protocol == "ipmi":
credentials_variable, default_value = management_ipmi_credentials, None
elif protocol is None:
return None
else:
raise NotImplementedError()
# First try to use the explicit configuration of the host
# (set directly for a host or via folder inheritance in WATO)
try:
return credentials_variable[hostname]
except KeyError:
pass
# If a rule matches, use the first rule for the management board protocol of the host
rule_settings = rulesets.host_extra_conf(hostname, management_board_config)
for protocol, credentials in rule_settings:
if protocol == management_protocol_of(hostname):
return credentials
return default_value
#
# Agent communication
#
def agent_port_of(hostname):
ports = rulesets.host_extra_conf(hostname, agent_ports)
if len(ports) == 0:
return agent_port
else:
return ports[0]
def tcp_connect_timeout_of(hostname):
timeouts = rulesets.host_extra_conf(hostname, tcp_connect_timeouts)
if len(timeouts) == 0:
return tcp_connect_timeout
else:
return timeouts[0]
def agent_encryption_of(hostname):
settings = rulesets.host_extra_conf(hostname, agent_encryption)
if settings:
return settings[0]
else:
return {'use_regular': 'disable',
'use_realtime': 'enforce'}
def agent_target_version(hostname):
agent_target_versions = rulesets.host_extra_conf(hostname, check_mk_agent_target_versions)
if agent_target_versions:
spec = agent_target_versions[0]
if spec == "ignore":
return None
elif spec == "site":
return cmk.__version__
elif type(spec) == str:
# Compatibility to old value specification format (a single version string)
return spec
elif spec[0] == 'specific':
return spec[1]
else:
return spec # return the whole spec in case of an "at least version" config
#
# SNMP
#
# Determine the SNMP community for a specific host. If the host has an
# explicit community configured or a matching rule in snmp_communities,
# that community is returned. Otherwise the snmp_default_community is
# returned (which is preset with "public", but can be overridden in main.mk).
def snmp_credentials_of(hostname):
try:
return explicit_snmp_communities[hostname]
except KeyError:
pass
communities = rulesets.host_extra_conf(hostname, snmp_communities)
if len(communities) > 0:
return communities[0]
# nothing configured for this host -> use default
return snmp_default_community
def snmp_character_encoding_of(hostname):
entries = rulesets.host_extra_conf(hostname, snmp_character_encodings)
if len(entries) > 0:
return entries[0]
def snmp_timing_of(hostname):
timing = rulesets.host_extra_conf(hostname, snmp_timing)
if len(timing) > 0:
return timing[0]
else:
return {}
def snmpv3_contexts_of(hostname):
return rulesets.host_extra_conf(hostname, snmpv3_contexts)
def oid_range_limits_of(hostname):
return rulesets.host_extra_conf(hostname, snmp_limit_oid_range)
def snmp_port_of(hostname):
ports = rulesets.host_extra_conf(hostname, snmp_ports)
if len(ports) == 0:
return None # do not specify a port, use default
else:
return ports[0]
def is_snmpv3_host(hostname):
return type(snmp_credentials_of(hostname)) == tuple
def is_bulkwalk_host(hostname):
if bulkwalk_hosts:
return rulesets.in_binary_hostlist(hostname, bulkwalk_hosts)
else:
return False
def bulk_walk_size_of(hostname):
bulk_sizes = rulesets.host_extra_conf(hostname, snmp_bulk_size)
if not bulk_sizes:
return 10
else:
return bulk_sizes[0]
def is_snmpv2c_host(hostname):
return is_bulkwalk_host(hostname) or \
rulesets.in_binary_hostlist(hostname, snmpv2c_hosts)
def is_usewalk_host(hostname):
return rulesets.in_binary_hostlist(hostname, usewalk_hosts)
def is_inline_snmp_host(hostname):
# TODO: Better use "inline_snmp" once we have moved the code to an own module
has_inline_snmp = "netsnmp" in sys.modules
return has_inline_snmp and use_inline_snmp \
and not rulesets.in_binary_hostlist(hostname, non_inline_snmp_hosts)
#
# Groups
#
def hostgroups_of(hostname):
return rulesets.host_extra_conf(hostname, host_groups)
def summary_hostgroups_of(hostname):
return rulesets.host_extra_conf(hostname, summary_host_groups)
def contactgroups_of(hostname):
cgrs = []
# host_contactgroups may take single values as well as
# lists as item value. Of all list entries only the first
# one is used. The single-contact-groups entries are all
# recognized.
first_list = True
for entry in rulesets.host_extra_conf(hostname, host_contactgroups):
if type(entry) == list and first_list:
cgrs += entry
first_list = False
else:
cgrs.append(entry)
if monitoring_core == "nagios" and enable_rulebased_notifications:
cgrs.append("check-mk-notify")
return list(set(cgrs))
#
# Misc
#
def exit_code_spec(hostname):
spec = {}
specs = rulesets.host_extra_conf(hostname, check_mk_exit_status)
for entry in specs[::-1]:
spec.update(entry)
return spec
def check_period_of(hostname, service):
periods = rulesets.service_extra_conf(hostname, service, check_periods)
if periods:
period = periods[0]
if period == "24X7":
return None
else:
return period
else:
return None
def check_interval_of(hostname, section_name):
import cmk_base.checks
if not cmk_base.checks.is_snmp_check(section_name):
return # no values at all for non snmp checks
# Previous to 1.5 "match" could be a check name (including subchecks) instead of
# only main check names -> section names. This has been cleaned up, but we still
# need to be compatible. Strip off the sub check part of "match".
for match, minutes in rulesets.host_extra_conf(hostname, snmp_check_interval):
if match is None or match.split(".")[0] == section_name:
return minutes # use first match
#.
# .--Cluster-------------------------------------------------------------.
# | ____ _ _ |
# | / ___| |_ _ ___| |_ ___ _ __ |
# | | | | | | | / __| __/ _ \ '__| |
# | | |___| | |_| \__ \ || __/ | |
# | \____|_|\__,_|___/\__\___|_| |
# | |
# +----------------------------------------------------------------------+
# | Code dealing with clusters (virtual hosts that are used to deal with |
# | services that can move between physical nodes. |
# '----------------------------------------------------------------------'
# Checks whether or not the given host is a cluster host
def is_cluster(hostname):
# all_configured_clusters() needs to be used, because this function affects
# the agent bakery, which needs all configured hosts instead of just the hosts
# of this site
return hostname in all_configured_clusters()
# Returns the nodes of a cluster, or None if hostname is not a cluster
def nodes_of(hostname):
nodes_of_cache = cmk_base.config_cache.get_dict("nodes_of")
nodes = nodes_of_cache.get(hostname, False)
if nodes != False:
return nodes
for tagged_hostname, nodes in clusters.items():
if hostname == tagged_hostname.split("|")[0]:
nodes_of_cache[hostname] = nodes
return nodes
nodes_of_cache[hostname] = None
return None
# Determine whether a service (found on a physical host) is a clustered
# service and - if yes - return the cluster host of the service. If
# not, return the hostname of the physical host.
def host_of_clustered_service(hostname, servicedesc):
the_clusters = clusters_of(hostname)
if not the_clusters:
return hostname
cluster_mapping = rulesets.service_extra_conf(hostname, servicedesc, clustered_services_mapping)
for cluster in cluster_mapping:
# Check if the host is in this cluster
if cluster in the_clusters:
return cluster
# 1. New style: explicitly assigned services
for cluster, conf in clustered_services_of.items():
nodes = nodes_of(cluster)
if not nodes:
raise MKGeneralException("Invalid entry clustered_services_of['%s']: %s is not a cluster." %
(cluster, cluster))
if hostname in nodes and \
rulesets.in_boolean_serviceconf_list(hostname, servicedesc, conf):
return cluster
# 2. Old style: clustered_services assumes that each host belongs to
#    exactly one cluster
if rulesets.in_boolean_serviceconf_list(hostname, servicedesc, clustered_services):
return the_clusters[0]
return hostname
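# Resolution order illustrated (hypothetical node "node01" in cluster "dbcluster"):
# 1. a matching clustered_services_mapping rule naming "dbcluster" wins,
# 2. otherwise an explicit clustered_services_of["dbcluster"] entry matching the
#    service description wins,
# 3. otherwise a matching clustered_services entry maps the service to the first
#    cluster the node belongs to,
# 4. otherwise the service stays on the physical host "node01".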
#.
# .--Services------------------------------------------------------------.
# | ____ _ |
# | / ___| ___ _ ____ _(_) ___ ___ ___ |
# | \___ \ / _ \ '__\ \ / / |/ __/ _ \/ __| |
# | ___) | __/ | \ V /| | (_| __/\__ \ |
# | |____/ \___|_| \_/ |_|\___\___||___/ |
# | |
# +----------------------------------------------------------------------+
# | Service related helper functions |
# '----------------------------------------------------------------------'
# Renaming of service descriptions while keeping backward compatibility with
# existing installations.
# Synchronize with htdocs/wato.py and plugins/wato/check_mk_configuration.py!
# Cleanup! .. some day
def _get_old_cmciii_temp_description(item):
if "Temperature" in item:
return False, item # old item format, no conversion
parts = item.split(" ")
if parts[0] == "Ambient":
return False, "%s Temperature" % parts[1]
elif len(parts) == 2:
return False, "%s %s.Temperature" % (parts[1], parts[0])
else:
if parts[1] == "LCP":
parts[1] = "Liquid_Cooling_Package"
return False, "%s %s.%s-Temperature" % (parts[1], parts[0], parts[2])
_old_service_descriptions = {
"df" : "fs_%s",
"df_netapp" : "fs_%s",
"df_netapp32" : "fs_%s",
"esx_vsphere_datastores" : "fs_%s",
"hr_fs" : "fs_%s",
"vms_diskstat.df" : "fs_%s",
"zfsget" : "fs_%s",
"ps" : "proc_%s",
"ps.perf" : "proc_%s",
"wmic_process" : "proc_%s",
"services" : "service_%s",
"logwatch" : "LOG %s",
"logwatch.groups" : "LOG %s",
"hyperv_vm" : "hyperv_vms",
"ibm_svc_mdiskgrp" : "MDiskGrp %s",
"ibm_svc_system" : "IBM SVC Info",
"ibm_svc_systemstats.diskio" : "IBM SVC Throughput %s Total",
"ibm_svc_systemstats.iops" : "IBM SVC IOPS %s Total",
"ibm_svc_systemstats.disk_latency" : "IBM SVC Latency %s Total",
"ibm_svc_systemstats.cache" : "IBM SVC Cache Total",
"mknotifyd" : "Notification Spooler %s",
"mknotifyd.connection" : "Notification Connection %s",
"casa_cpu_temp" : "Temperature %s",
"cmciii.temp" : _get_old_cmciii_temp_description,
"cmciii.psm_current" : "%s",
"cmciii_lcp_airin" : "LCP Fanunit Air IN",
"cmciii_lcp_airout" : "LCP Fanunit Air OUT",
"cmciii_lcp_water" : "LCP Fanunit Water %s",
"etherbox.temp" : "Sensor %s",
# While using the old description, don't append the item, even when discovered
# with the new check which creates an item.
"liebert_bat_temp" : lambda item: (False, "Battery Temp"),
"nvidia.temp" : "Temperature NVIDIA %s",
"ups_bat_temp" : "Temperature Battery %s",
"innovaphone_temp" : lambda item: (False, "Temperature"),
"enterasys_temp" : lambda item: (False, "Temperature"),
"raritan_emx" : "Rack %s",
"raritan_pdu_inlet" : "Input Phase %s",
"postfix_mailq" : lambda item: (False, "Postfix Queue"),
"nullmailer_mailq" : lambda item: (False, "Nullmailer Queue"),
"barracuda_mailqueues" : lambda item: (False, "Mail Queue"),
"qmail_stats" : lambda item: (False, "Qmail Queue"),
"mssql_backup" : "%s Backup",
"mssql_counters.cache_hits" : "%s",
"mssql_counters.transactions" : "%s Transactions",
"mssql_counters.locks" : "%s Locks",
"mssql_counters.sqlstats" : "%s",
"mssql_counters.pageactivity" : "%s Page Activity",
"mssql_counters.locks_per_batch" : "%s Locks per Batch",
"mssql_counters.file_sizes" : "%s File Sizes",
"mssql_databases" : "%s Database",
"mssql_datafiles" : "Datafile %s",
"mssql_tablespaces" : "%s Sizes",
"mssql_transactionlogs" : "Transactionlog %s",
"mssql_versions" : "%s Version",
}
def service_description(hostname, check_plugin_name, item):
import cmk_base.checks as checks
if check_plugin_name not in checks.check_info:
if item:
return "Unimplemented check %s / %s" % (check_plugin_name, item)
else:
return "Unimplemented check %s" % check_plugin_name
# use user-supplied service description, if available
add_item = True
descr_format = service_descriptions.get(check_plugin_name)
if not descr_format:
# handle renaming for backward compatibility
if check_plugin_name in _old_service_descriptions and \
check_plugin_name not in use_new_descriptions_for:
# Can be a function to generate the old description more flexibly.
old_descr = _old_service_descriptions[check_plugin_name]
if callable(old_descr):
add_item, descr_format = old_descr(item)
else:
descr_format = old_descr
else:
descr_format = checks.check_info[check_plugin_name]["service_description"]
if type(descr_format) == str:
descr_format = descr_format.decode("utf-8")
# Note: we strip the service description (remove spaces).
# One check defines "Pages %s" as a description, but the item
# can be empty in some cases. Nagios silently drops leading
# and trailing spaces in the configuration file.
if add_item and type(item) in [str, unicode, int, long]:
if "%s" not in descr_format:
descr_format += " %s"
descr = descr_format % (item,)
else:
descr = descr_format
if "%s" in descr:
raise MKGeneralException("Found '%%s' in service description (Host: %s, Check type: %s, Item: %s). "
"Please try to rediscover the service to fix this issue." % \
(hostname, check_plugin_name, item))
return get_final_service_description(hostname, descr)
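# Worked examples (hypothetical host "srv01", assuming the standard checks are
# loaded, no user supplied service_descriptions and default settings otherwise):
#
#   service_description("srv01", "df", "/var")
#       -> "fs_/var"        (old compatibility description "fs_%s" is used)
#   service_description("srv01", "cpu.loads", None)
#       -> the format from check_info["cpu.loads"]["service_description"],
#          with no item appended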
_old_active_check_service_descriptions = {
"http": lambda params: (params[0][1:] if params[0].startswith("^")
else "HTTP %s" % params[0])
}
def active_check_service_description(hostname, active_check_name, params):
import cmk_base.checks as checks
if active_check_name not in checks.active_check_info:
return "Unimplemented check %s" % active_check_name
if (active_check_name in _old_active_check_service_descriptions and
active_check_name not in use_new_descriptions_for):
description = _old_active_check_service_descriptions[active_check_name](params)
else:
act_info = checks.active_check_info[active_check_name]
description = act_info["service_description"](params)
description = description.replace('$HOSTNAME$', hostname)
return get_final_service_description(hostname, description)
def get_final_service_description(hostname, description):
translations = get_service_translations(hostname)
if translations:
# Translate
description = cmk.translations.translate_service_description(translations, description)
# Sanitize; Remove illegal characters from a service description
description = description.strip()
cache = cmk_base.config_cache.get_dict("final_service_description")
try:
new_description = cache[description]
except KeyError:
new_description = "".join([c for c in description
if c not in cmk_base.config.nagios_illegal_chars]).rstrip("\\")
cache[description] = new_description
return new_description
def service_ignored(hostname, check_plugin_name, service_description):
if check_plugin_name and check_plugin_name in ignored_checktypes:
return True
if service_description != None \
and rulesets.in_boolean_serviceconf_list(hostname, service_description, ignored_services):
return True
if check_plugin_name and _checktype_ignored_for_host(hostname, check_plugin_name):
return True
return False
def _checktype_ignored_for_host(host, checktype):
if checktype in ignored_checktypes:
return True
ignored = rulesets.host_extra_conf(host, ignored_checks)
for e in ignored:
if checktype == e or (type(e) == list and checktype in e):
return True
return False
#.
# .--Misc Helpers--------------------------------------------------------.
# | __ __ _ _ _ _ |
# | | \/ (_)___ ___ | | | | ___| |_ __ ___ _ __ ___ |
# | | |\/| | / __|/ __| | |_| |/ _ \ | '_ \ / _ \ '__/ __| |
# | | | | | \__ \ (__ | _ | __/ | |_) | __/ | \__ \ |
# | |_| |_|_|___/\___| |_| |_|\___|_| .__/ \___|_| |___/ |
# | |_| |
# +----------------------------------------------------------------------+
# | Different helper functions |
# '----------------------------------------------------------------------'
def is_cmc():
"""Whether or not the site is currently configured to use the Microcore."""
return monitoring_core == "cmc"
def decode_incoming_string(s, encoding="utf-8"):
try:
return s.decode(encoding)
except:
return s.decode(fallback_agent_output_encoding)
def get_piggyback_translations(hostname):
"""Get a dict that specifies the actions to be done during the hostname translation"""
rules = rulesets.host_extra_conf(hostname, piggyback_translation)
translations = {}
for rule in rules[::-1]:
translations.update(rule)
return translations
def get_service_translations(hostname):
translations_cache = cmk_base.config_cache.get_dict("service_description_translations")
if hostname in translations_cache:
return translations_cache[hostname]
rules = rulesets.host_extra_conf(hostname, service_description_translation)
translations = {}
for rule in rules[::-1]:
for k, v in rule.items():
if isinstance(v, list):
translations.setdefault(k, set())
translations[k] |= set(v)
else:
translations[k] = v
translations_cache[hostname] = translations
return translations
| gpl-2.0 |
ex1usive-m4d/TemplateDocx | controllers/phpdocx/lib/openoffice/openoffice.org/basis3.4/program/python-core-2.6.1/lib/bsddb/dbtables.py | 39 | 30467 | #-----------------------------------------------------------------------
#
# Copyright (C) 2000, 2001 by Autonomous Zone Industries
# Copyright (C) 2002 Gregory P. Smith
#
# License: This is free software. You may use this software for any
# purpose including modification/redistribution, so long as
# this header remains intact and that you do not claim any
# rights of ownership or authorship of this software. This
# software has been tested, but no warranty is expressed or
# implied.
#
# -- Gregory P. Smith <[email protected]>
# This provides a simple database table interface built on top of
# the Python Berkeley DB 3 interface.
#
_cvsid = '$Id: dbtables.py 66088 2008-08-31 14:00:51Z jesus.cea $'
import re
import sys
import copy
import random
import struct
import cPickle as pickle
try:
# For Pythons w/distutils pybsddb
from bsddb3 import db
except ImportError:
# For Python 2.3
from bsddb import db
# XXX(nnorwitz): is this correct? DBIncompleteError is conditional in _bsddb.c
if not hasattr(db,"DBIncompleteError") :
class DBIncompleteError(Exception):
pass
db.DBIncompleteError = DBIncompleteError
class TableDBError(StandardError):
pass
class TableAlreadyExists(TableDBError):
pass
class Cond:
"""This condition matches everything"""
def __call__(self, s):
return 1
class ExactCond(Cond):
"""Acts as an exact match condition function"""
def __init__(self, strtomatch):
self.strtomatch = strtomatch
def __call__(self, s):
return s == self.strtomatch
class PrefixCond(Cond):
"""Acts as a condition function for matching a string prefix"""
def __init__(self, prefix):
self.prefix = prefix
def __call__(self, s):
return s[:len(self.prefix)] == self.prefix
class PostfixCond(Cond):
"""Acts as a condition function for matching a string postfix"""
def __init__(self, postfix):
self.postfix = postfix
def __call__(self, s):
return s[-len(self.postfix):] == self.postfix
class LikeCond(Cond):
"""
Acts as a function that will match using an SQL 'LIKE' style
string. Case insensitive and % signs are wild cards.
This isn't perfect but it should work for the simple common cases.
"""
def __init__(self, likestr, re_flags=re.IGNORECASE):
# escape python re characters
chars_to_escape = '.*+()[]?'
for char in chars_to_escape :
likestr = likestr.replace(char, '\\'+char)
# convert %s to wildcards
self.likestr = likestr.replace('%', '.*')
self.re = re.compile('^'+self.likestr+'$', re_flags)
def __call__(self, s):
return self.re.match(s)
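# Example: LikeCond("Smith%") roughly corresponds to SQL "LIKE 'Smith%'":
#
#   cond = LikeCond("Smith%")
#   bool(cond("Smithsonian"))   # True  (case insensitive prefix match)
#   bool(cond("smith"))         # True
#   bool(cond("Blacksmith"))    # False (no leading wildcard was given)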
#
# keys used to store database metadata
#
_table_names_key = '__TABLE_NAMES__' # list of the tables in this db
_columns = '._COLUMNS__' # table_name+this key contains a list of columns
def _columns_key(table):
return table + _columns
#
# these keys are found within table sub databases
#
_data = '._DATA_.' # this+column+this+rowid key contains table data
_rowid = '._ROWID_.' # this+rowid+this key contains a unique entry for each
# row in the table. (no data is stored)
_rowid_str_len = 8 # length in bytes of the unique rowid strings
def _data_key(table, col, rowid):
return table + _data + col + _data + rowid
def _search_col_data_key(table, col):
return table + _data + col + _data
def _search_all_data_key(table):
return table + _data
def _rowid_key(table, rowid):
return table + _rowid + rowid + _rowid
def _search_rowid_key(table):
return table + _rowid
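# Example of how these keys compose (hypothetical table "users", column "name",
# rowid "12345678"):
#
#   _columns_key("users")                  -> "users._COLUMNS__"
#   _data_key("users", "name", "12345678") -> "users._DATA_.name._DATA_.12345678"
#   _rowid_key("users", "12345678")        -> "users._ROWID_.12345678._ROWID_."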
def contains_metastrings(s) :
"""Verify that the given string does not contain any
metadata strings that might interfere with dbtables database operation.
"""
if (s.find(_table_names_key) >= 0 or
s.find(_columns) >= 0 or
s.find(_data) >= 0 or
s.find(_rowid) >= 0):
# Then
return 1
else:
return 0
class bsdTableDB :
def __init__(self, filename, dbhome, create=0, truncate=0, mode=0600,
recover=0, dbflags=0):
"""bsdTableDB(filename, dbhome, create=0, truncate=0, mode=0600)
Open database name in the dbhome Berkeley DB directory.
Use keyword arguments when calling this constructor.
"""
self.db = None
myflags = db.DB_THREAD
if create:
myflags |= db.DB_CREATE
flagsforenv = (db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_INIT_LOG |
db.DB_INIT_TXN | dbflags)
# DB_AUTO_COMMIT isn't a valid flag for env.open()
try:
dbflags |= db.DB_AUTO_COMMIT
except AttributeError:
pass
if recover:
flagsforenv = flagsforenv | db.DB_RECOVER
self.env = db.DBEnv()
# enable auto deadlock avoidance
self.env.set_lk_detect(db.DB_LOCK_DEFAULT)
self.env.open(dbhome, myflags | flagsforenv)
if truncate:
myflags |= db.DB_TRUNCATE
self.db = db.DB(self.env)
# this code relies on DBCursor.set* methods to raise exceptions
# rather than returning None
self.db.set_get_returns_none(1)
# allow duplicate entries [warning: be careful w/ metadata]
self.db.set_flags(db.DB_DUP)
self.db.open(filename, db.DB_BTREE, dbflags | myflags, mode)
self.dbfilename = filename
if sys.version_info[0] >= 3 :
class cursor_py3k(object) :
def __init__(self, dbcursor) :
self._dbcursor = dbcursor
def close(self) :
return self._dbcursor.close()
def set_range(self, search) :
v = self._dbcursor.set_range(bytes(search, "iso8859-1"))
if v != None :
v = (v[0].decode("iso8859-1"),
v[1].decode("iso8859-1"))
return v
def __next__(self) :
v = getattr(self._dbcursor, "next")()
if v != None :
v = (v[0].decode("iso8859-1"),
v[1].decode("iso8859-1"))
return v
class db_py3k(object) :
def __init__(self, db) :
self._db = db
def cursor(self, txn=None) :
return cursor_py3k(self._db.cursor(txn=txn))
def has_key(self, key, txn=None) :
return getattr(self._db,"has_key")(bytes(key, "iso8859-1"),
txn=txn)
def put(self, key, value, flags=0, txn=None) :
key = bytes(key, "iso8859-1")
if value != None :
value = bytes(value, "iso8859-1")
return self._db.put(key, value, flags=flags, txn=txn)
def put_bytes(self, key, value, txn=None) :
key = bytes(key, "iso8859-1")
return self._db.put(key, value, txn=txn)
def get(self, key, txn=None, flags=0) :
key = bytes(key, "iso8859-1")
v = self._db.get(key, txn=txn, flags=flags)
if v != None :
v = v.decode("iso8859-1")
return v
def get_bytes(self, key, txn=None, flags=0) :
key = bytes(key, "iso8859-1")
return self._db.get(key, txn=txn, flags=flags)
def delete(self, key, txn=None) :
key = bytes(key, "iso8859-1")
return self._db.delete(key, txn=txn)
def close (self) :
return self._db.close()
self.db = db_py3k(self.db)
else : # Python 2.x
pass
# Initialize the table names list if this is a new database
txn = self.env.txn_begin()
try:
if not getattr(self.db, "has_key")(_table_names_key, txn):
getattr(self.db, "put_bytes", self.db.put) \
(_table_names_key, pickle.dumps([], 1), txn=txn)
# Yes, bare except
except:
txn.abort()
raise
else:
txn.commit()
# TODO verify more of the database's metadata?
self.__tablecolumns = {}
def __del__(self):
self.close()
def close(self):
if self.db is not None:
self.db.close()
self.db = None
if self.env is not None:
self.env.close()
self.env = None
def checkpoint(self, mins=0):
try:
self.env.txn_checkpoint(mins)
except db.DBIncompleteError:
pass
def sync(self):
try:
self.db.sync()
except db.DBIncompleteError:
pass
def _db_print(self) :
"""Print the database to stdout for debugging"""
print "******** Printing raw database for debugging ********"
cur = self.db.cursor()
try:
key, data = cur.first()
while 1:
print repr({key: data})
next = cur.next()
if next:
key, data = next
else:
cur.close()
return
except db.DBNotFoundError:
cur.close()
def CreateTable(self, table, columns):
"""CreateTable(table, columns) - Create a new table in the database.
raises TableDBError if it already exists or for other DB errors.
"""
assert isinstance(columns, list)
txn = None
try:
# checking sanity of the table and column names here on
# table creation will prevent problems elsewhere.
if contains_metastrings(table):
raise ValueError(
"bad table name: contains reserved metastrings")
for column in columns :
if contains_metastrings(column):
raise ValueError(
"bad column name: contains reserved metastrings")
columnlist_key = _columns_key(table)
if getattr(self.db, "has_key")(columnlist_key):
raise TableAlreadyExists, "table already exists"
txn = self.env.txn_begin()
# store the table's column info
getattr(self.db, "put_bytes", self.db.put)(columnlist_key,
pickle.dumps(columns, 1), txn=txn)
# add the table name to the tablelist
tablelist = pickle.loads(getattr(self.db, "get_bytes",
self.db.get) (_table_names_key, txn=txn, flags=db.DB_RMW))
tablelist.append(table)
# delete 1st, in case we opened with DB_DUP
self.db.delete(_table_names_key, txn=txn)
getattr(self.db, "put_bytes", self.db.put)(_table_names_key,
pickle.dumps(tablelist, 1), txn=txn)
txn.commit()
txn = None
except db.DBError, dberror:
if txn:
txn.abort()
if sys.version_info[0] < 3 :
raise TableDBError, dberror[1]
else :
raise TableDBError, dberror.args[1]
def ListTableColumns(self, table):
"""Return a list of columns in the given table.
[] if the table doesn't exist.
"""
assert isinstance(table, str)
if contains_metastrings(table):
raise ValueError, "bad table name: contains reserved metastrings"
columnlist_key = _columns_key(table)
if not getattr(self.db, "has_key")(columnlist_key):
return []
pickledcolumnlist = getattr(self.db, "get_bytes",
self.db.get)(columnlist_key)
if pickledcolumnlist:
return pickle.loads(pickledcolumnlist)
else:
return []
def ListTables(self):
"""Return a list of tables in this database."""
pickledtablelist = getattr(self.db, "get_bytes", self.db.get)(_table_names_key)
if pickledtablelist:
return pickle.loads(pickledtablelist)
else:
return []
def CreateOrExtendTable(self, table, columns):
"""CreateOrExtendTable(table, columns)
Create a new table in the database.
If a table of this name already exists, extend it to have any
additional columns present in the given list as well as
all of its current columns.
"""
assert isinstance(columns, list)
try:
self.CreateTable(table, columns)
except TableAlreadyExists:
# the table already existed, add any new columns
txn = None
try:
columnlist_key = _columns_key(table)
txn = self.env.txn_begin()
# load the current column list
oldcolumnlist = pickle.loads(
getattr(self.db, "get_bytes",
self.db.get)(columnlist_key, txn=txn, flags=db.DB_RMW))
# create a hash table for fast lookups of column names in the
# loop below
oldcolumnhash = {}
for c in oldcolumnlist:
oldcolumnhash[c] = c
# create a new column list containing both the old and new
# column names
newcolumnlist = copy.copy(oldcolumnlist)
for c in columns:
if not oldcolumnhash.has_key(c):
newcolumnlist.append(c)
# store the table's new extended column list
if newcolumnlist != oldcolumnlist :
# delete the old one first since we opened with DB_DUP
self.db.delete(columnlist_key, txn=txn)
getattr(self.db, "put_bytes", self.db.put)(columnlist_key,
pickle.dumps(newcolumnlist, 1),
txn=txn)
txn.commit()
txn = None
self.__load_column_info(table)
except db.DBError, dberror:
if txn:
txn.abort()
if sys.version_info[0] < 3 :
raise TableDBError, dberror[1]
else :
raise TableDBError, dberror.args[1]
def __load_column_info(self, table) :
"""initialize the self.__tablecolumns dict"""
# check the column names
try:
tcolpickles = getattr(self.db, "get_bytes",
self.db.get)(_columns_key(table))
except db.DBNotFoundError:
raise TableDBError, "unknown table: %r" % (table,)
if not tcolpickles:
raise TableDBError, "unknown table: %r" % (table,)
self.__tablecolumns[table] = pickle.loads(tcolpickles)
def __new_rowid(self, table, txn) :
"""Create a new unique row identifier"""
unique = 0
while not unique:
# Generate a random 64-bit row ID string
# (note: might have <64 bits of true randomness
# but it's plenty for our database id needs!)
blist = []
for x in xrange(_rowid_str_len):
blist.append(random.randint(0,255))
newid = struct.pack('B'*_rowid_str_len, *blist)
if sys.version_info[0] >= 3 :
newid = newid.decode("iso8859-1") # 8 bits
# Guarantee uniqueness by adding this key to the database
try:
self.db.put(_rowid_key(table, newid), None, txn=txn,
flags=db.DB_NOOVERWRITE)
except db.DBKeyExistError:
pass
else:
unique = 1
return newid
def Insert(self, table, rowdict) :
"""Insert(table, datadict) - Insert a new row into the table
using the keys+values from rowdict as the column values.
"""
txn = None
try:
if not getattr(self.db, "has_key")(_columns_key(table)):
raise TableDBError, "unknown table"
# check the validity of each column name
if not self.__tablecolumns.has_key(table):
self.__load_column_info(table)
for column in rowdict.keys() :
if not self.__tablecolumns[table].count(column):
raise TableDBError, "unknown column: %r" % (column,)
# get a unique row identifier for this row
txn = self.env.txn_begin()
rowid = self.__new_rowid(table, txn=txn)
# insert the row values into the table database
for column, dataitem in rowdict.items():
# store the value
self.db.put(_data_key(table, column, rowid), dataitem, txn=txn)
txn.commit()
txn = None
except db.DBError, dberror:
# WIBNI we could just abort the txn and re-raise the exception?
# But no, because TableDBError is not related to DBError via
# inheritance, so it would be backwards incompatible. Do the next
# best thing.
info = sys.exc_info()
if txn:
txn.abort()
self.db.delete(_rowid_key(table, rowid))
if sys.version_info[0] < 3 :
raise TableDBError, dberror[1], info[2]
else :
raise TableDBError, dberror.args[1], info[2]
def Modify(self, table, conditions={}, mappings={}):
"""Modify(table, conditions={}, mappings={}) - Modify items in rows matching 'conditions' using mapping functions in 'mappings'
* table - the table name
* conditions - a dictionary keyed on column names containing
a condition callable expecting the data string as an
argument and returning a boolean.
* mappings - a dictionary keyed on column names containing a
condition callable expecting the data string as an argument and
returning the new string for that column.
"""
try:
matching_rowids = self.__Select(table, [], conditions)
# modify only requested columns
columns = mappings.keys()
for rowid in matching_rowids.keys():
txn = None
try:
for column in columns:
txn = self.env.txn_begin()
# modify the requested column
try:
dataitem = self.db.get(
_data_key(table, column, rowid),
txn=txn)
self.db.delete(
_data_key(table, column, rowid),
txn=txn)
except db.DBNotFoundError:
# XXXXXXX row key somehow didn't exist, assume no
# error
dataitem = None
dataitem = mappings[column](dataitem)
if dataitem <> None:
self.db.put(
_data_key(table, column, rowid),
dataitem, txn=txn)
txn.commit()
txn = None
# catch all exceptions here since we call unknown callables
except:
if txn:
txn.abort()
raise
except db.DBError, dberror:
if sys.version_info[0] < 3 :
raise TableDBError, dberror[1]
else :
raise TableDBError, dberror.args[1]
def Delete(self, table, conditions={}):
"""Delete(table, conditions) - Delete items matching the given
conditions from the table.
* conditions - a dictionary keyed on column names containing
condition functions expecting the data string as an
argument and returning a boolean.
"""
try:
matching_rowids = self.__Select(table, [], conditions)
# delete row data from all columns
columns = self.__tablecolumns[table]
for rowid in matching_rowids.keys():
txn = None
try:
txn = self.env.txn_begin()
for column in columns:
# delete the data key
try:
self.db.delete(_data_key(table, column, rowid),
txn=txn)
except db.DBNotFoundError:
# XXXXXXX column may not exist, assume no error
pass
try:
self.db.delete(_rowid_key(table, rowid), txn=txn)
except db.DBNotFoundError:
# XXXXXXX row key somehow didn't exist, assume no error
pass
txn.commit()
txn = None
except db.DBError, dberror:
if txn:
txn.abort()
raise
except db.DBError, dberror:
if sys.version_info[0] < 3 :
raise TableDBError, dberror[1]
else :
raise TableDBError, dberror.args[1]
def Select(self, table, columns, conditions={}):
"""Select(table, columns, conditions) - retrieve specific row data
Returns a list of row column->value mapping dictionaries.
* columns - a list of which column data to return. If
columns is None, all columns will be returned.
* conditions - a dictionary keyed on column names
containing callable conditions expecting the data string as an
argument and returning a boolean.
"""
try:
if not self.__tablecolumns.has_key(table):
self.__load_column_info(table)
if columns is None:
columns = self.__tablecolumns[table]
matching_rowids = self.__Select(table, columns, conditions)
except db.DBError, dberror:
if sys.version_info[0] < 3 :
raise TableDBError, dberror[1]
else :
raise TableDBError, dberror.args[1]
# return the matches as a list of dictionaries
return matching_rowids.values()
def __Select(self, table, columns, conditions):
"""__Select() - Used to implement Select and Delete (above)
Returns a dictionary keyed on rowids containing dicts
holding the row data for columns listed in the columns param
that match the given conditions.
* conditions is a dictionary keyed on column names
containing callable conditions expecting the data string as an
argument and returning a boolean.
"""
# check the validity of each column name
if not self.__tablecolumns.has_key(table):
self.__load_column_info(table)
if columns is None:
columns = self.__tablecolumns[table]
for column in (columns + conditions.keys()):
if not self.__tablecolumns[table].count(column):
raise TableDBError, "unknown column: %r" % (column,)
# keyed on rows that match so far, containing dicts keyed on
# column names containing the data for that row and column.
matching_rowids = {}
# keys are rowids that do not match
rejected_rowids = {}
# attempt to sort the conditions in such a way as to minimize full
# column lookups
def cmp_conditions(atuple, btuple):
a = atuple[1]
b = btuple[1]
if type(a) is type(b):
if isinstance(a, PrefixCond) and isinstance(b, PrefixCond):
# longest prefix first
return cmp(len(b.prefix), len(a.prefix))
if isinstance(a, LikeCond) and isinstance(b, LikeCond):
# longest likestr first
return cmp(len(b.likestr), len(a.likestr))
return 0
if isinstance(a, ExactCond):
return -1
if isinstance(b, ExactCond):
return 1
if isinstance(a, PrefixCond):
return -1
if isinstance(b, PrefixCond):
return 1
# leave all unknown condition callables alone as equals
return 0
if sys.version_info[0] < 3 :
conditionlist = conditions.items()
conditionlist.sort(cmp_conditions)
else : # Insertion Sort. Please, improve
conditionlist = []
for i in conditions.items() :
for j, k in enumerate(conditionlist) :
r = cmp_conditions(k, i)
if r == 1 :
conditionlist.insert(j, i)
break
else :
conditionlist.append(i)
# Apply conditions to column data to find what we want
cur = self.db.cursor()
column_num = -1
for column, condition in conditionlist:
column_num = column_num + 1
searchkey = _search_col_data_key(table, column)
# speedup: don't linear search columns within loop
if column in columns:
savethiscolumndata = 1 # save the data for return
else:
savethiscolumndata = 0 # data only used for selection
try:
key, data = cur.set_range(searchkey)
while key[:len(searchkey)] == searchkey:
# extract the rowid from the key
rowid = key[-_rowid_str_len:]
if not rejected_rowids.has_key(rowid):
# if no condition was specified or the condition
# succeeds, add row to our match list.
if not condition or condition(data):
if not matching_rowids.has_key(rowid):
matching_rowids[rowid] = {}
if savethiscolumndata:
matching_rowids[rowid][column] = data
else:
if matching_rowids.has_key(rowid):
del matching_rowids[rowid]
rejected_rowids[rowid] = rowid
key, data = cur.next()
except db.DBError, dberror:
if sys.version_info[0] < 3 :
if dberror[0] != db.DB_NOTFOUND:
raise
else :
if dberror.args[0] != db.DB_NOTFOUND:
raise
continue
cur.close()
# we're done selecting rows, garbage collect the reject list
del rejected_rowids
# extract any remaining desired column data from the
# database for the matching rows.
if len(columns) > 0:
for rowid, rowdata in matching_rowids.items():
for column in columns:
if rowdata.has_key(column):
continue
try:
rowdata[column] = self.db.get(
_data_key(table, column, rowid))
except db.DBError, dberror:
if sys.version_info[0] < 3 :
if dberror[0] != db.DB_NOTFOUND:
raise
else :
if dberror.args[0] != db.DB_NOTFOUND:
raise
rowdata[column] = None
# return the matches
return matching_rowids
def Drop(self, table):
"""Remove an entire table from the database"""
txn = None
try:
txn = self.env.txn_begin()
# delete the column list
self.db.delete(_columns_key(table), txn=txn)
cur = self.db.cursor(txn)
# delete all keys containing this tables column and row info
table_key = _search_all_data_key(table)
while 1:
try:
key, data = cur.set_range(table_key)
except db.DBNotFoundError:
break
# only delete items in this table
if key[:len(table_key)] != table_key:
break
cur.delete()
# delete all rowids used by this table
table_key = _search_rowid_key(table)
while 1:
try:
key, data = cur.set_range(table_key)
except db.DBNotFoundError:
break
# only delete items in this table
if key[:len(table_key)] != table_key:
break
cur.delete()
cur.close()
# delete the tablename from the table name list
tablelist = pickle.loads(
getattr(self.db, "get_bytes", self.db.get)(_table_names_key,
txn=txn, flags=db.DB_RMW))
try:
tablelist.remove(table)
except ValueError:
# hmm, it wasn't there, oh well, that's what we want.
pass
# delete 1st, in case we opened with DB_DUP
self.db.delete(_table_names_key, txn=txn)
getattr(self.db, "put_bytes", self.db.put)(_table_names_key,
pickle.dumps(tablelist, 1), txn=txn)
txn.commit()
txn = None
if self.__tablecolumns.has_key(table):
del self.__tablecolumns[table]
except db.DBError, dberror:
if txn:
txn.abort()
if sys.version_info[0] < 3 :
raise TableDBError, dberror[1]
else :
raise TableDBError, dberror.args[1]
| bsd-3-clause |
Mitali-Sodhi/recommendations | recommendations.py | 1 | 2781 | from math import sqrt
critics = {'Lisa Rose': {'Lady in the water':2.5, 'Snakes on a plane' : 3.5, 'Just my luck' : 3.0, 'Superman returns' : 3.5, 'You, Me and Dupree' : 2.5, 'The Night Listener' : 3.0},
'Gene Seymor' : {'Lady in the water' : 3.0, 'Snakes on a plane' : 3.5, 'Just my luck' : 1.5, 'Superman returns' : 5.0, 'The Night Listener' : 3.0, 'You, Me and Dupree' : 3.5},
'Michael Plillips' : {'Lady in the water' : 2.5, 'Snakes on a plane' : 3.0, 'Superman returns' : 3.5, 'The Night Listener' : 4.0},
'Claudia Puig' : {'Snakes on a plane' : 3.5, 'Just my luck' : 3.0, 'The Night Listener' : 4.5, 'Superman returns' : 4.0, 'You, Me and Dupree' : 2.5},
'Mick LaSalle' : {'Lady in the water' : 3.0, 'Snakes on a plane' : 4.0, 'Just my luck' : 2.0, 'Superman returns' : 3.0, 'The Night Listener' : 3.0, 'You, Me and Dupree' : 2.0},
'Jack Matthews' : {'Lady in the water' : 3.0, 'Snakes on a plane' : 4.0, 'The Night Listener' : 3.0, 'Superman returns' : 5.0, 'You, Me and Dupree' : 3.5},
'Toby' : {'Snakes on a plane' : 4.5, 'You, Me and Dupree' : 1.0, 'Superman returns' : 4.0}
}
def sim_distance(prefs, person1, person2):
si = {}
for item in prefs[person1]:
if item in prefs[person2]:
si[item] = 1
if len(si) == 0:
return 0
sum_of_squares = sum([pow(prefs[person1][item]-prefs[person2][item],2) for item in prefs[person1] if item in prefs[person2]])
return 1/(1+sum_of_squares)
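# Illustrative example using the critics data above (values rounded):
#   sim_distance(critics, 'Lisa Rose', 'Gene Seymor')
# compares the six films both have rated; the squared differences sum to
# 5.75, so the similarity is 1/(1+5.75) ~= 0.148.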
def sim_pearson(prefs, person1, person2):
si = {}
for item in prefs[person1]:
if item in prefs[person2]:
si[item] = 1
n = len(si)
if n == 0:
return 0
sum1 = sum([prefs[person1][item] for item in si])
sum2 = sum([prefs[person2][item] for item in si])
sum1Sq = sum([pow(prefs[person1][item],2) for item in si])
sum2Sq = sum([pow(prefs[person2][item],2) for item in si])
pSum = sum([prefs[person1][item] * prefs[person2][item] for item in si])
num = pSum-(sum1*sum2/n)
den = sqrt((sum1Sq-pow(sum1,2)/n)*(sum2Sq-pow(sum2,2)/n))
if den == 0:
return 0
r=num/den
return r
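# Illustrative example with the same data: sim_pearson(critics, 'Lisa Rose',
# 'Gene Seymor') uses the six shared ratings, giving num = 1.0 and
# den = sqrt(1.0 * 6.375) ~= 2.52, i.e. a correlation of roughly 0.396.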
def topMatches(prefs,person,n=5,similarity=sim_pearson):
scores = [(similarity(prefs,person,other),other) for other in prefs if other!=person]
scores.sort()
scores.reverse()
return scores[0:n]
def getReccomendations(prefs, person, similarity=sim_pearson):
totals={}
simSums={}
for other in prefs:
if other==person:
continue
sim=similarity(prefs,person,other)
if sim <= 0:
continue
for item in prefs[other]:
if item not in prefs[person] or prefs[person][item] == 0:
totals.setdefault(item,0)
totals[item] += prefs[other][item]*sim
simSums.setdefault(item,0)
simSums[item]+=sim
# print totals
# print simSums
rankings=[(total/simSums[item], item) for item,total in totals.items()]
rankings.sort()
rankings.reverse()
return rankings
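# getReccomendations() scores every film the person has not rated as
#   sum(sim(person, other) * other's rating) / sum(sim(person, other))
# over all critics with positive similarity -- a similarity-weighted average
# rating -- and returns (score, item) pairs sorted best-first.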
| mit |
mrquim/mrquimrepo | repo/plugin.video.neptune-1.2.2/resources/lib/modules/jsunfuck.py | 5 | 9721 | #!/usr/bin/python
"""
Neptune Rising Add-on
Copyright (C) 2016 tknorris
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
import re
import sys
import urllib
import string
import json
class JSUnfuck(object):
numbers = None
words = {
"(![]+[])": "false",
"([]+{})": "[object Object]",
"(!![]+[])": "true",
"([][[]]+[])": "undefined",
"(+{}+[])": "NaN",
"([![]]+[][[]])": "falseundefined",
"([][f+i+l+t+e+r]+[])": "function filter() { [native code] }",
"(!![]+[][f+i+l+t+e+r])": "truefunction filter() { [native code] }",
"(+![]+([]+[])[c+o+n+s+t+r+u+c+t+o+r])": "0function String() { [native code] }",
"(+![]+[![]]+([]+[])[c+o+n+s+t+r+u+c+t+o+r])": "0falsefunction String() { [native code] }",
"([]+[][s+o+r+t][c+o+n+s+t+r+u+c+t+o+r](r+e+t+u+r+n+ +l+o+c+a+t+i+o+n)())": "https://123movies.to",
"([]+[])[f+o+n+t+c+o+l+o+r]()": '<font color="undefined"></font>',
"(+(+!![]+e+1+0+0+0)+[])": "Infinity",
"(+[![]]+[][f+i+l+t+e+r])": 'NaNfunction filter() { [native code] }',
'(+[![]]+[+(+!+[]+(!+[]+[])[3]+[1]+[0]+[0]+[0])])': 'NaNInfinity',
'([]+[])[i+t+a+l+i+c+s]()': '<i></i>',
'[[]][c+o+n+c+a+t]([[]])+[]': ',',
'([][f+i+l+l]+[])': 'function fill() { [native code]}',
'(!![]+[][f+i+l+l])': 'truefunction fill() { [native code]}',
'((+[])[c+o+n+s+t+r+u+c+t+o+r]+[])': 'function Number() {[native code]} _display:45:1',
'(+(+!+[]+[1]+e+[2]+[0])+[])': '1.1e+21',
'([]+[])[c+o+n+s+t+r+u+c+t+o+r][n+a+m+e]': 'S+t+r+i+n+g',
'([][e+n+t+r+i+e+s]()+[])': '[object Array Iterator]',
'([]+[])[l+i+n+k](")': '<a href="""></a>',
'(![]+[0])[i+t+a+l+i+c+s]()': '<i>false0</i>',
# dummy to force array dereference
'DUMMY1': '6p',
'DUMMY2': '2x',
'DUMMY3': '%3C',
'DUMMY4': '%5B',
'DUMMY5': '6q',
'DUMMY6': '4h',
}
uniqs = {
'[t+o+S+t+r+i+n+g]': 1,
'[][f+i+l+t+e+r][c+o+n+s+t+r+u+c+t+o+r](r+e+t+u+r+n+ +e+s+c+a+p+e)()': 2,
'[][f+i+l+t+e+r][c+o+n+s+t+r+u+c+t+o+r](r+e+t+u+r+n+ +u+n+e+s+c+a+p+e)()': 3,
'[][s+o+r+t][c+o+n+s+t+r+u+c+t+o+r](r+e+t+u+r+n+ +e+s+c+a+p+e)()': 2,
'[][s+o+r+t][c+o+n+s+t+r+u+c+t+o+r](r+e+t+u+r+n+ +u+n+e+s+c+a+p+e)()': 3,
}
def __init__(self, js):
self.js = js
def decode(self, replace_plus=True):
while True:
start_js = self.js
self.repl_words(self.words)
self.repl_numbers()
self.repl_arrays(self.words)
self.repl_uniqs(self.uniqs)
if start_js == self.js:
break
if replace_plus:
self.js = self.js.replace('+', '')
self.js = re.sub('\[[A-Za-z]*\]', '', self.js)
self.js = re.sub('\[(\d+)\]', '\\1', self.js)
return self.js
def repl_words(self, words):
while True:
start_js = self.js
for key, value in sorted(words.items(), key=lambda x: len(x[0]), reverse=True):
self.js = self.js.replace(key, value)
if self.js == start_js:
break
def repl_arrays(self, words):
for word in sorted(words.values(), key=lambda x: len(x), reverse=True):
for index in xrange(0, 100):
try:
repl = word[index]
self.js = self.js.replace('%s[%d]' % (word, index), repl)
except:
pass
def repl_numbers(self):
if self.numbers is None:
self.numbers = self.__gen_numbers()
while True:
start_js = self.js
for key, value in sorted(self.numbers.items(), key=lambda x: len(x[0]), reverse=True):
self.js = self.js.replace(key, value)
if self.js == start_js:
break
def repl_uniqs(self, uniqs):
for key, value in uniqs.iteritems():
if key in self.js:
if value == 1:
self.__handle_tostring()
elif value == 2:
self.__handle_escape(key)
elif value == 3:
self.__handle_unescape(key)
def __handle_tostring(self):
for match in re.finditer('(\d+)\[t\+o\+S\+t\+r\+i\+n\+g\](\d+)', self.js):
repl = to_base(match.group(1), match.group(2))
self.js = self.js.replace(match.group(0), repl)
def __handle_escape(self, key):
while True:
start_js = self.js
offset = self.js.find(key) + len(key)
if self.js[offset] == '(' and self.js[offset + 2] == ')':
c = self.js[offset + 1]
self.js = self.js.replace('%s(%s)' % (key, c), urllib.quote(c))
if start_js == self.js:
break
def __handle_unescape(self, key):
start = 0
while True:
start_js = self.js
offset = self.js.find(key, start)
if offset == -1: break
offset += len(key)
expr = ''
extra = ''
last_c = self.js[offset - 1]
abort = False
for i, c in enumerate(self.js[offset:]):
extra += c
if c == ')':
break
elif (i > 0 and c == '(') or (c == '[' and last_c != '+'):
abort = True
break
elif c == '%' or c in string.hexdigits:
expr += c
last_c = c
if not abort:
self.js = self.js.replace(key + extra, urllib.unquote(expr))
if start_js == self.js:
break
else:
start = offset
def __gen_numbers(self):
n = {'!+[]+!![]+!![]+!![]+!![]+!![]+!![]+!![]+!![]': '9',
'!+[]+!![]+!![]+!![]+!![]': '5', '!+[]+!![]+!![]+!![]': '4',
'!+[]+!![]+!![]+!![]+!![]+!![]': '6', '!+[]+!![]': '2',
'!+[]+!![]+!![]': '3', '(+![]+([]+[]))': '0', '(+[]+[])': '0', '+[]':'0',
'(+!![]+[])': '1', '!+[]+!![]+!![]+!![]+!![]+!![]+!![]': '7',
'!+[]+!![]+!![]+!![]+!![]+!![]+!![]+!![]': '8', '+!![]': '1',
'[+[]]': '[0]', '!+[]+!+[]': '2', '[+!+[]]': '[1]', '(+20)': '20',
'[+!![]]': '[1]', '[+!+[]+[+[]]]': '[10]', '+(1+1)': '11'}
for i in xrange(2, 20):
key = '+!![]' * (i - 1)
key = '!+[]' + key
n['(' + key + ')'] = str(i)
key += '+[]'
n['(' + key + ')'] = str(i)
n['[' + key + ']'] = '[' + str(i) + ']'
for i in xrange(2, 10):
key = '!+[]+' * (i - 1) + '!+[]'
n['(' + key + ')'] = str(i)
n['[' + key + ']'] = '[' + str(i) + ']'
key = '!+[]' + '+!![]' * (i - 1)
n['[' + key + ']'] = '[' + str(i) + ']'
for i in xrange(0, 10):
key = '(+(+!+[]+[%d]))' % (i)
n[key] = str(i + 10)
key = '[+!+[]+[%s]]' % (i)
n[key] = '[' + str(i + 10) + ']'
for tens in xrange(2, 10):
for ones in xrange(0, 10):
key = '!+[]+' * (tens) + '[%d]' % (ones)
n['(' + key + ')'] = str(tens * 10 + ones)
n['[' + key + ']'] = '[' + str(tens * 10 + ones) + ']'
for hundreds in xrange(1, 10):
for tens in xrange(0, 10):
for ones in xrange(0, 10):
key = '+!+[]' * hundreds + '+[%d]+[%d]))' % (tens, ones)
if hundreds > 1: key = key[1:]
key = '(+(' + key
n[key] = str(hundreds * 100 + tens * 10 + ones)
return n
def to_base(n, base, digits="0123456789abcdefghijklmnopqrstuvwxyz"):
n, base = int(n), int(base)
if n < base:
return digits[n]
else:
return to_base(n // base, base, digits).lstrip(digits[0]) + digits[n % base]
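# For example, to_base(255, 16) returns 'ff' and to_base(7, 2) returns '111';
# digits beyond 9 come from the lowercase letters in the default alphabet.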
def cfunfuck(fuckedup):
fuck = re.findall(r's,t,o,p,b,r,e,a,k,i,n,g,f,\s*(\w+=).*?:\+?\(?(.*?)\)?\}', fuckedup)
fucks = re.findall(r'(\w+)\.\w+([\+\-\*\/]=)\+?\(?(.*?)\)?;', fuckedup)
endunfuck = fuck[0][0].split('=')[0]
unfuck = JSUnfuck(fuck[0][1]).decode()
unfuck = re.sub(r'[\(\)]', '', unfuck)
unfuck = fuck[0][0]+unfuck
exec(unfuck)
for fucker in fucks:
unfucker = JSUnfuck(fucker[2]).decode()
unfucker = re.sub(r'[\(\)]', '', unfucker)
unfucker = fucker[0]+fucker[1]+unfucker
exec(unfucker)
return str(eval(endunfuck))
def main():
with open(sys.argv[1]) as f:
start_js = f.read()
print JSUnfuck(start_js).decode()
if __name__ == '__main__':
sys.exit(main())
| gpl-2.0 |
pentestmonkey/pysecdump | framework/win32/lsasecretslive.py | 6 | 4282 | # This file is part of creddump.
#
# creddump is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# creddump is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with creddump. If not, see <http://www.gnu.org/licenses/>.
"""
@author: Brendan Dolan-Gavitt
@license: GNU General Public License 2.0 or later
@contact: [email protected]
"""
# Modified by pentestmonkey
from framework.win32.hashdumplive import get_bootkey,str_to_key
from Crypto.Hash import MD5,SHA256
from Crypto.Cipher import ARC4,DES,AES
from struct import unpack,pack
from wpc.regkey import regkey
from binascii import hexlify
xp = None
def get_lsa_key(bootkey):
global xp
r = regkey("HKEY_LOCAL_MACHINE\\SECURITY\\Policy\\PolSecretEncryptionKey")
if r.is_present():
xp = 1
else:
r = regkey("HKEY_LOCAL_MACHINE\\SECURITY\\Policy\\PolEKList")
if r.is_present():
xp = 0
else:
return None
obf_lsa_key = r.get_value("")
if not obf_lsa_key:
return None
if xp:
md5 = MD5.new()
md5.update(bootkey)
for i in range(1000):
md5.update(obf_lsa_key[60:76])
rc4key = md5.digest()
rc4 = ARC4.new(rc4key)
lsa_key = rc4.decrypt(obf_lsa_key[12:60])
return lsa_key[0x10:0x20]
else:
lsa_key = decrypt_lsa(obf_lsa_key, bootkey)
return lsa_key[68:100]
def decrypt_secret(secret, key):
"""Python implementation of SystemFunction005.
Decrypts a block of data with DES using given key.
Note that key can be longer than 7 bytes."""
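# The key is consumed in sliding 7-byte windows; str_to_key() (imported from
# hashdumplive) presumably expands each 7-byte window into an 8-byte DES key,
# so every 8-byte ciphertext block is decrypted with its own derived key.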
decrypted_data = ''
j = 0 # key index
for i in range(0,len(secret),8):
enc_block = secret[i:i+8]
block_key = key[j:j+7]
des_key = str_to_key(block_key)
des = DES.new(des_key, DES.MODE_ECB)
decrypted_data += des.decrypt(enc_block)
j += 7
if len(key[j:j+7]) < 7:
j = len(key[j:j+7])
(dec_data_len,) = unpack("<L", decrypted_data[:4])
return decrypted_data[8:8+dec_data_len]
def decrypt_lsa(ciphertext, bootkey):
# vista+
sha256 = SHA256.new()
sha256.update(bootkey)
for i in range(1000):
sha256.update(ciphertext[28:60])
aeskey = sha256.digest()
aes = AES.new(aeskey, AES.MODE_ECB)
cleartext = aes.decrypt(ciphertext[60:len(ciphertext)])
return cleartext
def decrypt_lsa2(ciphertext, bootkey):
ciphertext2 = decrypt_lsa(ciphertext, bootkey)
(length,) = unpack("<L", ciphertext2[:4])
return ciphertext2[16:16+length]
def get_secret_by_name(name, lsakey):
global xp
r = regkey("HKEY_LOCAL_MACHINE\\SECURITY\\Policy\\Secrets\\%s\\CurrVal" % name)
if not r.is_present():
return None
enc_secret = r.get_value("")
if xp:
encryptedSecretSize = unpack('<I', enc_secret[:4])[0]
offset = len(enc_secret)-encryptedSecretSize
secret = decrypt_secret(enc_secret[offset:], lsakey)
return decrypt_secret(enc_secret[0xC:], lsakey)
else:
return decrypt_lsa2(enc_secret, lsakey)
def get_secrets():
global xp
bootkey = get_bootkey()
lsakey = get_lsa_key(bootkey)
r = regkey("HKEY_LOCAL_MACHINE\\SECURITY\\Policy\\Secrets")
if not r.is_present():
print "[E] Secrets key not accessible: HKEY_LOCAL_MACHINE\\SECURITY\\Policy\\Secrets"
return None
secrets = {}
for service_key in r.get_subkeys():
service_name = service_key.get_name().split("\\")[-1]
skey = regkey(service_key.get_name() + "\\CurrVal")
enc_secret = skey.get_value("")
if not enc_secret:
continue
if xp:
encryptedSecretSize = unpack('<I', enc_secret[:4])[0]
offset = len(enc_secret)-encryptedSecretSize
secret = decrypt_secret(enc_secret[offset:], lsakey)
else:
secret = decrypt_lsa2(enc_secret, lsakey)
secrets[service_name] = secret
return secrets
def get_live_secrets():
return get_secrets()
| gpl-3.0 |
mpurzynski/MozDef | scripts/demo/populate_sample_events.py | 3 | 2497 | import glob
import os
import optparse
import random
import hjson
import time
from datetime import datetime
from mozdef_util.utilities.toUTC import toUTC
from mozdef_util.elasticsearch_client import ElasticsearchClient
def handle_event(event):
timestamp = toUTC(datetime.now()).isoformat()
event['timestamp'] = timestamp
event['receivedtimestamp'] = timestamp
event['utctimestamp'] = timestamp
# add demo to the tags so it's clear it's not real data.
if 'tags' not in event:
event['tags'] = list()
event['tags'].append('demodata')
return event
def handle_events(sample_events, num_picked, es_client):
selected_events = []
if num_picked == 0:
selected_events = sample_events
else:
# pick a random type of event to send
for i in range(0, num_picked):
selected_events.append(random.choice(sample_events))
for event in selected_events:
event = handle_event(event)
es_client.save_event(event)
def run(num_rounds, num_events, sleep_time, es_client):
sample_events_dir = os.path.join(os.path.dirname(__file__), "sample_events")
sample_event_files = glob.glob(sample_events_dir + '/*')
sample_events = []
for sample_file in sample_event_files:
sample_events += hjson.load(open(sample_file))
# # pick a random number of events to send
if num_rounds == 0:
print("Running indefinitely")
while True:
handle_events(sample_events, num_events, es_client)
time.sleep(sleep_time)
else:
print("Running for {0} rounds".format(num_rounds))
handle_events(sample_events, num_events, es_client)
if __name__ == '__main__':
parser = optparse.OptionParser()
parser.add_option('--elasticsearch_host', help='Elasticsearch host (default: http://localhost:9200)', default='http://localhost:9200')
parser.add_option('--num_events', help='Number of random events to insert (default: 0 (run all))', default=0)
parser.add_option('--num_rounds', help='Number of rounds to insert events (default: 0 (run continuously))', default=0)
parser.add_option('--sleep_time', help='Number of seconds to sleep between rounds (default: 2)', default=2)
options, arguments = parser.parse_args()
es_client = ElasticsearchClient(options.elasticsearch_host)
run(
num_rounds=options.num_rounds,
num_events=options.num_events,
sleep_time=options.sleep_time,
es_client=es_client
)
| mpl-2.0 |
ypid-bot/check_mk | web/plugins/metrics/check_mk.py | 1 | 189295 | #!/usr/bin/python
# -*- encoding: utf-8; py-indent-offset: 4 -*-
# +------------------------------------------------------------------+
# | ____ _ _ __ __ _ __ |
# | / ___| |__ ___ ___| | __ | \/ | |/ / |
# | | | | '_ \ / _ \/ __| |/ / | |\/| | ' / |
# | | |___| | | | __/ (__| < | | | | . \ |
# | \____|_| |_|\___|\___|_|\_\___|_| |_|_|\_\ |
# | |
# | Copyright Mathias Kettner 2014 [email protected] |
# +------------------------------------------------------------------+
#
# This file is part of Check_MK.
# The official homepage is at http://mathias-kettner.de/check_mk.
#
# check_mk is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation in version 2. check_mk is distributed
# in the hope that it will be useful, but WITHOUT ANY WARRANTY; with-
# out even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE. See the GNU General Public License for more de-
# tails. You should have received a copy of the GNU General Public
# License along with GNU Make; see the file COPYING. If not, write
# to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
# Boston, MA 02110-1301 USA.
# TODO Graphing system:
# - Default template: If the graph does not specify a "range", but the
#   unit provides one, that range should be used. Then all templates
#   that only exist because of the range 0..100 can be replaced by
#   generic again.
# Metric definitions for Check_MK's checks
# .--Units---------------------------------------------------------------.
# | _ _ _ _ |
# | | | | |_ __ (_) |_ ___ |
# | | | | | '_ \| | __/ __| |
# | | |_| | | | | | |_\__ \ |
# | \___/|_| |_|_|\__|___/ |
# | |
# +----------------------------------------------------------------------+
# | Definition of units of measurement. |
# '----------------------------------------------------------------------'
# Optional attributes of units:
#
# stepping: Chooses how the vertical graph labels step, e.g. "integer",
# "time" or "binary" (see the unit definitions below).
#
# graph_unit: Compute a common unit for the whole graph. This is an optional
# feature to solve the problem that some unit names are too long
# to be shown on the left of the screen together with the values.
# For fixing this the "graph unit" is available which is displayed
# on the top left of the graph and is used for the whole graph. So
# once a "graph unit" is computed, it does not need to be shown
# beside each label.
# This has to be set to a function which receives a list of values,
# then computes the optimal unit for the given values and then
# returns a two element tuple. The first element is the "graph unit"
# and the second is a list containing all of the values rendered with
# the graph unit.
# TODO: Move fundamental units like "" to main file.
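# A hypothetical unit (not one of the real definitions below) illustrating
# "graph_unit": the callable receives all values of the graph and returns
# the common unit title together with the values rendered in that unit,
# mirroring the "bits/s" definition further down:
#
# unit_info["example/s"] = {
# "title" : _("Examples per second"),
# "symbol" : _("X/s"),
# "render" : lambda v: physical_precision(v, 3, _("X/s")),
# "graph_unit" : lambda values: physical_precision_list(values, 3, _("X/s")),
# }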
unit_info[""] = {
"title" : "",
"description" : _("Floating point number"),
"symbol" : "",
"render" : lambda v: render_scientific(v, 2),
}
unit_info["count"] = {
"title" : _("Count"),
"symbol" : "",
"render" : lambda v: metric_number_with_precision(v),
"stepping" : "integer", # for vertical graph labels
}
# value ranges from 0.0 ... 100.0
unit_info["%"] = {
"title" : _("%"),
"description" : _("Percentage (0...100)"),
"symbol" : _("%"),
"render" : lambda v: percent_human_redable(v, 3),
}
unit_info["s"] = {
"title" : _("sec"),
"description" : _("Timespan or Duration in seconds"),
"symbol" : _("s"),
"render" : age_human_readable,
"stepping" : "time", # for vertical graph labels
}
unit_info["1/s"] = {
"title" : _("per second"),
"description" : _("Frequency (displayed in events/s)"),
"symbol" : _("/s"),
"render" : lambda v: "%s%s" % (drop_dotzero(v), _("/s")),
}
unit_info["hz"] = {
"title" : _("Hz"),
"symbol" : _("Hz"),
"description" : _("Frequency (displayed in Hz)"),
"render" : lambda v : physical_precision(v, 3, _("Hz")),
}
unit_info["bytes"] = {
"title" : _("Bytes"),
"symbol" : _("B"),
"render" : bytes_human_readable,
"stepping" : "binary", # for vertical graph labels
}
unit_info["bytes/s"] = {
"title" : _("Bytes per second"),
"symbol" : _("B/s"),
"render" : lambda v: bytes_human_readable(v) + _("/s"),
"stepping" : "binary", # for vertical graph labels
}
unit_info["bits/s"] = {
"title" : _("Bits per second"),
"symbol" : _("bits/s"),
"render" : lambda v: physical_precision(v, 3, _("bit/s")),
"graph_unit" : lambda v: physical_precision_list(v, 3, _("bit/s")),
}
# Output in bytes/days, value is in bytes/s
unit_info["bytes/d"] = {
"title" : _("Bytes per day"),
"symbol" : _("B/d"),
"render" : lambda v: bytes_human_readable(v * 86400.0, unit="B/d"),
"graph_unit" : lambda values: bytes_human_readable_list(
[ v * 86400.0 for v in values ], unit=_("B/d")),
"stepping" : "binary", # for vertical graph labels
}
unit_info["c"] = {
"title" : _("Degree Celsius"),
"symbol" : u"°C",
"render" : lambda v: "%s %s" % (drop_dotzero(v), u"°C"),
}
unit_info["a"] = {
"title" : _("Electrical Current (Amperage)"),
"symbol" : _("A"),
"render" : lambda v: physical_precision(v, 3, _("A")),
}
unit_info["v"] = {
"title" : _("Electrical Tension (Voltage)"),
"symbol" : _("V"),
"render" : lambda v: physical_precision(v, 3, _("V")),
}
unit_info["w"] = {
"title" : _("Electrical Power"),
"symbol" : _("W"),
"render" : lambda v: physical_precision(v, 3, _("W")),
}
unit_info["va"] = {
"title" : _("Electrical Apparent Power"),
"symbol" : _("VA"),
"render" : lambda v: physical_precision(v, 3, _("VA")),
}
unit_info["wh"] = {
"title" : _("Electrical Energy"),
"symbol" : _("Wh"),
"render" : lambda v: physical_precision(v, 3, _("Wh")),
}
unit_info["dbm"] = {
"title" : _("Decibel-milliwatts"),
"symbol" : _("dBm"),
"render" : lambda v: "%s %s" % (drop_dotzero(v), _("dBm")),
}
unit_info["dbmv"] = {
"title" : _("Decibel-millivolt"),
"symbol" : _("dBmV"),
"render" : lambda v: "%s %s" % (drop_dotzero(v), _("dBmV")),
}
unit_info["db"] = {
"title" : _("Decibel"),
"symbol" : _("dB"),
"render" : lambda v: physical_precision(v, 3, _("dB")),
}
# 'Percent obscuration per meter'-Obscuration for any atmospheric phenomenon, e.g. smoke, dust, snow
unit_info["%/m"] = {
"title" : _("Percent Per Meter"),
"symbol" : _("%/m"),
"render" : lambda v: percent_human_redable(v, 3) + _("/m"),
}
unit_info["bar"] = {
"title" : _("Bar"),
"symbol" : _("bar"),
"render" : lambda v: physical_precision(v, 4, _("bar")),
}
unit_info["pa"] = {
"title" : _("Pascal"),
"symbol" : _("Pa"),
"render" : lambda v: physical_precision(v, 3, _("Pa")),
}
unit_info["l/s"] = {
"title" : _("Liters per second"),
"symbol" : _("l/s"),
"render" : lambda v: physical_precision(v, 3, _("l/s")),
}
#.
# .--Metrics-------------------------------------------------------------.
# | __ __ _ _ |
# | | \/ | ___| |_ _ __(_) ___ ___ |
# | | |\/| |/ _ \ __| '__| |/ __/ __| |
# | | | | | __/ |_| | | | (__\__ \ |
# | |_| |_|\___|\__|_| |_|\___|___/ |
# | |
# +----------------------------------------------------------------------+
# | Definitions of metrics |
# '----------------------------------------------------------------------'
# Title are always lower case - except the first character!
# Colors:
#
# red
# magenta orange
# 11 12 13 14 15 16
# 46 21
# 45 22
# blue 44 23 yellow
# 43 24
# 42 25
# 41 26
# 36 35 34 33 32 31
# cyan yellow-green
# green
#
# Special colors:
# 51 gray
# 52 brown 1
# 53 brown 2
#
# For a new metric_info you have to choose a color. No more hex-codes are needed!
# Instead you can choose a number of the above color ring and a letter 'a' or 'b
# where 'a' represents the basic color and 'b' is a nuance/shading of the basic color.
# Both number and letter must be declared!
#
# Example:
# "color" : "23/a" (basic color yellow)
# "color" : "23/b" (nuance of color yellow)
#
# As an alternative you can call indexed_color with a color index and the maximum
# number of colors you will need to generate a color. This function tries to return
# high contrast colors for "close" indices, so the colors of idx 1 and idx 2 may
# have stronger contrast than the colors at idx 3 and idx 10.
# retrieve an indexed color.
# param idx: the color index
# param total: the total number of colors needed in one graph.
COLOR_WHEEL_SIZE = 48
def indexed_color(idx, total):
if idx < COLOR_WHEEL_SIZE:
# use colors from the color wheel if possible
base_col = (idx % 4) + 1
tone = ((idx / 4) % 6) + 1
if idx % 8 < 4:
shade = "a"
else:
shade = "b"
return "%d%d/%s" % (base_col, tone, shade)
else:
# generate distinct rgb values. these may be ugly ; also, they
# may overlap with the colors from the wheel
idx = idx - COLOR_WHEEL_SIZE
base_color = idx % 7 # red, green, blue, red+green, red+blue,
# green+blue, red+green+blue
delta = 255 / ((total - COLOR_WHEEL_SIZE) / 7)
offset = 255 - (delta * ((idx / 7) + 1))
red = int(base_color in [0, 3, 4, 6])
green = int(base_color in [1, 3, 5, 6])
blue = int(base_color in [2, 4, 5, 6])
return "#%02x%02x%02x" % (red * offset, green * offset, blue * offset)
MAX_NUMBER_HOPS = 45 # the amount of hop metrics, graphs and perfometers to create
for idx in range(0, MAX_NUMBER_HOPS):
if idx:
prefix_perf = "hop_%d_" % idx
prefix_text = "Hop %d " % idx
else:
prefix_perf = ""
prefix_text = ""
metric_info["%srta" % prefix_perf] = {
"title" : _("%sRound trip average") % prefix_text,
"unit" : "s",
"color" : "33/a"
}
metric_info["%srtmin" % prefix_perf] = {
"title" : _("%sRound trip minimum") % prefix_text,
"unit" : "s",
"color" : "42/a",
}
metric_info["%srtmax" % prefix_perf] = {
"title" : _("%sRound trip maximum") % prefix_text,
"unit" : "s",
"color" : "42/b",
}
metric_info["%srtstddev" % prefix_perf] = {
"title" : _("%sRound trip standard devation") % prefix_text,
"unit" : "s",
"color" : "16/a",
}
metric_info["%spl" % prefix_perf] = {
"title" : _("%sPacket loss") % prefix_text,
"unit" : "%",
"color" : "#ffc030",
}
metric_info["%sresponse_time" % prefix_perf] = {
"title" : _("%sResponse time") % prefix_text,
"unit" : "s",
"color" : "23/a"
}
metric_info["hops"] = {
"title" : _("Number of hops"),
"unit" : "count",
"color" : "51/a",
}
metric_info["uptime"] = {
"title" : _("Uptime"),
"unit" : "s",
"color" : "#80f000",
}
metric_info["age"] = {
"title" : _("Age"),
"unit" : "s",
"color" : "#80f000",
}
metric_info["runtime"] = {
"title" : _("Process Runtime"),
"unit" : "s",
"color" : "#80f000",
}
metric_info["lifetime_remaining"] = {
"title" : _("Lifetime remaining"),
"unit" : "s",
"color" : "#80f000",
}
metric_info["cache_hit_ratio"] = {
"title" : _("Cache hit ratio"),
"unit" : "%",
"color" : "#60c0c0",
}
metric_info["zfs_l2_hit_ratio"] = {
"title" : _("L2 cache hit ratio"),
"unit" : "%",
"color" : "46/a",
}
metric_info["prefetch_data_hit_ratio"] = {
"title" : _("Prefetch data hit ratio"),
"unit" : "%",
"color" : "41/b",
}
metric_info["prefetch_metadata_hit_ratio"] = {
"title" : _("Prefetch metadata hit ratio"),
"unit" : "%",
"color" : "43/a",
}
metric_info["zfs_metadata_used"] = {
"title" : _("Used meta data"),
"unit" : "bytes",
"color" : "31/a",
}
metric_info["zfs_metadata_max"] = {
"title" : _("Maxmimum of meta data"),
"unit" : "bytes",
"color" : "33/a",
}
metric_info["zfs_metadata_limit"] = {
"title" : _("Limit of meta data"),
"unit" : "bytes",
"color" : "36/a",
}
metric_info["zfs_l2_size"] = {
"title" : _("L2 cache size"),
"unit" : "bytes",
"color" : "31/a",
}
# database, tablespace
metric_info["database_size"] = {
"title" : _("Database size"),
"unit" : "bytes",
"color" : "16/a",
}
metric_info["data_size"] = {
"title" : _("Data size"),
"unit" : "bytes",
"color" : "25/a",
}
metric_info["unallocated_size"] = {
"title" : _("Unallocated space"),
"help" : _("Space in the database that has not been reserved for database objects"),
"unit" : "bytes",
"color" : "34/a",
}
metric_info["reserved_size"] = {
"title" : _("Reserved space"),
"help" : _("Total amount of space allocated by objects in the database"),
"unit" : "bytes",
"color" : "41/a",
}
metric_info["indexes_size"] = {
"title" : _("Index space"),
"unit" : "bytes",
"color" : "31/a",
}
metric_info["unused_size"] = {
"title" : _("Unused space"),
"help" : _("Total amount of space reserved for objects in the database, but not yed used"),
"unit" : "bytes",
"color" : "46/a",
}
metric_info["allocated_size"] = {
"title" : _("Allocated space"),
"unit" : "bytes",
"color" : "42/a"
}
metric_info["tablespace_size"] = {
"title" : _("Tablespace size"),
"unit" : "bytes",
"color" : "#092507",
}
metric_info["tablespace_used"] = {
"title" : _("Tablespace used"),
"unit" : "bytes",
"color" : "#e59d12",
}
metric_info["tablespace_max_size"] = {
"title" : _("Tablespace maximum size"),
"unit" : "bytes",
"color" : "#172121",
}
metric_info["tablespace_wasted"] = {
"title" : _("Tablespace wasted"),
"unit" : "bytes",
"color" : "#a02020",
}
metric_info["indexspace_wasted"] = {
"title" : _("Indexspace wasted"),
"unit" : "bytes",
"color" : "#20a080",
}
metric_info["mem_total"] = {
"title" : _("RAM installed"),
"color": "#f0f0f0",
"unit" : "bytes",
}
metric_info["mem_free"] = {
"title" : _("Free RAM"),
"color" : "#ffffff",
"unit" : "bytes",
}
metric_info["mem_used"] = {
"color": "#80ff40",
"title" : _("RAM used"),
"unit" : "bytes",
}
metric_info["mem_available"] = {
"color" : "21/a",
"title" : _("RAM available"),
"unit" : "bytes",
}
metric_info["pagefile_used"] = {
"color": "#408f20",
"title" : _("Commit Charge"),
"unit" : "bytes",
}
metric_info["mem_used_percent"] = {
"color": "#80ff40",
"title" : _("RAM used"),
"unit" : "%",
}
metric_info["mem_perm_used"] = {
"color": "#80ff40",
"title" : _("Permanent Generation Memory"),
"unit" : "bytes",
}
metric_info["swap_total"] = {
"title" : _("Swap installed"),
"color": "#e0e0e0",
"unit" : "bytes",
}
metric_info["swap_free"] = {
"title" : _("Free swap space"),
"unit" : "bytes",
"color" : "#eeeeee",
}
metric_info["swap_used"] = {
"title" : _("Swap used"),
"color": "#408f20",
"unit" : "bytes",
}
metric_info["swap_used_percent"] = {
"color": "#408f20",
"title" : _("Swap used"),
"unit" : "%",
}
metric_info["swap_cached"] = {
"title" : _("Swap cached"),
"color": "#5bebc9",
"unit" : "bytes",
}
metric_info["caches"] = {
"title" : _("Memory used by caches"),
"unit" : "bytes",
"color" : "51/a",
}
metric_info["mem_pages_rate"] = {
"title" : _("Memory Pages"),
"unit" : "1/s",
"color": "34/a",
}
metric_info["mem_lnx_total_used"] = {
"title" : _("Total used memory"),
"color": "#70f038",
"unit" : "bytes",
}
metric_info["mem_lnx_cached"] = {
"title" : _("File contents"),
"color": "#91cceb",
"unit" : "bytes",
}
metric_info["mem_lnx_buffers"] = {
"title" : _("Filesystem structure"),
"color": "#5bb9eb",
"unit" : "bytes",
}
metric_info["mem_lnx_slab"] = {
"title" : _("Slab (Various smaller caches)"),
"color": "#af91eb",
"unit" : "bytes",
}
metric_info["mem_lnx_sreclaimable"] = {
"title" : _("Reclaimable memory"),
"color": "23/a",
"unit" : "bytes",
}
metric_info["mem_lnx_sunreclaim"] = {
"title" : _("Unreclaimable memory"),
"color": "24/a",
"unit" : "bytes",
}
metric_info["mem_lnx_pending"] = {
"title" : _("Pending memory"),
"color": "25/a",
"unit" : "bytes",
}
metric_info["mem_lnx_unevictable"] = {
"title" : _("Unevictable memory"),
"color": "26/a",
"unit" : "bytes",
}
metric_info["mem_lnx_active"] = {
"title" : _("Active"),
"color": "#dd2020",
"unit" : "bytes",
}
metric_info["mem_lnx_anon_pages"] = {
"title" : _("Anonymous pages"),
"color": "#cc4040",
"unit" : "bytes",
}
metric_info["mem_lnx_active_anon"] = {
"title" : _("Active (anonymous)"),
"color": "#ff4040",
"unit" : "bytes",
}
metric_info["mem_lnx_active_file"] = {
"title" : _("Active (files)"),
"color": "#ff8080",
"unit" : "bytes",
}
metric_info["mem_lnx_inactive"] = {
"title" : _("Inactive"),
"color": "#275c6b",
"unit" : "bytes",
}
metric_info["mem_lnx_inactive_anon"] = {
"title" : _("Inactive (anonymous)"),
"color": "#377cab",
"unit" : "bytes",
}
metric_info["mem_lnx_inactive_file"] = {
"title" : _("Inactive (files)"),
"color": "#4eb0f2",
"unit" : "bytes",
}
metric_info["mem_lnx_active"] = {
"title" : _("Active"),
"color": "#ff4040",
"unit" : "bytes",
}
metric_info["mem_lnx_inactive"] = {
"title" : _("Inactive"),
"color": "#4040ff",
"unit" : "bytes",
}
metric_info["mem_lnx_dirty"] = {
"title" : _("Dirty disk blocks"),
"color": "#f2904e",
"unit" : "bytes",
}
metric_info["mem_lnx_writeback"] = {
"title" : _("Currently being written"),
"color": "#f2df40",
"unit" : "bytes",
}
metric_info["mem_lnx_nfs_unstable"] = {
"title" : _("Modified NFS data"),
"color": "#c6f24e",
"unit" : "bytes",
}
metric_info["mem_lnx_bounce"] = {
"title" : _("Bounce buffers"),
"color": "#4ef26c",
"unit" : "bytes",
}
metric_info["mem_lnx_writeback_tmp"] = {
"title" : _("Dirty FUSE data"),
"color": "#4eeaf2",
"unit" : "bytes",
}
metric_info["mem_lnx_total_total"] = {
"title" : _("Total virtual memory"),
"color": "#f0f0f0",
"unit" : "bytes",
}
metric_info["mem_lnx_committed_as"] = {
"title" : _("Committed memory"),
"color": "#40a080",
"unit" : "bytes",
}
metric_info["mem_lnx_commit_limit"] = {
"title" : _("Commit limit"),
"color": "#e0e0e0",
"unit" : "bytes",
}
metric_info["mem_lnx_shmem"] = {
"title" : _("Shared memory"),
"color": "#bf9111",
"unit" : "bytes",
}
metric_info["mem_lnx_kernel_stack"] = {
"title" : _("Kernel stack"),
"color": "#7192ad",
"unit" : "bytes",
}
metric_info["mem_lnx_page_tables"] = {
"title" : _("Page tables"),
"color": "#71ad9f",
"unit" : "bytes",
}
metric_info["mem_lnx_mlocked"] = {
"title" : _("Locked mmap() data"),
"color": "#a671ad",
"unit" : "bytes",
}
metric_info["mem_lnx_mapped"] = {
"title" : _("Mapped data"),
"color": "#a671ad",
"unit" : "bytes",
}
metric_info["mem_lnx_anon_huge_pages"] = {
"title" : _("Anonymous huge pages"),
"color": "#f0f0f0",
"unit" : "bytes",
}
metric_info["mem_lnx_huge_pages_total"] = {
"title" : _("Huge pages total"),
"color": "#f0f0f0",
"unit" : "bytes",
}
metric_info["mem_lnx_huge_pages_free"] = {
"title" : _("Huge pages free"),
"color": "#f0a0f0",
"unit" : "bytes",
}
metric_info["mem_lnx_huge_pages_rsvd"] = {
"title" : _("Huge pages reserved part of free"),
"color": "#40f0f0",
"unit" : "bytes",
}
metric_info["mem_lnx_huge_pages_surp"] = {
"title" : _("Huge pages surplus"),
"color": "#90f0b0",
"unit" : "bytes",
}
metric_info["mem_lnx_vmalloc_total"] = {
"title" : _("Total address space"),
"color": "#f0f0f0",
"unit" : "bytes",
}
metric_info["mem_lnx_vmalloc_used"] = {
"title" : _("Allocated space"),
"color": "#aaf76f",
"unit" : "bytes",
}
metric_info["mem_lnx_vmalloc_chunk"] = {
"title" : _("Largest free chunk"),
"color": "#c6f7e9",
"unit" : "bytes",
}
metric_info["mem_lnx_hardware_corrupted"] = {
"title" : _("Hardware corrupted memory"),
"color": "13/a",
"unit" : "bytes",
}
# Consumed Host Memory usage is defined as the amount of host memory that is allocated to the virtual machine
metric_info["mem_esx_host"] = {
"title" : _("Consumed host memory"),
"color": "#70f038",
"unit" : "bytes",
}
# Active Guest Memory is defined as the amount of guest memory that is currently being used by the guest operating system and its applications
metric_info["mem_esx_guest"] = {
"title" : _("Active guest memory"),
"color": "15/a",
"unit" : "bytes",
}
metric_info["mem_esx_ballooned"] = {
"title" : _("Ballooned memory"),
"color": "21/a",
"unit" : "bytes",
}
metric_info["mem_esx_shared"] = {
"title" : _("Shared memory"),
"color": "34/a",
"unit" : "bytes",
}
metric_info["mem_esx_private"] = {
"title" : _("Private memory"),
"color": "25/a",
"unit" : "bytes",
}
metric_info["pagefile_total"] = {
"title" : _("Pagefile installed"),
"color": "#e0e0e0",
"unit" : "bytes",
}
metric_info["load1"] = {
"title" : _("CPU load average of last minute"),
"unit" : "",
"color" : "34/c",
}
metric_info["load5"] = {
"title" : _("CPU load average of last 5 minutes"),
"unit" : "",
"color" : "#428399",
}
metric_info["load15"] = {
"title" : _("CPU load average of last 15 minutes"),
"unit" : "",
"color" : "#2c5766",
}
metric_info["context_switches"] = {
"title" : _("Context switches"),
"unit" : "1/s",
"color" : "#80ff20",
}
metric_info["major_page_faults"] = {
"title" : _("Major page faults"),
"unit" : "1/s",
"color" : "#20ff80",
}
metric_info["process_creations"] = {
"title" : _("Process creations"),
"unit" : "1/s",
"color" : "#ff8020",
}
metric_info["process_virtual_size"] = {
"title" : _("Virtual size"),
"unit" : "bytes",
"color" : "16/a",
}
metric_info["process_resident_size"] = {
"title" : _("Resident size"),
"unit" : "bytes",
"color" : "14/a",
}
metric_info["process_mapped_size"] = {
"title" : _("Mapped size"),
"unit" : "bytes",
"color" : "12/a",
}
metric_info["process_handles"] = {
"title" : _("Process handles"),
"unit" : "count",
"color" : "32/a",
}
metric_info["mem_heap"] = {
"title" : _("Heap memory usage"),
"unit" : "bytes",
"color" : "23/a",
}
metric_info["mem_heap_committed"] = {
"title" : _("Heap memory committed"),
"unit" : "bytes",
"color" : "23/b",
}
metric_info["mem_nonheap"] = {
"title" : _("Non-heap memory usage"),
"unit" : "bytes",
"color" : "16/a",
}
metric_info["mem_nonheap_committed"] = {
"title" : _("Non-heap memory committed"),
"unit" : "bytes",
"color" : "16/b",
}
metric_info["processes"] = {
"title" : _("Processes"),
"unit" : "count",
"color" : "#8040f0",
}
metric_info["threads"] = {
"title" : _("Threads"),
"unit" : "count",
"color" : "#8040f0",
}
metric_info["threads_idle"] = {
"title" : _("Idle threads"),
"unit" : "count",
"color" : "#8040f0",
}
metric_info["threads_rate"] = {
"title" : _("Thread creations per second"),
"unit" : "1/s",
"color" : "44/a",
}
metric_info["threads_daemon"] = {
"title" : _("Daemon threads"),
"unit" : "count",
"color" : "32/a",
}
metric_info["threads_max"] = {
"title" : _("Maximum number of threads"),
"help" : _("Maximum number of threads started at any given time during the JVM lifetime"),
"unit" : "count",
"color" : "35/a",
}
metric_info["threads_total"] = {
"title" : _("Number of threads"),
"unit" : "count",
"color" : "41/a",
}
metric_info["threads_busy"] = {
"title" : _("Busy threads"),
"unit" : "count",
"color" : "34/a",
}
for what, color in [ ("msg", "12"), ("rollovers", "13"), ("regular", "14"),
("warning", "15"), ("user", "16") ]:
metric_info["assert_%s" % what] = {
"title" : _("%s Asserts") % what.title(),
"unit" : "count",
"color" : "%s/a" % color,
}
metric_info["vol_context_switches"] = {
"title" : _("Voluntary context switches"),
"help" : _("A voluntary context switch occurs when a thread blocks "
"because it requires a resource that is unavailable"),
"unit" : "count",
"color" : "36/a",
}
metric_info["invol_context_switches"] = {
"title" : _("Involuntary context switches"),
"help" : _("An involuntary context switch takes place when a thread "
"executes for the duration of its time slice or when the "
"system identifies a higher-priority thread to run"),
"unit" : "count",
"color" : "45/b",
}
metric_info["tapes_total"] = {
"title" : _("Total number of tapes"),
"unit" : "count",
"color" : "#8040f0",
}
metric_info["tapes_free"] = {
"title" : _("Free tapes"),
"unit" : "count",
"color" : "#8044ff",
}
metric_info["tapes_util"] = {
"title" : _("Tape utilization"),
"unit" : "%",
"color" : "#ff8020",
}
metric_info["fs_used"] = {
"title" : _("Used filesystem space"),
"unit" : "bytes",
"color" : "#00ffc6",
}
metric_info["inodes_used"] = {
"title" : _("Used inodes"),
"unit" : "count",
"color" : "#a0608f",
}
metric_info["fs_size"] = {
"title" : _("Filesystem size"),
"unit" : "bytes",
"color" : "#006040",
}
metric_info["fs_growth"] = {
"title" : _("Filesystem growth"),
"unit" : "bytes/d",
"color" : "#29cfaa",
}
metric_info["fs_trend"] = {
"title" : _("Trend of filesystem growth"),
"unit" : "bytes/d",
"color" : "#808080",
}
metric_info["fs_provisioning"] = {
"title" : _("Provisioned filesystem space"),
"unit" : "bytes",
"color" : "#ff8000",
}
metric_info["temp"] = {
"title" : _("Temperature"),
"unit" : "c",
"color" : "16/a"
}
metric_info["cifs_share_users"] = {
"title" : _("Users using a cifs share"),
"unit" : "count",
"color" : "#60f020",
}
metric_info["smoke_ppm"] = {
"title" : _("Smoke"),
"unit" : "%/m",
"color" : "#60f088",
}
metric_info["smoke_perc"] = {
"title" : _("Smoke"),
"unit" : "%",
"color" : "#60f088",
}
metric_info["airflow"] = {
"title" : _("Air flow"),
"unit" : "l/s",
"color" : "#ff6234",
}
metric_info["fluidflow"] = {
"title" : _("Fluid flow"),
"unit" : "l/s",
"color" : "#ff6234",
}
metric_info["deviation_calibration_point"] = {
"title" : _("Deviation from calibration point"),
"unit" : "%",
"color" : "#60f020",
}
metric_info["deviation_airflow"] = {
"title" : _("Airflow deviation"),
"unit" : "%",
"color" : "#60f020",
}
metric_info["health_perc"] = {
"title" : _("Health"),
"unit" : "%",
"color" : "#ff6234",
}
# TODO: user -> cpu_util_user
metric_info["user"] = {
"title" : _("User"),
"help" : _("CPU time spent in user space"),
"unit" : "%",
"color" : "#60f020",
}
# metric_info["cpu_util_privileged"] = {
# "title" : _("Privileged"),
# "help" : _("CPU time spent in privileged mode"),
# "unit" : "%",
# "color" : "23/a",
# }
metric_info["nice"] = {
"title" : _("Nice"),
"help" : _("CPU time spent in user space for niced processes"),
"unit" : "%",
"color" : "#ff9050",
}
metric_info["interrupt"] = {
"title" : _("Interrupt"),
"unit" : "%",
"color" : "#ff9050",
}
metric_info["system"] = {
"title" : _("System"),
"help" : _("CPU time spent in kernel space"),
"unit" : "%",
"color" : "#ff6000",
}
metric_info["io_wait"] = {
"title" : _("I/O-wait"),
"help" : _("CPU time spent waiting for I/O"),
"unit" : "%",
"color" : "#00b0c0",
}
metric_info["cpu_util_guest"] = {
"title" : _("Guest operating systems"),
"help" : _("CPU time spent for executing guest operating systems"),
"unit" : "%",
"color" : "12/a",
}
metric_info["cpu_util_steal"] = {
"title" : _("Steal"),
"help" : _("CPU time stolen by other operating systems"),
"unit" : "%",
"color" : "16/a",
}
metric_info["idle"] = {
"title" : _("Idle"),
"help" : _("CPU idle time"),
"unit" : "%",
"color" : "#805022",
}
metric_info["fpga_util"] = {
"title" : _("FPGA utilization"),
"unit" : "%",
"color" : "#60f020",
}
metric_info["generic_util"] = {
"title" : _("Utilization"),
"unit" : "%",
"color" : "26/a",
}
metric_info["util"] = {
"title" : _("CPU utilization"),
"unit" : "%",
"color" : "26/a",
}
metric_info["util_average"] = {
"title" : _("CPU utilization (average)"),
"unit" : "%",
"color" : "26/b",
}
metric_info["util1s"] = {
"title" : _("CPU utilization last second"),
"unit" : "%",
"color" : "#50ff20",
}
metric_info["util5s"] = {
"title" : _("CPU utilization last five seconds"),
"unit" : "%",
"color" : "#600020",
}
metric_info["util1"] = {
"title" : _("CPU utilization last minute"),
"unit" : "%",
"color" : "#60f020",
}
metric_info["util5"] = {
"title" : _("CPU utilization last 5 minutes"),
"unit" : "%",
"color" : "#80f040",
}
metric_info["util15"] = {
"title" : _("CPU utilization last 15 minutes"),
"unit" : "%",
"color" : "#9a52bf",
}
MAX_CORES = 128
for i in range(MAX_CORES):
# generate different colors for each core.
# unfortunately there are only 24 colors on our
# color wheel, times two for two shades each, we
# can only draw 48 differently colored graphs
metric_info["cpu_core_util_%d" % i] = {
"title" : _("Utilization Core %d") % (i + 1),
"unit" : "%",
"color" : indexed_color(i, MAX_CORES),
}
metric_info["time_offset"] = {
"title" : _("Time offset"),
"unit" : "s",
"color" : "#9a52bf",
}
metric_info["jitter"] = {
"title" : _("Time dispersion (jitter)"),
"unit" : "s",
"color" : "43/b",
}
metric_info["connection_time"] = {
"title" : _("Connection time"),
"unit" : "s",
"color" : "#94b65a",
}
metric_info["infections_rate"] = {
"title" : _("Infections"),
"unit" : "1/s",
"color" : "15/a",
}
metric_info["connections_blocked_rate"] = {
"title" : _("Blocked connections"),
"unit" : "1/s",
"color" : "14/a",
}
metric_info["open_network_sockets"] = {
"title" : _("Open network sockets"),
"unit" : "count",
"color" : "21/a",
}
metric_info["connections"] = {
"title" : _("Connections"),
"unit" : "count",
"color" : "#a080b0",
}
metric_info["connections_async_writing"] = {
"title" : _("Asynchronous writing connections"),
"unit" : "count",
"color" : "16/a",
}
metric_info["connections_async_keepalive"] = {
"title" : _("Asynchronous keep alive connections"),
"unit" : "count",
"color" : "22/a",
}
metric_info["connections_async_closing"] = {
"title" : _("Asynchronous closing connections"),
"unit" : "count",
"color" : "24/a",
}
metric_info["connections_rate"] = {
"title" : _("Connections per second"),
"unit" : "1/s",
"color" : "#a080b0",
}
metric_info["connections_duration_min"] = {
"title" : _("Connections duration min"),
"unit" : "s",
"color" : "24/a"
}
metric_info["connections_duration_max"] = {
"title" : _("Connections duration max"),
"unit" : "s",
"color" : "25/a"
}
metric_info["connections_duration_mean"] = {
"title" : _("Connections duration max"),
"unit" : "s",
"color" : "25/a"
}
metric_info["packet_velocity_asic"] = {
"title" : _("Packet velocity asic"),
"unit" : "1/s",
"color" : "26/a"
}
metric_info["requests_per_second"] = {
"title" : _("Requests per second"),
"unit" : "1/s",
"color" : "#4080a0",
}
metric_info["input_signal_power_dbm"] = {
"title" : _("Input power"),
"unit" : "dbm",
"color" : "#20c080",
}
metric_info["output_signal_power_dbm"] = {
"title" : _("Output power"),
"unit" : "dbm",
"color" : "#2080c0",
}
metric_info["downstream_power"] = {
"title" : _("Downstream power"),
"unit" : "dbmv",
"color" : "14/a",
}
metric_info["current"] = {
"title" : _("Electrical current"),
"unit" : "a",
"color" : "#ffb030",
}
metric_info["differential_current_ac"] = {
"title" : _("Differential current AC"),
"unit" : "a",
"color" : "#ffb030",
}
metric_info["differential_current_dc"] = {
"title" : _("Differential current DC"),
"unit" : "a",
"color" : "#ffb030",
}
metric_info["voltage"] = {
"title" : _("Electrical voltage"),
"unit" : "v",
"color" : "14/a",
}
metric_info["power"] = {
"title" : _("Electrical power"),
"unit" : "w",
"color" : "22/a",
}
metric_info["appower"] = {
"title" : _("Electrical apparent power"),
"unit" : "va",
"color" : "22/b",
}
metric_info["energy"] = {
"title" : _("Electrical energy"),
"unit" : "wh",
"color" : "#aa80b0",
}
metric_info["output_load"] = {
"title" : _("Output load"),
"unit" : "%",
"color" : "#c83880",
}
metric_info["voltage_percent"] = {
"title" : _("Electrical tension in % of normal value"),
"unit" : "%",
"color" : "#ffc020",
}
metric_info["humidity"] = {
"title" : _("Relative humidity"),
"unit" : "%",
"color" : "#90b0b0",
}
metric_info["busy_workers"] = {
"title" : _("Busy workers"),
"unit" : "count",
"color" : "#a080b0",
}
metric_info["idle_workers"] = {
"title" : _("Idle workers"),
"unit" : "count",
"color" : "43/b",
}
metric_info["busy_servers"] = {
"title" : _("Busy servers"),
"unit" : "count",
"color" : "#a080b0",
}
metric_info["idle_servers"] = {
"title" : _("Idle servers"),
"unit" : "count",
"color" : "43/b",
}
metric_info["open_slots"] = {
"title" : _("Open slots"),
"unit" : "count",
"color" : "31/a",
}
metric_info["total_slots"] = {
"title" : _("Total slots"),
"unit" : "count",
"color" : "33/b",
}
metric_info["signal_noise"] = {
"title" : _("Signal/Noise ratio"),
"unit" : "db",
"color" : "#aadd66",
}
metric_info["noise_floor"] = {
"title" : _("Noise floor"),
"unit" : "dbm",
"color" : "11/a",
}
metric_info["codewords_corrected"] = {
"title" : _("Corrected codewords"),
"unit" : "%",
"color" : "#ff8040",
}
metric_info["codewords_uncorrectable"] = {
"title" : _("Uncorrectable codewords"),
"unit" : "%",
"color" : "#ff4020",
}
metric_info["total_sessions"] = {
"title" : _("Total sessions"),
"unit" : "count",
"color" : "#94b65a",
}
metric_info["running_sessions"] = {
"title" : _("Running sessions"),
"unit" : "count",
"color" : "42/a",
}
metric_info["rejected_sessions"] = {
"title" : _("Rejected sessions"),
"unit" : "count",
"color" : "45/a",
}
metric_info["active_sessions"] = {
"title" : _("Active sessions"),
"unit" : "count",
"color" : "11/a",
}
metric_info["inactive_sessions"] = {
"title" : _("Inactive sessions"),
"unit" : "count",
"color" : "13/a",
}
metric_info["session_rate"] = {
"title" : _("Session Rate"),
"unit" : "1/s",
"color" : "#4080a0",
}
metric_info["shared_locks"] = {
"title" : _("Shared locks"),
"unit" : "count",
"color" : "#92ec89",
}
metric_info["exclusive_locks"] = {
"title" : _("Exclusive locks"),
"unit" : "count",
"color" : "#ca5706",
}
metric_info["disk_read_throughput"] = {
"title" : _("Read throughput"),
"unit" : "bytes/s",
"color" : "#40c080",
}
metric_info["disk_write_throughput"] = {
"title" : _("Write throughput"),
"unit" : "bytes/s",
"color" : "#4080c0",
}
metric_info["disk_ios"] = {
"title" : _("Disk I/O operations"),
"unit" : "1/s",
"color" : "#60e0a0",
}
metric_info["disk_read_ios"] = {
"title" : _("Read operations"),
"unit" : "1/s",
"color" : "#60e0a0",
}
metric_info["disk_write_ios"] = {
"title" : _("Write operations"),
"unit" : "1/s",
"color" : "#60a0e0",
}
metric_info["disk_average_read_wait"] = {
"title" : _("Read wait Time"),
"unit" : "s",
"color" : "#20e8c0",
}
metric_info["disk_average_write_wait"] = {
"title" : _("Write wait time"), "unit" : "s", "color" : "#20c0e8",
}
metric_info["disk_average_wait"] = {
"title" : _("Request wait time"),
"unit" : "s",
"color" : "#4488cc",
}
metric_info["disk_average_read_request_size"] = {
"title" : _("Average read request size"),
"unit" : "bytes",
"color" : "#409c58",
}
metric_info["disk_average_write_request_size"] = {
"title" : _("Average write request size"),
"unit" : "bytes",
"color" : "#40589c",
}
metric_info["disk_average_request_size"] = {
"title" : _("Average request size"),
"unit" : "bytes",
"color" : "#4488cc",
}
metric_info["disk_latency"] = {
"title" : _("Average disk latency"),
"unit" : "s",
"color" : "#c04080",
}
metric_info["read_latency"] = {
"title" : _("Read latency"),
"unit" : "s",
"color" : "35/a",
}
metric_info["write_latency"] = {
"title" : _("Write latency"),
"unit" : "s",
"color" : "45/a",
}
metric_info["disk_queue_length"] = {
"title" : _("Average disk I/O-queue length"),
"unit" : "",
"color" : "35/a",
}
metric_info["disk_read_ql"] = {
"title" : _("Average disk read queue length"),
"unit" : "",
"color" : "45/a",
}
metric_info["disk_write_ql"] = {
"title" : _("Average disk write queue length"),
"unit" : "",
"color" : "#7060b0",
}
metric_info["disk_utilization"] = {
"title" : _("Disk utilization"),
"unit" : "%",
"color" : "#a05830",
}
metric_info["disk_capacity"] = {
"title" : _("Total disk capacity"),
"unit" : "bytes",
"color" : "12/a",
}
metric_info["disks"] = {
"title" : _("Disks"),
"unit" : "count",
"color" : "41/a",
}
metric_info["spare_disks"] = {
"title" : _("Spare disk"),
"unit" : "count",
"color" : "26/a",
}
metric_info["failed_disks"] = {
"title" : _("Failed disk"),
"unit" : "count",
"color" : "13/a",
}
metric_info["xda_hitratio"] = {
"title" : _("XDA hitratio"),
"unit" : "%",
"color" : "#0ae86d",
}
metric_info["data_hitratio"] = {
"title" : _("Data hitratio"),
"unit" : "%",
"color" : "#2828de",
}
metric_info["index_hitratio"] = {
"title" : _("Index hitratio"),
"unit" : "%",
"color" : "#dc359f",
}
metric_info["total_hitratio"] = {
"title" : _("Total hitratio"),
"unit" : "%",
"color" : "#2e282c",
}
metric_info["deadlocks"] = {
"title" : _("Deadlocks"),
"unit" : "1/s",
"color" : "#dc359f",
}
metric_info["lockwaits"] = {
"title" : _("Waitlocks"),
"unit" : "1/s",
"color" : "#2e282c",
}
metric_info["sort_overflow"] = {
"title" : _("Sort overflow"),
"unit" : "%",
"color" : "#e72121",
}
metric_info["hours_operation"] = {
"title" : _("Hours of operation"),
"unit" : "s",
"color" : "#94b65a",
}
metric_info["hours_since_service"] = {
"title" : _("Hours since service"),
"unit" : "s",
"color" : "#94b65a",
}
metric_info["execution_time"] = {
"title" : _("Execution time"),
"unit" : "s",
"color" : "#d080af",
}
metric_info["user_time"] = {
"title" : _("CPU time in user space"),
"unit" : "s",
"color" : "#60f020",
}
metric_info["system_time"] = {
"title" : _("CPU time in system space"),
"unit" : "s",
"color" : "#ff6000",
}
metric_info["children_user_time"] = {
"title" : _("Child time in user space"),
"unit" : "s",
"color" : "#aef090",
}
metric_info["children_system_time"] = {
"title" : _("Child time in system space"),
"unit" : "s",
"color" : "#ffb080",
}
metric_info["sync_latency"] = {
"title" : _("Sync latency"),
"unit" : "s",
"color" : "#ffb080",
}
metric_info["mail_latency"] = {
"title" : _("Mail latency"),
"unit" : "s",
"color" : "#ffb080",
}
metric_info["printer_queue"] = {
"title" : _("Printer queue length"),
"unit" : "count",
"color" : "#a63df2",
}
metric_info["if_in_octets"] = {
"title" : _("Input Octets"),
"unit" : "bytes/s",
"color" : "#00e060",
}
metric_info["if_in_bps"] = {
"title" : _("Input bandwidth"),
"unit" : "bits/s",
"color" : "#00e060",
}
metric_info["if_in_pkts"] = {
"title" : _("Input Packets"),
"unit" : "1/s",
"color" : "#00e060",
}
metric_info["if_out_pkts"] = {
"title" : _("Output Packets"),
"unit" : "1/s",
"color" : "#0080e0",
}
metric_info["if_out_bps"] = {
"title" : _("Output bandwidth"),
"unit" : "bits/s",
"color" : "#0080e0",
}
metric_info["if_out_octets"] = {
"title" : _("Output Octets"),
"unit" : "bytes/s",
"color" : "#0080e0",
}
metric_info["if_in_discards"] = {
"title" : _("Input Discards"),
"unit" : "1/s",
"color" : "#ff8000",
}
metric_info["if_in_errors"] = {
"title" : _("Input Errors"),
"unit" : "1/s",
"color" : "#ff0000",
}
metric_info["if_out_discards"] = {
"title" : _("Output Discards"),
"unit" : "1/s",
"color" : "#ff8080",
}
metric_info["if_out_errors"] = {
"title" : _("Output Errors"),
"unit" : "1/s",
"color" : "#ff0080",
}
metric_info["if_in_unicast"] = {
"title" : _("Input unicast packets"),
"unit" : "1/s",
"color" : "#00ffc0",
}
metric_info["if_in_non_unicast"] = {
"title" : _("Input non-unicast packets"),
"unit" : "1/s",
"color" : "#00c080",
}
metric_info["if_out_unicast"] = {
"title" : _("Output unicast packets"),
"unit" : "1/s",
"color" : "#00c0ff",
}
metric_info["if_out_unicast_octets"] = {
"title" : _("Output unicast oackets"),
"unit" : "bytes/s",
"color" : "#00c0ff",
}
metric_info["if_out_non_unicast"] = {
"title" : _("Output non-unicast packets"),
"unit" : "1/s",
"color" : "#0080c0",
}
metric_info["if_out_non_unicast_octets"] = {
"title" : _("Output non-unicast octets"),
"unit" : "bytes/s",
"color" : "#0080c0",
}
metric_info["wlan_physical_errors"] = {
"title" : "WLAN physical errors",
"unit" : "1/s",
"color" : "14/a",
}
metric_info["wlan_resets"] = {
"title" : "WLAN Reset operations",
"unit" : "1/s",
"color" : "21/a",
}
metric_info["wlan_retries"] = {
"title" : "WLAN transmission retries",
"unit" : "1/s",
"color" : "24/a",
}
metric_info["read_blocks"] = {
"title" : _("Read blocks per second"),
"unit" : "1/s",
"color" : "11/a",
}
metric_info["write_blocks"] = {
"title" : _("Write blocks per second"),
"unit" : "1/s",
"color" : "21/a",
}
metric_info["broadcast_packets"] = {
"title" : _("Broadcast packets"),
"unit" : "1/s",
"color" : "11/a",
}
metric_info["multicast_packets"] = {
"title" : _("Multicast packets"),
"unit" : "1/s",
"color" : "14/a",
}
metric_info["fc_rx_bytes"] = {
"title" : _("Input"),
"unit" : "bytes/s",
"color" : "31/a",
}
metric_info["fc_tx_bytes"] = {
"title" : _("Output"),
"unit" : "bytes/s",
"color" : "35/a",
}
metric_info["fc_rx_frames"] = {
"title" : _("Received Frames"),
"unit" : "1/s",
"color" : "31/b",
}
metric_info["fc_tx_frames"] = {
"title" : _("Transmitted Frames"),
"unit" : "1/s",
"color" : "35/b",
}
metric_info["fc_crc_errors"] = {
"title" : _("Receive CRC errors"),
"unit" : "1/s",
"color" : "21/a",
}
metric_info["fc_encouts"] = {
"title" : _("Enc-Outs"),
"unit" : "1/s",
"color" : "12/a",
}
metric_info["fc_encins"] = {
"title" : _("Enc-Ins"),
"unit" : "1/s",
"color" : "13/b",
}
metric_info["fc_bbcredit_zero"] = {
"title" : _("BBcredit zero"),
"unit" : "1/s",
"color" : "46/a",
}
metric_info["fc_c3discards"] = {
"title" : _("C3 discards"),
"unit" : "1/s",
"color" : "14/a",
}
metric_info["fc_notxcredits"] = {
"title" : _("No TX Credits"),
"unit" : "1/s",
"color" : "15/a",
}
metric_info["fc_c2c3_discards"] = {
"title" : _("C2 and c3 discards"),
"unit" : "1/s",
"color" : "15/a",
}
metric_info["fc_link_fails"] = {
"title" : _("Link failures"),
"unit" : "1/s",
"color" : "11/a",
}
metric_info["fc_sync_losses"] = {
"title" : _("Sync losses"),
"unit" : "1/s",
"color" : "12/a",
}
metric_info["fc_prim_seq_errors"] = {
"title" : _("Primitive sequence errors"),
"unit" : "1/s",
"color" : "13/a",
}
metric_info["fc_invalid_tx_words"] = {
"title" : _("Invalid TX words"),
"unit" : "1/s",
"color" : "14/a",
}
metric_info["fc_invalid_crcs"] = {
"title" : _("Invalid CRCs"),
"unit" : "1/s",
"color" : "15/a",
}
metric_info["fc_address_id_errors"] = {
"title" : _("Address ID errors"),
"unit" : "1/s",
"color" : "16/a",
}
metric_info["fc_link_resets_in"] = {
"title" : _("Link resets in"),
"unit" : "1/s",
"color" : "21/a",
}
metric_info["fc_link_resets_out"] = {
"title" : _("Link resets out"),
"unit" : "1/s",
"color" : "22/a",
}
metric_info["fc_offline_seqs_in"] = {
"title" : _("Offline sequences in"),
"unit" : "1/s",
"color" : "23/a",
}
metric_info["fc_offline_seqs_out"] = {
"title" : _("Offline sequences out"),
"unit" : "1/s",
"color" : "24/a",
}
metric_info["fc_c2_fbsy_frames"] = {
"title" : _("F_BSY frames"),
"unit" : "1/s",
"color" : "25/a",
}
metric_info["fc_c2_frjt_frames"] = {
"title" : _("F_RJT frames"),
"unit" : "1/s",
"color" : "26/a",
}
metric_info["rmon_packets_63"] = {
"title" : _("Packets of size 0-63 bytes"),
"unit" : "1/s",
"color" : "21/a",
}
metric_info["rmon_packets_127"] = {
"title" : _("Packets of size 64-127 bytes"),
"unit" : "1/s",
"color" : "24/a",
}
metric_info["rmon_packets_255"] = {
"title" : _("Packets of size 128-255 bytes"),
"unit" : "1/s",
"color" : "31/a",
}
metric_info["rmon_packets_511"] = {
"title" : _("Packets of size 256-511 bytes"),
"unit" : "1/s",
"color" : "34/a",
}
metric_info["rmon_packets_1023"] = {
"title" : _("Packets of size 512-1023 bytes"),
"unit" : "1/s",
"color" : "41/a",
}
metric_info["rmon_packets_1518"] = {
"title" : _("Packets of size 1024-1518 bytes"),
"unit" : "1/s",
"color" : "44/a",
}
metric_info["tcp_listen"] = {
"title" : _("State %s") % "LISTEN",
"unit" : "count",
"color" : "44/a",
}
metric_info["tcp_established"] = {
"title" : _("State %s") % "ESTABLISHED",
"unit" : "count",
"color" : "#00f040",
}
metric_info["tcp_syn_sent"] = {
"title" : _("State %s") % "SYN_SENT",
"unit" : "count",
"color" : "#a00000",
}
metric_info["tcp_syn_recv"] = {
"title" : _("State %s") % "SYN_RECV",
"unit" : "count",
"color" : "#ff4000",
}
metric_info["tcp_last_ack"] = {
"title" : _("State %s") % "LAST_ACK",
"unit" : "count",
"color" : "#c060ff",
}
metric_info["tcp_close_wait"] = {
"title" : _("State %s") % "CLOSE_WAIT",
"unit" : "count",
"color" : "#f000f0",
}
metric_info["tcp_time_wait"] = {
"title" : _("State %s") % "TIME_WAIT",
"unit" : "count",
"color" : "#00b0b0",
}
metric_info["tcp_closed"] = {
"title" : _("State %s") % "CLOSED",
"unit" : "count",
"color" : "#ffc000",
}
metric_info["tcp_closing"] = {
"title" : _("State %s") % "CLOSING",
"unit" : "count",
"color" : "#ffc080",
}
metric_info["tcp_fin_wait1"] = {
"title" : _("State %s") % "FIN_WAIT1",
"unit" : "count",
"color" : "#cccccc",
}
metric_info["tcp_fin_wait2"] = {
"title" : _("State %s") % "FIN_WAIT2",
"unit" : "count",
"color" : "#888888",
}
metric_info["tcp_bound"] = {
"title" : _("State %s") % "BOUND",
"unit" : "count",
"color" : "#4060a0",
}
metric_info["tcp_idle"] = {
"title" : _("State %s") % "IDLE",
"unit" : "count",
"color" : "41/a",
}
metric_info["fw_connections_active"] = {
"title" : _("Active connections"),
"unit" : "count",
"color" : "15/a",
}
metric_info["fw_connections_established"] = {
"title" : _("Established connections"),
"unit" : "count",
"color" : "41/a",
}
metric_info["fw_connections_halfopened"] = {
"title" : _("Half opened connections"),
"unit" : "count",
"color" : "16/a",
}
metric_info["fw_connections_halfclosed"] = {
"title" : _("Half closed connections"),
"unit" : "count",
"color" : "11/a",
}
metric_info["fw_connections_passthrough"] = {
"title" : _("Unoptimized connections"),
"unit" : "count",
"color" : "34/a",
}
metric_info["host_check_rate"] = {
"title" : _("Host check rate"),
"unit" : "1/s",
"color" : "52/a",
}
metric_info["monitored_hosts"] = {
"title" : _("Monitored hosts"),
"unit" : "1/s",
"color" : "52/a",
}
metric_info["hosts_active"] = {
"title" : _("Active hosts"),
"unit" : "count",
"color" : "11/a",
}
metric_info["hosts_inactive"] = {
"title" : _("Inactive hosts"),
"unit" : "count",
"color" : "16/a",
}
metric_info["hosts_degraded"] = {
"title" : _("Degraded hosts"),
"unit" : "count",
"color" : "23/a",
}
metric_info["hosts_offline"] = {
"title" : _("Offline hosts"),
"unit" : "count",
"color" : "31/a",
}
metric_info["hosts_other"] = {
"title" : _("Other hosts"),
"unit" : "count",
"color" : "41/a",
}
metric_info["service_check_rate"] = {
"title" : _("Service check rate"),
"unit" : "1/s",
"color" : "21/a",
}
metric_info["monitored_services"] = {
"title" : _("Monitored services"),
"unit" : "1/s",
"color" : "21/a",
}
metric_info["livestatus_connect_rate"] = {
"title" : _("Livestatus connects"),
"unit" : "1/s",
"color" : "#556677",
}
metric_info["livestatus_request_rate"] = {
"title" : _("Livestatus requests"),
"unit" : "1/s",
"color" : "#bbccdd",
}
metric_info["helper_usage_cmk"] = {
"title" : _("Check_MK helper usage"),
"unit" : "%",
"color" : "15/a",
}
metric_info["helper_usage_generic"] = {
"title" : _("Generic helper usage"),
"unit" : "%",
"color" : "41/a",
}
metric_info["average_latency_cmk"] = {
"title" : _("Check_MK check latency"),
"unit" : "s",
"color" : "15/a",
}
metric_info["average_latency_generic"] = {
"title" : _("Check latency"),
"unit" : "s",
"color" : "41/a",
}
metric_info["livestatus_usage"] = {
"title" : _("Livestatus usage"),
"unit" : "%",
"color" : "12/a",
}
metric_info["livestatus_overflows_rate"] = {
"title" : _("Livestatus overflows"),
"unit" : "1/s",
"color" : "16/a",
}
metric_info["num_open_events"] = {
"title" : _("Open events"),
"unit" : "count",
"color" : "12/a",
}
metric_info["average_message_rate"] = {
"title" : _("Incoming messages"),
"unit" : "1/s",
"color" : "11/a",
}
metric_info["average_rule_trie_rate"] = {
"title" : _("Rule tries"),
"unit" : "1/s",
"color" : "13/a",
}
metric_info["average_rule_hit_rate"] = {
"title" : _("Rule hits"),
"unit" : "1/s",
"color" : "21/a",
}
metric_info["average_drop_rate"] = {
"title" : _("Message drops"),
"unit" : "1/s",
"color" : "24/a",
}
metric_info["average_event_rate"] = {
"title" : _("Event creations"),
"unit" : "1/s",
"color" : "31/a",
}
metric_info["average_connect_rate"] = {
"title" : _("Status: Client connects"),
"unit" : "1/s",
"color" : "34/a",
}
metric_info["average_request_time"] = {
"title" : _("Status: Request time"),
"unit" : "s",
"color" : "41/a",
}
metric_info["average_processing_time"] = {
"title" : _("Event processing time"),
"unit" : "s",
"color" : "31/a",
}
metric_info["average_rule_hit_ratio"] = {
"title" : _("Rule hit ratio"),
"unit" : "%",
"color" : "31/a",
}
metric_info["log_message_rate"] = {
"title" : _("Log messages"),
"unit" : "1/s",
"color" : "#aa44cc",
}
metric_info["normal_updates"] = {
"title" : _("Pending normal updates"),
"unit" : "count",
"color" : "#c08030",
}
metric_info["security_updates"] = {
"title" : _("Pending security updates"),
"unit" : "count",
"color" : "#ff0030",
}
metric_info["used_dhcp_leases"] = {
"title" : _("Used DHCP leases"),
"unit" : "count",
"color" : "#60bbbb",
}
metric_info["free_dhcp_leases"] = {
"title" : _("Free DHCP leases"),
"unit" : "count",
"color" : "34/a",
}
metric_info["pending_dhcp_leases"] = {
"title" : _("Pending DHCP leases"),
"unit" : "count",
"color" : "31/a",
}
metric_info["registered_phones"] = {
"title" : _("Registered phones"),
"unit" : "count",
"color" : "#60bbbb",
}
metric_info["messages"] = {
"title" : _("Messages"),
"unit" : "count",
"color" : "#aa44cc",
}
metric_info["call_legs"] = {
"title" : _("Call legs"),
"unit" : "count",
"color" : "#60bbbb",
}
metric_info["mails_received_time"] = {
"title" : _("Received mails"),
"unit" : "s",
"color" : "31/a",
}
metric_info["mail_queue_deferred_length"] = {
"title" : _("Length of deferred mail queue"),
"unit" : "count",
"color" : "#40a0b0",
}
metric_info["mail_queue_active_length"] = {
"title" : _("Length of active mail queue"),
"unit" : "count",
"color" : "#ff6000",
}
metric_info["mail_queue_deferred_size"] = {
"title" : _("Size of deferred mail queue"),
"unit" : "bytes",
"color" : "43/a",
}
metric_info["mail_queue_active_size"] = {
"title" : _("Size of active mail queue"),
"unit" : "bytes",
"color" : "31/a",
}
metric_info["messages_inbound"] = {
"title" : _("Inbound messages"),
"unit" : "1/s",
"color" : "31/a",
}
metric_info["messages_outbound"] = {
"title" : _("Outbound messages"),
"unit" : "1/s",
"color" : "36/a",
}
metric_info["pages_total"] = {
"title" : _("Total printed pages"),
"unit" : "count",
"color" : "46/a",
}
metric_info["pages_color"] = {
"title" : _("Color"),
"unit" : "count",
"color" : "#0010f4",
}
metric_info["pages_bw"] = {
"title" : _("B/W"),
"unit" : "count",
"color" : "51/a",
}
metric_info["pages_a4"] = {
"title" : _("A4"),
"unit" : "count",
"color" : "31/a",
}
metric_info["pages_a3"] = {
"title" : _("A3"),
"unit" : "count",
"color" : "31/b",
}
metric_info["pages_color_a4"] = {
"title" : _("Color A4"),
"unit" : "count",
"color" : "41/a",
}
metric_info["pages_bw_a4"] = {
"title" : _("B/W A4"),
"unit" : "count",
"color" : "51/b",
}
metric_info["pages_color_a3"] = {
"title" : _("Color A3"),
"unit" : "count",
"color" : "44/a",
}
metric_info["pages_bw_a3"] = {
"title" : _("B/W A3"),
"unit" : "count",
"color" : "52/a",
}
metric_info["supply_toner_cyan"] = {
"title" : _("Supply toner cyan"),
"unit" : "%",
"color" : "34/a",
}
metric_info["supply_toner_magenta"] = {
"title" : _("Supply toner magenta"),
"unit" : "%",
"color" : "12/a",
}
metric_info["supply_toner_yellow"] = {
"title" : _("Supply toner yellow"),
"unit" : "%",
"color" : "23/a",
}
metric_info["supply_toner_black"] = {
"title" : _("Supply toner black"),
"unit" : "%",
"color" : "51/a",
}
metric_info["supply_toner_other"] = {
"title" : _("Supply toner"),
"unit" : "%",
"color" : "52/a",
}
metric_info["pressure"] = {
"title" : _("Pressure"),
"unit" : "bar",
"color" : "#ff6234",
}
metric_info["pressure_pa"] = {
"title" : _("Pressure"),
"unit" : "pa",
"color" : "#ff6234",
}
metric_info["licenses"] = {
"title" : _("Used licenses"),
"unit" : "count",
"color" : "#ff6234",
}
metric_info["files_open"] = {
"title" : _("Open files"),
"unit" : "count",
"color" : "#ff6234",
}
metric_info["directories"] = {
"title" : _("Directories"),
"unit" : "count",
"color" : "#202020",
}
metric_info["shared_memory_segments"] = {
"title" : _("Shared memory segments"),
"unit" : "count",
"color" : "#606060",
}
metric_info["semaphore_ids"] = {
"title" : _("IPC semaphore IDs"),
"unit" : "count",
"color" : "#404040",
}
metric_info["semaphores"] = {
"title" : _("IPC semaphores"),
"unit" : "count",
"color" : "#ff4534",
}
metric_info["backup_size"] = {
"title" : _("Backup size"),
"unit" : "bytes",
"color" : "12/a",
}
metric_info["backup_avgspeed"] = {
"title" : _("Average speed of backup"),
"unit" : "bytes/s",
"color" : "22/a",
}
metric_info["backup_duration"] = {
"title" : _("Duration of backup"),
"unit" : "s",
"color" : "33/a",
}
metric_info["job_duration"] = {
"title" : _("Job duration"),
"unit" : "s",
"color" : "33/a",
}
metric_info["backup_age_database"] = {
"title" : _("Age of last database backup"),
"unit" : "s",
"color" : "11/a",
}
metric_info["backup_age_database_diff"] = {
"title" : _("Age of last differential database backup"),
"unit" : "s",
"color" : "14/a",
}
metric_info["backup_age_log"] = {
"title" : _("Age of last log backup"),
"unit" : "s",
"color" : "21/a",
}
metric_info["backup_age_file_or_filegroup"] = {
"title" : _("Age of last file or filegroup backup"),
"unit" : "s",
"color" : "24/a",
}
metric_info["backup_age_file_diff"] = {
"title" : _("Age of last differential file backup"),
"unit" : "s",
"color" : "31/a",
}
metric_info["backup_age_partial"] = {
"title" : _("Age of last partial backup"),
"unit" : "s",
"color" : "34/a",
}
metric_info["backup_age_differential_partial"] = {
"title" : _("Age of last differential partial backup"),
"unit" : "s",
"color" : "41/a",
}
metric_info["backup_age"] = {
"title" : _("Time since last backup"),
"unit" : "s",
"color" : "34/a",
}
metric_info["checkpoint_age"] = {
"title" : _("Time since last checkpoint"),
"unit" : "s",
"color" : "#006040",
}
metric_info["backup_age"] = {
"title" : _("Time since last backup"),
"unit" : "s",
"color" : "34/a",
}
metric_info["logswitches_last_hour"] = {
"title" : _("Log switches in the last 60 minutes"),
"unit" : "count",
"color" : "#006040",
}
metric_info["database_apply_lag"] = {
"title" : _("Database apply lag"),
"help" : _("Amount of time that the application of redo data on the standby database lags behind the primary database"),
"unit" : "s",
"color" : "#006040",
}
metric_info["direct_io"] = {
"title" : _("Direct I/O"),
"unit" : "bytes/s",
"color" : "#006040",
}
metric_info["buffered_io"] = {
"title" : _("Buffered I/O"),
"unit" : "bytes/s",
"color" : "#006040",
}
metric_info["write_cache_usage"] = {
"title" : _("Write cache usage"),
"unit" : "%",
"color" : "#030303",
}
metric_info["total_cache_usage"] = {
"title" : _("Total cache usage"),
"unit" : "%",
"color" : "#0ae86d",
}
# TODO: "title" darf nicht mehr info enthalten als die ID
# TODO: Was heißt Garbage collection? Dauer? Anzahl pro Zeit?
# Größe in MB???
metric_info["gc_reclaimed_redundant_memory_areas"] = {
"title" : _("Reclaimed redundant memory areas"),
"help" : _("The garbage collector attempts to reclaim garbage, or memory occupied by objects that are no longer in use by a program"),
"unit" : "count",
"color" : "31/a",
}
# TODO: ? GCs/sec? Or avg time? Or what?
metric_info["gc_reclaimed_redundant_memory_areas_rate"] = {
"title" : _("Reclaiming redundant memory areas"),
"unit" : "1/s",
"color" : "32/a",
}
metric_info["net_data_recv"] = {
"title" : _("Net data received"),
"unit" : "bytes/s",
"color" : "41/b",
}
metric_info["net_data_sent"] = {
"title" : _("Net data sent"),
"unit" : "bytes/s",
"color" : "42/a",
}
for ty, unit in [ ("requests", "1/s"), ("bytes", "bytes/s"), ("secs", "1/s") ]:
metric_info[ty + "_cmk_views"] = {
"title" : _("Check_MK: Views"),
"unit" : unit,
"color" : "#ff8080",
}
metric_info[ty + "_cmk_wato"] = {
"title" : _("Check_MK: WATO"),
"unit" : unit,
"color" : "#377cab",
}
metric_info[ty + "_cmk_bi"] = {
"title" : _("Check_MK: BI"),
"unit" : unit,
"color" : "#4eb0f2",
}
metric_info[ty + "_cmk_snapins"] = {
"title" : _("Check_MK: Snapins"),
"unit" : unit,
"color" : "#ff4040",
}
metric_info[ty + "_cmk_dashboards"] = {
"title" : _("Check_MK: Dashboards"),
"unit" : unit,
"color" : "#4040ff",
}
metric_info[ty + "_cmk_other"] = {
"title" : _("Check_MK: Other"),
"unit" : unit,
"color" : "#5bb9eb",
}
metric_info[ty + "_nagvis_snapin"] = {
"title" : _("NagVis: Snapin"),
"unit" : unit,
"color" : "#f2904e",
}
metric_info[ty + "_nagvis_ajax"] = {
"title" : _("NagVis: AJAX"),
"unit" : unit,
"color" : "#af91eb",
}
metric_info[ty + "_nagvis_other"] = {
"title" : _("NagVis: Other"),
"unit" : unit,
"color" : "#f2df40",
}
metric_info[ty + "_images"] = {
"title" : _("Image"),
"unit" : unit,
"color" : "#91cceb",
}
metric_info[ty + "_styles"] = {
"title" : _("Styles"),
"unit" : unit,
"color" : "#c6f24e",
}
metric_info[ty + "_scripts"] = {
"title" : _("Scripts"),
"unit" : unit,
"color" : "#4ef26c",
}
metric_info[ty + "_other"] = {
"title" : _("Other"),
"unit" : unit,
"color" : "#4eeaf2",
}
metric_info["total_modems"] = {
"title" : _("Total number of modems"),
"unit" : "count",
"color" : "12/c",
}
metric_info["active_modems"] = {
"title" : _("Active modems"),
"unit" : "count",
"color" : "14/c",
}
metric_info["registered_modems"] = {
"title" : _("Registered modems"),
"unit" : "count",
"color" : "16/c",
}
metric_info["registered_desktops"] = {
"title" : _("Registered desktops"),
"unit" : "count",
"color" : "16/d",
}
metric_info["channel_utilization"] = {
"title" : _("Channel utilization"),
"unit" : "%",
"color" : "24/c",
}
metric_info["frequency"] = {
"title" : _("Frequency"),
"unit" : "hz",
"color" : "11/c",
}
metric_info["battery_capacity"] = {
"title" : _("Battery capacity"),
"unit" : "%",
"color" : "11/c",
}
metric_info["battery_current"] = {
"title" : _("Battery electrical current"),
"unit" : "a",
"color" : "15/a",
}
metric_info["battery_temp"] = {
"title" : _("Battery temperature"),
"unit" : "c",
"color" : "#ffb030",
}
metric_info["connector_outlets"] = {
"title" : _("Connector outlets"),
"unit" : "count",
"color" : "51/a",
}
metric_info["qos_dropped_bytes_rate"] = {
"title" : _("QoS dropped bits"),
"unit" : "bits/s",
"color" : "41/a",
}
metric_info["qos_outbound_bytes_rate"] = {
"title" : _("QoS outbound bits"),
"unit" : "bits/s",
"color" : "26/a",
}
metric_info["apache_state_startingup"] = {
"title" : _("Starting up"),
"unit" : "count",
"color" : "11/a",
}
metric_info["apache_state_waiting"] = {
"title" : _("Waiting"),
"unit" : "count",
"color" : "14/a",
}
metric_info["apache_state_logging"] = {
"title" : _("Logging"),
"unit" : "count",
"color" : "21/a",
}
metric_info["apache_state_dns"] = {
"title" : _("DNS lookup"),
"unit" : "count",
"color" : "24/a",
}
metric_info["apache_state_sending_reply"] = {
"title" : _("Sending reply"),
"unit" : "count",
"color" : "31/a",
}
metric_info["apache_state_reading_request"] = {
"title" : _("Reading request"),
"unit" : "count",
"color" : "34/a",
}
metric_info["apache_state_closing"] = {
"title" : _("Closing connection"),
"unit" : "count",
"color" : "41/a",
}
metric_info["apache_state_idle_cleanup"] = {
"title" : _("Idle clean up of worker"),
"unit" : "count",
"color" : "44/a",
}
metric_info["apache_state_finishing"] = {
"title" : _("Gracefully finishing"),
"unit" : "count",
"color" : "46/b",
}
metric_info["apache_state_keep_alive"] = {
"title" : _("Keepalive"),
"unit" : "count",
"color" : "53/b",
}
metric_info["http_bandwidth"] = {
"title" : _("HTTP bandwidth"),
"unit" : "bytes/s",
"color" : "53/b",
}
metric_info["time_connect"] = {
"title" : _("Time to connect"),
"unit" : "s",
"color" : "11/a",
}
metric_info["time_ssl"] = {
"title" : _("Time to negotiate SSL"),
"unit" : "s",
"color" : "13/a",
}
metric_info["time_headers"] = {
"title" : _("Time to send request"),
"unit" : "s",
"color" : "15/a",
}
metric_info["time_firstbyte"] = {
"title" : _("Time to receive start of response"),
"unit" : "s",
"color" : "26/a",
}
metric_info["time_transfer"] = {
"title" : _("Time to receive full response"),
"unit" : "s",
"color" : "41/a",
}
for volume_info in [ "NFS", "NFSv4", "CIFS", "SAN", "FCP", "ISCSI" ]:
for what, unit in [ ("data", "bytes"), ("latency", "s"), ("ios", "1/s") ]:
volume = volume_info.lower()
metric_info["%s_read_%s" % (volume, what)] = {
"title" : _( "%s read %s") % (volume_info, what),
"unit" : unit,
"color" : "31/a",
}
metric_info["%s_write_%s" % (volume, what)] = {
"title" : _( "%s write %s") % (volume_info, what),
"unit" : unit,
"color" : "44/a",
}
metric_info["nfsv4_1_ios"] = {
"title" : _( "NFSv4.1 operations"),
"unit" : "1/s",
"color" : "31/a",
}
metric_info["harddrive_power_cycles"] = {
"title" : _("Harddrive power cycles"),
"unit" : "count",
"color" : "11/a",
}
metric_info["harddrive_reallocated_sectors"] = {
"title" : _("Harddrive reallocated sectors"),
"unit" : "count",
"color" : "14/a",
}
metric_info["harddrive_reallocated_events"] = {
"title" : _("Harddrive reallocated events"),
"unit" : "count",
"color" : "21/a",
}
metric_info["harddrive_spin_retries"] = {
"title" : _("Harddrive spin retries"),
"unit" : "count",
"color" : "24/a",
}
metric_info["harddrive_pending_sectors"] = {
"title" : _("Harddrive pending sectors"),
"unit" : "count",
"color" : "31/a",
}
metric_info["harddrive_cmd_timeouts"] = {
"title" : _("Harddrive command timeouts"),
"unit" : "count",
"color" : "34/a",
}
metric_info["harddrive_end_to_end_errors"] = {
"title" : _("Harddrive end-to-end errors"),
"unit" : "count",
"color" : "41/a",
}
metric_info["harddrive_uncorrectable_erros"] = {
"title" : _("Harddrive uncorrectable errors"),
"unit" : "count",
"color" : "44/a",
}
metric_info["harddrive_udma_crc_errors"] = {
"title" : _("Harddrive UDMA CRC errors"),
"unit" : "count",
"color" : "46/a",
}
metric_info["ap_devices_total"] = {
"title" : _("Total devices"),
"unit" : "count",
"color" : "51/a"
}
metric_info["ap_devices_drifted"] = {
"title" : _("Time drifted devices"),
"unit" : "count",
"color" : "23/a"
}
metric_info["ap_devices_not_responding"] = {
"title" : _("Not responding devices"),
"unit" : "count",
"color" : "14/a"
}
metric_info["request_rate"] = {
"title" : _("Request rate"),
"unit" : "1/s",
"color" : "34/a",
}
metric_info["error_rate"] = {
"title" : _("Error rate"),
"unit" : "1/s",
"color" : "14/a",
}
metric_info["citrix_load"] = {
"title" : _("Citrix Load"),
"unit" : "%",
"color" : "34/a",
}
metric_info["storage_processor_util"] = {
"title" : _("Storage Processor Utilization"),
"unit" : "%",
"color" : "34/a",
}
metric_info["storage_used"] = {
"title" : _("Storage space used"),
"unit" : "bytes",
"color" : "36/a",
}
metric_info["managed_object_count"] = {
"title" : _("Managed Objects"),
"unit" : "count",
"color" : "45/a"
}
metric_info["active_vpn_tunnels"] = {
"title" : _("Active VPN Tunnels"),
"unit" : "count",
"color" : "43/a"
}
metric_info["active_vpn_users"] = {
"title" : _("Active VPN Users"),
"unit" : "count",
"color" : "23/a"
}
metric_info["active_vpn_websessions"] = {
"title" : _("Active VPN Web Sessions"),
"unit" : "count",
"color" : "33/a"
}
metric_info["o2_percentage"] = {
"title" : _("Current O2 percentage"),
"unit" : "%",
"color" : "42/a"
}
metric_info["current_users"] = {
"title" : _("Current Users"),
"unit" : "count",
"color" : "23/a"
}
metric_info["average_latency"] = {
"title" : _("Average Latency"),
"unit" : "s",
"color" : "35/a"
}
metric_info["time_in_GC"] = {
"title" : _("Time spent in GC"),
"unit" : "%",
"color" : "16/a"
}
metric_info["db_read_latency"] = {
"title" : _("Read latency"),
"unit" : "s",
"color" : "35/a",
}
metric_info["db_read_recovery_latency"] = {
"title" : _("Read recovery latency"),
"unit" : "s",
"color" : "31/a",
}
metric_info["db_write_latency"] = {
"title" : _("Write latency"),
"unit" : "s",
"color" : "45/a",
}
metric_info["db_log_latency"] = {
"title" : _("Log latency"),
"unit" : "s",
"color" : "25/a",
}
metric_info["total_active_sessions"] = {
"title" : _("Total Active Sessions"),
"unit" : "count",
"color" : "#888888",
}
metric_info["tcp_active_sessions"] = {
"title" : _("Active TCP Sessions"),
"unit" : "count",
"color" : "#888800",
}
metric_info["udp_active_sessions"] = {
"title" : _("Active UDP sessions"),
"unit" : "count",
"color" : "#880088",
}
metric_info["icmp_active_sessions"] = {
"title" : _("Active ICMP Sessions"),
"unit" : "count",
"color" : "#008888"
}
metric_info["sslproxy_active_sessions"] = {
"title" : _("Active SSL Proxy sessions"),
"unit" : "count",
"color" : "#11FF11",
}
for what, descr, color in [
("busy", "too many", "11/a"),
("unhealthy", "not attempted", "13/a"),
("req", "requests", "15/a"),
("recycle", "recycles", "21/a"),
("retry", "retry", "23/a"),
("fail", "failures", "25/a"),
("toolate", "was closed", "31/a"),
("conn", "success", "33/a"),
("reuse", "reuses", "35/a")
]:
metric_info_key = "varnish_backend_%s_rate" % what
metric_info[metric_info_key] = {
"title" : _("Backend Conn. %s") % descr,
"unit" : "1/s",
"color" : color,
}
for what, descr, color in [
("hit", "hits", "11/a"),
("miss", "misses", "13/a"),
("hitpass", "hits for pass", "21/a")
]:
metric_info_key = "varnish_cache_%s_rate" % what
metric_info[metric_info_key] = {
"title" : _("Cache %s") % descr,
"unit" : "1/s",
"color" : color,
}
for what, descr, color in [
("drop", _("Connections dropped"), "12/a"),
("req", _("Client requests received"), "22/a"),
("conn", _("Client connections accepted"), "32/a"),
("drop_late", _("Connection dropped late"), "42/a"),
]:
metric_info_key = "varnish_client_%s_rate" % what
metric_info[metric_info_key] = {
"title" : descr,
"unit" : "1/s",
"color" : color,
}
for what, descr, color in [
("oldhttp", _("Fetch pre HTTP/1.1 closed"), "11/a"),
("head", _("Fetch head"), "13/a"),
("eof", _("Fetch EOF"), "15/a"),
("zero", _("Fetch zero length"), "21/a"),
("304", _("Fetch no body (304)"), "23/a"),
("1xx", _("Fetch no body (1xx)"), "25/a"),
("204", _("Fetch no body (204)"), "31/a"),
("length", _("Fetch with length"), "33/a"),
("failed", _("Fetch failed"), "35/a"),
("bad", _("Fetch had bad headers"), "41/a"),
("close", _("Fetch wanted close"), "43/a"),
("chunked", _("Fetch chunked"), "45/a"),
]:
metric_info_key = "varnish_fetch_%s_rate" % what
metric_info[metric_info_key] = {
"title" : descr,
"unit" : "1/s",
"color" : color,
}
for what, descr, color in [
("expired", _("Expired objects"), "21/a"),
("lru_nuked", _("LRU nuked objects"), "31/a"),
("lru_moved", _("LRU moved objects"), "41/a"),
]:
metric_info_key = "varnish_objects_%s_rate" % what
metric_info[metric_info_key] = {
"title" : descr,
"unit" : "1/s",
"color" : color,
}
for what, descr, color in [
("", _("Worker threads"), "11/a"),
("_lqueue", _("Work request queue length"), "13/a"),
("_create", _("Worker threads created"), "15/a"),
("_drop", _("Dropped work requests"), "21/a"),
("_failed", _("Worker threads not created"), "23/a"),
("_queued", _("Queued work requests"), "25/a"),
("_max", _("Worker threads limited"), "31/a"),
]:
metric_info_key = "varnish_worker%s_rate" % what
metric_info[metric_info_key] = {
"title" : descr,
"unit" : "1/s",
"color" : color,
}
# ESI = Edge Side Includes
metric_info["varnish_esi_errors_rate"] = {
"title" : _("ESI Errors"),
"unit" : "1/s",
"color" : "13/a",
}
metric_info["varnish_esi_warnings_rate"] = {
"title" : _("ESI Warnings"),
"unit" : "1/s",
"color" : "21/a",
}
metric_info["varnish_backend_success_ratio"] = {
"title" : _("Varnish Backend success ratio"),
"unit" : "%",
"color" : "#60c0c0",
}
metric_info["varnish_worker_thread_ratio"] = {
"title" : _("Varnish Worker thread ratio"),
"unit" : "%",
"color" : "#60c0c0",
}
metric_info["rx_light"] = {
"title" : _("RX Signal Power"),
"unit" : "dbm",
"color" : "35/a"
}
metric_info["tx_light"] = {
"title" : _("TX Signal Power"),
"unit" : "dbm",
"color" : "15/a"
}
for i in range(10):
metric_info["rx_light_%d" % i] = {
"title" : _("RX Signal Power Lane %d") % (i + 1),
"unit" : "dbm",
"color" : "35/b",
}
metric_info["tx_light_%d" % i] = {
"title" : _("TX Signal Power Lane %d") % (i + 1),
"unit" : "dbm",
"color" : "15/b",
}
metric_info["port_temp_%d" % i] = {
"title" : _("Temperature Lane %d") % (i + 1),
"unit" : "dbm",
"color" : indexed_color(i * 3 + 2, 30),
}
metric_info["locks_per_batch"] = {
"title" : _("Locks/Batch"),
"unit" : "",
"color" : "21/a"
}
metric_info["page_reads_sec"] = {
"title" : _("Page Reads"),
"unit" : "1/s",
"color" : "33/b"
}
metric_info["page_writes_sec"] = {
"title" : _("Page Writes"),
"unit" : "1/s",
"color" : "14/a"
}
metric_info["page_lookups_sec"] = {
"title" : _("Page Lookups"),
"unit" : "1/s",
"color" : "42/a"
}
metric_info["failed_search"] = {
"title" : _("Failed search requests"),
"unit": "1/s",
"color": "42/a"
}
metric_info["failed_location"] = {
"title" : _("Failed Get Locations Requests"),
"unit": "1/s",
"color": "42/a"
}
metric_info["failed_ad"] = {
"title" : _("Timed out Active Directory Requests"),
"unit": "1/s",
"color": "42/a"
}
metric_info["http_5xx"] = {
"title" : _("HTTP 5xx Responses"),
"unit": "1/s",
"color": "42/a"
}
metric_info["message_processing_time"] = {
"title" : _("Average Incoming Message Processing Time"),
"unit": "s",
"color": "42/a"
}
metric_info["asp_requests_rejected"] = {
"title" : _("ASP Requests Rejected"),
"unit": "1/s",
"color": "42/a"
}
metric_info["failed_file_requests"] = {
"title" : _("Failed File Requests"),
"unit": "1/s",
"color": "42/a"
}
metric_info["join_failures"] = {
"title" : _("Join Launcher Service Failures"),
"unit": "count",
"color": "42/a"
}
metric_info["failed_validate_cert"] = {
"title" : _("Failed validate cert calls"),
"unit": "count",
"color": "42/a"
}
metric_info["incoming_responses_dropped"] = {
"title" : _("Incoming Responses Dropped"),
"unit": "1/s",
"color": "42/a"
}
metric_info["incoming_requests_dropped"] = {
"title" : _("Incoming Requests Dropped"),
"unit": "1/s",
"color": "42/a"
}
metric_info["queue_latency"] = {
"title" : _("Queue Latency"),
"unit": "s",
"color": "42/a"
}
metric_info["sproc_latency"] = {
"title" : _("Sproc Latency"),
"unit": "s",
"color": "42/a"
}
metric_info["throttled_requests"] = {
"title" : _("Throttled requests"),
"unit": "1/s",
"color": "42/a"
}
metric_info["503_responses"] = {
"title" : _("Local 503 Responses"),
"unit": "1/s",
"color": "42/a"
}
metric_info["incoming_messages_timed_out"] = {
"title" : _("Incoming Messages Timed out"),
"unit": "count",
"color": "42/a"
}
metric_info["incomplete_calls"] = {
"title" : _("Incomplete Calls"),
"unit": "1/s",
"color": "42/a"
}
metric_info["create_conference_latency"] = {
"title" : _("Create Conference Latency"),
"unit": "s",
"color": "42/a"
}
metric_info["allocation_latency"] = {
"title" : _("Allocation Latency"),
"unit": "s",
"color": "42/a"
}
metric_info["avg_holding_time_incoming_messages"] = {
"title" : _("Average Holding Time For Incoming Messages"),
"unit": "s",
"color": "42/a"
}
metric_info["flow_controlled_connections"] = {
"title" : _("Flow-controlled Connections"),
"unit": "count",
"color": "42/a"
}
metric_info["avg_outgoing_queue_delay"] = {
"title" : _("Average Outgoing Queue Delay"),
"unit": "s",
"color": "42/a"
}
metric_info["sends_timed_out"] = {
"title" : _("Sends Timed-Out"),
"unit": "1/s",
"color": "42/a"
}
metric_info["authentication_errors"] = {
"title" : _("Authentication Errors"),
"unit": "1/s",
"color": "42/a"
}
metric_info["load_call_failure_index"] = {
"title" : _("Load Call Failure Index"),
"unit": "count",
"color": "42/a"
}
metric_info["failed_calls_because_of_proxy"] = {
"title" : _("Failed calls caused by unexpected interaction from proxy"),
"unit": "count",
"color": "42/a"
}
metric_info["failed_calls_because_of_gateway"] = {
"title" : _("Failed calls caused by unexpected interaction from gateway"),
"unit": "count",
"color": "42/a"
}
metric_info["media_connectivity_failure"] = {
"title" : _("Media Connectivity Check Failure"),
"unit": "count",
"color": "42/a"
}
metric_info["failed_requests"] = {
"title" : _("Bad Requests Received"),
"unit": "count",
"color": "42/a"
}
metric_info["udp_failed_auth"] = {
"title" : _("UDP Authentication Failures"),
"unit": "1/s",
"color": "42/a"
}
metric_info["tcp_failed_auth"] = {
"title" : _("TCP Authentication Failures"),
"unit": "1/s",
"color": "42/a"
}
metric_info["udp_allocate_requests_exceeding_port_limit"] = {
"title" : _("UDP Allocate Requests Exceeding Port Limit"),
"unit": "1/s",
"color": "42/a"
}
metric_info["tcp_allocate_requests_exceeding_port_limit"] = {
"title" : _("TCP Allocate Requests Exceeding Port Limit"),
"unit": "1/s",
"color": "42/a"
}
metric_info["udp_packets_dropped"] = {
"title" : _("UDP Packets Dropped"),
"unit": "1/s",
"color": "42/a"
}
metric_info["tcp_packets_dropped"] = {
"title" : _("TCP Packets Dropped"),
"unit": "1/s",
"color": "42/a"
}
metric_info["connections_throttled"] = {
"title": _("Throttled Server Connections"),
"unit": "count",
"color": "42/a"
}
metric_info["failed_outbound_streams"] = {
"title": _("Failed outbound stream establishes"),
"unit": "1/s",
"color" : "26/a",
}
metric_info["failed_inbound_streams"] = {
"title": _("Failed inbound stream establishes"),
"unit": "1/s",
"color" : "31/a",
}
skype_mobile_devices = [("android", "Android", "33/a"),
("iphone", "iPhone", "42/a"),
("ipad", "iPad", "45/a"),
("mac", "Mac", "23/a")]
for device, name, color in skype_mobile_devices:
metric_info["active_sessions_%s" % device] = {
"title" : _("Active Sessions (%s)") % name,
"unit": "count",
"color": color
}
metric_info["requests_processing"] = {
"title" : _("Requests in Processing"),
"unit": "count",
"color": "12/a"
}
for what, descr, unit, color in [
("db_cpu", "DB CPU time", "1/s", "11/a"),
("db_time", "DB time", "1/s", "15/a"),
("buffer_hit_ratio", "buffer hit ratio", "%", "21/a"),
("physical_reads", "physical reads", "1/s", "43/b"),
("physical_writes", "physical writes", "1/s", "26/a"),
("db_block_gets", "block gets", "1/s", "13/a"),
("db_block_change", "block change", "1/s", "15/a"),
("consistent_gets", "consistent gets", "1/s", "23/a"),
("free_buffer_wait", "free buffer wait", "1/s", "25/a"),
("buffer_busy_wait", "buffer busy wait", "1/s", "41/a"),
("library_cache_hit_ratio", "library cache hit ratio", "%", "21/b"),
("pins_sum", "pins sum", "1/s", "41/a"),
("pin_hits_sum", "pin hits sum", "1/s", "46/a")]:
metric_info["oracle_%s" % what] = {
"title" : _("ORACLE %s") % descr,
"unit" : unit,
"color" : color,
}
metric_info["dhcp_requests"] = {
"title" : _("DHCP received requests"),
"unit" : "count",
"color" : "14/a",
}
metric_info["dhcp_releases"] = {
"title" : _("DHCP received releases"),
"unit" : "count",
"color" : "21/a",
}
metric_info["dhcp_declines"] = {
"title" : _("DHCP received declines"),
"unit" : "count",
"color" : "24/a",
}
metric_info["dhcp_informs"] = {
"title" : _("DHCP received informs"),
"unit" : "count",
"color" : "31/a",
}
metric_info["dhcp_others"] = {
"title" : _("DHCP received other messages"),
"unit" : "count",
"color" : "34/a",
}
metric_info["dhcp_offers"] = {
"title" : _("DHCP sent offers"),
"unit" : "count",
"color" : "12/a",
}
metric_info["dhcp_acks"] = {
"title" : _("DHCP sent acks"),
"unit" : "count",
"color" : "15/a",
}
metric_info["dhcp_nacks"] = {
"title" : _("DHCP sent nacks"),
"unit" : "count",
"color" : "22/b",
}
metric_info["dns_successes"] = {
"title" : _("DNS successful responses"),
"unit" : "count",
"color" : "11/a",
}
metric_info["dns_referrals"] = {
"title" : _("DNS referrals"),
"unit" : "count",
"color" : "14/a",
}
metric_info["dns_recursion"] = {
"title" : _("DNS queries received using recursion"),
"unit" : "count",
"color" : "21/a",
}
metric_info["dns_failures"] = {
"title" : _("DNS failed queries"),
"unit" : "count",
"color" : "24/a",
}
metric_info["dns_nxrrset"] = {
"title" : _("DNS queries received for non-existent record"),
"unit" : "count",
"color" : "31/a",
}
metric_info["dns_nxdomain"] = {
"title" : _("DNS queries received for non-existent domain"),
"unit" : "count",
"color" : "34/a",
}
metric_info["filehandler_perc"] = {
"title" : _("Used file handles"),
"unit" : "%",
"color" : "#4800ff",
}
#.
# .--Checks--------------------------------------------------------------.
# | ____ _ _ |
# | / ___| |__ ___ ___| | _____ |
# | | | | '_ \ / _ \/ __| |/ / __| |
# | | |___| | | | __/ (__| <\__ \ |
# | \____|_| |_|\___|\___|_|\_\___/ |
# | |
# +----------------------------------------------------------------------+
# | How various checks' performance data translate into the known |
# | metrics |
# '----------------------------------------------------------------------'
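# A rough sketch of how these translation entries are assumed to be applied to
# a check's raw performance data (the actual lookup logic lives elsewhere in
# this module):
# - "name" renames a performance variable to one of the metric_info keys above,
# - "scale" multiplies the raw value into the metric's base unit
#   (e.g. "scale" : m for milliseconds -> seconds, "scale" : MB for MB -> bytes),
# - "auto_graph" : False keeps a value out of the automatically built graphs,
# - keys starting with "~" are treated as regular expressions matched against
#   the performance variable names.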
check_metrics["check_mk_active-icmp"] = {
"rta" : { "scale" : m },
"rtmax" : { "scale" : m },
"rtmin" : { "scale" : m },
}
check_metrics["check-mk-host-ping"] = {
"rta" : { "scale" : m },
"rtmax" : { "scale" : m },
"rtmin" : { "scale" : m },
}
check_metrics["check-mk-ping"] = {
"rta" : { "scale" : m },
"rtmax" : { "scale" : m },
"rtmin" : { "scale" : m },
}
check_metrics["check-mk-host-ping-cluster"] = {
"~.*rta" : { "name" : "rta", "scale": m },
"~.*pl" : { "name" : "pl", "scale": m },
"~.*rtmax" : { "name" : "rtmax", "scale": m },
"~.*rtmin" : { "name" : "rtmin", "scale": m },
}
check_metrics["check_mk_active-mail_loop"] = {
"duration" : { "name": "mails_received_time" }
}
check_metrics["check_mk_active-http"] = {
"time" : { "name": "response_time" },
"size" : { "name": "http_bandwidth" },
}
check_metrics["check_mk_active-tcp"] = {
"time" : { "name": "response_time" }
}
check_metrics["check-mk-host-tcp"] = {
"time" : { "name": "response_time" }
}
check_metrics["check_mk-netapp_api_volumes"] = {
"nfs_read_latency" : { "scale" : m },
"nfs_write_latency" : { "scale" : m },
"cifs_read_latency" : { "scale" : m },
"cifs_write_latency" : { "scale" : m },
"san_read_latency" : { "scale" : m },
"san_write_latency" : { "scale" : m },
"fcp_read_latency" : { "scale" : m },
"fcp_write_latency" : { "scale" : m },
"iscsi_read_latency" : { "scale" : m },
"iscsi_write_latency" : { "scale" : m },
}
check_metrics["check_mk_active-tcp"] = {
"time" : { "name": "response_time" }
}
check_metrics["check_mk-citrix_serverload"] = {
"perf" : { "name" : "citrix_load", "scale" : 0.01 }
}
check_metrics["check_mk-postfix_mailq"] = {
"length" : { "name" : "mail_queue_deferred_length" },
"size" : { "name" : "mail_queue_deferred_size" },
"~mail_queue_.*_size" : { "name" : "mail_queue_active_size" },
"~mail_queue_.*_length" : { "name" : "mail_queue_active_length" },
}
check_metrics["check-mk-host-tcp"] = {
"time" : { "name": "response_time" }
}
check_metrics["check_mk-jolokia_metrics.gc"] = {
"CollectionCount" : { "name" : "gc_reclaimed_redundant_memory_areas" },
"CollectionTime" : { "name" : "gc_reclaimed_redundant_memory_areas_rate", "scale" : 1 / 60.0 },
}
check_metrics["check_mk-rmon_stats"] = {
"0-63b" : { "name" : "rmon_packets_63" },
"64-127b" : { "name" : "rmon_packets_127" },
"128-255b" : { "name" : "rmon_packets_255" },
"256-511b" : { "name" : "rmon_packets_511" },
"512-1023b" : { "name" : "rmon_packets_1023" },
"1024-1518b": { "name" : "rmon_packets_1518" },
}
check_metrics["check_mk-cpu.loads"] = {
"load5" : { "auto_graph" : False }
}
check_metrics["check_mk-ucd_cpu_load"] = {
"load5" : { "auto_graph" : False }
}
check_metrics["check_mk-hpux_cpu"] = {
"wait" : { "name" : "io_wait" }
}
check_metrics["check_mk-hitachi_hnas_cpu"] = {
"cpu_util" : { "name" : "util" }
}
check_metrics["check_mk-statgrab_disk"] = {
"read" : { "name" : "disk_read_throughput" },
"write" : { "name" : "disk_write_throughput" }
}
check_metrics["check_mk-ibm_svc_systemstats.diskio"] = {
"read" : { "name" : "disk_read_throughput" },
"write" : { "name" : "disk_write_throughput" }
}
check_metrics["check_mk-ibm_svc_nodestats.diskio"] = {
"read" : { "name" : "disk_read_throughput" },
"write" : { "name" : "disk_write_throughput" }
}
check_metrics["check_mk-netscaler_mem"] = {
"mem" : { "name" : "mem_used" }
}
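# The memory checks sharing this translation report their values in MB; the
# "scale" : MB entries convert them to bytes, and the computed total is kept
# out of the automatic graphs.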
ram_used_swap_translation = {
"ramused" : { "name" : "mem_used", "scale" : MB },
"swapused" : { "name" : "swap_used", "scale" : MB },
"memused" : { "name" : "total_used", "auto_graph" : False, "scale" : MB },
}
check_metrics["check_mk-statgrab_mem"] = ram_used_swap_translation
check_metrics["check_mk-hr_mem"] = ram_used_swap_translation
check_metrics["check_mk-mem.used"] = {
"ramused" : { "name" : "mem_used", "scale" : MB },
"swapused" : { "name" : "swap_used", "scale" : MB },
"memused" : { "name" : "mem_total", "scale" : MB },
"shared" : { "name" : "mem_lnx_shmem", "scale" : MB },
"pagetable" : { "name" : "mem_lnx_page_tables", "scale" : MB },
"mapped" : { "name" : "mem_lnx_mapped", "scale" : MB },
"committed_as" : { "name" : "mem_lnx_committed_as", "scale" : MB },
}
check_metrics["check_mk-esx_vsphere_vm.mem_usage"] = {
"host" : { "name" : "mem_esx_host" },
"guest" : { "name" : "mem_esx_guest" },
"ballooned" : { "name" : "mem_esx_ballooned" },
"shared" : { "name" : "mem_esx_shared" },
"private" : { "name" : "mem_esx_private" },
}
check_metrics["check_mk-ibm_svc_nodestats.disk_latency"] = {
"read_latency" : { "scale" : m },
"write_latency" : { "scale" : m },
}
check_metrics["check_mk-ibm_svc_systemstats.disk_latency"] = {
"read_latency" : { "scale" : m },
"write_latency" : { "scale" : m },
}
check_metrics["check_mk-netapp_api_disk.summary"] = {
"total_disk_capacity" : { "name" : "disk_capacity" },
"total_disks" : { "name" : "disks" },
}
check_metrics["check_mk-emc_isilon_iops"] = {
"iops" : { "name" : "disk_ios" }
}
check_metrics["check_mk-vms_system.ios"] = {
"direct" : { "name" : "direct_io" },
"buffered" : { "name" : "buffered_io" }
}
check_metrics["check_mk-kernel"] = {
"ctxt" : { "name": "context_switches" },
"pgmajfault" : { "name": "major_page_faults" },
"processes" : { "name": "process_creations" },
}
check_metrics["check_mk-oracle_jobs"] = {
"duration" : { "name" : "job_duration" }
}
check_metrics["check_mk-vms_system.procs"] = {
"procs" : { "name" : "processes" }
}
check_metrics["check_mk-jolokia_metrics.tp"] = {
"currentThreadCount" : { "name" : "threads_idle" },
"currentThreadsBusy" : { "name" : "threads_busy" },
}
check_metrics["check_mk-aix_memory"] = {
"ramused" : { "name" : "mem_used", "scale": MB },
"swapused" : { "name" : "swap_used", "scale": MB }
}
check_metrics["check_mk-mem.win"] = {
"memory" : { "name" : "mem_used", "scale" : MB },
"pagefile" : { "name" : "pagefile_used", "scale" : MB },
"mem_total" : { "auto_graph" : False, "scale" : MB },
"pagefile_total" : { "auto_graph" : False, "scale" : MB},
}
check_metrics["check_mk-brocade_mlx.module_mem"] = {
"memused" : { "name" : "mem_used" }
}
check_metrics["check_mk-jolokia_metrics.mem"] = {
"heap" : { "name" : "mem_heap" , "scale" : MB },
"nonheap" : { "name" : "mem_nonheap", "scale" : MB }
}
check_metrics["check_mk-jolokia_metrics.threads"] = {
"ThreadRate" : { "name" : "threads_rate" },
"ThreadCount" : { "name" : "threads" },
"DeamonThreadCount" : { "name" : "threads_daemon" },
"PeakThreadCount" : { "name" : "threads_max" },
"TotalStartedThreadCount" : { "name" : "threads_total" },
}
check_metrics["check_mk-mem.linux"] = {
"cached" : { "name" : "mem_lnx_cached", },
"buffers" : { "name" : "mem_lnx_buffers", },
"slab" : { "name" : "mem_lnx_slab", },
"active_anon" : { "name" : "mem_lnx_active_anon", },
"active_file" : { "name" : "mem_lnx_active_file", },
"inactive_anon" : { "name" : "mem_lnx_inactive_anon", },
"inactive_file" : { "name" : "mem_lnx_inactive_file", },
"active" : { "name" : "mem_lnx_active", },
"inactive" : { "name" : "mem_lnx_inactive", },
"dirty" : { "name" : "mem_lnx_dirty", },
"writeback" : { "name" : "mem_lnx_writeback", },
"nfs_unstable" : { "name" : "mem_lnx_nfs_unstable", },
"bounce" : { "name" : "mem_lnx_bounce", },
"writeback_tmp" : { "name" : "mem_lnx_writeback_tmp", },
"total_total" : { "name" : "mem_lnx_total_total", },
"committed_as" : { "name" : "mem_lnx_committed_as", },
"commit_limit" : { "name" : "mem_lnx_commit_limit", },
"shmem" : { "name" : "mem_lnx_shmem", },
"kernel_stack" : { "name" : "mem_lnx_kernel_stack", },
"page_tables" : { "name" : "mem_lnx_page_tables", },
"mlocked" : { "name" : "mem_lnx_mlocked", },
"huge_pages_total" : { "name" : "mem_lnx_huge_pages_total", },
"huge_pages_free" : { "name" : "mem_lnx_huge_pages_free", },
"huge_pages_rsvd" : { "name" : "mem_lnx_huge_pages_rsvd", },
"huge_pages_surp" : { "name" : "mem_lnx_huge_pages_surp", },
"vmalloc_total" : { "name" : "mem_lnx_vmalloc_total", },
"vmalloc_used" : { "name" : "mem_lnx_vmalloc_used", },
"vmalloc_chunk" : { "name" : "mem_lnx_vmalloc_chunk", },
"hardware_corrupted" : { "name" : "mem_lnx_hardware_corrupted", },
    # Several computed values should not be graphed because they are
    # already contained in the other graphs or because they are bizarre.
"caches" : { "name" : "caches", "auto_graph" : False },
"swap_free" : { "name" : "swap_free", "auto_graph" : False },
"mem_free" : { "name" : "mem_free", "auto_graph" : False },
"sreclaimable" : { "name" : "mem_lnx_sreclaimable", "auto_graph" : False },
"pending" : { "name" : "mem_lnx_pending", "auto_graph" : False },
"sunreclaim" : { "name" : "mem_lnx_sunreclaim", "auto_graph" : False },
"anon_huge_pages" : { "name" : "mem_lnx_anon_huge_pages", "auto_graph" : False },
"anon_pages" : { "name" : "mem_lnx_anon_pages", "auto_graph" : False },
"mapped" : { "name" : "mem_lnx_mapped", "auto_graph" : False },
"active" : { "name" : "mem_lnx_active", "auto_graph" : False },
"inactive" : { "name" : "mem_lnx_inactive", "auto_graph" : False },
"total_used" : { "name" : "mem_lnx_total_used", "auto_graph" : False },
"unevictable" : { "name" : "mem_lnx_unevictable", "auto_graph" : False },
"cma_free" : { "auto_graph" : False },
"cma_total" : { "auto_graph" : False },
}
check_metrics["check_mk-mem.vmalloc"] = {
"used" : { "name" : "mem_lnx_vmalloc_used" },
"chunk" : { "name" : "mem_lnx_vmalloc_chunk" }
}
tcp_conn_stats_translation = {
"SYN_SENT" : { "name": "tcp_syn_sent" },
"SYN_RECV" : { "name": "tcp_syn_recv" },
"ESTABLISHED" : { "name": "tcp_established" },
"LISTEN" : { "name": "tcp_listen" },
"TIME_WAIT" : { "name": "tcp_time_wait" },
"LAST_ACK" : { "name": "tcp_last_ack" },
"CLOSE_WAIT" : { "name": "tcp_close_wait" },
"CLOSED" : { "name": "tcp_closed" },
"CLOSING" : { "name": "tcp_closing" },
"FIN_WAIT1" : { "name": "tcp_fin_wait1" },
"FIN_WAIT2" : { "name": "tcp_fin_wait2" },
"BOUND" : { "name": "tcp_bound" },
"IDLE" : { "name": "tcp_idle" },
}
check_metrics["check_mk-tcp_conn_stats"] = tcp_conn_stats_translation
check_metrics["check_mk-datapower_tcp"] = tcp_conn_stats_translation
check_metrics["check_mk_active-disk_smb"] = {
"~.*" : { "name" : "fs_used" }
}
df_translation = {
"~(?!inodes_used|fs_size|growth|trend|fs_provisioning|"
"uncommitted|overprovisioned).*$" : { "name" : "fs_used", "scale" : MB },
"fs_size" : { "scale" : MB },
"growth" : { "name" : "fs_growth", "scale" : MB / 86400.0 },
"trend" : { "name" : "fs_trend", "scale" : MB / 86400.0 },
}
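# The "~" key above is a regular expression with a negative lookahead: every
# filesystem performance variable that is not one of the explicitly listed
# helper values is mapped onto fs_used and rescaled from MB to bytes.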
check_metrics["check_mk-df"] = df_translation
check_metrics["check_mk-esx_vsphere_datastores"] = df_translation
check_metrics["check_mk-netapp_api_aggr"] = df_translation
check_metrics["check_mk-vms_df"] = df_translation
check_metrics["check_mk-vms_diskstat.df"] = df_translation
check_metrics["check_disk"] = df_translation
check_metrics["check_mk-df_netapp"] = df_translation
check_metrics["check_mk-df_netapp32"] = df_translation
check_metrics["check_mk-zfsget"] = df_translation
check_metrics["check_mk-hr_fs"] = df_translation
check_metrics["check_mk-oracle_asm_diskgroup"] = df_translation
check_metrics["check_mk-esx_vsphere_counters.ramdisk"] = df_translation
check_metrics["check_mk-hitachi_hnas_span"] = df_translation
check_metrics["check_mk-hitachi_hnas_volume"] = df_translation
check_metrics["check_mk-emcvnx_raidgroups.capacity"] = df_translation
check_metrics["check_mk-emcvnx_raidgroups.capacity_contiguous"] = df_translation
check_metrics["check_mk-ibm_svc_mdiskgrp"] = df_translation
check_metrics["check_mk-fast_lta_silent_cubes.capacity"] = df_translation
check_metrics["check_mk-fast_lta_volumes"] = df_translation
check_metrics["check_mk-libelle_business_shadow.archive_dir"] = df_translation
check_metrics["check_mk-netapp_api_volumes"] = df_translation
check_metrics["check_mk-netapp_api_qtree_quota"] = df_translation
check_metrics["check_mk-emc_isilon_quota"] = df_translation
check_metrics["check_mk-emc_isilon_ifs"] = df_translation
check_metrics["check_mk-mongodb_collections"] = df_translation
disk_utilization_translation = { "disk_utilization" : { "scale" : 100.0 } }
check_metrics["check_mk-diskstat"] = disk_utilization_translation
check_metrics["check_mk-emc_vplex_director_stats"] = disk_utilization_translation
check_metrics["check_mk-emc_vplex_volumes"] = disk_utilization_translation
check_metrics["check_mk-esx_vsphere_counters.diskio"] = disk_utilization_translation
check_metrics["check_mk-hp_msa_controller.io"] = disk_utilization_translation
check_metrics["check_mk-hp_msa_disk.io"] = disk_utilization_translation
check_metrics["check_mk-hp_msa_volume.io"] = disk_utilization_translation
check_metrics["check_mk-winperf_phydisk"] = disk_utilization_translation
check_metrics["check_mk-arbor_peakflow_sp.disk_usage"] = disk_utilization_translation
check_metrics["check_mk-arbor_peakflow_tms.disk_usage"] = disk_utilization_translation
check_metrics["check_mk-arbor_pravail.disk_usage"] = disk_utilization_translation
# in=0;;;0; inucast=0;;;; innucast=0;;;; indisc=0;;;; inerr=0;0.01;0.1;; out=0;;;0; outucast=0;;;; outnucast=0;;;; outdisc=0;;;; outerr=0;0.01;0.1;; outqlen=0;;;0;
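# if_translation maps the raw interface counters shown above onto the canonical
# interface metrics; "in" and "out" arrive as bytes per second and are scaled
# by 8 so that if_in_bps / if_out_bps are reported in bits per second.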
if_translation = {
"in" : { "name": "if_in_bps", "scale": 8 },
"out" : { "name": "if_out_bps", "scale": 8 },
"indisc" : { "name": "if_in_discards" },
"inerr" : { "name": "if_in_errors" },
"outdisc" : { "name": "if_out_discards" },
"outerr" : { "name": "if_out_errors" },
"inucast" : { "name": "if_in_unicast" },
"innucast" : { "name": "if_in_non_unicast" },
"outucast" : { "name": "if_out_unicast" },
"outnucast" : { "name": "if_out_non_unicast" },
}
check_metrics["check_mk-esx_vsphere_counters"] = if_translation
check_metrics["check_mk-esx_vsphere_counters.if"] = if_translation
check_metrics["check_mk-fritz"] = if_translation
check_metrics["check_mk-fritz.wan_if"] = if_translation
check_metrics["check_mk-hitachi_hnas_fc_if"] = if_translation
check_metrics["check_mk-if64"] = if_translation
check_metrics["check_mk-if64adm"] = if_translation
check_metrics["check_mk-hpux_if"] = if_translation
check_metrics["check_mk-if64_tplink"] = if_translation
check_metrics["check_mk-if_lancom"] = if_translation
check_metrics["check_mk-if"] = if_translation
check_metrics["check_mk-lnx_if"] = if_translation
check_metrics["check_mk-mcdata_fcport"] = if_translation
check_metrics["check_mk-netapp_api_if"] = if_translation
check_metrics["check_mk-statgrab_net"] = if_translation
check_metrics["check_mk-ucs_bladecenter_if"] = if_translation
check_metrics["check_mk-vms_if"] = if_translation
check_metrics["check_mk-winperf_if"] = if_translation
check_metrics["check_mk-emc_vplex_if"] = if_translation
check_metrics["check_mk-brocade_fcport"] = {
"in" : { "name": "fc_rx_bytes", },
"out" : { "name": "fc_tx_bytes", },
"rxframes" : { "name": "fc_rx_frames", },
"txframes" : { "name": "fc_tx_frames", },
"rxcrcs" : { "name": "fc_crc_errors" },
"rxencoutframes" : { "name": "fc_encouts" },
"rxencinframes" : { "name": "fc_encins" },
"c3discards" : { "name": "fc_c3discards" },
"notxcredits" : { "name": "fc_notxcredits" },
}
check_metrics["check_mk-fc_port"] = {
"in" : { "name": "fc_rx_bytes", },
"out" : { "name": "fc_tx_bytes", },
"rxobjects" : { "name": "fc_rx_frames", },
"txobjects" : { "name": "fc_tx_frames", },
"rxcrcs" : { "name": "fc_crc_errors" },
"rxencoutframes" : { "name": "fc_encouts" },
"c3discards" : { "name": "fc_c3discards" },
"notxcredits" : { "name": "fc_notxcredits" },
}
check_metrics["check_mk-qlogic_fcport"] = {
"in" : { "name" : "fc_rx_bytes", },
"out" : { "name" : "fc_tx_bytes", },
"rxframes" : { "name" : "fc_rx_frames", },
"txframes" : { "name" : "fc_tx_frames", },
"link_failures" : { "name" : "fc_link_fails" },
"sync_losses" : { "name" : "fc_sync_losses" },
"prim_seq_proto_errors" : { "name" : "fc_prim_seq_errors" },
"invalid_tx_words" : { "name" : "fc_invalid_tx_words" },
"discards" : { "name" : "fc_c2c3_discards" },
"invalid_crcs" : { "name" : "fc_invalid_crcs" },
"address_id_errors" : { "name" : "fc_address_id_errors" },
"link_reset_ins" : { "name" : "fc_link_resets_in" },
"link_reset_outs" : { "name" : "fc_link_resets_out" },
"ols_ins" : { "name" : "fc_offline_seqs_in" },
"ols_outs" : { "name" : "fc_offline_seqs_out" },
"c2_fbsy_frames" : { "name" : "fc_c2_fbsy_frames" },
"c2_frjt_frames" : { "name" : "fc_c2_frjt_frames" },
}
check_metrics["check_mk-mysql.innodb_io"] = {
"read" : { "name" : "disk_read_throughput" },
"write": { "name" : "disk_write_throughput" }
}
check_metrics["check_mk-esx_vsphere_counters.diskio"] = {
"read" : { "name" : "disk_read_throughput" },
"write" : { "name" : "disk_write_throughput" },
"ios" : { "name" : "disk_ios" },
"latency" : { "name" : "disk_latency" },
"disk_utilization" : { "scale" : 100.0 },
}
check_metrics["check_mk-emcvnx_disks"] = {
"read" : { "name" : "disk_read_throughput" },
"write": { "name" : "disk_write_throughput" }
}
check_metrics["check_mk-diskstat"] = {
"read" : { "name" : "disk_read_throughput" },
"write": { "name" : "disk_write_throughput" },
"disk_utilization" : { "scale" : 100.0 },
}
check_metrics["check_mk-ibm_svc_systemstats.iops"] = {
"read" : { "name" : "disk_read_ios" },
"write" : { "name" : "disk_write_ios" }
}
check_metrics["check_mk-dell_powerconnect_temp"] = {
"temperature" : { "name" : "temp" }
}
check_metrics["check_mk-bluecoat_diskcpu"] = {
"value" : { "name" : "generic_util" }
}
check_metrics["check_mk-ipmi_sensors"] = {
"value" : { "name" : "temp" }
}
check_metrics["check_mk-ipmi"] = {
"ambient_temp" : { "name" : "temp" }
}
check_metrics["check_mk-wagner_titanus_topsense.airflow_deviation"] = {
"airflow_deviation" : { "name" : "deviation_airflow" }
}
check_metrics["check_mk-wagner_titanus_topsense.chamber_deviation"] = {
"chamber_deviation" : { "name" : "deviation_calibration_point" }
}
check_metrics["check_mk-apc_symmetra"] = {
"OutputLoad" : { "name" : "output_load" },
"batcurr" : { "name" : "battery_current" },
"systemp" : { "name" : "battery_temp" },
"capacity" : { "name" : "battery_capacity" },
"runtime" : { "name" : "lifetime_remaining", "scale" : 60 },
}
check_metrics["check_mk-apc_symmetra.temp"] = {
"systemp" : { "name" : "battery_temp" },
}
check_metrics["check_mk-apc_symmetra.elphase"] = {
"OutputLoad" : { "name" : "output_load" },
"batcurr" : { "name" : "battery_current" },
}
check_metrics["check_mk-kernel.util"] = {
"wait" : { "name" : "io_wait" },
"guest" : { "name" : "cpu_util_guest" },
"steal" : { "name" : "cpu_util_steal" },
}
check_metrics["check_mk-lparstat_aix.cpu_util"] = {
"wait" : { "name" : "io_wait" }
}
check_metrics["check_mk-ucd_cpu_util"] = {
"wait" : { "name" : "io_wait" }
}
check_metrics["check_mk-vms_cpu"] = {
"wait" : { "name" : "io_wait" }
}
check_metrics["check_mk-vms_sys.util"] = {
"wait" : { "name" : "io_wait" }
}
check_metrics["check_mk-winperf.cpuusage"] = {
"cpuusage" : { "name" : "util" }
}
check_metrics["check_mk-h3c_lanswitch_cpu"] = {
"usage" : { "name" : "util" }
}
check_metrics["check_mk-h3c_lanswitch_cpu"] = {
"usage" : { "name" : "util" }
}
check_metrics["check_mk-brocade_mlx.module_cpu"] = {
"cpu_util1" : { "name" : "util1s" },
"cpu_util5" : { "name" : "util5s" },
"cpu_util60" : { "name" : "util1" },
"cpu_util200" : { "name" : "util5" },
}
check_metrics["check_mk-dell_powerconnect"] = {
"load" : { "name" : "util" },
"loadavg 60s" : { "name" : "util1" },
"loadavg 5m" : { "name" : "util5" },
}
check_metrics["check_mk-ibm_svc_nodestats.cache"] = {
"write_cache_pc" : { "name" : "write_cache_usage" },
"total_cache_pc" : { "name" : "total_cache_usage" }
}
check_metrics["check_mk-ibm_svc_systemstats.cache"] = {
"write_cache_pc" : { "name" : "write_cache_usage" },
"total_cache_pc" : { "name" : "total_cache_usage" }
}
check_metrics["check_mk-esx_vsphere_hostsystem.mem_usage"] = {
"usage" : { "name" : "mem_used" },
"mem_total" : { "auto_graph" : False },
}
check_metrics["check_mk-esx_vsphere_hostsystem.mem_usage_cluster"] = {
"usage" : { "name" : "mem_used" },
"mem_total" : { "auto_graph" : False },
}
check_metrics["check_mk-ibm_svc_host"] = {
"active" : { "name" : "hosts_active" },
"inactive" : { "name" : "hosts_inactive" },
"degraded" : { "name" : "hosts_degraded" },
"offline" : { "name" : "hosts_offline" },
"other" : { "name" : "hosts_other" },
}
check_metrics["check_mk-juniper_screenos_mem"] = {
"usage" : { "name" : "mem_used" }
}
check_metrics["check_mk-juniper_trpz_mem"] = {
"usage" : { "name" : "mem_used" }
}
check_metrics["check_mk-ibm_svc_nodestats.iops"] = {
"read" : { "name" : "disk_read_ios" },
"write": { "name" : "disk_write_ios" }
}
check_metrics["check_mk-openvpn_clients"] = {
"in" : { "name" : "if_in_octets" },
"out": { "name" : "if_out_octets" }
}
check_metrics["check_mk-f5_bigip_interfaces"] = {
"bytes_in" : { "name" : "if_in_octets" },
"bytes_out": { "name" : "if_out_octets" }
}
check_metrics["check_mk-mbg_lantime_state"] = {
"offset" : { "name" : "time_offset", "scale" : 0.000001 }
} # convert us -> sec
check_metrics["check_mk-mbg_lantime_ng_state"] = {
"offset" : { "name" : "time_offset", "scale" : 0.000001 }
} # convert us -> sec
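# A quick sketch of how such a "scale" translation is applied (illustration only,
# assuming the translation layer simply multiplies the raw perf value by "scale"):
# a check reporting offset=1500 (microseconds) ends up as the metric "time_offset"
# with the value 1500 * 0.000001 = 0.0015 seconds.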
check_metrics["check_mk-systemtime"] = {
"offset" : { "name" : "time_offset" }
}
check_metrics["check_mk-ntp"] = {
"offset" : { "name" : "time_offset", "scale" : m },
"jitter" : { "scale" : m },
}
check_metrics["check_mk-chrony"] = {
"offset" : { "name" : "time_offset", "scale" : m }
}
check_metrics["check_mk-ntp.time"] = {
"offset" : { "name" : "time_offset", "scale" : m },
"jitter" : { "scale" : m },
}
check_metrics["check_mk-adva_fsp_if"] = {
"output_power" : { "name" : "output_signal_power_dbm" },
"input_power" : { "name" : "input_signal_power_dbm" }
}
check_metrics["check_mk-allnet_ip_sensoric.tension"] = {
"tension" : { "name" : "voltage_percent" }
}
check_metrics["check_mk-apache_status"] = {
"Uptime" : { "name" : "uptime" },
"IdleWorkers" : { "name" : "idle_workers" },
"BusyWorkers" : { "name" : "busy_workers" },
"IdleServers" : { "name" : "idle_servers" },
"BusyServers" : { "name" : "busy_servers" },
"OpenSlots" : { "name" : "open_slots" },
"TotalSlots" : { "name" : "total_slots" },
"CPULoad" : { "name" : "load1" },
"ReqPerSec" : { "name" : "requests_per_second" },
"BytesPerSec" : { "name" : "direkt_io" },
"ConnsTotal" : { "name" : "connections" },
"ConnsAsyncWriting" : { "name" : "connections_async_writing" },
"ConnsAsyncKeepAlive" : { "name" : "connections_async_keepalive" },
"ConnsAsyncClosing" : { "name" : "connections_async_closing" },
"State_StartingUp" : { "name" : "apache_state_startingup" },
"State_Waiting" : { "name" : "apache_state_waiting" },
"State_Logging" : { "name" : "apache_state_logging" },
"State_DNS" : { "name" : "apache_state_dns" },
"State_SendingReply" : { "name" : "apache_state_sending_reply" },
"State_ReadingRequest" : { "name" : "apache_state_reading_request" },
"State_Closing" : { "name" : "apache_state_closing" },
"State_IdleCleanup" : { "name" : "apache_state_idle_cleanup" },
"State_Finishing" : { "name" : "apache_state_finishing" },
"State_Keepalive" : { "name" : "apache_state_keep_alive" },
}
check_metrics["check_mk-ups_socomec_out_voltage"] = {
"out_voltage" : { "name" : "voltage" }
}
check_metrics["check_mk-hp_blade_psu"] = {
"output" : { "name" : "power" }
}
check_metrics["check_mk-apc_rackpdu_power"] = {
"amperage" : { "name" : "current" }
}
check_metrics["check_mk-apc_ats_output"] = {
"volt" : { "name" : "voltage" },
"watt" : { "name" : "power"},
"ampere": { "name": "current"},
"load_perc" : { "name": "output_load" }
}
check_metrics["check_mk-ups_out_load"] = {
"out_load" : { "name": "output_load" },
"out_voltage" : { "name": "voltage" },
}
check_metrics["check_mk-raritan_pdu_outletcount"] = {
"outletcount" : { "name" : "connector_outlets" }
}
check_metrics["check_mk-docsis_channels_upstream"] = {
"total" : { "name" : "total_modems" },
"active" : { "name" : "active_modems" },
"registered" : { "name" : "registered_modems" },
"util" : { "name" : "channel_utilization" },
"frequency" : { "scale" : 1000000.0 },
"codewords_corrected" : { "scale" : 100.0 },
"codewords_uncorrectable" : { "scale" : 100.0 },
}
check_metrics["check_mk-docsis_channels_downstream"] = {
"power" : { "name" : "downstream_power" },
}
check_metrics["check_mk-zfs_arc_cache"] = {
"hit_ratio" : { "name": "cache_hit_ratio", },
"size" : { "name": "caches", "scale" : MB },
"arc_meta_used" : { "name": "zfs_metadata_used", "scale" : MB },
"arc_meta_limit": { "name": "zfs_metadata_limit", "scale" : MB },
"arc_meta_max" : { "name": "zfs_metadata_max", "scale" : MB },
}
check_metrics["check_mk-zfs_arc_cache.l2"] = {
"l2_size" : { "name": "zfs_l2_size" },
"l2_hit_ratio" : { "name": "zfs_l2_hit_ratio", },
}
check_metrics["check_mk-postgres_sessions"] = {
"total": {"name": "total_sessions"},
"running": {"name": "running_sessions"}
}
check_metrics["check_mk-oracle_sessions"] = {
"sessions" : {"name": "running_sessions"}
}
check_metrics["check_mk-oracle_logswitches"] = {
"logswitches" : { "name" : "logswitches_last_hour" }
}
check_metrics["check_mk-oracle_dataguard_stats"] = {
"apply_lag" : { "name" : "database_apply_lag" }
}
check_metrics["check_mk-oracle_performance"] = {
"DB_CPU" : { "name" : "oracle_db_cpu" },
"DB_time" : { "name" : "oracle_db_time" },
"buffer_hit_ratio" : { "name" : "oracle_buffer_hit_ratio" },
"db_block_gets" : { "name" : "oracle_db_block_gets" },
"db_block_change" : { "name" : "oracle_db_block_change" },
"consistent_gets" : { "name" : "oracle_db_block_gets" },
"physical_reads" : { "name" : "oracle_physical_reads" },
"physical_writes" : { "name" : "oracle_physical_writes" },
"free_buffer_wait" : { "name" : "oracle_free_buffer_wait" },
"buffer_busy_wait" : { "name" : "oracle_buffer_busy_wait" },
"library_cache_hit_ratio" : { "name" : "oracle_library_cache_hit_ratio" },
"pinssum" : { "name" : "oracle_pins_sum" },
"pinhitssum" : { "name" : "oracle_pin_hits_sum" },
}
check_metrics["check_mk-db2_logsize"] = {
"~[_/]": { "name": "fs_used", "scale" : MB }
}
check_metrics["check_mk-steelhead_connections"] = {
"active" : { "name" : "fw_connections_active" },
"established" : { "name" : "fw_connections_established" },
"halfOpened" : { "name" : "fw_connections_halfopened" },
"halfClosed" : { "name" : "fw_connections_halfclosed" },
"passthrough" : { "name" : "fw_connections_passthrough" },
}
check_metrics["check_mk-oracle_tablespaces"] = {
"size" : { "name" : "tablespace_size" },
"used" : { "name" : "tablespace_used" },
"max_size" : { "name" : "tablespace_max_size" },
}
check_metrics["check_mk-mssql_tablespaces"] = {
"size" : { "name" : "database_size" },
"unallocated" : { "name" : "unallocated_size" },
"reserved" : { "name" : "reserved_size" },
"data" : { "name" : "data_size" },
"indexes" : { "name" : "indexes_size" },
"unused" : { "name" : "unused_size" },
}
check_metrics["check_mk-f5_bigip_vserver"] = {
"conn_rate" : { "name" : "connections_rate" }
}
check_metrics["check_mk-arcserve_backup"] = {
"size" : { "name" : "backup_size" }
}
check_metrics["check_mk-oracle_rman"] = {
"age" : { "name" : "backup_age" }
}
check_metrics["check_mk-veeam_client"] = {
"totalsize" : { "name" : "backup_size" },
"duration" : { "name" : "backup_duration" },
"avgspeed" : { "name" : "backup_avgspeed" },
}
check_metrics["check_mk-cups_queues"] = {
"jobs" : { "name" : "printer_queue" }
}
check_metrics["check_mk-printer_pages"] = {
"pages" : { "name" : "pages_total" }
}
check_metrics["check_mk-livestatus_status"] = {
"host_checks" : { "name" : "host_check_rate" },
"service_checks" : { "name" : "service_check_rate" },
"connections" : { "name" : "livestatus_connect_rate" },
"requests" : { "name" : "livestatus_request_rate" },
"log_messages" : { "name" : "log_message_rate" },
}
check_metrics["check_mk-cisco_wlc_clients"] = {
"clients" : { "name" : "connections" }
}
check_metrics["check_mk-cisco_qos"] = {
"drop" : { "name" : "qos_dropped_bytes_rate" },
"post" : { "name" : "qos_outbound_bytes_rate" },
}
check_metrics["check_mk-hivemanager_devices"] = {
"clients_count" : { "name" : "connections" }
}
check_metrics["check_mk-ibm_svc_license"] = {
"licensed" : { "name" : "licenses" }
}
check_metrics["check_mk-tsm_stagingpools"] = {
"free" : { "name" : "tapes_free" },
"free" : { "name" : "tapes_total" },
"util" : { "name" : "tapes_util" }
}
check_metrics["check_mk-hpux_tunables.shmseg"] = {
"segments" : { "name" : "shared_memory_segments" }
}
check_metrics["check_mk-hpux_tunables.semmns"] = {
"entries" : { "name" : "semaphores" }
}
check_metrics["check_mk-hpux_tunables.maxfiles_lim"] = {
"files" : { "name" : "files_open" }
}
check_metrics["check_mk-win_dhcp_pools"] = {
"free" : { "name" : "free_dhcp_leases" },
"used" : { "name" : "used_dhcp_leases" },
"pending" : { "name" : "pending_dhcp_leases" }
}
check_metrics["check_mk-lparstat_aix"] = {
"sys" : { "name" : "system" },
"wait" : { "name" : "io_wait" },
}
check_metrics["check_mk-netapp_fcpio"] = {
"read" : { "name" : "disk_read_throughput" },
"write" : { "name" : "disk_write_throughput" },
}
check_metrics["check_mk-netapp_api_vf_stats.traffic"] = {
"read_bytes" : { "name" : "disk_read_throughput" },
"write_bytes" : { "name" : "disk_write_throughput" },
"read_ops" : { "name" : "disk_read_ios" },
"write_ops" : { "name" : "disk_write_ios" },
}
check_metrics["check_mk-job"] = {
"reads" : { "name" : "disk_read_throughput" },
"writes" : { "name" : "disk_write_throughput" },
"real_time": { "name" : "job_duration" },
}
ps_translation = {
"count" : { "name" : "processes" },
"vsz" : { "name" : "process_virtual_size", "scale" : KB, },
"rss" : { "name" : "process_resident_size", "scale" : KB, },
"pcpu" : { "name" : "util" },
"pcpuavg" : { "name" : "util_average" },
}
check_metrics["check_mk-smart.stats"] = {
"Power_On_Hours" : { "name" : "uptime", "scale" : 3600 },
"Power_Cycle_Count" : { "name" : "harddrive_power_cycle" },
"Reallocated_Sector_Ct" : { "name" : "harddrive_reallocated_sectors" },
"Reallocated_Event_Count" : { "name" : "harddrive_reallocated_events" },
"Spin_Retry_Count" : { "name" : "harddrive_spin_retries" },
"Current_Pending_Sector" : { "name" : "harddrive_pending_sectors" },
"Command_Timeout" : { "name" : "harddrive_cmd_timeouts" },
"End-to-End_Error" : { "name" : "harddrive_end_to_end_errors" },
"Reported_Uncorrect" : { "name" : "harddrive_uncorrectable_errors" },
"UDMA_CRC_Error_Count" : { "name" : "harddrive_udma_crc_errors" },
}
check_metrics["check_mk-ps"] = ps_translation
check_metrics["check_mk-ps.perf"] = ps_translation
check_metrics["check_mk-mssql_counters.sqlstats"] = {
"batch_requests/sec" : { "name" : "requests_per_second" },
"sql_compilations/sec" : { "name" : "requests_per_second" },
"sql_re-compilations/sec" : { "name" : "requests_per_second" },
}
check_metrics["check_mk-cisco_mem"] = {
"mem_used" : { "name" : "mem_used_percent" }
}
check_metrics["check_mk-cisco_sys_mem"] = {
"mem_used" : { "name" : "mem_used_percent" }
}
check_metrics["check_mk-cisco_mem_asa"] = {
"mem_used" : { "name" : "mem_used_percent" }
}
check_metrics["check_mk-fortigate_sessions_base"] = {
"session" : { "name" : "active_sessions" }
}
#.
# .--Perf-O-Meters-------------------------------------------------------.
# | ____ __ ___ __ __ _ |
# | | _ \ ___ _ __ / _| / _ \ | \/ | ___| |_ ___ _ __ ___ |
# | | |_) / _ \ '__| |_ _____| | | |_____| |\/| |/ _ \ __/ _ \ '__/ __| |
# | | __/ __/ | | _|_____| |_| |_____| | | | __/ || __/ | \__ \ |
# | |_| \___|_| |_| \___/ |_| |_|\___|\__\___|_| |___/ |
# | |
# +----------------------------------------------------------------------+
# | Definition of Perf-O-Meters |
# '----------------------------------------------------------------------'
# If multiple Perf-O-Meters apply, the first applicable Perf-O-Meter in the list
# will be the one shown in the GUI.
# Types of Perf-O-Meters:
# linear      -> multiple values added from left to right
# logarithmic -> one value on a logarithmic scale
# dual        -> two Perf-O-Meters next to each other, the first one drawn from right to left
# stacked     -> two Perf-O-Meters of type linear, logarithmic or dual, stacked vertically
# The label of dual and stacked Perf-O-Meters is taken from the definitions of the contained Perf-O-Meters.
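# Illustrative sketch of the four shapes (assumption: this list is documentation only
# and is never appended to perfometer_info; "foo" and "bar" are placeholder metric names):
_perfometer_type_examples = [
    # linear: the segments are added up and drawn against the given total
    { "type" : "linear",      "segments" : [ "foo", "bar" ], "total" : 100.0 },
    # logarithmic: a single value; half_value is the value shown at the middle of the bar
    { "type" : "logarithmic", "metric" : "foo", "half_value" : 10, "exponent" : 2 },
    # dual: two Perf-O-Meters side by side, the first one drawn from right to left
    ("dual", [
        { "type" : "logarithmic", "metric" : "foo", "half_value" : 10, "exponent" : 2 },
        { "type" : "logarithmic", "metric" : "bar", "half_value" : 10, "exponent" : 2 },
    ]),
    # stacked: two Perf-O-Meters on top of each other
    ("stacked", [
        { "type" : "linear", "segments" : [ "foo" ], "total" : 100.0 },
        { "type" : "linear", "segments" : [ "bar" ], "total" : 100.0 },
    ]),
]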
perfometer_info.append({
"type" : "linear",
"segments" : [ "mem_used_percent" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "ap_devices_drifted", "ap_devices_not_responding" ],
"total" : "ap_devices_total",
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "execution_time" ],
"total" : 90.0,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "session_rate",
"half_value" : 50.0,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "uptime",
"half_value" : 2592000.0,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "age",
"half_value" : 2592000.0,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "runtime",
"half_value" : 864000.0,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "job_duration",
"half_value" : 120.0,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "response_time",
"half_value" : 10,
"exponent" : 4,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "mails_received_time",
"half_value" : 5,
"exponent" : 3,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "mem_perm_used"],
"total" : "mem_perm_used:max",
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "mem_heap"],
"total" : "mem_heap:max",
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "mem_nonheap"],
"total" : "mem_nonheap:max",
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "pressure",
"half_value" : 0.5,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "pressure_pa",
"half_value" : 10,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "cifs_share_users",
"half_value" : 10,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "connector_outlets",
"half_value" : 20,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "licenses",
"half_value" : 500,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "sync_latency",
"half_value" : 5,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "mail_latency",
"half_value" : 5,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "backup_size",
"half_value" : 150*GB,
"exponent" : 2.0,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "fw_connections_active",
"half_value" : 100,
"exponent" : 2,
})
perfometer_info.append(("stacked", [
{
"type" : "logarithmic",
"metric" : "checkpoint_age",
"half_value" : 86400,
"exponent" : 2,
},
{
"type" : "logarithmic",
"metric" : "backup_age",
"half_value" : 86400,
"exponent" : 2,
}
]))
perfometer_info.append({
"type" : "logarithmic",
"metric" : "backup_age",
"half_value" : 86400,
"exponent" : 2,
})
perfometer_info.append(("stacked", [
{
"type" : "logarithmic",
"metric" : "read_latency",
"half_value" : 5,
"exponent" : 2,
},
{
"type" : "logarithmic",
"metric" : "write_latency",
"half_value" : 5,
"exponent" : 2,
}
]))
perfometer_info.append({
"type" : "logarithmic",
"metric" : "logswitches_last_hour",
"half_value" : 15,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "database_apply_lag",
"half_value" : 2500,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "processes",
"half_value" : 100,
"exponent" : 2,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "total_cache_usage" ],
"total" : 100.0,
})
perfometer_info.append(("stacked", [
{
"type" : "logarithmic",
"metric" : "mem_heap",
"half_value" : 100 * MB,
"exponent" : 2,
},
{
"type" : "logarithmic",
"metric" : "mem_nonheap",
"half_value" : 100*MB,
"exponent" : 2,
}
]))
perfometer_info.append(("stacked", [
{
"type" : "linear",
"segments" : [ "threads_idle" ],
"total" : "threads_idle:max",
},
{
"type" : "linear",
"segments" : [ "threads_busy" ],
"total" : "threads_busy:max",
}
]))
perfometer_info.append({
"type" : "logarithmic",
"metric" : "rta",
"half_value" : 0.1,
"exponent" : 4
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "execution_time" ],
"total" : 90.0,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "load1",
"half_value" : 4.0,
"exponent" : 2.0
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "temp",
"half_value" : 40.0,
"exponent" : 1.2
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "context_switches",
"half_value" : 1000.0,
"exponent" : 2.0
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "major_page_faults",
"half_value" : 1000.0,
"exponent" : 2.0
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "process_creations",
"half_value" : 1000.0,
"exponent" : 2.0
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "threads",
"half_value" : 400.0,
"exponent" : 2.0
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "user", "system", "idle", "nice" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "user", "system", "idle", "io_wait" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "user", "system", "io_wait" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "fpga_util", ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "util", ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "generic_util", ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "util1", ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "citrix_load" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "database_size",
"half_value" : GB,
"exponent" : 5.0,
})
# Filesystem check with over-provisioning
perfometer_info.append({
"type" : "linear",
"condition" : "fs_provisioning(%),100,>",
"segments" : [
"fs_used(%)",
"100,fs_used(%),-#e3fff9",
"fs_provisioning(%),100.0,-#ffc030",
],
"total" : "fs_provisioning(%)",
"label" : ( "fs_used(%)", "%" ),
})
# Filesystem check with provisioning, but not over-provisioning
perfometer_info.append({
"type" : "linear",
"condition" : "fs_provisioning(%),100,<=",
"segments" : [
"fs_used(%)",
"fs_provisioning(%),fs_used(%),-#ffc030",
"100,fs_provisioning(%),fs_used(%),-,-#e3fff9",
],
"total" : 100,
"label" : ( "fs_used(%)", "%" ),
})
# Filesystem without over-provisioning
perfometer_info.append({
"type" : "linear",
"segments" : [
"fs_used(%)",
"100.0,fs_used(%),-#e3fff9",
],
"total" : 100,
"label" : ( "fs_used(%)", "%" ),
})
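# Note on the expressions above (hedged summary; the metric expression parser is
# authoritative): segments and conditions are RPN expressions. "100,fs_used(%),-#e3fff9"
# therefore means "100 - fs_used(%)" drawn in the color #e3fff9, and the condition
# "fs_provisioning(%),100,>" selects that Perf-O-Meter only while the provisioning
# level exceeds 100%.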
# TODO total = None?
perfometer_info.append(("linear", ( [ "mem_used", "swap_used", "caches", "mem_free", "swap_free" ], None, ("mem_total,mem_used,+,swap_used,/,100,*", "%"))))
perfometer_info.append({
"type" : "linear",
"segments" : [ "mem_used" ],
"total" : "mem_total",
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "mem_used(%)" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "time_offset",
"half_value" : 1.0,
"exponent" : 10.0,
})
perfometer_info.append(("stacked", [
{
"type" : "logarithmic",
"metric" : "tablespace_wasted",
"half_value" : 1000000,
"exponent" : 2,
},
{
"type" : "logarithmic",
"metric" : "indexspace_wasted",
"half_value" : 1000000,
"exponent" : 2,
}
]))
perfometer_info.append({
"type" : "linear",
"segments" : [ "running_sessions" ],
"total" : "total_sessions",
})
# TODO total : None?
perfometer_info.append({
"type" : "linear",
"segments" : [ "shared_locks", "exclusive_locks" ],
"total" : None,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "connections",
"half_value": 50,
"exponent" : 2
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "connection_time",
"half_value" : 0.2,
"exponent" : 2,
})
perfometer_info.append(("dual", [
{
"type" : "logarithmic",
"metric" : "input_signal_power_dbm",
"half_value" : 4,
"exponent" : 2,
},
{
"type" : "logarithmic",
"metric" : "output_signal_power_dbm",
"half_value" : 4,
"exponent" : 2,
}
]))
perfometer_info.append(("dual", [
{
"type" : "logarithmic",
"metric" : "if_out_unicast_octets,if_out_non_unicast_octets,+",
"half_value" : 5000000,
"exponent" : 5,
},
{
"type" : "logarithmic",
"metric" : "if_in_octets",
"half_value" : 5000000,
"exponent" : 5,
}
]))
perfometer_info.append(("dual", [
{
"type" : "logarithmic",
"metric" : "read_blocks",
"half_value" : 50000000,
"exponent" : 2,
},
{
"type" : "logarithmic",
"metric" : "write_blocks",
"half_value" : 50000000,
"exponent" : 2,
}
]))
perfometer_info.append({
"type" : "logarithmic",
"metric" : "running_sessions",
"half_value": 10,
"exponent" : 2
})
perfometer_info.append(("dual", [
{
"type" : "logarithmic",
"metric" : "deadlocks",
"half_value" : 50,
"exponent" : 2,
},
{
"type" : "logarithmic",
"metric" : "lockwaits",
"half_value" : 50,
"exponent" : 2,
}
]))
# TODO: max is missing
perfometer_info.append({
"type" : "linear",
"segments" : [ "sort_overflow" ],
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "mem_used" ],
"total" : "mem_used:max",
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "tablespace_used" ],
"total" : "tablespace_max_size",
})
perfometer_info.append(("stacked", [
("dual", [
{
"type" : "linear",
"label" : None,
"segments" : [ "total_hitratio" ],
"total": 100
},
{
"type" : "linear",
"label" : None,
"segments" : [ "data_hitratio" ],
"total" : 100
}
]),
("dual", [
{
"type" : "linear",
"label" : None,
"segments" : [ "index_hitratio" ],
"total" : 100
},
{
"type" : "linear",
"label" : None,
"segments" : [ "xda_hitratio" ],
"total" : 100
}
])
]))
perfometer_info.append({
"type" : "linear",
"segments" : [ "output_load" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "power",
"half_value" : 1000,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "current",
"half_value" : 10,
"exponent" : 4,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "voltage",
"half_value" : 220.0,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "energy",
"half_value" : 10000,
"exponent" : 3,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "voltage_percent" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "humidity" ],
"total" : 100.0,
})
perfometer_info.append(("stacked", [
{
"type" : "logarithmic",
"metric" : "requests_per_second",
"half_value" : 10,
"exponent" : 5,
},
{
"type" : "logarithmic",
"metric" : "busy_workers",
"half_value" : 10,
"exponent" : 2,
}
]))
perfometer_info.append({
"type" : "linear",
"segments" : [ "cache_hit_ratio" ],
"total" : 100,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "varnish_worker_thread_ratio" ],
"total" : 100,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "varnish_backend_success_ratio" ],
"total" : 100,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "zfs_l2_hit_ratio" ],
"total" : 100,
})
perfometer_info.append(("stacked", [
{
"type" : "logarithmic",
"metric" : "signal_noise",
"half_value" : 50.0,
"exponent" : 2.0,
},
{
"type" : "linear",
"segments" : [ "codewords_corrected", "codewords_uncorrectable" ],
"total" : 1.0,
}
]))
perfometer_info.append({
"type" : "logarithmic",
"metric" : "signal_noise",
"half_value" : 50.0,
"exponent" : 2.0
}) # Fallback if no codewords are available
perfometer_info.append(("dual", [
{
"type" : "logarithmic",
"metric" : "disk_read_throughput",
"half_value" : 5000000,
"exponent" : 10,
},
{
"type" : "logarithmic",
"metric" : "disk_write_throughput",
"half_value" : 5000000,
"exponent" : 10,
}
]))
perfometer_info.append({
"type" : "logarithmic",
"metric" : "disk_ios",
"half_value": 30,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "disk_capacity",
"half_value": 25*TB,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "printer_queue",
"half_value" : 10,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "pages_total",
"half_value": 60000,
"exponent" : 2,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "supply_toner_cyan" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "supply_toner_magenta" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "supply_toner_yellow" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "supply_toner_black" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "supply_toner_other" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "smoke_ppm" ],
"total" : 10,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "smoke_perc" ],
"total" : 100,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "health_perc" ],
"total" : 100,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "deviation_calibration_point" ],
"total" : 10,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "deviation_airflow" ],
"total" : 10,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "airflow",
"half_value" : 300,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "fluidflow",
"half_value" : 0.2,
"exponent" : 5,
})
perfometer_info.append(("stacked", [
{
"type" : "logarithmic",
"metric" : "direct_io",
"half_value" : 25,
"exponent" : 2,
},
{
"type" : "logarithmic",
"metric" : "buffered_io",
"half_value" : 25,
"expoent" : 2,
}
]))
# TODO: :max should be the default?
perfometer_info.append({
"type" : "linear",
"segments" : [ "free_dhcp_leases" ],
"total" : "free_dhcp_leases:max",
})
perfometer_info.append(("stacked", [
{
"type" : "logarithmic",
"metric" : "host_check_rate",
"half_value" : 50,
"exponent" : 5,
},
{
"type" : "logarithmic",
"metric" : "service_check_rate",
"half_value" : 200,
"exponent" : 5,
}
]))
perfometer_info.append(("stacked", [
{
"type" : "logarithmic",
"metric" : "normal_updates",
"half_value" : 10,
"exponent" : 2,
},
{
"type" : "logarithmic",
"metric" : "security_updates",
"half_value" : 10,
"exponent" : 2,
}
]))
perfometer_info.append({
"type" : "logarithmic",
"metric" : "registered_phones",
"half_value" : 50,
"exponent" : 3,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "call_legs",
"half_value" : 10,
"exponent" : 2,
})
perfometer_info.append(("stacked", [
{
"type" : "logarithmic",
"metric" : "mail_queue_deferred_length",
"half_value" : 10000,
"exponent" : 5,
},
{
"type" : "logarithmic",
"metric" : "mail_queue_active_length",
"half_value" : 10000,
"exponent" : 5,
}
]))
perfometer_info.append({
"type" : "logarithmic",
"metric" : "mail_queue_deferred_length",
"half_value" : 10000,
"exponent" : 5
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "messages_inbound,messages_outbound,+",
"half_value" : 100,
"exponent" : 5,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "tapes_util" ],
"total" : 100.0,
})
perfometer_info.append(("dual", [
{
"type" : "linear",
"segments" : [ "qos_dropped_bytes_rate" ],
"total" : "qos_dropped_bytes_rate:max"
},
{
"type" : "linear",
"segments" : [ "qos_outbound_bytes_rate" ],
"total" : "qos_outbound_bytes_rate:max"
}
]))
perfometer_info.append({
"type" : "logarithmic",
"metric" : "semaphore_ids",
"half_value": 50,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "segments",
"half_value": 10,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "semaphores",
"half_value": 2500,
"exponent" : 2,
})
perfometer_info.append(("dual", [
{
"type" : "logarithmic",
"metric" : "fc_rx_bytes",
"half_value" : 30 * MB,
"exponent" : 3,
},
{
"type" : "logarithmic",
"metric" : "fc_tx_bytes",
"half_value" : 30 * MB,
"exponent" : 3,
}
]))
perfometer_info.append({
"type" : "logarithmic",
"metric" : "request_rate",
"half_value" : 100,
"exponent" : 2,
})
perfometer_info.append({
"type" : "logarithmic",
"metric" : "mem_pages_rate",
"half_value" : 5000,
"exponent" : 2,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "storage_processor_util" ],
"total" : 100.0,
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "active_vpn_tunnels" ],
"total" : "active_vpn_tunnels:max"
})
for x in reversed(range(1, MAX_NUMBER_HOPS)):
perfometer_info.append(("dual", [
{
"type" : "linear",
"segments" : [ "hop_%d_pl" % x ],
"total" : 100.0,
},
{
"type" : "logarithmic",
"metric" : "hop_%d_rta" % x,
"half_value" : 0.1,
"exponent" : 4
}
]))
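# The loop runs in reverse (hop_<N-1> down to hop_1), presumably because the first
# applicable Perf-O-Meter in the list wins (see the note at the top of this section),
# so a traceroute service gets the Perf-O-Meter of its highest reported hop.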
perfometer_info.append({
"type" : "logarithmic",
"metric" : "oracle_db_cpu",
"half_value" : 50.0,
"exponent" : 2,
})
perfometer_info.append({
"type" : "linear",
"segments" : ["active_sessions_%s" % device
for device, name, color in skype_mobile_devices],
# there is no limit, and so far no way to determine the maximum across
# all segments
})
perfometer_info.append({
"type" : "linear",
"segments" : [ "filehandler_perc" ],
"total" : 100.0,
})
#.
# .--Graphs--------------------------------------------------------------.
# | ____ _ |
# | / ___|_ __ __ _ _ __ | |__ ___ |
# | | | _| '__/ _` | '_ \| '_ \/ __| |
# | | |_| | | | (_| | |_) | | | \__ \ |
# | \____|_| \__,_| .__/|_| |_|___/ |
# | |_| |
# +----------------------------------------------------------------------+
# | Definitions of time series graphs |
# '----------------------------------------------------------------------'
# Beware: The order of the list elements of graph_info is actually important.
# It determines the order of graphs of a service, which in turn is used by
# the report definitions to determine which graph to include.
# The order of metrics in graph definitions is important if you use only 'area':
# The first one must be the biggest one, then descending.
# Example: ('tablespace_size', 'area'),
# ('tablespace_used', 'area')
graph_info.append({
"title" : _("Context switches"),
"metrics" : [
( "vol_context_switches", "area" ),
( "invol_context_switches", "stack" ),
],
})
graph_info.append({
"title" : _("Busy and idle workers"),
"metrics" : [
( "busy_workers", "area" ),
( "idle_workers", "stack" ),
],
})
graph_info.append({
"title" : _("Busy and idle servers"),
"metrics" : [
( "busy_servers", "area" ),
( "idle_servers", "stack" ),
],
})
graph_info.append({
"title" : _("Total and open slots"),
"metrics" : [
( "total_slots", "area" ),
( "open_slots", "area" ),
],
})
graph_info.append({
"title" : _("Connections"),
"metrics" : [
( "connections_async_writing", "area" ),
( "connections_async_keepalive", "stack" ),
( "connections_async_closing", "stack" ),
( "connections", "line" ),
],
})
graph_info.append({
"title" : _("Apache status"),
"metrics" : [
( "apache_state_startingup", "area" ),
( "apache_state_waiting", "stack" ),
( "apache_state_logging", "stack" ),
( "apache_state_dns", "stack" ),
( "apache_state_sending_reply", "stack" ),
( "apache_state_reading_request", "stack" ),
( "apache_state_closing", "stack" ),
( "apache_state_idle_cleanup", "stack" ),
( "apache_state_finishing", "stack" ),
( "apache_state_keep_alive", "stack" ),
],
})
graph_info.append({
"title" : _("Battery currents"),
"metrics" : [
( "battery_current", "area" ),
( "current", "stack" ),
],
})
graph_info.append({
"metrics" : [
( "battery_capacity", "area" ),
],
"range" : (0,100),
})
graph_info.append({
"title" : _("QoS class traffic"),
"metrics" : [
( "qos_outbound_bytes_rate,8,*@bits/s", "area", _("Qos outbound bits")),
( "qos_dropped_bytes_rate,8,*@bits/s", "-area", _("QoS dropped bits")),
],
})
graph_info.append({
"title" : _("Read and written blocks"),
"metrics" : [
( "read_blocks", "area" ),
( "write_blocks","-area" ),
],
})
graph_info.append({
"title" : _("RMON packets per second"),
"metrics" : [
( "broadcast_packets", "area" ),
( "multicast_packets", "stack" ),
( "rmon_packets_63", "stack" ),
( "rmon_packets_127", "stack" ),
( "rmon_packets_255", "stack" ),
( "rmon_packets_511", "stack" ),
( "rmon_packets_1023", "stack" ),
( "rmon_packets_1518", "stack" ),
],
})
graph_info.append({
"title" : _("Threads"),
"metrics" : [
( "threads", "area" ),
( "threads_daemon", "stack" ),
( "threads_max", "stack" ),
],
})
graph_info.append({
"title" : _("Threadpool"),
"metrics" : [
( "threads_busy", "stack" ),
( "threads_idle", "stack" ),
],
})
graph_info.append({
"title" : _("Disk latency"),
"metrics" : [
( "read_latency", "area" ),
( "write_latency", "-area" )
],
})
graph_info.append({
"title" : _("Read / Write queue length"),
"metrics" : [
( "disk_read_ql", "area" ),
( "disk_write_ql", "-area" )
],
})
graph_info.append({
"title" : _("Backup time"),
"metrics" : [
( "checkpoint_age", "area" ),
( "backup_age", "stack" )
],
})
graph_info.append({
"title" : _("NTP time offset"),
"metrics" : [
( "time_offset", "area" ),
( "jitter", "line" )
],
"scalars" : [
( "time_offset:crit", _("Upper critical level")),
( "time_offset:warn", _("Upper warning level")),
( "0,time_offset:warn,-", _("Lower warning level")),
( "0,time_offset:crit,-", _("Lower critical level")),
],
"range" : ( "0,time_offset:crit,-", "time_offset:crit" ),
})
graph_info.append({
"metrics" : [ ( "total_cache_usage", "area" ) ],
"range" : (0, 100),
})
graph_info.append({
"title" : _("ZFS meta data"),
"metrics" : [
( "zfs_metadata_max", "area" ),
( "zfs_metadata_used", "area" ),
( "zfs_metadata_limit", "line" ),
],
})
graph_info.append({
"title" : _("Cache hit ratio"),
"metrics" : [
( "cache_hit_ratio", "area" ),
( "prefetch_metadata_hit_ratio", "line" ),
( "prefetch_data_hit_ratio", "area" ),
],
})
graph_info.append({
"title" : _("Citrix Serverload"),
"metrics" : [
( "citrix_load", "area" ),
],
"range" : (0, 100),
})
graph_info.append({
"title" : _("Used CPU Time"),
"metrics" : [
( "user_time", "area" ),
( "children_user_time", "stack" ),
( "system_time", "stack" ),
( "children_system_time", "stack" ),
( "user_time,children_user_time,system_time,children_system_time,+,+,+#888888", "line", _("Total") ),
],
"omit_zero_metrics" : True,
})
graph_info.append({
"title" : _("CPU Time"),
"metrics" : [
( "user_time", "area" ),
( "system_time", "stack" ),
( "user_time,system_time,+", "line", _("Total") ),
],
"conflicting_metrics" : [ "children_user_time" ],
})
graph_info.append({
"title" : _("Tapes utilization"),
"metrics" : [
( "tapes_free", "area" ),
( "tapes_total", "line" ),
],
"scalars" : [
"tapes_free:warn",
"tapes_free:crit",
]
})
graph_info.append({
"title" : _("Storage Processor utilization"),
"metrics" : [
( "storage_processor_util", "area" ),
],
"scalars" : [
"storage_processor_util:warn",
"storage_processor_util:crit",
]
})
graph_info.append({
"title" : _("CPU Load - %(load1:max@count) CPU Cores"),
"metrics" : [
( "load1", "area" ),
( "load15", "line" ),
],
"scalars" : [
"load1:warn",
"load1:crit",
],
"optional_metrics" : [ "load15" ],
})
graph_info.append({
"title" : _( "FGPA utilization" ),
"metrics" : [
( "fpga_util", "area" ),
],
"scalars" : [
"fpga_util:warn",
"fpga_util:crit",
],
"range" : (0, 100),
})
graph_info.append({
"metrics" : [
( "util", "area" ),
( "util_average", "line" ),
],
"scalars" : [
"util:warn",
"util:crit",
],
"range" : (0, 100),
"optional_metrics": [ "util_average" ],
"conflicting_metrics" : [ "user" ],
})
graph_info.append({
"title" : _("CPU utilization (%(util:max@count) CPU Threads)"),
"metrics" : [
( "util,user,-#ff6000", "stack", _("Privileged") ),
( "user", "area" ),
( "util#008000", "line", _("Total") ),
],
"scalars" : [
"util:warn",
"util:crit",
],
"range" : (0, 100),
})
graph_info.append({
"title" : _( "CPU utilization" ),
"metrics" : [
( "util1", "area" ),
( "util15", "line" )
],
"scalars" : [
"util1:warn",
"util1:crit",
],
"range" : (0, 100),
})
graph_info.append({
"title" : _( "Per Core utilization" ),
"metrics" : [
( "cpu_core_util_%d" % num, "line" )
for num in range(MAX_CORES)
],
"range" : (0, 100),
"optional_metrics" : [
"cpu_core_util_%d" % num
for num in range(2, MAX_CORES)
]
})
graph_info.append({
"metrics" : [
( "fs_used", "area" ),
( "fs_size,fs_used,-#e3fff9", "stack", _("Free space") ),
( "fs_size", "line" ),
],
"scalars" : [
"fs_used:warn",
"fs_used:crit",
],
"range" : (0, "fs_used:max"),
})
graph_info.append({
"title" : _("Growing"),
"metrics" : [
( "fs_growth.max,0,MAX", "area", _("Growth"), ),
],
})
graph_info.append({
"title" : _("Shrinking"),
"consolidation_function": "min",
"metrics" : [
( "fs_growth.min,0,MIN,-1,*#299dcf", "-area", _("Shrinkage") ),
],
})
graph_info.append({
"metrics" : [
( "fs_trend", "line" ),
],
})
graph_info.append({
"title" : _("CPU utilization"),
"metrics" : [
( "user", "area" ),
( "system", "stack" ),
( "idle", "stack" ),
( "nice", "stack" ),
],
"range" : (0, 100),
})
graph_info.append({
"title" : _("CPU utilization"),
"metrics" : [
( "user", "area" ),
( "system", "stack" ),
( "idle", "stack" ),
( "io_wait", "stack" ),
],
"range" : (0, 100),
})
graph_info.append({
"title" : _("CPU utilization"),
"metrics" : [
( "user", "area" ),
( "system", "stack" ),
( "io_wait", "stack" ),
( "user,system,io_wait,+,+#004080", "line", _("Total") ),
],
"conflicting_metrics" : [
"cpu_util_guest",
"cpu_util_steal",
],
"range" : (0, 100),
})
graph_info.append({
"title" : _("CPU utilization"),
"metrics" : [
( "user", "area" ),
( "system", "stack" ),
( "io_wait", "stack" ),
( "cpu_util_steal", "stack" ),
( "user,system,io_wait,cpu_util_steal,+,+,+#004080", "line", _("Total") ),
],
"conflicting_metrics" : [
"cpu_util_guest",
],
"omit_zero_metrics" : True,
"range" : (0, 100),
})
graph_info.append({
"title" : _("CPU utilization"),
"metrics" : [
( "user", "area" ),
( "system", "stack" ),
( "io_wait", "stack" ),
( "cpu_util_guest", "stack" ),
( "cpu_util_steal", "stack" ),
( "user,system,io_wait,cpu_util_guest,cpu_util_steal,+,+,+,+#004080", "line", _("Total") ),
],
"omit_zero_metrics" : True,
"range" : (0, 100),
})
graph_info.append({
"title" : _("CPU utilization"),
"metrics" : [
( "user", "area" ),
( "system", "stack" ),
( "interrupt", "stack" ),
],
"range" : (0, 100),
})
graph_info.append({
"title" : _("Wasted space of tables and indexes"),
"metrics" : [
( "tablespace_wasted", "area" ),
( "indexspace_wasted", "stack" ),
],
"legend_scale" : MB,
"legend_precision" : 2,
})
graph_info.append({
"title": _("Firewall connections"),
"metrics" : [
( "fw_connections_active", "stack" ),
( "fw_connections_established", "stack" ),
( "fw_connections_halfopened", "stack" ),
( "fw_connections_halfclosed", "stack" ),
( "fw_connections_passthrough", "stack" ),
],
})
graph_info.append({
"title": _("Time to connect"),
"metrics" : [
( "connection_time", "area" ),
],
"legend_scale" : m,
})
graph_info.append({
"title": _("Number of total and running sessions"),
"metrics" : [
( "running_sessions", "line" ),
( "total_sessions", "line" ),
],
"legend_precision" : 0
})
graph_info.append({
"title": _("Number of shared and exclusive locks"),
"metrics" : [
( "shared_locks", "area" ),
( "exclusive_locks", "stack" ),
],
"legend_precision" : 0
})
# diskstat checks
graph_info.append({
"metrics" : [
( "disk_utilization", "area" ),
],
"range" : (0, 100),
})
graph_info.append({
"title" : _("Disk throughput"),
"metrics" : [
( "disk_read_throughput", "area" ),
( "disk_write_throughput", "-area" ),
],
"legend_scale" : MB,
})
graph_info.append({
"title" : _("Disk I/O operations"),
"metrics" : [
( "disk_read_ios", "area" ),
( "disk_write_ios", "-area" ),
],
})
graph_info.append({
"title" : _("Direct and buffered I/O operations"),
"metrics" : [
( "direct_io", "stack" ),
( "buffered_io", "stack" ),
],
})
graph_info.append({
"title" : _("Average request size"),
"metrics" : [
( "disk_average_read_request_size", "area" ),
( "disk_average_write_request_size", "-area" ),
],
"legend_scale" : KB,
})
graph_info.append({
"title" : _("Average end to end wait time"),
"metrics" : [
( "disk_average_read_wait", "area" ),
( "disk_average_write_wait", "-area" ),
],
})
graph_info.append({
"metrics" : [
( "disk_latency", "area" ),
],
})
graph_info.append({
"metrics" : [
( "disk_queue_length", "area" ),
],
})
graph_info.append({
"title" : _( "Spare and broken disks"),
"metrics" : [
( "disks", "area" ),
( "spare_disks", "stack" ),
( "failed_disks", "stack" ),
],
})
graph_info.append({
"title" : _( "Database sizes" ),
"metrics" : [
( "database_size", "area" ),
( "unallocated_size", "stack" ),
( "reserved_size", "stack" ),
( "data_size", "stack" ),
( "indexes_size", "stack" ),
( "unused_size", "stack" ),
],
"optional_metrics" : [
"unallocated_size",
"reserved_size",
"data_size",
"indexes_size",
"unused_size",
],
"legend_scale" : MB,
})
# TODO: Why is "line" used everywhere here? The default is "area".
# Couldn't the hit ratios be stacked nicely? Isn't
# "total" just the sum of the others?
graph_info.append({
"title" : _("Bufferpool Hitratios"),
"metrics" : [
( "total_hitratio", "line" ),
( "data_hitratio", "line" ),
( "index_hitratio", "line" ),
( "xda_hitratio", "line" ),
],
})
graph_info.append({
"metrics" : [
( "deadlocks", "area" ),
( "lockwaits", "stack" ),
],
})
graph_info.append({
"metrics" : [
( "sort_overflow", "area" ),
],
})
graph_info.append({
"title" : _( "Tablespace sizes" ),
"metrics" : [
( "tablespace_size", "area" ),
( "tablespace_used", "area" ),
],
"scalars" : [
"tablespace_size:warn",
"tablespace_size:crit",
],
"range" : (0, "tablespace_max_size"),
})
# Printer
graph_info.append({
"metrics" : [
( "printer_queue", "area" )
],
"range" : (0, 10),
})
graph_info.append({
"metrics" : [
( "supply_toner_cyan", "area" )
],
"range" : (0, 100),
})
graph_info.append({
"metrics" : [
( "supply_toner_magenta", "area" )
],
"range" : (0, 100),
})
graph_info.append({
"metrics" : [
( "supply_toner_yellow", "area" )
],
"range" : (0, 100),
})
graph_info.append({
"metrics" : [
( "supply_toner_black", "area" )
],
"range" : (0, 100),
})
graph_info.append({
"metrics" : [
( "supply_toner_other", "area" )
],
"range" : (0, 100),
})
graph_info.append({
"title" : _( "Printed pages" ),
"metrics" : [
( "pages_color_a4", "stack" ),
( "pages_color_a3", "stack" ),
( "pages_bw_a4", "stack" ),
( "pages_bw_a3", "stack" ),
( "pages_color", "stack" ),
( "pages_bw", "stack" ),
( "pages_total", "line" ),
],
"optional_metrics" : [
"pages_color_a4",
"pages_color_a3",
"pages_bw_a4",
"pages_bw_a3",
"pages_color",
"pages_bw",
],
"range" : (0, "pages_total:max"),
})
# Networking
graph_info.append({
"title" : _("Bandwidth"),
"metrics" : [
( "if_in_octets,8,*@bits/s", "area", _("Input bandwidth") ),
( "if_out_octets,8,*@bits/s", "-area", _("Output bandwidth") ),
],
})
# Same but for checks that have been translated in to bits/s
graph_info.append({
"title" : _("Bandwidth"),
"metrics" : [
( "if_in_bps", "area", ),
( "if_out_bps", "-area", ),
],
})
graph_info.append({
"title" : _("Packets"),
"metrics" : [
( "if_in_pkts", "area" ),
( "if_out_non_unicast", "-area" ),
( "if_out_unicast", "-stack" ),
],
})
graph_info.append({
"title" : _("Traffic"),
"metrics" : [
( "if_in_octets", "area" ),
( "if_out_non_unicast_octets", "-area" ),
( "if_out_unicast_octets", "-stack" ),
],
})
graph_info.append({
"title" : _("WLAN errors, reset operations and transmission retries"),
"metrics" : [
( "wlan_physical_errors", "area" ),
( "wlan_resets", "stack" ),
( "wlan_retries", "stack" ),
],
})
# TODO: show this graph instead of Bandwidth if this is configured
# in the check's parameters. But is this really a good solution?
# We could use a condition on if_in_octets:min. But if this value
# is missing then evaluating the condition will fail. Solution
# could be using 0 for bits and 1 for octets and making sure that
# this value is not used anywhere.
# graph_info.append({
# "title" : _("Octets"),
# "metrics" : [
# ( "if_in_octets", "area" ),
# ( "if_out_octets", "-area" ),
# ],
# })
graph_info.append({
"title" : _("Packets"),
"metrics" : [
( "if_in_unicast", "area" ),
( "if_in_non_unicast", "stack" ),
( "if_out_unicast", "-area" ),
( "if_out_non_unicast", "-stack" ),
],
})
graph_info.append({
"title" : _("Errors"),
"metrics" : [
( "if_in_errors", "area" ),
( "if_in_discards", "stack" ),
( "if_out_errors", "-area" ),
( "if_out_discards", "-stack" ),
],
})
graph_info.append({
"title" : _("RAM + Swap used"),
"metrics" : [
("mem_used", "area"),
("swap_used", "stack"),
],
"conflicting_metrics" : [ "swap_total" ],
"scalars" : [
( "swap_used:max,mem_used:max,+#008080", _("Total RAM + SWAP installed") ),
( "mem_used:max#80ffff", _("Total RAM installed") ),
],
"range" : (0, "swap_used:max,mem_used:max,+"),
})
graph_info.append({
"metrics" : [
("mem_used_percent", "area"),
],
"scalars" : [
"mem_used_percent:warn",
"mem_used_percent:crit",
],
"range" : (0, 100),
})
# Linux memory graphs. They are a lot...
graph_info.append({
"title" : _("RAM + Swap overview"),
"metrics" : [
("mem_total", "area"),
("swap_total", "stack"),
("mem_used", "area"),
("swap_used", "stack"),
],
})
graph_info.append({
"title" : _("Swap"),
"metrics" : [
("swap_total", "area"),
("swap_used", "area"),
("swap_cached", "stack"),
],
})
graph_info.append({
"title" : _("Caches"),
"metrics" : [
("mem_lnx_slab", "stack"),
("swap_cached", "stack"),
("mem_lnx_buffers", "stack"),
("mem_lnx_cached", "stack"),
],
})
graph_info.append({
"title" : _("Active and Inactive Memory"),
"metrics" : [
("mem_lnx_inactive_anon", "stack"),
("mem_lnx_inactive_file", "stack"),
("mem_lnx_active_anon", "stack"),
("mem_lnx_active_file", "stack"),
],
})
# TODO: Show this graph only, if the previous graph
# is not possible. This cannot be done with a condition,
# since we currently cannot state a condition on non-existing
# metrics.
graph_info.append({
"title" : _("Active and Inactive Memory"),
"metrics" : [
("mem_lnx_active", "area"),
("mem_lnx_inactive", "area"),
],
"conflicting_metrics" : [ "mem_lnx_active_anon" ],
})
graph_info.append({
"title" : _("RAM used"),
"metrics" : [
("mem_used", "area"),
],
"scalars" : [
("mem_used:max#000000", "Maximum"),
("mem_used:warn", "Warning"),
("mem_used:crit", "Critical"),
],
"range" : (0, "mem_used:max"),
})
graph_info.append({
"title" : _("Commit Charge"),
"metrics" : [
("pagefile_used", "area"),
],
"scalars" : [
("pagefile_used:max#000000", "Maximum"),
("pagefile_used:warn", "Warning"),
("pagefile_used:crit", "Critical"),
],
"range" : (0, "pagefile_used:max"),
})
graph_info.append({
"title" : _("Filesystem Writeback"),
"metrics" : [
("mem_lnx_dirty", "area"),
("mem_lnx_writeback", "stack"),
("mem_lnx_nfs_unstable", "stack"),
("mem_lnx_bounce", "stack"),
("mem_lnx_writeback_tmp", "stack"),
],
})
graph_info.append({
"title" : _("Memory committing"),
"metrics" : [
("mem_lnx_total_total", "area"),
("mem_lnx_committed_as", "area"),
("mem_lnx_commit_limit", "stack"),
],
})
graph_info.append({
"title" : _("Memory that cannot be swapped out"),
"metrics" : [
("mem_lnx_kernel_stack", "area"),
("mem_lnx_page_tables", "stack"),
("mem_lnx_mlocked", "stack"),
],
})
graph_info.append({
"title" : _("Huge Pages"),
"metrics" : [
("mem_lnx_huge_pages_total", "area"),
("mem_lnx_huge_pages_free", "area"),
("mem_lnx_huge_pages_rsvd", "area"),
("mem_lnx_huge_pages_surp", "line"),
],
})
graph_info.append({
"title" : _("VMalloc Address Space"),
"metrics" : [
("mem_lnx_vmalloc_total", "area"),
("mem_lnx_vmalloc_used", "area"),
("mem_lnx_vmalloc_chunk", "stack"),
],
})
# TODO: Why without a total? This should actually never
# happen.
graph_info.append({
"title" : _("VMalloc Address Space"),
"metrics" : [
("mem_lnx_vmalloc_used", "area"),
("mem_lnx_vmalloc_chunk", "stack"),
],
})
graph_info.append({
"title" : _("Heap and non-heap memory"),
"metrics" : [
( "mem_heap", "area" ),
( "mem_nonheap", "stack" ),
],
"conflicting_metrics" : [
"mem_heap_committed",
"mem_nonheap_committed",
],
})
graph_info.append({
"title" : _("Heap memory usage"),
"metrics" : [
( "mem_heap_committed", "area" ),
( "mem_heap", "area" ),
],
"scalars" : [
"mem_heap:warn",
"mem_heap:crit",
]
})
graph_info.append({
"title" : _("Non-heap memory usage"),
"metrics" : [
( "mem_nonheap_committed", "area" ),
( "mem_nonheap", "area" ),
],
"scalars" : [
"mem_nonheap:warn",
"mem_nonheap:crit",
"mem_nonheap:max",
]
})
graph_info.append({
"title" : _("Private and shared memory"),
"metrics" : [
("mem_esx_shared", "area"),
("mem_esx_private", "area"),
],
})
graph_info.append({
"title" : _("TCP Connection States"),
"metrics" : [
( "tcp_listen", "stack"),
( "tcp_syn_sent", "stack"),
( "tcp_syn_recv", "stack"),
( "tcp_established", "stack"),
( "tcp_time_wait", "stack"),
( "tcp_last_ack", "stack"),
( "tcp_close_wait", "stack"),
( "tcp_closed", "stack"),
( "tcp_closing", "stack"),
( "tcp_fin_wait1", "stack"),
( "tcp_fin_wait2", "stack"),
( "tcp_bound", "stack"),
( "tcp_idle", "stack"),
],
"omit_zero_metrics" : True,
})
graph_info.append({
"title" : _("Hosts"),
"metrics" : [
( "hosts_active", "stack"),
( "hosts_inactive", "stack"),
( "hosts_degraded", "stack"),
( "hosts_offline", "stack"),
( "hosts_other", "stack"),
],
})
graph_info.append({
"title" : _("Hosts"),
"metrics" : [
( "hosts_inactive", "stack"),
( "hosts_degraded", "stack"),
( "hosts_offline", "stack"),
( "hosts_other", "stack"),
],
})
graph_info.append({
"title" : _("Host and Service Checks"),
"metrics" : [
( "host_check_rate", "stack" ),
( "service_check_rate", "stack" ),
],
})
graph_info.append({
"title" : _("Number of Monitored Hosts and Services"),
"metrics" : [
( "monitored_hosts", "stack" ),
( "monitored_services", "stack" ),
],
})
graph_info.append({
"title" : _("Livestatus Connects and Requests"),
"metrics" : [
( "livestatus_request_rate", "area" ),
( "livestatus_connect_rate", "area" ),
],
})
graph_info.append({
"title" : _("Event Console performance"),
"metrics" : [
( "average_message_rate", "area" ),
( "average_rule_trie_rate", "area" ),
( "average_rule_hit_rate", "area" ),
( "average_drop_rate", "area" ),
( "average_event_rate", "area" ),
( "average_connect_rate", "area" ),
],
})
graph_info.append({
"title" : _("Event Console times"),
"metrics" : [
( "average_request_time", "stack" ),
( "average_processing_time", "stack" ),
],
})
graph_info.append({
"title" : _("Livestatus Requests per Connection"),
"metrics" : [
( "livestatus_request_rate,livestatus_connect_rate,/#88aa33", "area",
_("Average requests per connection")),
],
})
graph_info.append({
"title" : _("Check helper usage"),
"metrics" : [
( "helper_usage_cmk", "area" ),
( "helper_usage_generic", "area" ),
],
})
graph_info.append({
"title" : _("Average check latency"),
"metrics" : [
( "average_latency_cmk", "area" ),
( "average_latency_generic", "area" ),
],
})
graph_info.append({
"title" : _("Pending updates"),
"metrics" : [
( "normal_updates", "stack" ),
( "security_updates", "stack" ),
],
})
graph_info.append({
"title" : _("DHCP Leases"),
"metrics" : [
( "used_dhcp_leases", "area" ),
( "free_dhcp_leases", "stack" ),
( "pending_dhcp_leases", "stack" ),
],
"scalars" : [
"free_dhcp_leases:warn",
"free_dhcp_leases:crit",
],
"range" : (0, "free_dhcp_leases:max"),
"omit_zero_metrics" : True,
"optional_metrics" : [
"pending_dhcp_leases"
]
})
#graph_info.append({
# "title" : _("Used DHCP Leases"),
# "metrics" : [
# ( "used_dhcp_leases", "area" ),
# ],
# "range" : (0, "used_dhcp_leases:max"),
# "scalars" : [
# "used_dhcp_leases:warn",
# "used_dhcp_leases:crit",
# ("used_dhcp_leases:max#000000", _("Total number of leases")),
# ]
#})
graph_info.append({
"title" : _("Handled Requests"),
"metrics" : [
("requests_cmk_views", "stack"),
("requests_cmk_wato", "stack"),
("requests_cmk_bi", "stack"),
("requests_cmk_snapins", "stack"),
("requests_cmk_dashboards", "stack"),
("requests_cmk_other", "stack"),
("requests_nagvis_snapin", "stack"),
("requests_nagvis_ajax", "stack"),
("requests_nagvis_other", "stack"),
("requests_images", "stack"),
("requests_styles", "stack"),
("requests_scripts", "stack"),
("requests_other", "stack"),
],
"omit_zero_metrics" : True,
})
graph_info.append({
"title" : _("Time spent for various page types"),
"metrics" : [
("secs_cmk_views", "stack"),
("secs_cmk_wato", "stack"),
("secs_cmk_bi", "stack"),
("secs_cmk_snapins", "stack"),
("secs_cmk_dashboards", "stack"),
("secs_cmk_other", "stack"),
("secs_nagvis_snapin", "stack"),
("secs_nagvis_ajax", "stack"),
("secs_nagvis_other", "stack"),
("secs_images", "stack"),
("secs_styles", "stack"),
("secs_scripts", "stack"),
("secs_other", "stack"),
],
"omit_zero_metrics" : True,
})
graph_info.append({
"title" : _("Bytes sent"),
"metrics" : [
("bytes_cmk_views", "stack"),
("bytes_cmk_wato", "stack"),
("bytes_cmk_bi", "stack"),
("bytes_cmk_snapins", "stack"),
("bytes_cmk_dashboards", "stack"),
("bytes_cmk_other", "stack"),
("bytes_nagvis_snapin", "stack"),
("bytes_nagvis_ajax", "stack"),
("bytes_nagvis_other", "stack"),
("bytes_images", "stack"),
("bytes_styles", "stack"),
("bytes_scripts", "stack"),
("bytes_other", "stack"),
],
"omit_zero_metrics" : True,
})
graph_info.append({
"title" : _("Amount of mails in queues"),
"metrics" : [
( "mail_queue_deferred_length", "stack" ),
( "mail_queue_active_length", "stack" ),
],
})
graph_info.append({
"title" : _("Size of mails in queues"),
"metrics" : [
( "mail_queue_deferred_size", "stack" ),
( "mail_queue_active_size", "stack" ),
],
})
graph_info.append({
"title" : _("Inbound and Outbound Messages"),
"metrics" : [
( "messages_outbound", "stack" ),
( "messages_inbound", "stack" ),
],
})
graph_info.append({
"title" : _("Modems"),
"metrics" : [
( "active_modems", "area" ),
( "registered_modems", "line" ),
( "total_modems", "line" ),
],
})
graph_info.append({
"title" : _("Net data traffic"),
"metrics" : [
( "net_data_recv", "stack" ),
( "net_data_sent", "stack" ),
],
})
graph_info.append({
"title" : _("Number of processes"),
"metrics" : [
( "processes", "area" ),
]
})
graph_info.append({
"title" : _("Size of processes"),
"metrics" : [
( "process_resident_size", "area" ),
( "process_virtual_size", "stack" ),
( "process_resident_size", "area" ),
( "process_mapped_size", "stack" ),
],
"optional_metrics": [ "process_mapped_size" ]
})
graph_info.append({
"title" : _("Size per process"),
"metrics" : [
( "process_resident_size,processes,/", "area", _("Average resident size per process") ),
( "process_virtual_size,processes,/", "stack", _("Average virtual size per process") ),
]
})
graph_info.append({
"title" : _("Throughput"),
"metrics" : [
("fc_tx_bytes", "-area"),
("fc_rx_bytes", "area"),
],
})
graph_info.append({
"title" : _("Frames"),
"metrics" : [
("fc_tx_frames", "-area"),
("fc_rx_frames", "area"),
],
})
graph_info.append({
"title" : _("Errors"),
"metrics" : [
( "fc_crc_errors", "area" ),
( "fc_c3discards", "stack" ),
( "fc_notxcredits", "stack" ),
( "fc_encouts", "stack" ),
( "fc_encins", "stack" ),
( "fc_bbcredit_zero", "stack" ),
],
"optional_metrics" : [
"fc_encins",
"fc_bbcredit_zero",
],
})
graph_info.append({
"title" : _("Errors"),
"metrics" : [
( "fc_link_fails", "stack" ),
( "fc_sync_losses", "stack" ),
( "fc_prim_seq_errors", "stack" ),
( "fc_invalid_tx_words", "stack" ),
( "fc_invalid_crcs", "stack" ),
( "fc_address_id_errors", "stack" ),
( "fc_link_resets_in", "stack" ),
( "fc_link_resets_out", "stack" ),
( "fc_offline_seqs_in", "stack" ),
( "fc_offline_seqs_out", "stack" ),
( "fc_c2c3_discards", "stack" ),
( "fc_c2_fbsy_frames", "stack" ),
( "fc_c2_frjt_frames", "stack" ),
]
})
for what, text in [ ("nfs", "NFS"),
("cifs", "CIFS"),
("san", "SAN"),
("fcp", "FCP"),
("iscsi", "iSCSI"),
("nfsv4", "NFSv4"),
("nfsv4_1", "NFSv4.1"),
]:
    graph_info.append({
        "title" : _("%s traffic") % text,
        "metrics" : [
            ("%s_read_data" % what, "-area"),
            ("%s_write_data" % what, "area"),
        ],
    })
    graph_info.append({
        "title" : _("%s latency") % text,
        "metrics" : [
            ("%s_read_latency" % what, "-area"),
            ("%s_write_latency" % what, "area"),
        ],
    })
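# The loop above expands to one traffic graph and one latency graph per protocol,
# e.g. the "NFS traffic" graph plots the metrics "nfs_read_data" and "nfs_write_data".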
graph_info.append({
"title" : _("Harddrive health statistic"),
"metrics" : [
("harddrive_power_cycle", "stack"),
("harddrive_reallocated_sectors", "stack"),
("harddrive_reallocated_events", "stack"),
("harddrive_spin_retries", "stack"),
("harddrive_pending_sectors", "stack"),
("harddrive_cmd_timeouts", "stack"),
("harddrive_end_to_end_errors", "stack"),
("harddrive_uncorrectable_errors", "stack"),
("harddrive_udma_crc_errors", "stack"),
],
})
graph_info.append({
"title" : _("Access point statistics"),
"metrics" : [
( "ap_devices_total", "area"),
( "ap_devices_drifted", "area"),
( "ap_devices_not_responding", "stack"),
]
})
graph_info.append({
"title" : _("Round trip average"),
"metrics" : [
( "rtmax", "area" ),
( "rtmin", "area" ),
( "rta", "line" ),
],
})
for idx in range(1, MAX_NUMBER_HOPS):
    graph_info.append({
        "title" : _("Hop %d Round trip average") % idx,
        "metrics" : [
            ( "hop_%d_rtmax" % idx, "area" ),
            ( "hop_%d_rtmin" % idx, "area" ),
            ( "hop_%d_rta" % idx, "line" ),
            ( "hop_%d_rtstddev" % idx, "line" ),
            ( "hop_%d_response_time" % idx, "line" ),
        ],
    })
    graph_info.append({
        "title" : _("Hop %d Packet loss") % idx,
        "metrics" : [
            ( "hop_%d_pl" % idx, "area" ),
        ],
    })
def create_hop_response_graph():
    max_hops = MAX_NUMBER_HOPS
    new_graph = {
        "title" : _("Hop response times"),
        "metrics": [],
        "optional_metrics": [],
    }
    for idx in range(1, max_hops):
        color = indexed_color(idx, max_hops)
        new_graph["metrics"].append( ("hop_%d_response_time%s" % (idx, parse_color_into_hexrgb(color)), "line") )
        if idx > 0:
            new_graph["optional_metrics"].append( ("hop_%d_response_time" % (idx + 1)) )
    graph_info.append(new_graph)
create_hop_response_graph()
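# create_hop_response_graph() builds one combined graph with a line per hop
# ("hop_1_response_time", "hop_2_response_time", ...), each with its own indexed
# color; every hop after the first is declared optional so the graph also renders
# for services that report fewer hops.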
graph_info.append({
"metrics" : [
( "mem_perm_used", "area" )
],
"scalars" : [
"mem_perm_used:warn",
"mem_perm_used:crit",
("mem_perm_used:max#000000", _("Max Perm used")),
],
"range" : (0, "mem_perm_used:max")
})
graph_info.append({
"title" : _("Palo Alto Sessions"),
"metrics" : [ ("tcp_active_sessions", "area"),
("udp_active_sessions", "stack"),
("icmp_active_sessions", "stack"),
("sslproxy_active_sessions", "stack"),
],
})
graph_info.append({
"title" : _("Varnish Backend Connections"),
"metrics" : [
( "varnish_backend_busy_rate", "line" ),
( "varnish_backend_unhealthy_rate", "line" ),
( "varnish_backend_req_rate", "line" ),
( "varnish_backend_recycle_rate", "line" ),
( "varnish_backend_retry_rate", "line" ),
( "varnish_backend_fail_rate", "line" ),
( "varnish_backend_toolate_rate", "line" ),
( "varnish_backend_conn_rate", "line" ),
( "varnish_backend_reuse_rate", "line" ),
],
})
graph_info.append({
"title" : _("Varnish Cache"),
"metrics" : [
( "varnish_cache_miss_rate", "line" ),
( "varnish_cache_hit_rate", "line" ),
( "varnish_cache_hitpass_rate", "line" ),
],
})
graph_info.append({
"title" : _("Varnish Clients"),
"metrics" : [
( "varnish_client_req_rate", "line" ),
( "varnish_client_conn_rate", "line" ),
( "varnish_client_drop_rate", "line" ),
( "varnish_client_drop_late_rate", "line" ),
],
})
graph_info.append({
"title" : _("Varnish ESI Errors and Warnings"),
"metrics" : [
( "varnish_esi_errors_rate", "line" ),
( "varnish_esi_warnings_rate", "line" ),
],
})
graph_info.append({
"title" : _("Varnish Fetch"),
"metrics" : [
( "varnish_fetch_oldhttp_rate", "line" ),
( "varnish_fetch_head_rate", "line" ),
( "varnish_fetch_eof_rate", "line" ),
( "varnish_fetch_zero_rate", "line" ),
( "varnish_fetch_304_rate", "line" ),
( "varnish_fetch_length_rate", "line" ),
( "varnish_fetch_failed_rate", "line" ),
( "varnish_fetch_bad_rate", "line" ),
( "varnish_fetch_close_rate", "line" ),
( "varnish_fetch_1xx_rate", "line" ),
( "varnish_fetch_chunked_rate", "line" ),
( "varnish_fetch_204_rate", "line" ),
],
})
graph_info.append({
"title" : _("Varnish Objects"),
"metrics" : [
( "varnish_objects_expired_rate", "line"),
( "varnish_objects_lru_nuked_rate", "line"),
( "varnish_objects_lru_moved_rate", "line"),
],
})
graph_info.append({
"title" : _("Varnish Worker"),
"metrics" : [
( "varnish_worker_lqueue_rate", "line" ),
( "varnish_worker_create_rate", "line" ),
( "varnish_worker_drop_rate", "line" ),
( "varnish_worker_rate", "line" ),
( "varnish_worker_failed_rate", "line" ),
( "varnish_worker_queued_rate", "line" ),
( "varnish_worker_max_rate", "line" ),
],
})
graph_info.append({
"title" : _("Optical Signal Power"),
"metrics" : [
( "rx_light", "line" ),
( "tx_light", "line" )
]
})
for i in range(10):
graph_info.append({
"title" : _("Optical Signal Power Lane %d") % i,
"metrics" : [
( "rx_light_%d" % i, "line" ),
( "tx_light_%d" % i, "line" )
]
})
graph_info.append({
"title" : _("Page Activity"),
"metrics" : [
("page_reads_sec", "area" ),
("page_writes_sec", "-area"),
]
})
graph_info.append({
"title" : _("Datafile Sizes"),
"metrics" : [
("allocated_size", "line" ),
("data_size", "area" )
]
})
graph_info.append({
"title" : _("Authentication Failures"),
"metrics" : [
("udp_failed_auth", "line"),
("tcp_failed_auth", "line")
]
})
graph_info.append({
"title" : _("Allocate Requests Exceeding Port Limit"),
"metrics" : [
("udp_allocate_requests_exceeding_port_limit", "line"),
("tcp_allocate_requests_exceeding_port_limit", "line")
]
})
graph_info.append({
"title" : _("Packets Dropped"),
"metrics" : [
("udp_packets_dropped", "line"),
("tcp_packets_dropped", "line"),
]
})
graph_info.append({
"title" : _("Active Sessions"),
"metrics" : [("active_sessions_%s" % device, idx == 0 and "area" or "stack")
for idx, (device, name, color) in enumerate(skype_mobile_devices[::-1])]
})
graph_info.append({
"title" : _("Streams"),
"metrics" : [
("failed_inbound_streams", "area"),
("failed_outbound_streams", "-area")
]
})
graph_info.append({
"title" : _("ORACLE physical IO"),
"metrics" : [
("oracle_physical_reads", "area"),
("oracle_physical_writes", "-area"),
]
})
graph_info.append({
"title" : _("ORACLE DB time statistics"),
"metrics" : [
("oracle_db_cpu", "line"),
("oracle_db_time", "line"),
]
})
graph_info.append({
"title" : _("ORACLE buffer pool statistics"),
"metrics" : [
("oracle_db_block_gets", "line"),
("oracle_db_block_change", "line"),
("oracle_consistent_gets", "line"),
("oracle_free_buffer_wait", "line"),
("oracle_buffer_busy_wait", "line"),
],
})
graph_info.append({
"title" : _("ORACLE library cache statistics"),
"metrics" : [
("oracle_pins_sum", "line"),
("oracle_pin_hits_sum", "line"),
],
})
graph_info.append({
"title" : _("DHCP statistics (received messages)"),
"metrics" : [
( "dhcp_discovery", "area" ),
( "dhcp_requests", "stack" ),
( "dhcp_releases", "stack" ),
( "dhcp_declines", "stack" ),
( "dhcp_informs", "stack" ),
( "dhcp_others", "stack" ),
]
})
graph_info.append({
"title" : _("DHCP statistics (sent messages)"),
"metrics" : [
( "dhcp_offers", "area" ),
( "dhcp_acks", "stack" ),
( "dhcp_nacks", "stack" ),
]
})
graph_info.append({
"title" : _("DNS statistics"),
"metrics" : [
( "dns_successes", "area" ),
( "dns_referrals", "stack" ),
( "dns_recursion", "stack" ),
( "dns_failures", "stack" ),
( "dns_nxrrset", "stack" ),
( "dns_nxdomain", "stack" ),
]
})
graph_info.append({
"title" : _("Connection durations"),
"metrics" : [
( "connections_duration_min", "line" ),
( "connections_duration_max", "line" ),
( "connections_duration_mean", "line" ),
]
})
graph_info.append({
"title" : _("HTTP Timings"),
"metrics" : [
( "time_connect", "area", _("Connect") ),
( "time_ssl", "stack", _("Negotiate SSL") ),
( "time_headers", "stack", _("Send request") ),
( "time_transfer", "stack", _("Receive full response") ),
( "time_firstbyte", "line", _("Receive start of response") ),
( "response_time", "line", _("Roundtrip") ),
],
"optional_metrics" : [ "time_ssl" ],
})
graph_info.append({
"title" : _("Web gateway statistics"),
"metrics" : [
( "infections_rate", "stack" ),
( "connections_blocked_rate", "stack" ),
],
})
graph_info.append({
"title" : _("Web gateway miscellaneous statistics"),
"metrics" : [
( "open_network_sockets", "stack" ),
( "connections", "stack" ),
],
})
| gpl-2.0 |
adrianholovaty/django | tests/regressiontests/utils/regex_helper.py | 33 | 1767 | from django.utils import regex_helper
from django.utils import unittest
class NormalizeTests(unittest.TestCase):
def test_empty(self):
pattern = r""
expected = [(u'', [])]
result = regex_helper.normalize(pattern)
self.assertEqual(result, expected)
def test_escape(self):
pattern = r"\\\^\$\.\|\?\*\+\(\)\["
expected = [(u'\\^$.|?*+()[', [])]
result = regex_helper.normalize(pattern)
self.assertEqual(result, expected)
def test_group_positional(self):
pattern = r"(.*)-(.+)"
expected = [(u'%(_0)s-%(_1)s', ['_0', '_1'])]
result = regex_helper.normalize(pattern)
self.assertEqual(result, expected)
def test_group_ignored(self):
pattern = r"(?i)(?L)(?m)(?s)(?u)(?#)"
expected = [(u'', [])]
result = regex_helper.normalize(pattern)
self.assertEqual(result, expected)
def test_group_noncapturing(self):
pattern = r"(?:non-capturing)"
expected = [(u'non-capturing', [])]
result = regex_helper.normalize(pattern)
self.assertEqual(result, expected)
def test_group_named(self):
pattern = r"(?P<first_group_name>.*)-(?P<second_group_name>.*)"
expected = [(u'%(first_group_name)s-%(second_group_name)s',
['first_group_name', 'second_group_name'])]
result = regex_helper.normalize(pattern)
self.assertEqual(result, expected)
def test_group_backreference(self):
pattern = r"(?P<first_group_name>.*)-(?P=first_group_name)"
expected = [(u'%(first_group_name)s-%(first_group_name)s',
['first_group_name'])]
result = regex_helper.normalize(pattern)
self.assertEqual(result, expected)
| bsd-3-clause |
susansls/zulip | zerver/management/commands/bulk_change_user_name.py | 42 | 1299 | from __future__ import absolute_import
from __future__ import print_function
from typing import Any
from argparse import ArgumentParser
from django.core.management.base import BaseCommand
from zerver.lib.actions import do_change_full_name
from zerver.models import UserProfile, get_user_profile_by_email
class Command(BaseCommand):
help = """Change the names for many users."""
def add_arguments(self, parser):
# type: (ArgumentParser) -> None
parser.add_argument('data_file', metavar='<data file>', type=str,
help="file containing rows of the form <email>,<desired name>")
def handle(self, *args, **options):
# type: (*Any, **str) -> None
data_file = options['data_file']
with open(data_file, "r") as f:
for line in f:
email, new_name = line.strip().split(",", 1)
try:
user_profile = get_user_profile_by_email(email)
old_name = user_profile.full_name
print("%s: %s -> %s" % (email, old_name, new_name))
do_change_full_name(user_profile, new_name)
except UserProfile.DoesNotExist:
print("* E-mail %s doesn't exist in the system, skipping." % (email,))
| apache-2.0 |
abhishekjairath/codeyard | commit/lib/python2.7/site-packages/setuptools/command/sdist.py | 149 | 8498 | import os
import re
import sys
from glob import glob
import pkg_resources
from distutils.command.sdist import sdist as _sdist
from distutils.util import convert_path
from distutils import log
from setuptools import svn_utils
READMES = ('README', 'README.rst', 'README.txt')
def walk_revctrl(dirname=''):
"""Find all files under revision control"""
for ep in pkg_resources.iter_entry_points('setuptools.file_finders'):
for item in ep.load()(dirname):
yield item
#TODO will need test case
class re_finder(object):
"""
Finder that locates files based on entries in a file matched by a
regular expression.
"""
def __init__(self, path, pattern, postproc=lambda x: x):
self.pattern = pattern
self.postproc = postproc
self.entries_path = convert_path(path)
def _finder(self, dirname, filename):
f = open(filename,'rU')
try:
data = f.read()
finally:
f.close()
for match in self.pattern.finditer(data):
path = match.group(1)
# postproc was formerly used when the svn finder
# was an re_finder for calling unescape
path = self.postproc(path)
yield svn_utils.joinpath(dirname, path)
def find(self, dirname=''):
path = svn_utils.joinpath(dirname, self.entries_path)
if not os.path.isfile(path):
# entries file doesn't exist
return
for path in self._finder(dirname,path):
if os.path.isfile(path):
yield path
elif os.path.isdir(path):
for item in self.find(path):
yield item
__call__ = find
def _default_revctrl(dirname=''):
'Primary svn_cvs entry point'
for finder in finders:
for item in finder(dirname):
yield item
finders = [
re_finder('CVS/Entries', re.compile(r"^\w?/([^/]+)/", re.M)),
svn_utils.svn_finder,
]
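# Hedged illustration (not part of setuptools): what the CVS ``re_finder``
# configured above extracts from a CVS/Entries file. The sample Entries text is
# made up for this sketch; real files are produced by CVS itself.
def _cvs_entries_sketch():
    sample = "/setup.py/1.1/Mon Jan 1 00:00:00 2024//\nD/docs////\n"
    pattern = re.compile(r"^\w?/([^/]+)/", re.M)
    return pattern.findall(sample)  # -> ['setup.py', 'docs']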
class sdist(_sdist):
"""Smart sdist that finds anything supported by revision control"""
user_options = [
('formats=', None,
"formats for source distribution (comma-separated list)"),
('keep-temp', 'k',
"keep the distribution tree around after creating " +
"archive file(s)"),
('dist-dir=', 'd',
"directory to put the source distribution archive(s) in "
"[default: dist]"),
]
negative_opt = {}
def run(self):
self.run_command('egg_info')
ei_cmd = self.get_finalized_command('egg_info')
self.filelist = ei_cmd.filelist
self.filelist.append(os.path.join(ei_cmd.egg_info,'SOURCES.txt'))
self.check_readme()
# Run sub commands
for cmd_name in self.get_sub_commands():
self.run_command(cmd_name)
# Call check_metadata only if no 'check' command
# (distutils <= 2.6)
import distutils.command
if 'check' not in distutils.command.__all__:
self.check_metadata()
self.make_distribution()
dist_files = getattr(self.distribution,'dist_files',[])
for file in self.archive_files:
data = ('sdist', '', file)
if data not in dist_files:
dist_files.append(data)
def __read_template_hack(self):
# This grody hack closes the template file (MANIFEST.in) if an
# exception occurs during read_template.
# Doing so prevents an error when easy_install attempts to delete the
# file.
try:
_sdist.read_template(self)
except:
sys.exc_info()[2].tb_next.tb_frame.f_locals['template'].close()
raise
# Beginning with Python 2.7.2, 3.1.4, and 3.2.1, this leaky file handle
# has been fixed, so only override the method if we're using an earlier
# Python.
has_leaky_handle = (
sys.version_info < (2,7,2)
or (3,0) <= sys.version_info < (3,1,4)
or (3,2) <= sys.version_info < (3,2,1)
)
if has_leaky_handle:
read_template = __read_template_hack
def add_defaults(self):
standards = [READMES,
self.distribution.script_name]
for fn in standards:
if isinstance(fn, tuple):
alts = fn
got_it = 0
for fn in alts:
if os.path.exists(fn):
got_it = 1
self.filelist.append(fn)
break
if not got_it:
self.warn("standard file not found: should have one of " +
', '.join(alts))
else:
if os.path.exists(fn):
self.filelist.append(fn)
else:
self.warn("standard file '%s' not found" % fn)
optional = ['test/test*.py', 'setup.cfg']
for pattern in optional:
files = list(filter(os.path.isfile, glob(pattern)))
if files:
self.filelist.extend(files)
# getting python files
if self.distribution.has_pure_modules():
build_py = self.get_finalized_command('build_py')
self.filelist.extend(build_py.get_source_files())
# This functionality is incompatible with include_package_data, and
# will in fact create an infinite recursion if include_package_data
# is True. Use of include_package_data will imply that
# distutils-style automatic handling of package_data is disabled
if not self.distribution.include_package_data:
for _, src_dir, _, filenames in build_py.data_files:
self.filelist.extend([os.path.join(src_dir, filename)
for filename in filenames])
if self.distribution.has_ext_modules():
build_ext = self.get_finalized_command('build_ext')
self.filelist.extend(build_ext.get_source_files())
if self.distribution.has_c_libraries():
build_clib = self.get_finalized_command('build_clib')
self.filelist.extend(build_clib.get_source_files())
if self.distribution.has_scripts():
build_scripts = self.get_finalized_command('build_scripts')
self.filelist.extend(build_scripts.get_source_files())
def check_readme(self):
for f in READMES:
if os.path.exists(f):
return
else:
self.warn(
"standard file not found: should have one of " +', '.join(READMES)
)
def make_release_tree(self, base_dir, files):
_sdist.make_release_tree(self, base_dir, files)
# Save any egg_info command line options used to create this sdist
dest = os.path.join(base_dir, 'setup.cfg')
if hasattr(os,'link') and os.path.exists(dest):
# unlink and re-copy, since it might be hard-linked, and
# we don't want to change the source version
os.unlink(dest)
self.copy_file('setup.cfg', dest)
self.get_finalized_command('egg_info').save_version_info(dest)
def _manifest_is_not_generated(self):
# check for special comment used in 2.7.1 and higher
if not os.path.isfile(self.manifest):
return False
fp = open(self.manifest, 'rbU')
try:
first_line = fp.readline()
finally:
fp.close()
return first_line != '# file GENERATED by distutils, do NOT edit\n'.encode()
def read_manifest(self):
"""Read the manifest file (named by 'self.manifest') and use it to
fill in 'self.filelist', the list of files to include in the source
distribution.
"""
log.info("reading manifest file '%s'", self.manifest)
manifest = open(self.manifest, 'rbU')
for line in manifest:
# The manifest must contain UTF-8. See #303.
if sys.version_info >= (3,):
try:
line = line.decode('UTF-8')
except UnicodeDecodeError:
log.warn("%r not UTF-8 decodable -- skipping" % line)
continue
# ignore comments and blank lines
line = line.strip()
if line.startswith('#') or not line:
continue
self.filelist.append(line)
manifest.close()
| mit |
DeepGnosis/keras | examples/antirectifier.py | 7 | 3321 | '''The example demonstrates how to write custom layers for Keras.
We build a custom activation layer called 'Antirectifier',
which modifies the shape of the tensor that passes through it.
We need to specify two methods: `get_output_shape_for` and `call`.
Note that the same result can also be achieved via a Lambda layer.
Because our custom layer is written with primitives from the Keras
backend (`K`), our code can run both on TensorFlow and Theano.
'''
from __future__ import print_function
from keras.models import Sequential
from keras.layers import Dense, Dropout, Layer, Activation
from keras.datasets import mnist
from keras import backend as K
from keras.utils import np_utils
class Antirectifier(Layer):
'''This is the combination of a sample-wise
L2 normalization with the concatenation of the
positive part of the input with the negative part
of the input. The result is a tensor of samples that are
twice as large as the input samples.
It can be used in place of a ReLU.
# Input shape
2D tensor of shape (samples, n)
# Output shape
2D tensor of shape (samples, 2*n)
# Theoretical justification
When applying ReLU, assuming that the distribution
of the previous output is approximately centered around 0.,
you are discarding half of your input. This is inefficient.
    Antirectifier returns all-positive outputs like ReLU does,
    without discarding any data.
    Tests on MNIST show that Antirectifier allows networks to be trained
    with half as many parameters yet reach classification accuracy
    comparable to an equivalent ReLU-based network.
'''
def get_output_shape_for(self, input_shape):
shape = list(input_shape)
assert len(shape) == 2 # only valid for 2D tensors
shape[-1] *= 2
return tuple(shape)
def call(self, x, mask=None):
x -= K.mean(x, axis=1, keepdims=True)
x = K.l2_normalize(x, axis=1)
pos = K.relu(x)
neg = K.relu(-x)
return K.concatenate([pos, neg], axis=1)
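# A minimal plain-NumPy sketch (an illustrative addition, not part of the
# original example) of the Antirectifier transform: sample-wise centering and
# L2 normalization, then concatenation of the positive and negative parts.
# The helper name and the use of numpy here are assumptions for illustration.
def _antirectifier_numpy_sketch(x):
    import numpy as np
    x = x - x.mean(axis=1, keepdims=True)  # center each sample
    x = x / np.sqrt((x ** 2).sum(axis=1, keepdims=True))  # L2-normalize (no epsilon guard, unlike K.l2_normalize)
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)], axis=1)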
# global parameters
batch_size = 128
nb_classes = 10
nb_epoch = 40
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
# build the model
model = Sequential()
model.add(Dense(256, input_shape=(784,)))
model.add(Antirectifier())
model.add(Dropout(0.1))
model.add(Dense(256))
model.add(Antirectifier())
model.add(Dropout(0.1))
model.add(Dense(10))
model.add(Activation('softmax'))
# compile the model
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
# train the model
model.fit(X_train, Y_train,
batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1, validation_data=(X_test, Y_test))
# next, compare with an equivalent network
# with 2x bigger Dense layers and ReLU
| mit |
kevinsung/OpenFermion | src/openfermion/utils/_channel_state.py | 1 | 7442 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Module to manipulate basic models of quantum channels"""
from functools import reduce
from itertools import chain
from numpy import array, conj, dot, eye, kron, log2, sqrt
def _verify_channel_inputs(density_matrix, probability, target_qubit):
r"""Verifies input parameters for channels
Args:
density_matrix (numpy.ndarray): Density matrix of the system
probability (float): Probability error is applied p \in [0, 1]
target_qubit (int): target for the channel error.
    Raises:
        ValueError: If the density matrix, probability, or target qubit is
            not valid for applying a channel.
"""
n_qubits = int(log2(density_matrix.shape[0]))
if (len(density_matrix.shape) != 2 or
density_matrix.shape[0] != density_matrix.shape[1]):
raise ValueError("Error in input of density matrix to channel.")
if (probability < 0) or (probability > 1):
raise ValueError("Channel probability must be between 0 and 1.")
if (target_qubit < 0) or (target_qubit >= n_qubits):
raise ValueError("Target qubits must be within number of qubits.")
def _lift_operator(operator, n_qubits, target_qubit):
"""Lift a single qubit operator into the n_qubit space by kron product
Args:
operator (ndarray): Single qubit operator to lift into full space
n_qubits (int): Number of total qubits in the space
target_qubit (int): Qubit to act on
Return:
new_operator(Sparse Operator): Operator representing the embedding in
the full space.
"""
new_operator = (
reduce(kron,
chain(
(eye(2) for i in range(0, target_qubit)),
[operator],
(eye(2) for i in range(target_qubit + 1, n_qubits)))))
return new_operator
def amplitude_damping_channel(density_matrix, probability, target_qubit,
transpose=False):
r"""Apply an amplitude damping channel
Applies an amplitude damping channel with a given probability to the target
qubit in the density_matrix.
Args:
density_matrix (numpy.ndarray): Density matrix of the system
probability (float): Probability error is applied p \in [0, 1]
target_qubit (int): target for the channel error.
transpose (bool): Conjugate transpose channel operators, useful for
acting on Hamiltonians in variational channel state models
Returns:
new_density_matrix(numpy.ndarray): Density matrix with the channel
applied.
"""
_verify_channel_inputs(density_matrix, probability, target_qubit)
n_qubits = int(log2(density_matrix.shape[0]))
E0 = _lift_operator(array([[1.0, 0.0],
[0.0, sqrt(1.0 - probability)]], dtype=complex),
n_qubits, target_qubit)
E1 = _lift_operator(array([[0.0, sqrt(probability)],
[0.0, 0.0]], dtype=complex),
n_qubits, target_qubit)
if transpose:
E0 = E0.T
E1 = E1.T
new_density_matrix = (dot(E0, dot(density_matrix, E0.T)) +
dot(E1, dot(density_matrix, E1.T)))
return new_density_matrix
def dephasing_channel(density_matrix, probability, target_qubit,
transpose=False):
r"""Apply a dephasing channel
    Applies a dephasing channel with a given probability to the target
qubit in the density_matrix.
Args:
density_matrix (numpy.ndarray): Density matrix of the system
probability (float): Probability error is applied p \in [0, 1]
target_qubit (int): target for the channel error.
transpose (bool): Conjugate transpose channel operators, useful for
acting on Hamiltonians in variational channel state models
Returns:
new_density_matrix (numpy.ndarray): Density matrix with the channel
applied.
"""
_verify_channel_inputs(density_matrix, probability, target_qubit)
n_qubits = int(log2(density_matrix.shape[0]))
E0 = _lift_operator(sqrt(1.0 - probability/2.) * eye(2),
n_qubits, target_qubit)
E1 = _lift_operator(sqrt(probability/2.) *
                        array([[1.0, 0.0], [0.0, -1.0]]),
n_qubits, target_qubit)
if transpose:
E0 = E0.T
E1 = E1.T
new_density_matrix = (dot(E0, dot(density_matrix, E0.T)) +
dot(E1, dot(density_matrix, E1.T)))
return new_density_matrix
def depolarizing_channel(density_matrix, probability, target_qubit,
transpose=False):
r"""Apply a depolarizing channel
    Applies a depolarizing channel with a given probability to the target
qubit in the density_matrix.
Args:
density_matrix (numpy.ndarray): Density matrix of the system
probability (float): Probability error is applied p \in [0, 1]
target_qubit (int/str): target for the channel error, if given special
value "all", then a total depolarizing channel is applied.
transpose (bool): Dummy parameter to match signature of other
channels but depolarizing channel is symmetric under
conjugate transpose.
Returns:
new_density_matrix (numpy.ndarray): Density matrix with the channel
applied.
"""
n_qubits = int(log2(density_matrix.shape[0]))
# Toggle depolarizing channel on all qubits
if isinstance(target_qubit, str) and target_qubit.lower() == "all":
dimension = density_matrix.shape[0]
new_density_matrix = ((1.0 - probability) * density_matrix +
probability * eye(dimension) / float(dimension))
return new_density_matrix
# For any other case, depolarize only the target qubit
_verify_channel_inputs(density_matrix, probability, target_qubit)
E0 = _lift_operator(sqrt(1.0 - probability) * eye(2),
n_qubits, target_qubit)
E1 = _lift_operator(sqrt(probability / 3.) * array([[0.0, 1.0],
[1.0, 0.0]]),
n_qubits, target_qubit)
E2 = _lift_operator(sqrt(probability / 3.) * array([[0.0, -1.0j],
[1.0j, 0.0]]),
n_qubits, target_qubit)
E3 = _lift_operator(sqrt(probability / 3.) * array([[1.0, 0.0],
[0.0, -1.0]]),
n_qubits, target_qubit)
new_density_matrix = (dot(E0, dot(density_matrix, E0)) +
dot(E1, dot(density_matrix, E1)) +
dot(E2, dot(density_matrix, E2)) +
dot(E3, dot(density_matrix, E3)))
return new_density_matrix
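# Hedged usage sketch (an illustrative addition, not part of the module): apply
# a 30% amplitude damping channel to the single-qubit excited state |1><1|.
# The helper name and the chosen probability are assumptions for the example.
def _amplitude_damping_example():
    rho = array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # |1><1|
    damped = amplitude_damping_channel(rho, 0.3, 0)
    # Population relaxes from |1> towards |0>: the diagonal becomes (0.3, 0.7).
    return damped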
| apache-2.0 |
songmonit/CTTMSONLINE_V8 | addons/stock/tests/common.py | 247 | 4483 | # -*- coding: utf-8 -*-
from openerp.tests import common
class TestStockCommon(common.TransactionCase):
def setUp(self):
super(TestStockCommon, self).setUp()
self.ProductObj = self.env['product.product']
self.UomObj = self.env['product.uom']
self.PartnerObj = self.env['res.partner']
self.ModelDataObj = self.env['ir.model.data']
self.StockPackObj = self.env['stock.pack.operation']
self.StockQuantObj = self.env['stock.quant']
self.PickingObj = self.env['stock.picking']
self.MoveObj = self.env['stock.move']
self.InvObj = self.env['stock.inventory']
self.InvLineObj = self.env['stock.inventory.line']
# Model Data
self.partner_agrolite_id = self.ModelDataObj.xmlid_to_res_id('base.res_partner_2')
self.partner_delta_id = self.ModelDataObj.xmlid_to_res_id('base.res_partner_4')
self.picking_type_in = self.ModelDataObj.xmlid_to_res_id('stock.picking_type_in')
self.picking_type_out = self.ModelDataObj.xmlid_to_res_id('stock.picking_type_out')
self.supplier_location = self.ModelDataObj.xmlid_to_res_id('stock.stock_location_suppliers')
self.stock_location = self.ModelDataObj.xmlid_to_res_id('stock.stock_location_stock')
self.customer_location = self.ModelDataObj.xmlid_to_res_id('stock.stock_location_customers')
self.categ_unit = self.ModelDataObj.xmlid_to_res_id('product.product_uom_categ_unit')
self.categ_kgm = self.ModelDataObj.xmlid_to_res_id('product.product_uom_categ_kgm')
        # Products created: A, B, C, D
self.productA = self.ProductObj.create({'name': 'Product A'})
self.productB = self.ProductObj.create({'name': 'Product B'})
self.productC = self.ProductObj.create({'name': 'Product C'})
self.productD = self.ProductObj.create({'name': 'Product D'})
# Configure unit of measure.
self.uom_kg = self.UomObj.create({
'name': 'Test-KG',
'category_id': self.categ_kgm,
'factor_inv': 1,
'factor': 1,
'uom_type': 'reference',
'rounding': 0.000001})
self.uom_tone = self.UomObj.create({
'name': 'Test-Tone',
'category_id': self.categ_kgm,
'uom_type': 'bigger',
'factor_inv': 1000.0,
'rounding': 0.001})
self.uom_gm = self.UomObj.create({
'name': 'Test-G',
'category_id': self.categ_kgm,
'uom_type': 'smaller',
'factor': 1000.0,
'rounding': 0.001})
self.uom_mg = self.UomObj.create({
'name': 'Test-MG',
'category_id': self.categ_kgm,
'uom_type': 'smaller',
'factor': 100000.0,
'rounding': 0.001})
# Check Unit
self.uom_unit = self.UomObj.create({
'name': 'Test-Unit',
'category_id': self.categ_unit,
'factor': 1,
'uom_type': 'reference',
'rounding': 1.0})
self.uom_dozen = self.UomObj.create({
'name': 'Test-DozenA',
'category_id': self.categ_unit,
'factor_inv': 12,
'uom_type': 'bigger',
'rounding': 0.001})
self.uom_sdozen = self.UomObj.create({
'name': 'Test-SDozenA',
'category_id': self.categ_unit,
'factor_inv': 144,
'uom_type': 'bigger',
'rounding': 0.001})
self.uom_sdozen_round = self.UomObj.create({
'name': 'Test-SDozenA Round',
'category_id': self.categ_unit,
'factor_inv': 144,
'uom_type': 'bigger',
'rounding': 1.0})
# Product for different unit of measure.
self.DozA = self.ProductObj.create({'name': 'Dozon-A', 'uom_id': self.uom_dozen.id, 'uom_po_id': self.uom_dozen.id})
self.SDozA = self.ProductObj.create({'name': 'SuperDozon-A', 'uom_id': self.uom_sdozen.id, 'uom_po_id': self.uom_sdozen.id})
self.SDozARound = self.ProductObj.create({'name': 'SuperDozenRound-A', 'uom_id': self.uom_sdozen_round.id, 'uom_po_id': self.uom_sdozen_round.id})
self.UnitA = self.ProductObj.create({'name': 'Unit-A'})
self.kgB = self.ProductObj.create({'name': 'kg-B', 'uom_id': self.uom_kg.id, 'uom_po_id': self.uom_kg.id})
self.gB = self.ProductObj.create({'name': 'g-B', 'uom_id': self.uom_gm.id, 'uom_po_id': self.uom_gm.id})
| agpl-3.0 |
ganggas95/websen | websen/url.py | 1 | 3254 | from websen.helper.url_loader import url_mapper
'''
URLs for Admin
'''
url_mapper("/unauthorized",
"views.admin.admin_page.unauthorized", methods=['GET'])
url_mapper("/login",
"views.admin.admin_page.admin_login", methods=['GET', 'POST'])
url_mapper("/admin",
"views.admin.admin_page.admin_index", methods=['GET'])
url_mapper("/logout",
"views.admin.admin_page.admin_logout", methods=['GET'])
url_mapper("/admin/data/users",
"views.admin.admin_page.admin_users", methods=['GET','POST'])
url_mapper("/admin/data/pegawai",
"views.admin.admin_page.admin_pegawai", methods=['GET'])
url_mapper("/admin/data/pegawai/<int:pegawai_id>/delete",
"views.admin.admin_page.delete_pegawai", methods=['DELETE'])
url_mapper("/admin/data/pegawai/new",
"views.admin.admin_page.admin_pegawai_new", methods=['GET', 'POST'])
url_mapper("/admin/data/pegawai/<int:pegawai_id>/edit",
"views.admin.admin_page.admin_pegawai_edit", methods=['GET', 'POST'])
url_mapper("/admin/data/jabatan",
"views.admin.admin_page.admin_jabatan", methods=['GET'])
url_mapper("/admin/data/jabatan/<int:jab_id>/delete",
"views.admin.admin_page.delete_jabatan", methods=['DELETE'])
url_mapper("/admin/data/jabatan/new",
"views.admin.admin_page.admin_jabatan_new", methods=['GET', 'POST'])
url_mapper("/admin/data/jabatan/<int:jabatan_id>/edit",
"views.admin.admin_page.admin_jabatan_edit", methods=['GET', 'POST'])
url_mapper("/admin/data/absen",
"views.admin.admin_page.admin_absen", methods=['GET'])
url_mapper("/admin/setting/jadwal",
"views.admin.admin_page.admin_jadwal", methods=['GET','DELETE'])
url_mapper("/admin/setting/jadwal/<int:jadwal_id>/delete",
"views.admin.admin_page.delete_jadwal", methods=['DELETE'])
url_mapper("/admin/setting/jadwal/new",
"views.admin.admin_page.admin_jadwal_new", methods=['GET', 'POST'])
url_mapper("/admin/setting/jadwal/<int:jadwal_id>/edit",
"views.admin.admin_page.admin_jadwal_edit", methods=['GET', 'POST'])
url_mapper("/admin/profile",
"views.admin.admin_page.admin_profile", methods=['GET','POST'])
url_mapper("/admin/profile/foto/<int:pegawai_id>/change",
"views.admin.admin_page.admin_change_foto", methods=['POST'])
url_mapper("/admin/profile/password/change",
"views.admin.admin_page.admin_ganti_password", methods=["POST"])
url_mapper("/admin/data/absen/print",
"views.admin.admin_page.download_absens", methods=["GET"])
url_mapper("/staff", "views.staff.staff_page.staff_index", methods=['GET'])
url_mapper("/staff/profile", "views.staff.staff_page.staf_profile", methods=['GET','POST'])
url_mapper("/staff/absen", "views.staff.staff_page.staf_absen", methods=['GET'])
url_mapper("/staff/profile/foto/<int:pegawai_id>/change",
"views.staff.staff_page.staff_change_foto", methods=['POST'])
url_mapper("/staff/profile/password/change",
"views.staff.staff_page.staff_ganti_password", methods=["POST"]) | gpl-3.0 |
InspectorIncognito/visualization | AndroidRequests/migrations/0919_auto_20170427_2035.py | 1 | 1548 | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2017-04-27 20:35
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('AndroidRequests', '0918_transformation_reportinfo_user_latlon'),
]
operations = [
migrations.AlterField(
model_name='eventforbusstop',
name='zonification',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='AndroidRequests.ZonificationTransantiago', verbose_name=b'zonification'),
),
migrations.AlterField(
model_name='eventforbusv2',
name='zonification',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='AndroidRequests.ZonificationTransantiago', verbose_name=b'zonification'),
),
migrations.AlterField(
model_name='reportinfo',
name='zonification',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='AndroidRequests.ZonificationTransantiago', verbose_name=b'zonification'),
),
migrations.AlterField(
model_name='reportinfo',
name='stopCode',
field=models.CharField(max_length=6, null=True, verbose_name=b'StopCode'),
),
#migrations.AlterUniqueTogether(
# name='busassignment',
# unique_together=set(['uuid', 'service']),
#),
]
| gpl-3.0 |
etataurov/pytest | testing/python/approx.py | 6 | 11104 | # encoding: utf-8
import sys
import pytest
import doctest
from pytest import approx
from operator import eq, ne
from decimal import Decimal
from fractions import Fraction
inf, nan = float('inf'), float('nan')
class MyDocTestRunner(doctest.DocTestRunner):
def __init__(self):
doctest.DocTestRunner.__init__(self)
def report_failure(self, out, test, example, got):
raise AssertionError("'{}' evaluates to '{}', not '{}'".format(
example.source.strip(), got.strip(), example.want.strip()))
class TestApprox:
def test_repr_string(self):
# for some reason in Python 2.6 it is not displaying the tolerance representation correctly
plus_minus = u'\u00b1' if sys.version_info[0] > 2 else u'+-'
tol1, tol2, infr = '1.0e-06', '2.0e-06', 'inf'
if sys.version_info[:2] == (2, 6):
tol1, tol2, infr = '???', '???', '???'
assert repr(approx(1.0)) == '1.0 {pm} {tol1}'.format(pm=plus_minus, tol1=tol1)
assert repr(approx([1.0, 2.0])) == '1.0 {pm} {tol1}, 2.0 {pm} {tol2}'.format(pm=plus_minus, tol1=tol1, tol2=tol2)
assert repr(approx(inf)) == 'inf'
assert repr(approx(1.0, rel=nan)) == '1.0 {pm} ???'.format(pm=plus_minus)
assert repr(approx(1.0, rel=inf)) == '1.0 {pm} {infr}'.format(pm=plus_minus, infr=infr)
assert repr(approx(1.0j, rel=inf)) == '1j'
def test_operator_overloading(self):
assert 1 == approx(1, rel=1e-6, abs=1e-12)
assert not (1 != approx(1, rel=1e-6, abs=1e-12))
assert 10 != approx(1, rel=1e-6, abs=1e-12)
assert not (10 == approx(1, rel=1e-6, abs=1e-12))
def test_exactly_equal(self):
examples = [
(2.0, 2.0),
(0.1e200, 0.1e200),
(1.123e-300, 1.123e-300),
(12345, 12345.0),
(0.0, -0.0),
(345678, 345678),
(Decimal('1.0001'), Decimal('1.0001')),
(Fraction(1, 3), Fraction(-1, -3)),
]
for a, x in examples:
assert a == approx(x)
def test_opposite_sign(self):
examples = [
(eq, 1e-100, -1e-100),
(ne, 1e100, -1e100),
]
for op, a, x in examples:
assert op(a, approx(x))
def test_zero_tolerance(self):
within_1e10 = [
(1.1e-100, 1e-100),
(-1.1e-100, -1e-100),
]
for a, x in within_1e10:
assert x == approx(x, rel=0.0, abs=0.0)
assert a != approx(x, rel=0.0, abs=0.0)
assert a == approx(x, rel=0.0, abs=5e-101)
assert a != approx(x, rel=0.0, abs=5e-102)
assert a == approx(x, rel=5e-1, abs=0.0)
assert a != approx(x, rel=5e-2, abs=0.0)
def test_negative_tolerance(self):
# Negative tolerances are not allowed.
illegal_kwargs = [
dict(rel=-1e100),
dict(abs=-1e100),
dict(rel=1e100, abs=-1e100),
dict(rel=-1e100, abs=1e100),
dict(rel=-1e100, abs=-1e100),
]
for kwargs in illegal_kwargs:
with pytest.raises(ValueError):
1.1 == approx(1, **kwargs)
def test_inf_tolerance(self):
# Everything should be equal if the tolerance is infinite.
large_diffs = [
(1, 1000),
(1e-50, 1e50),
(-1.0, -1e300),
(0.0, 10),
]
for a, x in large_diffs:
assert a != approx(x, rel=0.0, abs=0.0)
assert a == approx(x, rel=inf, abs=0.0)
assert a == approx(x, rel=0.0, abs=inf)
assert a == approx(x, rel=inf, abs=inf)
def test_inf_tolerance_expecting_zero(self):
# If the relative tolerance is zero but the expected value is infinite,
# the actual tolerance is a NaN, which should be an error.
illegal_kwargs = [
dict(rel=inf, abs=0.0),
dict(rel=inf, abs=inf),
]
for kwargs in illegal_kwargs:
with pytest.raises(ValueError):
1 == approx(0, **kwargs)
def test_nan_tolerance(self):
illegal_kwargs = [
dict(rel=nan),
dict(abs=nan),
dict(rel=nan, abs=nan),
]
for kwargs in illegal_kwargs:
with pytest.raises(ValueError):
1.1 == approx(1, **kwargs)
def test_reasonable_defaults(self):
# Whatever the defaults are, they should work for numbers close to 1
        # that have a small amount of floating-point error.
assert 0.1 + 0.2 == approx(0.3)
def test_default_tolerances(self):
# This tests the defaults as they are currently set. If you change the
# defaults, this test will fail but you should feel free to change it.
# None of the other tests (except the doctests) should be affected by
# the choice of defaults.
examples = [
# Relative tolerance used.
(eq, 1e100 + 1e94, 1e100),
(ne, 1e100 + 2e94, 1e100),
(eq, 1e0 + 1e-6, 1e0),
(ne, 1e0 + 2e-6, 1e0),
# Absolute tolerance used.
(eq, 1e-100, + 1e-106),
(eq, 1e-100, + 2e-106),
(eq, 1e-100, 0),
]
for op, a, x in examples:
assert op(a, approx(x))
def test_custom_tolerances(self):
assert 1e8 + 1e0 == approx(1e8, rel=5e-8, abs=5e0)
assert 1e8 + 1e0 == approx(1e8, rel=5e-9, abs=5e0)
assert 1e8 + 1e0 == approx(1e8, rel=5e-8, abs=5e-1)
assert 1e8 + 1e0 != approx(1e8, rel=5e-9, abs=5e-1)
assert 1e0 + 1e-8 == approx(1e0, rel=5e-8, abs=5e-8)
assert 1e0 + 1e-8 == approx(1e0, rel=5e-9, abs=5e-8)
assert 1e0 + 1e-8 == approx(1e0, rel=5e-8, abs=5e-9)
assert 1e0 + 1e-8 != approx(1e0, rel=5e-9, abs=5e-9)
assert 1e-8 + 1e-16 == approx(1e-8, rel=5e-8, abs=5e-16)
assert 1e-8 + 1e-16 == approx(1e-8, rel=5e-9, abs=5e-16)
assert 1e-8 + 1e-16 == approx(1e-8, rel=5e-8, abs=5e-17)
assert 1e-8 + 1e-16 != approx(1e-8, rel=5e-9, abs=5e-17)
def test_relative_tolerance(self):
within_1e8_rel = [
(1e8 + 1e0, 1e8),
(1e0 + 1e-8, 1e0),
(1e-8 + 1e-16, 1e-8),
]
for a, x in within_1e8_rel:
assert a == approx(x, rel=5e-8, abs=0.0)
assert a != approx(x, rel=5e-9, abs=0.0)
def test_absolute_tolerance(self):
within_1e8_abs = [
(1e8 + 9e-9, 1e8),
(1e0 + 9e-9, 1e0),
(1e-8 + 9e-9, 1e-8),
]
for a, x in within_1e8_abs:
assert a == approx(x, rel=0, abs=5e-8)
assert a != approx(x, rel=0, abs=5e-9)
def test_expecting_zero(self):
examples = [
(ne, 1e-6, 0.0),
(ne, -1e-6, 0.0),
(eq, 1e-12, 0.0),
(eq, -1e-12, 0.0),
(ne, 2e-12, 0.0),
(ne, -2e-12, 0.0),
(ne, inf, 0.0),
(ne, nan, 0.0),
]
for op, a, x in examples:
assert op(a, approx(x, rel=0.0, abs=1e-12))
assert op(a, approx(x, rel=1e-6, abs=1e-12))
def test_expecting_inf(self):
examples = [
(eq, inf, inf),
(eq, -inf, -inf),
(ne, inf, -inf),
(ne, 0.0, inf),
(ne, nan, inf),
]
for op, a, x in examples:
assert op(a, approx(x))
def test_expecting_nan(self):
examples = [
(nan, nan),
(-nan, -nan),
(nan, -nan),
(0.0, nan),
(inf, nan),
]
for a, x in examples:
# If there is a relative tolerance and the expected value is NaN,
# the actual tolerance is a NaN, which should be an error.
with pytest.raises(ValueError):
a != approx(x, rel=inf)
# You can make comparisons against NaN by not specifying a relative
# tolerance, so only an absolute tolerance is calculated.
assert a != approx(x, abs=inf)
def test_expecting_sequence(self):
within_1e8 = [
(1e8 + 1e0, 1e8),
(1e0 + 1e-8, 1e0),
(1e-8 + 1e-16, 1e-8),
]
actual, expected = zip(*within_1e8)
assert actual == approx(expected, rel=5e-8, abs=0.0)
def test_expecting_sequence_wrong_len(self):
assert [1, 2] != approx([1])
assert [1, 2] != approx([1,2,3])
def test_complex(self):
within_1e6 = [
( 1.000001 + 1.0j, 1.0 + 1.0j),
(1.0 + 1.000001j, 1.0 + 1.0j),
(-1.000001 + 1.0j, -1.0 + 1.0j),
(1.0 - 1.000001j, 1.0 - 1.0j),
]
for a, x in within_1e6:
assert a == approx(x, rel=5e-6, abs=0)
assert a != approx(x, rel=5e-7, abs=0)
def test_int(self):
within_1e6 = [
(1000001, 1000000),
(-1000001, -1000000),
]
for a, x in within_1e6:
assert a == approx(x, rel=5e-6, abs=0)
assert a != approx(x, rel=5e-7, abs=0)
def test_decimal(self):
within_1e6 = [
(Decimal('1.000001'), Decimal('1.0')),
(Decimal('-1.000001'), Decimal('-1.0')),
]
for a, x in within_1e6:
assert a == approx(x, rel=Decimal('5e-6'), abs=0)
assert a != approx(x, rel=Decimal('5e-7'), abs=0)
def test_fraction(self):
within_1e6 = [
(1 + Fraction(1, 1000000), Fraction(1)),
(-1 - Fraction(-1, 1000000), Fraction(-1)),
]
for a, x in within_1e6:
assert a == approx(x, rel=5e-6, abs=0)
assert a != approx(x, rel=5e-7, abs=0)
def test_doctests(self):
parser = doctest.DocTestParser()
test = parser.get_doctest(
approx.__doc__,
{'approx': approx},
approx.__name__,
None, None,
)
runner = MyDocTestRunner()
runner.run(test)
def test_unicode_plus_minus(self, testdir):
"""
Comparing approx instances inside lists should not produce an error in the detailed diff.
Integration test for issue #2111.
"""
testdir.makepyfile("""
import pytest
def test_foo():
assert [3] == [pytest.approx(4)]
""")
expected = '4.0e-06'
# for some reason in Python 2.6 it is not displaying the tolerance representation correctly
if sys.version_info[:2] == (2, 6):
expected = '???'
result = testdir.runpytest()
result.stdout.fnmatch_lines([
'*At index 0 diff: 3 != 4 * {0}'.format(expected),
'=* 1 failed in *=',
])
| mit |
ajbouh/tfi | src/tfi/driverbase/metaclass.py | 1 | 4417 | import inspect
import functools
from collections import OrderedDict
from tfi.parse.docstring import GoogleDocstring
def _resolve_instance_method_tensors(instance, fn, docstring=None):
def _expand_annotation(instance, annotation, default=None):
if annotation == inspect.Signature.empty:
return default
if isinstance(annotation, dict):
return {
k: _expand_annotation(instance, v)
for k, v in annotation.items()
}
if isinstance(annotation, type):
return {'dtype': annotation}
if not isinstance(annotation, dict) and not hasattr(annotation, '__getitem__') and not hasattr(annotation, 'get'):
raise Exception("Annotation is not a dict and doesn't support both get and __getitem__: %s" % annotation)
return annotation
def _tensor_info_str(tensor):
if not tensor:
tensor = {}
shape_list = tensor.get('shape', [])
ndims = len(shape_list)
dtype = tensor.get('dtype', None)
if dtype is None:
dtype_name = 'any'
elif isinstance(dtype, type):
dtype_name = dtype.__name__
else:
dtype_name = str(dtype)
if ndims is None:
return "%s ?" % dtype_name
if len(shape_list) == 0:
shape = "scalar"
else:
shape = "<%s>" % (
", ".join(["?" if n is None else str(n) for n in shape_list]),
)
return "%s %s" % (dtype_name, shape)
def _enrich_docs(doc_fields, tensor_dict):
existing = {k: v for k, _, v in doc_fields}
return [
(
name,
_tensor_info_str(tensor_dict[name]) if name in tensor_dict else '',
existing.get(name, '')
)
for name in set([*tensor_dict.keys(), *existing.keys()])
]
sig = inspect.signature(fn)
input_annotations = OrderedDict([
(name, _expand_annotation(instance, param.annotation))
for name, param in sig.parameters.items()
])
output_annotations = OrderedDict([
(name, _expand_annotation(instance, value))
for name, value in _expand_annotation(instance, sig.return_annotation, {}).items()
])
if fn.__doc__ or docstring:
doc = GoogleDocstring(obj=fn, docstring=docstring).result()
else:
doc = {'sections': [], 'args': {}, 'returns': {}}
doc['args'] = _enrich_docs(doc['args'], input_annotations)
doc['returns'] = _enrich_docs(doc['returns'], output_annotations)
return doc, input_annotations, output_annotations
class Meta(type):
@staticmethod
def __new__(meta, classname, bases, d):
if '__tfi_del__' in d:
for name in d['__tfi_del__']:
del d[name]
del d['__tfi_del__']
if '__init__' in d:
init = d['__init__']
# Wrap __init__ to auto adapt inputs.
@functools.wraps(init)
def wrapped_init(self, *a, **k):
init(self, *a, **k)
# Once init has executed, we can bind proper methods too!
if not hasattr(self, '__tfi_signature_defs__'):
self.__tfi_signature_defs__ = OrderedDict()
self.__tfi_signature_defs_docs__ = OrderedDict()
docstrings = {}
if hasattr(self, '__tfi_docstrings__'):
docstrings = self.__tfi_docstrings__ or docstrings
for method_name, method in inspect.getmembers(self, predicate=inspect.ismethod):
if method_name.startswith('_'):
continue
docstring = docstrings.get(method_name, None)
doc, input_annotations, output_annotations = _resolve_instance_method_tensors(self, method, docstring=docstring)
self.__tfi_signature_defs_docs__[method_name] = doc
self.__tfi_signature_defs__[method_name] = dict(
inputs=input_annotations,
outputs=output_annotations)
# Remember which fields to pickle BEFORE we add methods.
if not hasattr(self, '__getstate__'):
self.__tfi_saved_fields__ = list(self.__dict__.keys())
self.__getstate__ = lambda: {k: getattr(self, k) for k in self.__tfi_saved_fields__}
self.__tfi_init__()
d['__init__'] = wrapped_init
return super(Meta, meta).__new__(meta, classname, bases, d)
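# Hedged usage sketch (an illustrative addition, not part of this module): a
# class built with Meta whose public method annotations are collected into
# __tfi_signature_defs__ when an instance is created. All names in this sketch
# are assumptions made up for illustration.
class _ExampleModel(metaclass=Meta):
    def __init__(self):
        pass
    def __tfi_init__(self):
        # Hook called by the wrapped __init__ after signatures are recorded.
        pass
    def double(self, x: float) -> {'doubled': float}:
        return {'doubled': 2.0 * x}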
| mit |
labcodes/django | tests/pagination/tests.py | 7 | 14464 | import unittest
import warnings
from datetime import datetime
from django.core.paginator import (
EmptyPage, InvalidPage, PageNotAnInteger, Paginator,
UnorderedObjectListWarning,
)
from django.test import TestCase
from .custom import ValidAdjacentNumsPaginator
from .models import Article
class PaginationTests(unittest.TestCase):
"""
Tests for the Paginator and Page classes.
"""
def check_paginator(self, params, output):
"""
Helper method that instantiates a Paginator object from the passed
params and then checks that its attributes match the passed output.
"""
count, num_pages, page_range = output
paginator = Paginator(*params)
self.check_attribute('count', paginator, count, params)
self.check_attribute('num_pages', paginator, num_pages, params)
self.check_attribute('page_range', paginator, page_range, params, coerce=list)
def check_attribute(self, name, paginator, expected, params, coerce=None):
"""
Helper method that checks a single attribute and gives a nice error
message upon test failure.
"""
got = getattr(paginator, name)
if coerce is not None:
got = coerce(got)
self.assertEqual(
expected, got,
"For '%s', expected %s but got %s. Paginator parameters were: %s"
% (name, expected, got, params)
)
def test_paginator(self):
"""
Tests the paginator attributes using varying inputs.
"""
nine = [1, 2, 3, 4, 5, 6, 7, 8, 9]
ten = nine + [10]
eleven = ten + [11]
tests = (
# Each item is two tuples:
# First tuple is Paginator parameters - object_list, per_page,
# orphans, and allow_empty_first_page.
# Second tuple is resulting Paginator attributes - count,
# num_pages, and page_range.
# Ten items, varying orphans, no empty first page.
((ten, 4, 0, False), (10, 3, [1, 2, 3])),
((ten, 4, 1, False), (10, 3, [1, 2, 3])),
((ten, 4, 2, False), (10, 2, [1, 2])),
((ten, 4, 5, False), (10, 2, [1, 2])),
((ten, 4, 6, False), (10, 1, [1])),
# Ten items, varying orphans, allow empty first page.
((ten, 4, 0, True), (10, 3, [1, 2, 3])),
((ten, 4, 1, True), (10, 3, [1, 2, 3])),
((ten, 4, 2, True), (10, 2, [1, 2])),
((ten, 4, 5, True), (10, 2, [1, 2])),
((ten, 4, 6, True), (10, 1, [1])),
# One item, varying orphans, no empty first page.
(([1], 4, 0, False), (1, 1, [1])),
(([1], 4, 1, False), (1, 1, [1])),
(([1], 4, 2, False), (1, 1, [1])),
# One item, varying orphans, allow empty first page.
(([1], 4, 0, True), (1, 1, [1])),
(([1], 4, 1, True), (1, 1, [1])),
(([1], 4, 2, True), (1, 1, [1])),
# Zero items, varying orphans, no empty first page.
(([], 4, 0, False), (0, 0, [])),
(([], 4, 1, False), (0, 0, [])),
(([], 4, 2, False), (0, 0, [])),
# Zero items, varying orphans, allow empty first page.
(([], 4, 0, True), (0, 1, [1])),
(([], 4, 1, True), (0, 1, [1])),
(([], 4, 2, True), (0, 1, [1])),
            # Number of items one less than per_page.
(([], 1, 0, True), (0, 1, [1])),
(([], 1, 0, False), (0, 0, [])),
(([1], 2, 0, True), (1, 1, [1])),
((nine, 10, 0, True), (9, 1, [1])),
            # Number of items equal to per_page.
(([1], 1, 0, True), (1, 1, [1])),
(([1, 2], 2, 0, True), (2, 1, [1])),
((ten, 10, 0, True), (10, 1, [1])),
            # Number of items one more than per_page.
(([1, 2], 1, 0, True), (2, 2, [1, 2])),
(([1, 2, 3], 2, 0, True), (3, 2, [1, 2])),
((eleven, 10, 0, True), (11, 2, [1, 2])),
            # Number of items one more than per_page with one orphan.
(([1, 2], 1, 1, True), (2, 1, [1])),
(([1, 2, 3], 2, 1, True), (3, 1, [1])),
((eleven, 10, 1, True), (11, 1, [1])),
# Non-integer inputs
((ten, '4', 1, False), (10, 3, [1, 2, 3])),
((ten, '4', 1, False), (10, 3, [1, 2, 3])),
((ten, 4, '1', False), (10, 3, [1, 2, 3])),
((ten, 4, '1', False), (10, 3, [1, 2, 3])),
)
for params, output in tests:
self.check_paginator(params, output)
def test_invalid_page_number(self):
"""
Invalid page numbers result in the correct exception being raised.
"""
paginator = Paginator([1, 2, 3], 2)
with self.assertRaises(InvalidPage):
paginator.page(3)
with self.assertRaises(PageNotAnInteger):
paginator.validate_number(None)
with self.assertRaises(PageNotAnInteger):
paginator.validate_number('x')
# With no content and allow_empty_first_page=True, 1 is a valid page number
paginator = Paginator([], 2)
self.assertEqual(paginator.validate_number(1), 1)
def test_paginate_misc_classes(self):
class CountContainer:
def count(self):
return 42
# Paginator can be passed other objects with a count() method.
paginator = Paginator(CountContainer(), 10)
self.assertEqual(42, paginator.count)
self.assertEqual(5, paginator.num_pages)
self.assertEqual([1, 2, 3, 4, 5], list(paginator.page_range))
# Paginator can be passed other objects that implement __len__.
class LenContainer:
def __len__(self):
return 42
paginator = Paginator(LenContainer(), 10)
self.assertEqual(42, paginator.count)
self.assertEqual(5, paginator.num_pages)
self.assertEqual([1, 2, 3, 4, 5], list(paginator.page_range))
def check_indexes(self, params, page_num, indexes):
"""
Helper method that instantiates a Paginator object from the passed
params and then checks that the start and end indexes of the passed
page_num match those given as a 2-tuple in indexes.
"""
paginator = Paginator(*params)
if page_num == 'first':
page_num = 1
elif page_num == 'last':
page_num = paginator.num_pages
page = paginator.page(page_num)
start, end = indexes
msg = ("For %s of page %s, expected %s but got %s. Paginator parameters were: %s")
self.assertEqual(start, page.start_index(), msg % ('start index', page_num, start, page.start_index(), params))
self.assertEqual(end, page.end_index(), msg % ('end index', page_num, end, page.end_index(), params))
def test_page_indexes(self):
"""
Paginator pages have the correct start and end indexes.
"""
ten = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
tests = (
# Each item is three tuples:
# First tuple is Paginator parameters - object_list, per_page,
# orphans, and allow_empty_first_page.
# Second tuple is the start and end indexes of the first page.
# Third tuple is the start and end indexes of the last page.
# Ten items, varying per_page, no orphans.
((ten, 1, 0, True), (1, 1), (10, 10)),
((ten, 2, 0, True), (1, 2), (9, 10)),
((ten, 3, 0, True), (1, 3), (10, 10)),
((ten, 5, 0, True), (1, 5), (6, 10)),
# Ten items, varying per_page, with orphans.
((ten, 1, 1, True), (1, 1), (9, 10)),
((ten, 1, 2, True), (1, 1), (8, 10)),
((ten, 3, 1, True), (1, 3), (7, 10)),
((ten, 3, 2, True), (1, 3), (7, 10)),
((ten, 3, 4, True), (1, 3), (4, 10)),
((ten, 5, 1, True), (1, 5), (6, 10)),
((ten, 5, 2, True), (1, 5), (6, 10)),
((ten, 5, 5, True), (1, 10), (1, 10)),
# One item, varying orphans, no empty first page.
(([1], 4, 0, False), (1, 1), (1, 1)),
(([1], 4, 1, False), (1, 1), (1, 1)),
(([1], 4, 2, False), (1, 1), (1, 1)),
# One item, varying orphans, allow empty first page.
(([1], 4, 0, True), (1, 1), (1, 1)),
(([1], 4, 1, True), (1, 1), (1, 1)),
(([1], 4, 2, True), (1, 1), (1, 1)),
# Zero items, varying orphans, allow empty first page.
(([], 4, 0, True), (0, 0), (0, 0)),
(([], 4, 1, True), (0, 0), (0, 0)),
(([], 4, 2, True), (0, 0), (0, 0)),
)
for params, first, last in tests:
self.check_indexes(params, 'first', first)
self.check_indexes(params, 'last', last)
# When no items and no empty first page, we should get EmptyPage error.
with self.assertRaises(EmptyPage):
self.check_indexes(([], 4, 0, False), 1, None)
with self.assertRaises(EmptyPage):
self.check_indexes(([], 4, 1, False), 1, None)
with self.assertRaises(EmptyPage):
self.check_indexes(([], 4, 2, False), 1, None)
def test_page_sequence(self):
"""
A paginator page acts like a standard sequence.
"""
eleven = 'abcdefghijk'
page2 = Paginator(eleven, per_page=5, orphans=1).page(2)
self.assertEqual(len(page2), 6)
self.assertIn('k', page2)
self.assertNotIn('a', page2)
self.assertEqual(''.join(page2), 'fghijk')
self.assertEqual(''.join(reversed(page2)), 'kjihgf')
def test_get_page_hook(self):
"""
A Paginator subclass can use the ``_get_page`` hook to
return an alternative to the standard Page class.
"""
eleven = 'abcdefghijk'
paginator = ValidAdjacentNumsPaginator(eleven, per_page=6)
page1 = paginator.page(1)
page2 = paginator.page(2)
self.assertIsNone(page1.previous_page_number())
self.assertEqual(page1.next_page_number(), 2)
self.assertEqual(page2.previous_page_number(), 1)
self.assertIsNone(page2.next_page_number())
def test_page_range_iterator(self):
"""
Paginator.page_range should be an iterator.
"""
self.assertIsInstance(Paginator([1, 2, 3], 2).page_range, type(range(0)))
class ModelPaginationTests(TestCase):
"""
Test pagination with Django model instances
"""
def setUp(self):
# Prepare a list of objects for pagination.
for x in range(1, 10):
a = Article(headline='Article %s' % x, pub_date=datetime(2005, 7, 29))
a.save()
def test_first_page(self):
paginator = Paginator(Article.objects.order_by('id'), 5)
p = paginator.page(1)
self.assertEqual("<Page 1 of 2>", str(p))
self.assertQuerysetEqual(p.object_list, [
"<Article: Article 1>",
"<Article: Article 2>",
"<Article: Article 3>",
"<Article: Article 4>",
"<Article: Article 5>"
])
self.assertTrue(p.has_next())
self.assertFalse(p.has_previous())
self.assertTrue(p.has_other_pages())
self.assertEqual(2, p.next_page_number())
with self.assertRaises(InvalidPage):
p.previous_page_number()
self.assertEqual(1, p.start_index())
self.assertEqual(5, p.end_index())
def test_last_page(self):
paginator = Paginator(Article.objects.order_by('id'), 5)
p = paginator.page(2)
self.assertEqual("<Page 2 of 2>", str(p))
self.assertQuerysetEqual(p.object_list, [
"<Article: Article 6>",
"<Article: Article 7>",
"<Article: Article 8>",
"<Article: Article 9>"
])
self.assertFalse(p.has_next())
self.assertTrue(p.has_previous())
self.assertTrue(p.has_other_pages())
with self.assertRaises(InvalidPage):
p.next_page_number()
self.assertEqual(1, p.previous_page_number())
self.assertEqual(6, p.start_index())
self.assertEqual(9, p.end_index())
def test_page_getitem(self):
"""
Tests proper behavior of a paginator page __getitem__ (queryset
evaluation, slicing, exception raised).
"""
paginator = Paginator(Article.objects.order_by('id'), 5)
p = paginator.page(1)
# Make sure object_list queryset is not evaluated by an invalid __getitem__ call.
# (this happens from the template engine when using eg: {% page_obj.has_previous %})
self.assertIsNone(p.object_list._result_cache)
with self.assertRaises(TypeError):
p['has_previous']
self.assertIsNone(p.object_list._result_cache)
self.assertNotIsInstance(p.object_list, list)
# Make sure slicing the Page object with numbers and slice objects work.
self.assertEqual(p[0], Article.objects.get(headline='Article 1'))
self.assertQuerysetEqual(p[slice(2)], [
"<Article: Article 1>",
"<Article: Article 2>",
]
)
# After __getitem__ is called, object_list is a list
self.assertIsInstance(p.object_list, list)
def test_paginating_unordered_queryset_raises_warning(self):
with warnings.catch_warnings(record=True) as warns:
# Prevent the RuntimeWarning subclass from appearing as an
# exception due to the warnings.simplefilter() in runtests.py.
warnings.filterwarnings('always', category=UnorderedObjectListWarning)
Paginator(Article.objects.all(), 5)
self.assertEqual(len(warns), 1)
warning = warns[0]
self.assertEqual(str(warning.message), (
"Pagination may yield inconsistent results with an unordered "
"object_list: <QuerySet [<Article: Article 1>, "
"<Article: Article 2>, <Article: Article 3>, <Article: Article 4>, "
"<Article: Article 5>, <Article: Article 6>, <Article: Article 7>, "
"<Article: Article 8>, <Article: Article 9>]>"
))
# The warning points at the Paginator caller (i.e. the stacklevel
# is appropriate).
self.assertEqual(warning.filename, __file__)
| bsd-3-clause |
rplevka/robottelo | tests/foreman/virtwho/cli/test_hyperv.py | 1 | 6205 | """Test class for Virtwho Configure CLI
:Requirement: Virt-whoConfigurePlugin
:CaseAutomation: Automated
:CaseLevel: Acceptance
:CaseComponent: Virt-whoConfigurePlugin
:Assignee: kuhuang
:TestType: Functional
:CaseImportance: High
:Upstream: No
"""
import pytest
from fauxfactory import gen_string
from robottelo.cli.host import Host
from robottelo.cli.subscription import Subscription
from robottelo.cli.virt_who_config import VirtWhoConfig
from robottelo.config import settings
from robottelo.constants import DEFAULT_ORG
from robottelo.virtwho_utils import deploy_configure_by_command
from robottelo.virtwho_utils import deploy_configure_by_script
from robottelo.virtwho_utils import get_configure_command
from robottelo.virtwho_utils import get_configure_file
from robottelo.virtwho_utils import get_configure_option
from robottelo.virtwho_utils import virtwho
@pytest.fixture()
def form_data():
form = {
'name': gen_string('alpha'),
'debug': 1,
'interval': '60',
'hypervisor-id': 'hostname',
'hypervisor-type': virtwho.hyperv.hypervisor_type,
'hypervisor-server': virtwho.hyperv.hypervisor_server,
'organization-id': 1,
'filtering-mode': 'none',
'satellite-url': settings.server.hostname,
'hypervisor-username': virtwho.hyperv.hypervisor_username,
'hypervisor-password': virtwho.hyperv.hypervisor_password,
}
return form
@pytest.fixture()
def virtwho_config(form_data):
return VirtWhoConfig.create(form_data)['general-information']
class TestVirtWhoConfigforHyperv:
@pytest.mark.tier2
def test_positive_deploy_configure_by_id(self, form_data, virtwho_config):
"""Verify " hammer virt-who-config deploy"
:id: 7cc0ad4f-e185-4d63-a2f5-1cb0245faa6c
:expectedresults: Config can be created and deployed
:CaseLevel: Integration
:CaseImportance: High
"""
assert virtwho_config['status'] == 'No Report Yet'
command = get_configure_command(virtwho_config['id'])
hypervisor_name, guest_name = deploy_configure_by_command(
command, form_data['hypervisor-type'], debug=True
)
virt_who_instance = VirtWhoConfig.info({'id': virtwho_config['id']})[
'general-information'
]['status']
assert virt_who_instance == 'OK'
hosts = [
(hypervisor_name, f'product_id={virtwho.sku.vdc_physical} and type=NORMAL'),
(guest_name, f'product_id={virtwho.sku.vdc_physical} and type=STACK_DERIVED'),
]
for hostname, sku in hosts:
host = Host.list({'search': hostname})[0]
subscriptions = Subscription.list({'organization': DEFAULT_ORG, 'search': sku})
vdc_id = subscriptions[0]['id']
if 'type=STACK_DERIVED' in sku:
for item in subscriptions:
if hypervisor_name.lower() in item['type']:
vdc_id = item['id']
break
result = Host.subscription_attach({'host-id': host['id'], 'subscription-id': vdc_id})
assert 'attached to the host successfully' in '\n'.join(result)
VirtWhoConfig.delete({'name': virtwho_config['name']})
assert not VirtWhoConfig.exists(search=('name', form_data['name']))
@pytest.mark.tier2
def test_positive_deploy_configure_by_script(self, form_data, virtwho_config):
"""Verify " hammer virt-who-config fetch"
:id: 22dc8068-c843-4ca0-acbe-0b2aef8ece31
        :expectedresults: Config can be created, fetched and deployed
:CaseLevel: Integration
:CaseImportance: High
"""
assert virtwho_config['status'] == 'No Report Yet'
script = VirtWhoConfig.fetch({'id': virtwho_config['id']}, output_format='base')
hypervisor_name, guest_name = deploy_configure_by_script(
script, form_data['hypervisor-type'], debug=True
)
virt_who_instance = VirtWhoConfig.info({'id': virtwho_config['id']})[
'general-information'
]['status']
assert virt_who_instance == 'OK'
hosts = [
(hypervisor_name, f'product_id={virtwho.sku.vdc_physical} and type=NORMAL'),
(guest_name, f'product_id={virtwho.sku.vdc_physical} and type=STACK_DERIVED'),
]
for hostname, sku in hosts:
host = Host.list({'search': hostname})[0]
subscriptions = Subscription.list({'organization': DEFAULT_ORG, 'search': sku})
vdc_id = subscriptions[0]['id']
if 'type=STACK_DERIVED' in sku:
for item in subscriptions:
if hypervisor_name.lower() in item['type']:
vdc_id = item['id']
break
result = Host.subscription_attach({'host-id': host['id'], 'subscription-id': vdc_id})
assert 'attached to the host successfully' in '\n'.join(result)
VirtWhoConfig.delete({'name': virtwho_config['name']})
assert not VirtWhoConfig.exists(search=('name', form_data['name']))
@pytest.mark.tier2
def test_positive_hypervisor_id_option(self, form_data, virtwho_config):
"""Verify hypervisor_id option by hammer virt-who-config update"
:id: 8e234492-33cb-4523-abb3-582626ad704c
:expectedresults: hypervisor_id option can be updated.
:CaseLevel: Integration
:CaseImportance: Medium
"""
values = ['uuid', 'hostname']
for value in values:
VirtWhoConfig.update({'id': virtwho_config['id'], 'hypervisor-id': value})
result = VirtWhoConfig.info({'id': virtwho_config['id']})
assert result['connection']['hypervisor-id'] == value
config_file = get_configure_file(virtwho_config['id'])
command = get_configure_command(virtwho_config['id'])
deploy_configure_by_command(command, form_data['hypervisor-type'])
assert get_configure_option('hypervisor_id', config_file) == value
VirtWhoConfig.delete({'name': virtwho_config['name']})
assert not VirtWhoConfig.exists(search=('name', form_data['name']))
| gpl-3.0 |
maurofaccenda/ansible | lib/ansible/plugins/connection/accelerate.py | 36 | 13975 | # (c) 2012, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import json
import os
import socket
import struct
import time
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleConnectionFailure
from ansible.module_utils._text import to_bytes
from ansible.parsing.utils.jsonify import jsonify
from ansible.plugins.connection import ConnectionBase
from ansible.utils.encrypt import key_for_hostname, keyczar_encrypt, keyczar_decrypt
try:
from __main__ import display
except ImportError:
from ansible.utils.display import Display
display = Display()
# the chunk size to read and send, assuming mtu 1500 and
# leaving room for base64 (+33%) encoding and header (8 bytes)
# ((1400 - 8) / 4) * 3 = 1044
# which leaves room for the TCP/IP header. We set this to a
# multiple of the value to speed up file reads.
CHUNK_SIZE=1044*20
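# Editor's note (illustrative arithmetic, not from the original source): with this
# value each put_file() read is 1044 * 20 = 20880 raw bytes, which base64-encodes
# to 20880 * 4 / 3 = 27840 characters inside the JSON chunk sent over the socket.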
class Connection(ConnectionBase):
''' raw socket accelerated connection '''
transport = 'accelerate'
has_pipelining = False
become_methods = frozenset(C.BECOME_METHODS).difference(['runas'])
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
self.conn = None
self.key = key_for_hostname(self._play_context.remote_addr)
def _connect(self):
''' activates the connection object '''
if not self._connected:
wrong_user = False
tries = 3
self.conn = socket.socket()
self.conn.settimeout(C.ACCELERATE_CONNECT_TIMEOUT)
display.vvvv("attempting connection to %s via the accelerated port %d" % (self._play_context.remote_addr, self._play_context.accelerate_port),
host=self._play_context.remote_addr)
while tries > 0:
try:
self.conn.connect((self._play_context.remote_addr,self._play_context.accelerate_port))
break
except socket.error:
display.vvvv("connection to %s failed, retrying..." % self._play_context.remote_addr, host=self._play_context.remote_addr)
time.sleep(0.1)
tries -= 1
if tries == 0:
display.vvv("Could not connect via the accelerated connection, exceeded # of tries", host=self._play_context.remote_addr)
raise AnsibleConnectionFailure("Failed to connect to %s on the accelerated port %s" % (self._play_context.remote_addr,
self._play_context.accelerate_port))
elif wrong_user:
display.vvv("Restarting daemon with a different remote_user", host=self._play_context.remote_addr)
raise AnsibleError("The accelerated daemon was started on the remote with a different user")
self.conn.settimeout(C.ACCELERATE_TIMEOUT)
if not self.validate_user():
# the accelerated daemon was started with a
# different remote_user. The above command
# should have caused the accelerate daemon to
# shutdown, so we'll reconnect.
wrong_user = True
self._connected = True
return self
def transport_test(self, connect_timeout):
''' Test the transport mechanism, if available '''
host = self._play_context.remote_addr
port = int(self._play_context.accelerate_port or 5099)
display.vvv("attempting transport test to %s:%s" % (host, port))
sock = socket.create_connection((host, port), connect_timeout)
sock.close()
def send_data(self, data):
packed_len = struct.pack('!Q',len(data))
return self.conn.sendall(packed_len + data)
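    # Editor's sketch of the wire format used by send_data()/recv_data() (an
    # illustration, not part of the original plugin): each message is an 8-byte
    # big-endian length prefix followed by the payload, e.g.
    #   >>> import struct
    #   >>> struct.pack('!Q', len(b'hello')) + b'hello'
    #   b'\x00\x00\x00\x00\x00\x00\x00\x05hello'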
def recv_data(self):
header_len = 8 # size of a packed unsigned long long
data = b""
try:
display.vvvv("in recv_data(), waiting for the header", host=self._play_context.remote_addr)
while len(data) < header_len:
d = self.conn.recv(header_len - len(data))
if not d:
display.vvvv("received nothing, bailing out", host=self._play_context.remote_addr)
return None
data += d
display.vvvv("got the header, unpacking", host=self._play_context.remote_addr)
data_len = struct.unpack('!Q',data[:header_len])[0]
data = data[header_len:]
display.vvvv("data received so far (expecting %d): %d" % (data_len, len(data)), host=self._play_context.remote_addr)
while len(data) < data_len:
d = self.conn.recv(data_len - len(data))
if not d:
display.vvvv("received nothing, bailing out", host=self._play_context.remote_addr)
return None
display.vvvv("received %d bytes" % (len(d)), host=self._play_context.remote_addr)
data += d
display.vvvv("received all of the data, returning", host=self._play_context.remote_addr)
return data
except socket.timeout:
raise AnsibleError("timed out while waiting to receive data")
def validate_user(self):
'''
Checks the remote uid of the accelerated daemon vs. the
one specified for this play and will cause the accel
daemon to exit if they don't match
'''
display.vvvv("sending request for validate_user", host=self._play_context.remote_addr)
data = dict(
mode='validate_user',
username=self._play_context.remote_user,
)
data = jsonify(data)
data = keyczar_encrypt(self.key, data)
if self.send_data(data):
raise AnsibleError("Failed to send command to %s" % self._play_context.remote_addr)
display.vvvv("waiting for validate_user response", host=self._play_context.remote_addr)
while True:
# we loop here while waiting for the response, because a
# long running command may cause us to receive keepalive packets
# ({"pong":"true"}) rather than the response we want.
response = self.recv_data()
if not response:
raise AnsibleError("Failed to get a response from %s" % self._play_context.remote_addr)
response = keyczar_decrypt(self.key, response)
response = json.loads(response)
if "pong" in response:
# it's a keepalive, go back to waiting
display.vvvv("received a keepalive packet", host=self._play_context.remote_addr)
continue
else:
display.vvvv("received the validate_user response: %s" % (response), host=self._play_context.remote_addr)
break
if response.get('failed'):
return False
else:
return response.get('rc') == 0
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the remote host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
if in_data:
raise AnsibleError("Internal Error: this module does not support optimized module pipelining")
display.vvv("EXEC COMMAND %s" % cmd, host=self._play_context.remote_addr)
data = dict(
mode='command',
cmd=cmd,
executable=C.DEFAULT_EXECUTABLE,
)
data = jsonify(data)
data = keyczar_encrypt(self.key, data)
if self.send_data(data):
raise AnsibleError("Failed to send command to %s" % self._play_context.remote_addr)
while True:
# we loop here while waiting for the response, because a
# long running command may cause us to receive keepalive packets
# ({"pong":"true"}) rather than the response we want.
response = self.recv_data()
if not response:
raise AnsibleError("Failed to get a response from %s" % self._play_context.remote_addr)
response = keyczar_decrypt(self.key, response)
response = json.loads(response)
if "pong" in response:
# it's a keepalive, go back to waiting
display.vvvv("received a keepalive packet", host=self._play_context.remote_addr)
continue
else:
display.vvvv("received the response", host=self._play_context.remote_addr)
break
return (response.get('rc', None), response.get('stdout', ''), response.get('stderr', ''))
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
display.vvv("PUT %s TO %s" % (in_path, out_path), host=self._play_context.remote_addr)
in_path = to_bytes(in_path, errors='surrogate_or_strict')
if not os.path.exists(in_path):
raise AnsibleFileNotFound("file or module does not exist: %s" % in_path)
        fd = open(in_path, 'rb')
fstat = os.stat(in_path)
try:
display.vvv("PUT file is %d bytes" % fstat.st_size, host=self._play_context.remote_addr)
last = False
while fd.tell() <= fstat.st_size and not last:
display.vvvv("file position currently %ld, file size is %ld" % (fd.tell(), fstat.st_size), host=self._play_context.remote_addr)
data = fd.read(CHUNK_SIZE)
if fd.tell() >= fstat.st_size:
last = True
data = dict(mode='put', data=base64.b64encode(data), out_path=out_path, last=last)
if self._play_context.become:
data['user'] = self._play_context.become_user
data = jsonify(data)
data = keyczar_encrypt(self.key, data)
if self.send_data(data):
raise AnsibleError("failed to send the file to %s" % self._play_context.remote_addr)
response = self.recv_data()
if not response:
raise AnsibleError("Failed to get a response from %s" % self._play_context.remote_addr)
response = keyczar_decrypt(self.key, response)
response = json.loads(response)
if response.get('failed',False):
raise AnsibleError("failed to put the file in the requested location")
finally:
fd.close()
display.vvvv("waiting for final response after PUT", host=self._play_context.remote_addr)
response = self.recv_data()
if not response:
raise AnsibleError("Failed to get a response from %s" % self._play_context.remote_addr)
response = keyczar_decrypt(self.key, response)
response = json.loads(response)
if response.get('failed',False):
raise AnsibleError("failed to put the file in the requested location")
def fetch_file(self, in_path, out_path):
''' save a remote file to the specified path '''
display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self._play_context.remote_addr)
data = dict(mode='fetch', in_path=in_path)
data = jsonify(data)
data = keyczar_encrypt(self.key, data)
if self.send_data(data):
raise AnsibleError("failed to initiate the file fetch with %s" % self._play_context.remote_addr)
fh = open(to_bytes(out_path, errors='surrogate_or_strict'), "w")
try:
bytes = 0
while True:
response = self.recv_data()
if not response:
raise AnsibleError("Failed to get a response from %s" % self._play_context.remote_addr)
response = keyczar_decrypt(self.key, response)
response = json.loads(response)
if response.get('failed', False):
raise AnsibleError("Error during file fetch, aborting")
out = base64.b64decode(response['data'])
fh.write(out)
bytes += len(out)
# send an empty response back to signify we
# received the last chunk without errors
data = jsonify(dict())
data = keyczar_encrypt(self.key, data)
if self.send_data(data):
raise AnsibleError("failed to send ack during file fetch")
if response.get('last', False):
break
finally:
# we don't currently care about this final response,
# we just receive it and drop it. It may be used at some
# point in the future or we may just have the put/fetch
# operations not send back a final response at all
response = self.recv_data()
display.vvv("FETCH wrote %d bytes to %s" % (bytes, out_path), host=self._play_context.remote_addr)
fh.close()
def close(self):
''' terminate the connection '''
# Be a good citizen
try:
self.conn.close()
except:
pass
| gpl-3.0 |
yakovenkodenis/rethinkdb | external/v8_3.30.33.16/testing/gmock/gtest/test/gtest_shuffle_test.py | 3023 | 12549 | #!/usr/bin/env python
#
# Copyright 2009 Google Inc. All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Verifies that test shuffling works."""
__author__ = '[email protected] (Zhanyong Wan)'
import os
import gtest_test_utils
# Command to run the gtest_shuffle_test_ program.
COMMAND = gtest_test_utils.GetTestExecutablePath('gtest_shuffle_test_')
# The environment variables for test sharding.
TOTAL_SHARDS_ENV_VAR = 'GTEST_TOTAL_SHARDS'
SHARD_INDEX_ENV_VAR = 'GTEST_SHARD_INDEX'
TEST_FILTER = 'A*.A:A*.B:C*'
ALL_TESTS = []
ACTIVE_TESTS = []
FILTERED_TESTS = []
SHARDED_TESTS = []
SHUFFLED_ALL_TESTS = []
SHUFFLED_ACTIVE_TESTS = []
SHUFFLED_FILTERED_TESTS = []
SHUFFLED_SHARDED_TESTS = []
def AlsoRunDisabledTestsFlag():
return '--gtest_also_run_disabled_tests'
def FilterFlag(test_filter):
return '--gtest_filter=%s' % (test_filter,)
def RepeatFlag(n):
return '--gtest_repeat=%s' % (n,)
def ShuffleFlag():
return '--gtest_shuffle'
def RandomSeedFlag(n):
return '--gtest_random_seed=%s' % (n,)
def RunAndReturnOutput(extra_env, args):
"""Runs the test program and returns its output."""
environ_copy = os.environ.copy()
environ_copy.update(extra_env)
return gtest_test_utils.Subprocess([COMMAND] + args, env=environ_copy).output
def GetTestsForAllIterations(extra_env, args):
"""Runs the test program and returns a list of test lists.
Args:
extra_env: a map from environment variables to their values
args: command line flags to pass to gtest_shuffle_test_
Returns:
A list where the i-th element is the list of tests run in the i-th
test iteration.
"""
test_iterations = []
for line in RunAndReturnOutput(extra_env, args).split('\n'):
if line.startswith('----'):
tests = []
test_iterations.append(tests)
elif line.strip():
tests.append(line.strip()) # 'TestCaseName.TestName'
return test_iterations
def GetTestCases(tests):
"""Returns a list of test cases in the given full test names.
Args:
tests: a list of full test names
Returns:
A list of test cases from 'tests', in their original order.
    Duplicates are removed; only the first occurrence of each test case is kept.
"""
test_cases = []
for test in tests:
test_case = test.split('.')[0]
    if test_case not in test_cases:
test_cases.append(test_case)
return test_cases
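# Editor's note (illustrative, not from the original test): for example,
# GetTestCases(['A.x', 'A.y', 'B.z', 'A.w']) returns ['A', 'B'], since only the
# first occurrence of each test case name is kept.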
def CalculateTestLists():
"""Calculates the list of tests run under different flags."""
if not ALL_TESTS:
ALL_TESTS.extend(
GetTestsForAllIterations({}, [AlsoRunDisabledTestsFlag()])[0])
if not ACTIVE_TESTS:
ACTIVE_TESTS.extend(GetTestsForAllIterations({}, [])[0])
if not FILTERED_TESTS:
FILTERED_TESTS.extend(
GetTestsForAllIterations({}, [FilterFlag(TEST_FILTER)])[0])
if not SHARDED_TESTS:
SHARDED_TESTS.extend(
GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3',
SHARD_INDEX_ENV_VAR: '1'},
[])[0])
if not SHUFFLED_ALL_TESTS:
SHUFFLED_ALL_TESTS.extend(GetTestsForAllIterations(
{}, [AlsoRunDisabledTestsFlag(), ShuffleFlag(), RandomSeedFlag(1)])[0])
if not SHUFFLED_ACTIVE_TESTS:
SHUFFLED_ACTIVE_TESTS.extend(GetTestsForAllIterations(
{}, [ShuffleFlag(), RandomSeedFlag(1)])[0])
if not SHUFFLED_FILTERED_TESTS:
SHUFFLED_FILTERED_TESTS.extend(GetTestsForAllIterations(
{}, [ShuffleFlag(), RandomSeedFlag(1), FilterFlag(TEST_FILTER)])[0])
if not SHUFFLED_SHARDED_TESTS:
SHUFFLED_SHARDED_TESTS.extend(
GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3',
SHARD_INDEX_ENV_VAR: '1'},
[ShuffleFlag(), RandomSeedFlag(1)])[0])
class GTestShuffleUnitTest(gtest_test_utils.TestCase):
"""Tests test shuffling."""
def setUp(self):
CalculateTestLists()
def testShufflePreservesNumberOfTests(self):
self.assertEqual(len(ALL_TESTS), len(SHUFFLED_ALL_TESTS))
self.assertEqual(len(ACTIVE_TESTS), len(SHUFFLED_ACTIVE_TESTS))
self.assertEqual(len(FILTERED_TESTS), len(SHUFFLED_FILTERED_TESTS))
self.assertEqual(len(SHARDED_TESTS), len(SHUFFLED_SHARDED_TESTS))
def testShuffleChangesTestOrder(self):
self.assert_(SHUFFLED_ALL_TESTS != ALL_TESTS, SHUFFLED_ALL_TESTS)
self.assert_(SHUFFLED_ACTIVE_TESTS != ACTIVE_TESTS, SHUFFLED_ACTIVE_TESTS)
self.assert_(SHUFFLED_FILTERED_TESTS != FILTERED_TESTS,
SHUFFLED_FILTERED_TESTS)
self.assert_(SHUFFLED_SHARDED_TESTS != SHARDED_TESTS,
SHUFFLED_SHARDED_TESTS)
def testShuffleChangesTestCaseOrder(self):
self.assert_(GetTestCases(SHUFFLED_ALL_TESTS) != GetTestCases(ALL_TESTS),
GetTestCases(SHUFFLED_ALL_TESTS))
self.assert_(
GetTestCases(SHUFFLED_ACTIVE_TESTS) != GetTestCases(ACTIVE_TESTS),
GetTestCases(SHUFFLED_ACTIVE_TESTS))
self.assert_(
GetTestCases(SHUFFLED_FILTERED_TESTS) != GetTestCases(FILTERED_TESTS),
GetTestCases(SHUFFLED_FILTERED_TESTS))
self.assert_(
GetTestCases(SHUFFLED_SHARDED_TESTS) != GetTestCases(SHARDED_TESTS),
GetTestCases(SHUFFLED_SHARDED_TESTS))
def testShuffleDoesNotRepeatTest(self):
for test in SHUFFLED_ALL_TESTS:
self.assertEqual(1, SHUFFLED_ALL_TESTS.count(test),
'%s appears more than once' % (test,))
for test in SHUFFLED_ACTIVE_TESTS:
self.assertEqual(1, SHUFFLED_ACTIVE_TESTS.count(test),
'%s appears more than once' % (test,))
for test in SHUFFLED_FILTERED_TESTS:
self.assertEqual(1, SHUFFLED_FILTERED_TESTS.count(test),
'%s appears more than once' % (test,))
for test in SHUFFLED_SHARDED_TESTS:
self.assertEqual(1, SHUFFLED_SHARDED_TESTS.count(test),
'%s appears more than once' % (test,))
def testShuffleDoesNotCreateNewTest(self):
for test in SHUFFLED_ALL_TESTS:
self.assert_(test in ALL_TESTS, '%s is an invalid test' % (test,))
for test in SHUFFLED_ACTIVE_TESTS:
self.assert_(test in ACTIVE_TESTS, '%s is an invalid test' % (test,))
for test in SHUFFLED_FILTERED_TESTS:
self.assert_(test in FILTERED_TESTS, '%s is an invalid test' % (test,))
for test in SHUFFLED_SHARDED_TESTS:
self.assert_(test in SHARDED_TESTS, '%s is an invalid test' % (test,))
def testShuffleIncludesAllTests(self):
for test in ALL_TESTS:
self.assert_(test in SHUFFLED_ALL_TESTS, '%s is missing' % (test,))
for test in ACTIVE_TESTS:
self.assert_(test in SHUFFLED_ACTIVE_TESTS, '%s is missing' % (test,))
for test in FILTERED_TESTS:
self.assert_(test in SHUFFLED_FILTERED_TESTS, '%s is missing' % (test,))
for test in SHARDED_TESTS:
self.assert_(test in SHUFFLED_SHARDED_TESTS, '%s is missing' % (test,))
def testShuffleLeavesDeathTestsAtFront(self):
non_death_test_found = False
for test in SHUFFLED_ACTIVE_TESTS:
if 'DeathTest.' in test:
self.assert_(not non_death_test_found,
'%s appears after a non-death test' % (test,))
else:
non_death_test_found = True
def _VerifyTestCasesDoNotInterleave(self, tests):
test_cases = []
for test in tests:
[test_case, _] = test.split('.')
if test_cases and test_cases[-1] != test_case:
test_cases.append(test_case)
self.assertEqual(1, test_cases.count(test_case),
'Test case %s is not grouped together in %s' %
(test_case, tests))
def testShuffleDoesNotInterleaveTestCases(self):
self._VerifyTestCasesDoNotInterleave(SHUFFLED_ALL_TESTS)
self._VerifyTestCasesDoNotInterleave(SHUFFLED_ACTIVE_TESTS)
self._VerifyTestCasesDoNotInterleave(SHUFFLED_FILTERED_TESTS)
self._VerifyTestCasesDoNotInterleave(SHUFFLED_SHARDED_TESTS)
def testShuffleRestoresOrderAfterEachIteration(self):
# Get the test lists in all 3 iterations, using random seed 1, 2,
# and 3 respectively. Google Test picks a different seed in each
# iteration, and this test depends on the current implementation
# picking successive numbers. This dependency is not ideal, but
# makes the test much easier to write.
[tests_in_iteration1, tests_in_iteration2, tests_in_iteration3] = (
GetTestsForAllIterations(
{}, [ShuffleFlag(), RandomSeedFlag(1), RepeatFlag(3)]))
# Make sure running the tests with random seed 1 gets the same
# order as in iteration 1 above.
[tests_with_seed1] = GetTestsForAllIterations(
{}, [ShuffleFlag(), RandomSeedFlag(1)])
self.assertEqual(tests_in_iteration1, tests_with_seed1)
# Make sure running the tests with random seed 2 gets the same
# order as in iteration 2 above. Success means that Google Test
# correctly restores the test order before re-shuffling at the
# beginning of iteration 2.
[tests_with_seed2] = GetTestsForAllIterations(
{}, [ShuffleFlag(), RandomSeedFlag(2)])
self.assertEqual(tests_in_iteration2, tests_with_seed2)
# Make sure running the tests with random seed 3 gets the same
# order as in iteration 3 above. Success means that Google Test
# correctly restores the test order before re-shuffling at the
# beginning of iteration 3.
[tests_with_seed3] = GetTestsForAllIterations(
{}, [ShuffleFlag(), RandomSeedFlag(3)])
self.assertEqual(tests_in_iteration3, tests_with_seed3)
def testShuffleGeneratesNewOrderInEachIteration(self):
[tests_in_iteration1, tests_in_iteration2, tests_in_iteration3] = (
GetTestsForAllIterations(
{}, [ShuffleFlag(), RandomSeedFlag(1), RepeatFlag(3)]))
self.assert_(tests_in_iteration1 != tests_in_iteration2,
tests_in_iteration1)
self.assert_(tests_in_iteration1 != tests_in_iteration3,
tests_in_iteration1)
self.assert_(tests_in_iteration2 != tests_in_iteration3,
tests_in_iteration2)
def testShuffleShardedTestsPreservesPartition(self):
# If we run M tests on N shards, the same M tests should be run in
# total, regardless of the random seeds used by the shards.
[tests1] = GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3',
SHARD_INDEX_ENV_VAR: '0'},
[ShuffleFlag(), RandomSeedFlag(1)])
[tests2] = GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3',
SHARD_INDEX_ENV_VAR: '1'},
[ShuffleFlag(), RandomSeedFlag(20)])
[tests3] = GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3',
SHARD_INDEX_ENV_VAR: '2'},
[ShuffleFlag(), RandomSeedFlag(25)])
sorted_sharded_tests = tests1 + tests2 + tests3
sorted_sharded_tests.sort()
sorted_active_tests = []
sorted_active_tests.extend(ACTIVE_TESTS)
sorted_active_tests.sort()
self.assertEqual(sorted_active_tests, sorted_sharded_tests)
if __name__ == '__main__':
gtest_test_utils.Main()
| agpl-3.0 |
CydarLtd/ansible | lib/ansible/plugins/lookup/hashi_vault.py | 52 | 5795 | # (c) 2015, Jonathan Davila <jdavila(at)ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# USAGE: {{ lookup('hashi_vault', 'secret=secret/hello:value token=c975b780-d1be-8016-866b-01d0f9b688a5 url=http://myvault:8200')}}
#
# To authenticate with a username/password against the LDAP auth backend in Vault:
#
# USAGE: {{ lookup('hashi_vault', 'secret=secret/hello:value auth_method=ldap mount_point=ldap username=myuser password=mypassword url=http://myvault:8200')}}
#
# The mount_point param defaults to ldap, so is only required if you have a custom mount point.
#
# You can skip setting the url if you set the VAULT_ADDR environment variable
# or if you want it to default to localhost:8200
#
# NOTE: Due to a current limitation in the HVAC library there won't
# necessarily be an error if a bad endpoint is specified.
#
# Requires hvac library. Install with pip.
#
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible.errors import AnsibleError
from ansible.plugins.lookup import LookupBase
ANSIBLE_HASHI_VAULT_ADDR = 'http://127.0.0.1:8200'
if os.getenv('VAULT_ADDR') is not None:
ANSIBLE_HASHI_VAULT_ADDR = os.environ['VAULT_ADDR']
class HashiVault:
def __init__(self, **kwargs):
try:
import hvac
except ImportError:
raise AnsibleError("Please pip install hvac to use this module")
self.url = kwargs.get('url', ANSIBLE_HASHI_VAULT_ADDR)
# split secret arg, which has format 'secret/hello:value' into secret='secret/hello' and secret_field='value'
s = kwargs.get('secret')
if s is None:
raise AnsibleError("No secret specified")
s_f = s.split(':')
self.secret = s_f[0]
if len(s_f)>=2:
self.secret_field = s_f[1]
else:
self.secret_field = 'value'
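        # Editor's note (illustrative examples of the parsing above, not from the
        # original plugin):
        #   'secret/hello:value' -> secret='secret/hello', secret_field='value'
        #   'secret/hello'       -> secret='secret/hello', secret_field='value' (default)
        #   'secret/hello:'      -> secret_field='' and get() returns the whole data dict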
# if a particular backend is asked for (and its method exists) we call it, otherwise drop through to using
# token auth. this means if a particular auth backend is requested and a token is also given, then we
# ignore the token and attempt authentication against the specified backend.
#
# to enable a new auth backend, simply add a new 'def auth_<type>' method below.
#
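        # Editor's sketch (hypothetical, not part of this plugin): a userpass backend
        # could be added alongside auth_ldap as
        #   def auth_userpass(self, **kwargs):
        #       self.client.auth_userpass(kwargs.get('username'), kwargs.get('password'),
        #                                 mount_point=kwargs.get('mount_point') or 'userpass')
        # and selected with auth_method=userpass in the lookup string.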
self.auth_method = kwargs.get('auth_method')
if self.auth_method:
try:
self.client = hvac.Client(url=self.url)
# prefixing with auth_ to limit which methods can be accessed
getattr(self, 'auth_' + self.auth_method)(**kwargs)
except AttributeError:
raise AnsibleError("Authentication method '%s' not supported" % self.auth_method)
else:
self.token = kwargs.get('token', os.environ.get('VAULT_TOKEN', None))
if self.token is None and os.environ.get('HOME'):
token_filename = os.path.join(
os.environ.get('HOME'),
'.vault-token'
)
if os.path.exists(token_filename):
with open(token_filename) as token_file:
self.token = token_file.read().strip()
if self.token is None:
raise AnsibleError("No Vault Token specified")
self.client = hvac.Client(url=self.url, token=self.token)
if self.client.is_authenticated():
pass
else:
raise AnsibleError("Invalid authentication credentials specified")
def get(self):
data = self.client.read(self.secret)
if data is None:
raise AnsibleError("The secret %s doesn't seem to exist" % self.secret)
if self.secret_field=='': # secret was specified with trailing ':'
return data['data']
if self.secret_field not in data['data']:
raise AnsibleError("The secret %s does not contain the field '%s'. " % (self.secret, self.secret_field))
return data['data'][self.secret_field]
def auth_ldap(self, **kwargs):
username = kwargs.get('username')
if username is None:
raise AnsibleError("Authentication method ldap requires a username")
password = kwargs.get('password')
if password is None:
raise AnsibleError("Authentication method ldap requires a password")
mount_point = kwargs.get('mount_point')
if mount_point is None:
mount_point = 'ldap'
self.client.auth_ldap(username, password, mount_point)
class LookupModule(LookupBase):
def run(self, terms, variables, **kwargs):
vault_args = terms[0].split(' ')
vault_dict = {}
ret = []
for param in vault_args:
try:
key, value = param.split('=')
except ValueError as e:
raise AnsibleError("hashi_vault plugin needs key=value pairs, but received %s" % terms)
vault_dict[key] = value
vault_conn = HashiVault(**vault_dict)
for term in terms:
key = term.split()[0]
value = vault_conn.get()
ret.append(value)
return ret
| gpl-3.0 |
pelya/commandergenius | project/jni/python/src/Lib/test/test_long_future.py | 56 | 2204 | from __future__ import division
# When true division is the default, get rid of this and add it to
# test_long.py instead. In the meantime, it's too obscure to try to
# trick just part of test_long into using future division.
import unittest
from test.test_support import run_unittest
class TrueDivisionTests(unittest.TestCase):
def test(self):
huge = 1L << 40000
mhuge = -huge
self.assertEqual(huge / huge, 1.0)
self.assertEqual(mhuge / mhuge, 1.0)
self.assertEqual(huge / mhuge, -1.0)
self.assertEqual(mhuge / huge, -1.0)
self.assertEqual(1 / huge, 0.0)
self.assertEqual(1L / huge, 0.0)
self.assertEqual(1 / mhuge, 0.0)
self.assertEqual(1L / mhuge, 0.0)
self.assertEqual((666 * huge + (huge >> 1)) / huge, 666.5)
self.assertEqual((666 * mhuge + (mhuge >> 1)) / mhuge, 666.5)
self.assertEqual((666 * huge + (huge >> 1)) / mhuge, -666.5)
self.assertEqual((666 * mhuge + (mhuge >> 1)) / huge, -666.5)
self.assertEqual(huge / (huge << 1), 0.5)
self.assertEqual((1000000 * huge) / huge, 1000000)
namespace = {'huge': huge, 'mhuge': mhuge}
for overflow in ["float(huge)", "float(mhuge)",
"huge / 1", "huge / 2L", "huge / -1", "huge / -2L",
"mhuge / 100", "mhuge / 100L"]:
# XXX(cwinter) this test doesn't pass when converted to
# use assertRaises.
try:
eval(overflow, namespace)
self.fail("expected OverflowError from %r" % overflow)
except OverflowError:
pass
for underflow in ["1 / huge", "2L / huge", "-1 / huge", "-2L / huge",
"100 / mhuge", "100L / mhuge"]:
result = eval(underflow, namespace)
self.assertEqual(result, 0.0,
"expected underflow to 0 from %r" % underflow)
for zero in ["huge / 0", "huge / 0L", "mhuge / 0", "mhuge / 0L"]:
self.assertRaises(ZeroDivisionError, eval, zero, namespace)
def test_main():
run_unittest(TrueDivisionTests)
if __name__ == "__main__":
test_main()
| lgpl-2.1 |
JulyKikuAkita/PythonPrac | cs15211/BinaryTreesWithFactors.py | 1 | 4600 | __source__ = 'https://leetcode.com/problems/binary-trees-with-factors/'
# Time: O(N^2)
# Space: O(N)
#
# Description: Leetcode # 823. Binary Trees With Factors
#
# Given an array of unique integers, each integer is strictly greater than 1.
#
# We make a binary tree using these integers and each number may be used for any number of times.
#
# Each non-leaf node's value should be equal to the product of the values of it's children.
#
# How many binary trees can we make? Return the answer modulo 10 ** 9 + 7.
#
# Example 1:
#
# Input: A = [2, 4]
# Output: 3
# Explanation: We can make these trees: [2], [4], [4, 2, 2]
# Example 2:
#
# Input: A = [2, 4, 5, 10]
# Output: 7
# Explanation: We can make these trees: [2], [4], [5], [10], [4, 2, 2], [10, 2, 5], [10, 5, 2].
#
#
# Note:
#
# 1 <= A.length <= 1000.
# 2 <= A[i] <= 10 ^ 9.
#
import unittest
#28ms 88.16%
class Solution(object):
def numFactoredBinaryTrees(self, A):
"""
:type A: List[int]
:rtype: int
"""
MOD = 10 ** 9 + 7
N = len(A)
A.sort()
dp = [1] * N
index = {x: i for i, x in enumerate(A)}
for i, x in enumerate(A):
for j in xrange(i):
if x % A[j] == 0: #A[j] will be left child
right = x / A[j]
if right in index:
dp[i] += dp[j] * dp[index[right]]
dp[i] %= MOD
return sum(dp) % MOD
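# Editor's note (worked example, derived from Example 2 above): for A = [2, 4, 5, 10]
# the dp values are 1 for 2, 2 for 4 ([4], [4,2,2]), 1 for 5, and 3 for 10
# ([10], [10,2,5], [10,5,2]), so sum(dp) = 7.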
class TestMethods(unittest.TestCase):
def test_Local(self):
self.assertEqual(1, 1)
if __name__ == '__main__':
unittest.main()
Java = '''
#Thought: https://leetcode.com/problems/binary-trees-with-factors/solution/
Approach #1: Dynamic Programming [Accepted]
Complexity Analysis
Time Complexity: O(N^2), where N is the length of A. This comes from the two for-loops iterating i and j.
Space Complexity: O(N), the space used by dp and index.
# 53ms 54.34%
class Solution {
public int numFactoredBinaryTrees(int[] A) {
int MOD = 1_000_000_007;
int N = A.length;
Arrays.sort(A);
long[] dp = new long[N];
Arrays.fill(dp, 1);
Map<Integer, Integer> index = new HashMap();
for (int i = 0 ; i < N; i++) {
index.put(A[i], i);
}
for (int i = 0; i < N; i++) {
for (int j = 0; j < i; j++) {
if (A[i] % A[j] == 0) { //A[j] is left child
int right = A[i] / A[j];
if (index.containsKey(right)) {
dp[i] = (dp[i] + dp[j] * dp[index.get(right)]) % MOD;
}
}
}
}
long ans = 0;
for ( long x: dp) ans += x;
return (int) (ans % MOD);
}
}
#31ms 86.30%
class Solution {
public int numFactoredBinaryTrees(int[] A) {
long res = 0L, mod = (long) 1000000007;
long[] dp = new long[A.length];
Arrays.fill(dp,1);
Arrays.sort(A);
for (int i = 1; i < A.length; i++) {
int s = 0, e = i - 1;
while (s <= e) {
if (A[s] * A[e] > A[i]) e--;
else if (A[s] * A[e] < A[i]) s++;
else {
dp[i] = ((dp[s] * dp[e] * (A[s] == A[e] ? 1 : 2 ) % mod + dp[i])) % mod;
s++;
}
}
}
for (long d : dp) res = ( d + res) % mod;
return (int) res;
}
}
# 19ms 100%
class Solution {
public int numFactoredBinaryTrees(int[] A) {
Arrays.sort(A);
Map<Integer, Integer> valueToIndex = new HashMap<>();
for(int i = 0; i < A.length; i++) valueToIndex.put(A[i], i);
int n = A.length;
long fResult = 0;
long[] gResults = new long[n];
for(int i = 0; i < A.length; i++){
int cur = A[i];
gResults[i] = 1;
for (int leftIndex = 0; A[leftIndex] <= Math.sqrt(cur); leftIndex++) {
int left = A[leftIndex];
if (cur % left == 0 && valueToIndex.containsKey(cur / left)) {
int right = cur / left;
int rightIndex = valueToIndex.get(right);
if (left != right) {
gResults[i] += gResults[leftIndex] * gResults[rightIndex] * 2;
} else {
gResults[i] += gResults[leftIndex] * gResults[rightIndex];
}
}
}
fResult += gResults[i];
}
return (int) (fResult % ((int) Math.pow(10, 9) + 7));
}
}
''' | apache-2.0 |
geometrybase/gensim | gensim/models/lda_worker.py | 53 | 4079 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2011 Radim Rehurek <[email protected]>
# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
"""
USAGE: %(program)s
Worker ("slave") process used in computing distributed LDA. Run this script \
on every node in your cluster. If you wish, you may even run it multiple times \
on a single machine, to make better use of multiple cores (just beware that \
memory footprint increases accordingly).
Example: python -m gensim.models.lda_worker
"""
from __future__ import with_statement
import os, sys, logging
import threading
import tempfile
try:
import Queue
except ImportError:
import queue as Queue
import Pyro4
from gensim.models import ldamodel
from gensim import utils
logger = logging.getLogger('gensim.models.lda_worker')
# periodically save intermediate models after every SAVE_DEBUG updates (0 for never)
SAVE_DEBUG = 0
class Worker(object):
def __init__(self):
self.model = None
def initialize(self, myid, dispatcher, **model_params):
self.lock_update = threading.Lock()
self.jobsdone = 0 # how many jobs has this worker completed?
self.myid = myid # id of this worker in the dispatcher; just a convenience var for easy access/logging TODO remove?
self.dispatcher = dispatcher
self.finished = False
logger.info("initializing worker #%s" % myid)
self.model = ldamodel.LdaModel(**model_params)
@Pyro4.oneway
def requestjob(self):
"""
Request jobs from the dispatcher, in a perpetual loop until `getstate()` is called.
"""
if self.model is None:
raise RuntimeError("worker must be initialized before receiving jobs")
job = None
while job is None and not self.finished:
try:
job = self.dispatcher.getjob(self.myid)
except Queue.Empty:
# no new job: try again, unless we're finished with all work
continue
if job is not None:
logger.info("worker #%s received job #%i" % (self.myid, self.jobsdone))
self.processjob(job)
self.dispatcher.jobdone(self.myid)
else:
logger.info("worker #%i stopping asking for jobs" % self.myid)
@utils.synchronous('lock_update')
def processjob(self, job):
logger.debug("starting to process job #%i" % self.jobsdone)
self.model.do_estep(job)
self.jobsdone += 1
if SAVE_DEBUG and self.jobsdone % SAVE_DEBUG == 0:
fname = os.path.join(tempfile.gettempdir(), 'lda_worker.pkl')
self.model.save(fname)
logger.info("finished processing job #%i" % (self.jobsdone - 1))
@utils.synchronous('lock_update')
def getstate(self):
logger.info("worker #%i returning its state after %s jobs" %
(self.myid, self.jobsdone))
result = self.model.state
assert isinstance(result, ldamodel.LdaState)
self.model.clear() # free up mem in-between two EM cycles
self.finished = True
return result
@utils.synchronous('lock_update')
def reset(self, state):
assert state is not None
logger.info("resetting worker #%i" % self.myid)
self.model.state = state
self.model.sync_state()
self.model.state.reset()
self.finished = False
@Pyro4.oneway
def exit(self):
logger.info("terminating worker #%i" % self.myid)
os._exit(0)
#endclass Worker
def main():
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
logger.info("running %s" % " ".join(sys.argv))
program = os.path.basename(sys.argv[0])
# make sure we have enough cmd line parameters
if len(sys.argv) < 1:
print(globals()["__doc__"] % locals())
sys.exit(1)
utils.pyro_daemon('gensim.lda_worker', Worker(), random_suffix=True)
logger.info("finished running %s" % program)
if __name__ == '__main__':
main()
| gpl-3.0 |
cheatos101/tapiriik | tapiriik/web/views/config/__init__.py | 16 | 2223 | from tapiriik.auth import User
from tapiriik.sync import Sync
from tapiriik.services import Service
from django.shortcuts import render, redirect
from django import forms
from django.http import HttpResponse
import json
def config_save(req, service):
if not req.user:
return HttpResponse(status=403)
conn = User.GetConnectionRecord(req.user, service)
if not conn:
return HttpResponse(status=404)
conn.SetConfiguration(json.loads(req.POST["config"]))
Sync.SetNextSyncIsExhaustive(req.user, True) # e.g. if they opted to sync private activities.
return HttpResponse()
def config_flow_save(req, service):
if not req.user:
return HttpResponse(status=403)
conns = User.GetConnectionRecordsByUser(req.user)
if service not in [x.Service.ID for x in conns]:
return HttpResponse(status=404)
sourceSvc = [x for x in conns if x.Service.ID == service][0]
# the JS doesn't resolve the flow exceptions, it just passes in the expanded config flags for the edited service (which will override other flowexceptions)
flowFlags = json.loads(req.POST["flowFlags"])
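    # Editor's note (illustrative, inferred from the loop below): flowFlags is expected
    # to look like {"forward": ["<destination_service_id>", ...]}, listing the services
    # that should receive activities from the edited service.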
for destSvc in [x for x in conns if x.Service.ID != service]:
User.SetFlowException(req.user, sourceSvc, destSvc, destSvc.Service.ID in flowFlags["forward"], None)
Sync.SetNextSyncIsExhaustive(req.user, True) # to pick up any activities left behind
return HttpResponse()
class DropboxConfigForm(forms.Form):
path = forms.CharField(label="Dropbox sync path")
syncUntagged = forms.BooleanField(label="Sync untagged activities", required=False)
def dropbox(req):
if not req.user:
return HttpResponse(status=403)
conn = User.GetConnectionRecord(req.user, "dropbox")
if req.method == "POST":
form = DropboxConfigForm(req.POST)
if form.is_valid():
conn.SetConfiguration({"SyncRoot": form.cleaned_data['path'], "UploadUntagged": form.cleaned_data['syncUntagged']})
return redirect("dashboard")
else:
conf = conn.GetConfiguration()
form = DropboxConfigForm({"path": conf["SyncRoot"], "syncUntagged": conf["UploadUntagged"]})
return render(req, "config/dropbox.html", {"form": form})
| apache-2.0 |
bheesham/servo | python/mach/mach/main.py | 96 | 20555 | # This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# This module provides functionality for the command-line build tool
# (mach). It is packaged as a module because everything is a library.
from __future__ import absolute_import, print_function, unicode_literals
from collections import Iterable
import argparse
import codecs
import imp
import logging
import os
import sys
import traceback
import uuid
from .base import (
CommandContext,
MachError,
NoCommandError,
UnknownCommandError,
UnrecognizedArgumentError,
)
from .decorators import (
CommandArgument,
CommandProvider,
Command,
)
from .config import ConfigSettings
from .dispatcher import CommandAction
from .logging import LoggingManager
from .registrar import Registrar
MACH_ERROR = r'''
The error occurred in mach itself. This is likely a bug in mach itself or a
fundamental problem with a loaded module.
Please consider filing a bug against mach by going to the URL:
https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&component=mach
'''.lstrip()
ERROR_FOOTER = r'''
If filing a bug, please include the full output of mach, including this error
message.
The details of the failure are as follows:
'''.lstrip()
COMMAND_ERROR = r'''
The error occurred in the implementation of the invoked mach command.
This should never occur and is likely a bug in the implementation of that
command. Consider filing a bug for this issue.
'''.lstrip()
MODULE_ERROR = r'''
The error occurred in code that was called by the mach command. This is either
a bug in the called code itself or in the way that mach is calling it.
You should consider filing a bug for this issue.
'''.lstrip()
NO_COMMAND_ERROR = r'''
It looks like you tried to run mach without a command.
Run |mach help| to show a list of commands.
'''.lstrip()
UNKNOWN_COMMAND_ERROR = r'''
It looks like you are trying to %s an unknown mach command: %s
%s
Run |mach help| to show a list of commands.
'''.lstrip()
SUGGESTED_COMMANDS_MESSAGE = r'''
Did you want to %s any of these commands instead: %s?
'''
UNRECOGNIZED_ARGUMENT_ERROR = r'''
It looks like you passed an unrecognized argument into mach.
The %s command does not accept the arguments: %s
'''.lstrip()
INVALID_ENTRY_POINT = r'''
Entry points should return a list of command providers or directories
containing command providers. The following entry point is invalid:
%s
You are seeing this because there is an error in an external module attempting
to implement a mach command. Please fix the error, or uninstall the module from
your system.
'''.lstrip()
class ArgumentParser(argparse.ArgumentParser):
"""Custom implementation argument parser to make things look pretty."""
def error(self, message):
"""Custom error reporter to give more helpful text on bad commands."""
if not message.startswith('argument command: invalid choice'):
argparse.ArgumentParser.error(self, message)
assert False
print('Invalid command specified. The list of commands is below.\n')
self.print_help()
sys.exit(1)
def format_help(self):
text = argparse.ArgumentParser.format_help(self)
        # Strip out the silly command list that would precede the pretty list.
#
# Commands:
# {foo,bar}
# foo Do foo.
# bar Do bar.
search = 'Commands:\n {'
start = text.find(search)
if start != -1:
end = text.find('}\n', start)
assert end != -1
real_start = start + len('Commands:\n')
real_end = end + len('}\n')
text = text[0:real_start] + text[real_end:]
return text
class ContextWrapper(object):
def __init__(self, context, handler):
object.__setattr__(self, '_context', context)
object.__setattr__(self, '_handler', handler)
def __getattribute__(self, key):
try:
return getattr(object.__getattribute__(self, '_context'), key)
except AttributeError as e:
try:
ret = object.__getattribute__(self, '_handler')(self, key)
except (AttributeError, TypeError):
# TypeError is in case the handler comes from old code not
# taking a key argument.
raise e
setattr(self, key, ret)
return ret
def __setattr__(self, key, value):
setattr(object.__getattribute__(self, '_context'), key, value)
@CommandProvider
class Mach(object):
"""Main mach driver type.
This type is responsible for holding global mach state and dispatching
a command from arguments.
The following attributes may be assigned to the instance to influence
behavior:
populate_context_handler -- If defined, it must be a callable. The
callable signature is the following:
populate_context_handler(context, key=None)
It acts as a fallback getter for the mach.base.CommandContext
instance.
            This allows augmenting the context instance with arbitrary data
for use in command handlers.
For backwards compatibility, it is also called before command
dispatch without a key, allowing the context handler to add
attributes to the context instance.
require_conditions -- If True, commands that do not have any condition
functions applied will be skipped. Defaults to False.
"""
USAGE = """%(prog)s [global arguments] command [command arguments]
mach (German for "do") is the main interface to the Mozilla build system and
common developer tasks.
You tell mach the command you want to perform and it does it for you.
Some common commands are:
%(prog)s build Build/compile the source tree.
%(prog)s help Show full help, including the list of all commands.
To see more help for a specific command, run:
%(prog)s help <command>
"""
def __init__(self, cwd):
assert os.path.isdir(cwd)
self.cwd = cwd
self.log_manager = LoggingManager()
self.logger = logging.getLogger(__name__)
self.settings = ConfigSettings()
self.log_manager.register_structured_logger(self.logger)
self.global_arguments = []
self.populate_context_handler = None
def add_global_argument(self, *args, **kwargs):
"""Register a global argument with the argument parser.
Arguments are proxied to ArgumentParser.add_argument()
"""
self.global_arguments.append((args, kwargs))
def load_commands_from_directory(self, path):
"""Scan for mach commands from modules in a directory.
This takes a path to a directory, loads the .py files in it, and
        registers any found mach command providers with this mach instance.
"""
for f in sorted(os.listdir(path)):
if not f.endswith('.py') or f == '__init__.py':
continue
full_path = os.path.join(path, f)
module_name = 'mach.commands.%s' % f[0:-3]
self.load_commands_from_file(full_path, module_name=module_name)
def load_commands_from_file(self, path, module_name=None):
"""Scan for mach commands from a file.
This takes a path to a file and loads it as a Python module under the
module name specified. If no name is specified, a random one will be
chosen.
"""
if module_name is None:
# Ensure parent module is present otherwise we'll (likely) get
# an error due to unknown parent.
if b'mach.commands' not in sys.modules:
mod = imp.new_module(b'mach.commands')
sys.modules[b'mach.commands'] = mod
module_name = 'mach.commands.%s' % uuid.uuid1().get_hex()
imp.load_source(module_name, path)
def load_commands_from_entry_point(self, group='mach.providers'):
"""Scan installed packages for mach command provider entry points. An
entry point is a function that returns a list of paths to files or
directories containing command providers.
This takes an optional group argument which specifies the entry point
group to use. If not specified, it defaults to 'mach.providers'.
"""
try:
import pkg_resources
except ImportError:
print("Could not find setuptools, ignoring command entry points",
file=sys.stderr)
return
for entry in pkg_resources.iter_entry_points(group=group, name=None):
paths = entry.load()()
if not isinstance(paths, Iterable):
print(INVALID_ENTRY_POINT % entry)
sys.exit(1)
for path in paths:
if os.path.isfile(path):
self.load_commands_from_file(path)
elif os.path.isdir(path):
self.load_commands_from_directory(path)
else:
print("command provider '%s' does not exist" % path)
def define_category(self, name, title, description, priority=50):
"""Provide a description for a named command category."""
Registrar.register_category(name, title, description, priority)
@property
def require_conditions(self):
return Registrar.require_conditions
@require_conditions.setter
def require_conditions(self, value):
Registrar.require_conditions = value
def run(self, argv, stdin=None, stdout=None, stderr=None):
"""Runs mach with arguments provided from the command line.
Returns the integer exit code that should be used. 0 means success. All
other values indicate failure.
"""
# If no encoding is defined, we default to UTF-8 because without this
# Python 2.7 will assume the default encoding of ASCII. This will blow
# up with UnicodeEncodeError as soon as it encounters a non-ASCII
# character in a unicode instance. We simply install a wrapper around
# the streams and restore once we have finished.
stdin = sys.stdin if stdin is None else stdin
stdout = sys.stdout if stdout is None else stdout
stderr = sys.stderr if stderr is None else stderr
orig_stdin = sys.stdin
orig_stdout = sys.stdout
orig_stderr = sys.stderr
sys.stdin = stdin
sys.stdout = stdout
sys.stderr = stderr
try:
if stdin.encoding is None:
sys.stdin = codecs.getreader('utf-8')(stdin)
if stdout.encoding is None:
sys.stdout = codecs.getwriter('utf-8')(stdout)
if stderr.encoding is None:
sys.stderr = codecs.getwriter('utf-8')(stderr)
return self._run(argv)
except KeyboardInterrupt:
print('mach interrupted by signal or user action. Stopping.')
return 1
except Exception as e:
# _run swallows exceptions in invoked handlers and converts them to
# a proper exit code. So, the only scenario where we should get an
# exception here is if _run itself raises. If _run raises, that's a
# bug in mach (or a loaded command module being silly) and thus
# should be reported differently.
self._print_error_header(argv, sys.stdout)
print(MACH_ERROR)
exc_type, exc_value, exc_tb = sys.exc_info()
stack = traceback.extract_tb(exc_tb)
self._print_exception(sys.stdout, exc_type, exc_value, stack)
return 1
finally:
sys.stdin = orig_stdin
sys.stdout = orig_stdout
sys.stderr = orig_stderr
def _run(self, argv):
context = CommandContext(cwd=self.cwd,
settings=self.settings, log_manager=self.log_manager,
commands=Registrar)
if self.populate_context_handler:
self.populate_context_handler(context)
context = ContextWrapper(context, self.populate_context_handler)
parser = self.get_argument_parser(context)
if not len(argv):
# We don't register the usage until here because if it is globally
# registered, argparse always prints it. This is not desired when
# running with --help.
parser.usage = Mach.USAGE
parser.print_usage()
return 0
try:
args = parser.parse_args(argv)
except NoCommandError:
print(NO_COMMAND_ERROR)
return 1
except UnknownCommandError as e:
suggestion_message = SUGGESTED_COMMANDS_MESSAGE % (e.verb, ', '.join(e.suggested_commands)) if e.suggested_commands else ''
print(UNKNOWN_COMMAND_ERROR % (e.verb, e.command, suggestion_message))
return 1
except UnrecognizedArgumentError as e:
print(UNRECOGNIZED_ARGUMENT_ERROR % (e.command,
' '.join(e.arguments)))
return 1
# Add JSON logging to a file if requested.
if args.logfile:
self.log_manager.add_json_handler(args.logfile)
# Up the logging level if requested.
log_level = logging.INFO
if args.verbose:
log_level = logging.DEBUG
self.log_manager.register_structured_logger(logging.getLogger('mach'))
write_times = True
if args.log_no_times or 'MACH_NO_WRITE_TIMES' in os.environ:
write_times = False
# Always enable terminal logging. The log manager figures out if we are
# actually in a TTY or are a pipe and does the right thing.
self.log_manager.add_terminal_logging(level=log_level,
write_interval=args.log_interval, write_times=write_times)
self.load_settings(args)
if not hasattr(args, 'mach_handler'):
raise MachError('ArgumentParser result missing mach handler info.')
handler = getattr(args, 'mach_handler')
try:
return Registrar._run_command_handler(handler, context=context,
debug_command=args.debug_command, **vars(args.command_args))
except KeyboardInterrupt as ki:
raise ki
except Exception as e:
exc_type, exc_value, exc_tb = sys.exc_info()
# The first two frames are us and are never used.
stack = traceback.extract_tb(exc_tb)[2:]
# If we have nothing on the stack, the exception was raised as part
# of calling the @Command method itself. This likely means a
# mismatch between @CommandArgument and arguments to the method.
# e.g. there exists a @CommandArgument without the corresponding
# argument on the method. We handle that here until the module
# loader grows the ability to validate better.
if not len(stack):
print(COMMAND_ERROR)
self._print_exception(sys.stdout, exc_type, exc_value,
traceback.extract_tb(exc_tb))
return 1
# Split the frames into those from the module containing the
# command and everything else.
command_frames = []
other_frames = []
initial_file = stack[0][0]
for frame in stack:
if frame[0] == initial_file:
command_frames.append(frame)
else:
other_frames.append(frame)
# If the exception was in the module providing the command, it's
# likely the bug is in the mach command module, not something else.
# If there are other frames, the bug is likely not the mach
# command's fault.
self._print_error_header(argv, sys.stdout)
if len(other_frames):
print(MODULE_ERROR)
else:
print(COMMAND_ERROR)
self._print_exception(sys.stdout, exc_type, exc_value, stack)
return 1
def log(self, level, action, params, format_str):
"""Helper method to record a structured log event."""
self.logger.log(level, format_str,
extra={'action': action, 'params': params})
def _print_error_header(self, argv, fh):
fh.write('Error running mach:\n\n')
fh.write(' ')
fh.write(repr(argv))
fh.write('\n\n')
def _print_exception(self, fh, exc_type, exc_value, stack):
fh.write(ERROR_FOOTER)
fh.write('\n')
for l in traceback.format_exception_only(exc_type, exc_value):
fh.write(l)
fh.write('\n')
for l in traceback.format_list(stack):
fh.write(l)
def load_settings(self, args):
"""Determine which settings files apply and load them.
Currently, we only support loading settings from a single file.
        Ideally, we would support loading from multiple files. This is supported by
the ConfigSettings API. However, that API currently doesn't track where
individual values come from, so if we load from multiple sources then
save, we effectively do a full copy. We don't want this. Until
ConfigSettings does the right thing, we shouldn't expose multi-file
loading.
We look for a settings file in the following locations. The first one
found wins:
1) Command line argument
2) Environment variable
3) Default path
"""
# Settings are disabled until integration with command providers is
# worked out.
self.settings = None
return False
for provider in Registrar.settings_providers:
provider.register_settings()
self.settings.register_provider(provider)
p = os.path.join(self.cwd, 'mach.ini')
if args.settings_file:
p = args.settings_file
elif 'MACH_SETTINGS_FILE' in os.environ:
p = os.environ['MACH_SETTINGS_FILE']
self.settings.load_file(p)
return os.path.exists(p)
def get_argument_parser(self, context):
"""Returns an argument parser for the command-line interface."""
parser = ArgumentParser(add_help=False,
usage='%(prog)s [global arguments] command [command arguments]')
# Order is important here as it dictates the order the auto-generated
# help messages are printed.
global_group = parser.add_argument_group('Global Arguments')
#global_group.add_argument('--settings', dest='settings_file',
# metavar='FILENAME', help='Path to settings file.')
global_group.add_argument('-v', '--verbose', dest='verbose',
action='store_true', default=False,
help='Print verbose output.')
global_group.add_argument('-l', '--log-file', dest='logfile',
metavar='FILENAME', type=argparse.FileType('ab'),
help='Filename to write log data to.')
global_group.add_argument('--log-interval', dest='log_interval',
action='store_true', default=False,
help='Prefix log line with interval from last message rather '
'than relative time. Note that this is NOT execution time '
'if there are parallel operations.')
global_group.add_argument('--log-no-times', dest='log_no_times',
action='store_true', default=False,
help='Do not prefix log lines with times. By default, mach will '
'prefix each output line with the time since command start.')
global_group.add_argument('-h', '--help', dest='help',
action='store_true', default=False,
help='Show this help message.')
global_group.add_argument('--debug-command', action='store_true',
help='Start a Python debugger when command is dispatched.')
for args, kwargs in self.global_arguments:
global_group.add_argument(*args, **kwargs)
# We need to be last because CommandAction swallows all remaining
# arguments and argparse parses arguments in the order they were added.
parser.add_argument('command', action=CommandAction,
registrar=Registrar, context=context)
return parser
| mpl-2.0 |
wchan/tensorflow | tensorflow/models/image/cifar10/cifar10_input_test.py | 18 | 2299 | # Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for cifar10 input."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import tensorflow as tf
from tensorflow.models.image.cifar10 import cifar10_input
class CIFAR10InputTest(tf.test.TestCase):
def _record(self, label, red, green, blue):
image_size = 32 * 32
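    # Layout of a raw CIFAR-10 record as built here: one label byte followed
    # by the three colour planes stored planar-major (1024 red bytes, then
    # 1024 green, then 1024 blue), i.e. 3073 bytes per 32x32 image.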
record = bytes(bytearray([label] + [red] * image_size +
[green] * image_size + [blue] * image_size))
expected = [[[red, green, blue]] * 32] * 32
return record, expected
def testSimple(self):
labels = [9, 3, 0]
records = [self._record(labels[0], 0, 128, 255),
self._record(labels[1], 255, 0, 1),
self._record(labels[2], 254, 255, 0)]
contents = b"".join([record for record, _ in records])
expected = [expected for _, expected in records]
filename = os.path.join(self.get_temp_dir(), "cifar")
open(filename, "wb").write(contents)
with self.test_session() as sess:
q = tf.FIFOQueue(99, [tf.string], shapes=())
q.enqueue([filename]).run()
q.close().run()
result = cifar10_input.read_cifar10(q)
for i in range(3):
key, label, uint8image = sess.run([
result.key, result.label, result.uint8image])
self.assertEqual("%s:%d" % (filename, i), tf.compat.as_text(key))
self.assertEqual(labels[i], label)
self.assertAllEqual(expected[i], uint8image)
with self.assertRaises(tf.errors.OutOfRangeError):
sess.run([result.key, result.uint8image])
if __name__ == "__main__":
tf.test.main()
| apache-2.0 |
GiulianoFranchetto/zephyr | scripts/west_commands/runners/bossac.py | 1 | 2200 | # Copyright (c) 2017 Linaro Limited.
#
# SPDX-License-Identifier: Apache-2.0
'''bossac-specific runner (flash only) for Atmel SAM microcontrollers.'''
import platform
from runners.core import ZephyrBinaryRunner, RunnerCaps
DEFAULT_BOSSAC_PORT = '/dev/ttyACM0'
class BossacBinaryRunner(ZephyrBinaryRunner):
'''Runner front-end for bossac.'''
def __init__(self, cfg, bossac='bossac', port=DEFAULT_BOSSAC_PORT,
offset=0):
super(BossacBinaryRunner, self).__init__(cfg)
self.bossac = bossac
self.port = port
self.offset = offset
@classmethod
def name(cls):
return 'bossac'
@classmethod
def capabilities(cls):
return RunnerCaps(commands={'flash'})
@classmethod
def do_add_parser(cls, parser):
parser.add_argument('--bossac', default='bossac',
help='path to bossac, default is bossac')
parser.add_argument('--offset', default=0,
help='start erase/write/read/verify operation '
'at flash OFFSET; OFFSET must be aligned '
                                 'to a flash page boundary')
parser.add_argument('--bossac-port', default='/dev/ttyACM0',
help='serial port to use, default is /dev/ttyACM0')
@classmethod
def create(cls, cfg, args):
return BossacBinaryRunner(cfg, bossac=args.bossac,
port=args.bossac_port, offset=args.offset)
def do_run(self, command, **kwargs):
if platform.system() != 'Linux':
msg = 'CAUTION: No flash tool for your host system found!'
raise NotImplementedError(msg)
self.require('stty')
self.require(self.bossac)
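        # Note: opening the port at 1200 baud (the stty call below) is the
        # usual convention for making Atmel SAM boards drop into their
        # SAM-BA bootloader before bossac writes the image; this is a
        # property of the board ROM, assumed here rather than verified.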
cmd_stty = ['stty', '-F', self.port, 'raw', 'ispeed', '1200',
'ospeed', '1200', 'cs8', '-cstopb', 'ignpar', 'eol', '255',
'eof', '255']
cmd_flash = [self.bossac, '-p', self.port, '-R', '-e', '-w', '-v',
'-o', '%s' % self.offset,
'-b', self.cfg.bin_file]
self.check_call(cmd_stty)
self.check_call(cmd_flash)
| apache-2.0 |
gunan/tensorflow | tensorflow/python/ops/ragged/ragged_util_test.py | 9 | 8267 | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for ragged_util."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import parameterized
import numpy as np
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops.ragged import ragged_util
from tensorflow.python.platform import googletest
# Example 3d tensor for test cases. Has shape [4, 2, 3].
TENSOR_3D = [[[('%d%d%d' % (i, j, k)).encode('utf-8')
for k in range(3)]
for j in range(2)]
for i in range(4)]
# Example 4d tensor for test cases. Has shape [4, 2, 3, 5].
TENSOR_4D = [[[[('%d%d%d%d' % (i, j, k, l)).encode('utf-8')
for l in range(5)]
for k in range(3)]
for j in range(2)]
for i in range(4)]
@test_util.run_all_in_graph_and_eager_modes
class RaggedUtilTest(test_util.TensorFlowTestCase,
parameterized.TestCase):
@parameterized.parameters([
# Docstring examples
dict(
data=['a', 'b', 'c'],
repeats=[3, 0, 2],
axis=0,
expected=[b'a', b'a', b'a', b'c', b'c']),
dict(
data=[[1, 2], [3, 4]],
repeats=[2, 3],
axis=0,
expected=[[1, 2], [1, 2], [3, 4], [3, 4], [3, 4]]),
dict(
data=[[1, 2], [3, 4]],
repeats=[2, 3],
axis=1,
expected=[[1, 1, 2, 2, 2], [3, 3, 4, 4, 4]]),
# Scalar repeats value
dict(
data=['a', 'b', 'c'],
repeats=2,
axis=0,
expected=[b'a', b'a', b'b', b'b', b'c', b'c']),
dict(
data=[[1, 2], [3, 4]],
repeats=2,
axis=0,
expected=[[1, 2], [1, 2], [3, 4], [3, 4]]),
dict(
data=[[1, 2], [3, 4]],
repeats=2,
axis=1,
expected=[[1, 1, 2, 2], [3, 3, 4, 4]]),
# data & repeats are broadcast to have at least one dimension,
# so these are all equivalent:
dict(data=3, repeats=4, axis=0, expected=[3, 3, 3, 3]),
dict(data=[3], repeats=4, axis=0, expected=[3, 3, 3, 3]),
dict(data=3, repeats=[4], axis=0, expected=[3, 3, 3, 3]),
dict(data=[3], repeats=[4], axis=0, expected=[3, 3, 3, 3]),
# Empty tensor
dict(data=[], repeats=[], axis=0, expected=[]),
])
def testRepeat(self, data, repeats, expected, axis=None):
result = ragged_util.repeat(data, repeats, axis)
self.assertAllEqual(result, expected)
@parameterized.parameters([
dict(mode=mode, **args)
for mode in ['constant', 'dynamic', 'unknown_shape']
for args in [
# data & repeats are broadcast to have at least one dimension,
# so these are all equivalent:
dict(data=3, repeats=4, axis=0),
dict(data=[3], repeats=4, axis=0),
dict(data=3, repeats=[4], axis=0),
dict(data=[3], repeats=[4], axis=0),
# 1-dimensional data tensor.
dict(data=[], repeats=5, axis=0),
dict(data=[1, 2, 3], repeats=5, axis=0),
dict(data=[1, 2, 3], repeats=[3, 0, 2], axis=0),
dict(data=[1, 2, 3], repeats=[3, 0, 2], axis=-1),
dict(data=[b'a', b'b', b'c'], repeats=[3, 0, 2], axis=0),
# 2-dimensional data tensor.
dict(data=[[1, 2, 3], [4, 5, 6]], repeats=3, axis=0),
dict(data=[[1, 2, 3], [4, 5, 6]], repeats=3, axis=1),
dict(data=[[1, 2, 3], [4, 5, 6]], repeats=[3, 5], axis=0),
dict(data=[[1, 2, 3], [4, 5, 6]], repeats=[3, 5, 7], axis=1),
# 3-dimensional data tensor: shape=[4, 2, 3].
dict(data=TENSOR_3D, repeats=2, axis=0),
dict(data=TENSOR_3D, repeats=2, axis=1),
dict(data=TENSOR_3D, repeats=2, axis=2),
dict(data=TENSOR_3D, repeats=[2, 0, 4, 1], axis=0),
dict(data=TENSOR_3D, repeats=[3, 2], axis=1),
dict(data=TENSOR_3D, repeats=[1, 3, 1], axis=2),
# 4-dimensional data tensor: shape=[4, 2, 3, 5].
dict(data=TENSOR_4D, repeats=2, axis=0),
dict(data=TENSOR_4D, repeats=2, axis=1),
dict(data=TENSOR_4D, repeats=2, axis=2),
dict(data=TENSOR_4D, repeats=2, axis=3),
dict(data=TENSOR_4D, repeats=[2, 0, 4, 1], axis=0),
dict(data=TENSOR_4D, repeats=[3, 2], axis=1),
dict(data=TENSOR_4D, repeats=[1, 3, 1], axis=2),
dict(data=TENSOR_4D, repeats=[1, 3, 0, 0, 2], axis=3),
]
])
def testValuesMatchesNumpy(self, mode, data, repeats, axis):
# Exception: we can't handle negative axis if data.ndims is unknown.
if axis < 0 and mode == 'unknown_shape':
return
expected = np.repeat(data, repeats, axis)
if mode == 'constant':
data = constant_op.constant(data)
repeats = constant_op.constant(repeats)
elif mode == 'dynamic':
data = constant_op.constant(data)
repeats = constant_op.constant(repeats)
data = array_ops.placeholder_with_default(data, data.shape)
repeats = array_ops.placeholder_with_default(repeats, repeats.shape)
elif mode == 'unknown_shape':
data = array_ops.placeholder_with_default(data, None)
repeats = array_ops.placeholder_with_default(repeats, None)
result = ragged_util.repeat(data, repeats, axis)
self.assertAllEqual(result, expected)
@parameterized.parameters([
dict(
descr='axis >= rank(data)',
mode='dynamic',
data=[1, 2, 3],
repeats=[3, 0, 2],
axis=1,
error='axis=1 out of bounds: expected -1<=axis<1'),
dict(
descr='axis < -rank(data)',
mode='dynamic',
data=[1, 2, 3],
repeats=[3, 0, 2],
axis=-2,
error='axis=-2 out of bounds: expected -1<=axis<1'),
dict(
descr='len(repeats) != data.shape[axis]',
mode='dynamic',
data=[[1, 2, 3], [4, 5, 6]],
repeats=[2, 3],
axis=1,
error='Dimensions 3 and 2 are not compatible'),
dict(
descr='rank(repeats) > 1',
mode='dynamic',
data=[[1, 2, 3], [4, 5, 6]],
repeats=[[3], [5]],
axis=1,
error=r'Shape \(2, 1\) must have rank at most 1'),
dict(
descr='non-integer axis',
mode='constant',
data=[1, 2, 3],
repeats=2,
axis='foo',
exception=TypeError,
error='axis must be an int'),
])
def testError(self,
descr,
mode,
data,
repeats,
axis,
exception=ValueError,
error=None):
# Make sure that this is also an error case for numpy.
with self.assertRaises(exception):
np.repeat(data, repeats, axis)
if mode == 'constant':
data = constant_op.constant(data)
repeats = constant_op.constant(repeats)
elif mode == 'dynamic':
data = constant_op.constant(data)
repeats = constant_op.constant(repeats)
data = array_ops.placeholder_with_default(data, data.shape)
repeats = array_ops.placeholder_with_default(repeats, repeats.shape)
elif mode == 'unknown_shape':
data = array_ops.placeholder_with_default(data, None)
repeats = array_ops.placeholder_with_default(repeats, None)
with self.assertRaisesRegexp(exception, error):
ragged_util.repeat(data, repeats, axis)
if __name__ == '__main__':
googletest.main()
| apache-2.0 |
dsoprea/PythonScheduler | setup.py | 1 | 1204 | import setuptools
import os
import scheduler
app_path = os.path.dirname(scheduler.__file__)
with open(os.path.join(app_path, 'resources', 'README.rst')) as f:
long_description = f.read()
with open(os.path.join(app_path, 'resources', 'requirements.txt')) as f:
install_requires = list(map(lambda s: s.strip(), f.readlines()))
setuptools.setup(
name='scheduler',
version=scheduler.__version__,
description="Multithreaded Python-routine scheduling framework",
long_description=long_description,
classifiers=[],
license='GPL 2',
keywords='tasks jobs schedule scheduling',
author='Dustin Oprea',
author_email='[email protected]',
packages=setuptools.find_packages(exclude=['dev']),
include_package_data=True,
zip_safe=False,
install_requires=install_requires,
package_data={
'scheduler': ['resources/README.rst',
'resources/requirements.txt',
'resources/scripts/*'],
},
scripts=[
'scheduler/resources/scripts/sched_process_tasks_prod',
'scheduler/resources/scripts/sched_process_tasks_dev',
],
)
| gpl-2.0 |
jef-n/QGIS | tests/src/python/test_qgsserver_response.py | 45 | 2326 | # -*- coding: utf-8 -*-
"""QGIS Unit tests for QgsServerResponse.
From build dir, run: ctest -R PyQgsServerResponse -V
.. note:: This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
"""
import unittest
__author__ = 'Alessandro Pasotti'
__date__ = '29/04/2017'
__copyright__ = 'Copyright 2017, The QGIS Project'
from qgis.server import QgsBufferServerResponse
class QgsServerResponseTest(unittest.TestCase):
def test_responseHeaders(self):
"""Test response headers"""
headers = {'header-key-1': 'header-value-1', 'header-key-2': 'header-value-2'}
response = QgsBufferServerResponse()
for k, v in headers.items():
response.setHeader(k, v)
for k, v in response.headers().items():
self.assertEqual(headers[k], v)
response.removeHeader('header-key-1')
self.assertEqual(response.headers(), {'header-key-2': 'header-value-2'})
response.setHeader('header-key-1', 'header-value-1')
for k, v in response.headers().items():
self.assertEqual(headers[k], v)
def test_statusCode(self):
"""Test return status HTTP code"""
response = QgsBufferServerResponse()
response.setStatusCode(222)
self.assertEqual(response.statusCode(), 222)
def test_write(self):
"""Test that writing on the buffer sets the body"""
# Set as str
response = QgsBufferServerResponse()
response.write('Greetings from Essen Linux Hotel 2017 Hack Fest!')
self.assertEqual(bytes(response.body()), b'')
response.finish()
self.assertEqual(bytes(response.body()), b'Greetings from Essen Linux Hotel 2017 Hack Fest!')
self.assertEqual(response.headers(), {'Content-Length': '48'})
# Set as a byte array
response = QgsBufferServerResponse()
response.write(b'Greetings from Essen Linux Hotel 2017 Hack Fest!')
self.assertEqual(bytes(response.body()), b'')
response.finish()
self.assertEqual(bytes(response.body()), b'Greetings from Essen Linux Hotel 2017 Hack Fest!')
if __name__ == '__main__':
unittest.main()
| gpl-2.0 |
gtko/Sick-Beard | sickbeard/clients/requests/packages/urllib3/contrib/ntlmpool.py | 262 | 4740 | # urllib3/contrib/ntlmpool.py
# Copyright 2008-2012 Andrey Petrov and contributors (see CONTRIBUTORS.txt)
#
# This module is part of urllib3 and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""
NTLM authenticating pool, contributed by erikcederstran
Issue #10, see: http://code.google.com/p/urllib3/issues/detail?id=10
"""
try:
from http.client import HTTPSConnection
except ImportError:
from httplib import HTTPSConnection
from logging import getLogger
from ntlm import ntlm
from urllib3 import HTTPSConnectionPool
log = getLogger(__name__)
class NTLMConnectionPool(HTTPSConnectionPool):
"""
Implements an NTLM authentication version of an urllib3 connection pool
"""
scheme = 'https'
def __init__(self, user, pw, authurl, *args, **kwargs):
"""
authurl is a random URL on the server that is protected by NTLM.
        user is the Windows user, probably in the DOMAIN\\username format.
pw is the password for the user.
"""
super(NTLMConnectionPool, self).__init__(*args, **kwargs)
self.authurl = authurl
self.rawuser = user
user_parts = user.split('\\', 1)
self.domain = user_parts[0].upper()
self.user = user_parts[1]
self.pw = pw
def _new_conn(self):
# Performs the NTLM handshake that secures the connection. The socket
# must be kept open while requests are performed.
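        # The exchange below is the standard three-message NTLM handshake:
        # send a NEGOTIATE message, read the server's CHALLENGE from the
        # 'www-authenticate' header, then reply with an AUTHENTICATE message
        # derived from that challenge and the user's credentials.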
self.num_connections += 1
log.debug('Starting NTLM HTTPS connection no. %d: https://%s%s' %
(self.num_connections, self.host, self.authurl))
headers = {}
headers['Connection'] = 'Keep-Alive'
req_header = 'Authorization'
resp_header = 'www-authenticate'
conn = HTTPSConnection(host=self.host, port=self.port)
# Send negotiation message
headers[req_header] = (
'NTLM %s' % ntlm.create_NTLM_NEGOTIATE_MESSAGE(self.rawuser))
log.debug('Request headers: %s' % headers)
conn.request('GET', self.authurl, None, headers)
res = conn.getresponse()
reshdr = dict(res.getheaders())
log.debug('Response status: %s %s' % (res.status, res.reason))
log.debug('Response headers: %s' % reshdr)
log.debug('Response data: %s [...]' % res.read(100))
# Remove the reference to the socket, so that it can not be closed by
# the response object (we want to keep the socket open)
res.fp = None
# Server should respond with a challenge message
auth_header_values = reshdr[resp_header].split(', ')
auth_header_value = None
for s in auth_header_values:
if s[:5] == 'NTLM ':
auth_header_value = s[5:]
if auth_header_value is None:
raise Exception('Unexpected %s response header: %s' %
(resp_header, reshdr[resp_header]))
# Send authentication message
ServerChallenge, NegotiateFlags = \
ntlm.parse_NTLM_CHALLENGE_MESSAGE(auth_header_value)
auth_msg = ntlm.create_NTLM_AUTHENTICATE_MESSAGE(ServerChallenge,
self.user,
self.domain,
self.pw,
NegotiateFlags)
headers[req_header] = 'NTLM %s' % auth_msg
log.debug('Request headers: %s' % headers)
conn.request('GET', self.authurl, None, headers)
res = conn.getresponse()
log.debug('Response status: %s %s' % (res.status, res.reason))
log.debug('Response headers: %s' % dict(res.getheaders()))
log.debug('Response data: %s [...]' % res.read()[:100])
if res.status != 200:
if res.status == 401:
raise Exception('Server rejected request: wrong '
'username or password')
raise Exception('Wrong server response: %s %s' %
(res.status, res.reason))
res.fp = None
log.debug('Connection established')
return conn
def urlopen(self, method, url, body=None, headers=None, retries=3,
redirect=True, assert_same_host=True):
if headers is None:
headers = {}
headers['Connection'] = 'Keep-Alive'
return super(NTLMConnectionPool, self).urlopen(method, url, body,
headers, retries,
redirect,
assert_same_host)
| gpl-3.0 |
40223137/w1717 | static/Brython3.1.0-20150301-090019/Lib/encodings/aliases.py | 726 | 15414 | """ Encoding Aliases Support
This module is used by the encodings package search function to
    map encoding names to module names.
Note that the search function normalizes the encoding names before
doing the lookup, so the mapping will have to map normalized
encoding names to module names.
Contents:
The following aliases dictionary contains mappings of all IANA
character set names for which the Python core library provides
codecs. In addition to these, a few Python specific codec
aliases have also been added.
"""
aliases = {
# Please keep this list sorted alphabetically by value !
# ascii codec
'646' : 'ascii',
'ansi_x3.4_1968' : 'ascii',
'ansi_x3_4_1968' : 'ascii', # some email headers use this non-standard name
'ansi_x3.4_1986' : 'ascii',
'cp367' : 'ascii',
'csascii' : 'ascii',
'ibm367' : 'ascii',
'iso646_us' : 'ascii',
'iso_646.irv_1991' : 'ascii',
'iso_ir_6' : 'ascii',
'us' : 'ascii',
'us_ascii' : 'ascii',
# base64_codec codec
'base64' : 'base64_codec',
'base_64' : 'base64_codec',
# big5 codec
'big5_tw' : 'big5',
'csbig5' : 'big5',
# big5hkscs codec
'big5_hkscs' : 'big5hkscs',
'hkscs' : 'big5hkscs',
# bz2_codec codec
'bz2' : 'bz2_codec',
# cp037 codec
'037' : 'cp037',
'csibm037' : 'cp037',
'ebcdic_cp_ca' : 'cp037',
'ebcdic_cp_nl' : 'cp037',
'ebcdic_cp_us' : 'cp037',
'ebcdic_cp_wt' : 'cp037',
'ibm037' : 'cp037',
'ibm039' : 'cp037',
# cp1026 codec
'1026' : 'cp1026',
'csibm1026' : 'cp1026',
'ibm1026' : 'cp1026',
# cp1125 codec
'1125' : 'cp1125',
'ibm1125' : 'cp1125',
'cp866u' : 'cp1125',
'ruscii' : 'cp1125',
# cp1140 codec
'1140' : 'cp1140',
'ibm1140' : 'cp1140',
# cp1250 codec
'1250' : 'cp1250',
'windows_1250' : 'cp1250',
# cp1251 codec
'1251' : 'cp1251',
'windows_1251' : 'cp1251',
# cp1252 codec
'1252' : 'cp1252',
'windows_1252' : 'cp1252',
# cp1253 codec
'1253' : 'cp1253',
'windows_1253' : 'cp1253',
# cp1254 codec
'1254' : 'cp1254',
'windows_1254' : 'cp1254',
# cp1255 codec
'1255' : 'cp1255',
'windows_1255' : 'cp1255',
# cp1256 codec
'1256' : 'cp1256',
'windows_1256' : 'cp1256',
# cp1257 codec
'1257' : 'cp1257',
'windows_1257' : 'cp1257',
# cp1258 codec
'1258' : 'cp1258',
'windows_1258' : 'cp1258',
# cp273 codec
'273' : 'cp273',
'ibm273' : 'cp273',
'csibm273' : 'cp273',
# cp424 codec
'424' : 'cp424',
'csibm424' : 'cp424',
'ebcdic_cp_he' : 'cp424',
'ibm424' : 'cp424',
# cp437 codec
'437' : 'cp437',
'cspc8codepage437' : 'cp437',
'ibm437' : 'cp437',
# cp500 codec
'500' : 'cp500',
'csibm500' : 'cp500',
'ebcdic_cp_be' : 'cp500',
'ebcdic_cp_ch' : 'cp500',
'ibm500' : 'cp500',
# cp775 codec
'775' : 'cp775',
'cspc775baltic' : 'cp775',
'ibm775' : 'cp775',
# cp850 codec
'850' : 'cp850',
'cspc850multilingual' : 'cp850',
'ibm850' : 'cp850',
# cp852 codec
'852' : 'cp852',
'cspcp852' : 'cp852',
'ibm852' : 'cp852',
# cp855 codec
'855' : 'cp855',
'csibm855' : 'cp855',
'ibm855' : 'cp855',
# cp857 codec
'857' : 'cp857',
'csibm857' : 'cp857',
'ibm857' : 'cp857',
# cp858 codec
'858' : 'cp858',
'csibm858' : 'cp858',
'ibm858' : 'cp858',
# cp860 codec
'860' : 'cp860',
'csibm860' : 'cp860',
'ibm860' : 'cp860',
# cp861 codec
'861' : 'cp861',
'cp_is' : 'cp861',
'csibm861' : 'cp861',
'ibm861' : 'cp861',
# cp862 codec
'862' : 'cp862',
'cspc862latinhebrew' : 'cp862',
'ibm862' : 'cp862',
# cp863 codec
'863' : 'cp863',
'csibm863' : 'cp863',
'ibm863' : 'cp863',
# cp864 codec
'864' : 'cp864',
'csibm864' : 'cp864',
'ibm864' : 'cp864',
# cp865 codec
'865' : 'cp865',
'csibm865' : 'cp865',
'ibm865' : 'cp865',
# cp866 codec
'866' : 'cp866',
'csibm866' : 'cp866',
'ibm866' : 'cp866',
# cp869 codec
'869' : 'cp869',
'cp_gr' : 'cp869',
'csibm869' : 'cp869',
'ibm869' : 'cp869',
# cp932 codec
'932' : 'cp932',
'ms932' : 'cp932',
'mskanji' : 'cp932',
'ms_kanji' : 'cp932',
# cp949 codec
'949' : 'cp949',
'ms949' : 'cp949',
'uhc' : 'cp949',
# cp950 codec
'950' : 'cp950',
'ms950' : 'cp950',
# euc_jis_2004 codec
'jisx0213' : 'euc_jis_2004',
'eucjis2004' : 'euc_jis_2004',
'euc_jis2004' : 'euc_jis_2004',
# euc_jisx0213 codec
'eucjisx0213' : 'euc_jisx0213',
# euc_jp codec
'eucjp' : 'euc_jp',
'ujis' : 'euc_jp',
'u_jis' : 'euc_jp',
# euc_kr codec
'euckr' : 'euc_kr',
'korean' : 'euc_kr',
'ksc5601' : 'euc_kr',
'ks_c_5601' : 'euc_kr',
'ks_c_5601_1987' : 'euc_kr',
'ksx1001' : 'euc_kr',
'ks_x_1001' : 'euc_kr',
# gb18030 codec
'gb18030_2000' : 'gb18030',
# gb2312 codec
'chinese' : 'gb2312',
'csiso58gb231280' : 'gb2312',
'euc_cn' : 'gb2312',
'euccn' : 'gb2312',
'eucgb2312_cn' : 'gb2312',
'gb2312_1980' : 'gb2312',
'gb2312_80' : 'gb2312',
'iso_ir_58' : 'gb2312',
# gbk codec
'936' : 'gbk',
'cp936' : 'gbk',
'ms936' : 'gbk',
# hex_codec codec
'hex' : 'hex_codec',
# hp_roman8 codec
'roman8' : 'hp_roman8',
'r8' : 'hp_roman8',
'csHPRoman8' : 'hp_roman8',
# hz codec
'hzgb' : 'hz',
'hz_gb' : 'hz',
'hz_gb_2312' : 'hz',
# iso2022_jp codec
'csiso2022jp' : 'iso2022_jp',
'iso2022jp' : 'iso2022_jp',
'iso_2022_jp' : 'iso2022_jp',
# iso2022_jp_1 codec
'iso2022jp_1' : 'iso2022_jp_1',
'iso_2022_jp_1' : 'iso2022_jp_1',
# iso2022_jp_2 codec
'iso2022jp_2' : 'iso2022_jp_2',
'iso_2022_jp_2' : 'iso2022_jp_2',
# iso2022_jp_2004 codec
'iso_2022_jp_2004' : 'iso2022_jp_2004',
'iso2022jp_2004' : 'iso2022_jp_2004',
# iso2022_jp_3 codec
'iso2022jp_3' : 'iso2022_jp_3',
'iso_2022_jp_3' : 'iso2022_jp_3',
# iso2022_jp_ext codec
'iso2022jp_ext' : 'iso2022_jp_ext',
'iso_2022_jp_ext' : 'iso2022_jp_ext',
# iso2022_kr codec
'csiso2022kr' : 'iso2022_kr',
'iso2022kr' : 'iso2022_kr',
'iso_2022_kr' : 'iso2022_kr',
# iso8859_10 codec
'csisolatin6' : 'iso8859_10',
'iso_8859_10' : 'iso8859_10',
'iso_8859_10_1992' : 'iso8859_10',
'iso_ir_157' : 'iso8859_10',
'l6' : 'iso8859_10',
'latin6' : 'iso8859_10',
# iso8859_11 codec
'thai' : 'iso8859_11',
'iso_8859_11' : 'iso8859_11',
'iso_8859_11_2001' : 'iso8859_11',
# iso8859_13 codec
'iso_8859_13' : 'iso8859_13',
'l7' : 'iso8859_13',
'latin7' : 'iso8859_13',
# iso8859_14 codec
'iso_8859_14' : 'iso8859_14',
'iso_8859_14_1998' : 'iso8859_14',
'iso_celtic' : 'iso8859_14',
'iso_ir_199' : 'iso8859_14',
'l8' : 'iso8859_14',
'latin8' : 'iso8859_14',
# iso8859_15 codec
'iso_8859_15' : 'iso8859_15',
'l9' : 'iso8859_15',
'latin9' : 'iso8859_15',
# iso8859_16 codec
'iso_8859_16' : 'iso8859_16',
'iso_8859_16_2001' : 'iso8859_16',
'iso_ir_226' : 'iso8859_16',
'l10' : 'iso8859_16',
'latin10' : 'iso8859_16',
# iso8859_2 codec
'csisolatin2' : 'iso8859_2',
'iso_8859_2' : 'iso8859_2',
'iso_8859_2_1987' : 'iso8859_2',
'iso_ir_101' : 'iso8859_2',
'l2' : 'iso8859_2',
'latin2' : 'iso8859_2',
# iso8859_3 codec
'csisolatin3' : 'iso8859_3',
'iso_8859_3' : 'iso8859_3',
'iso_8859_3_1988' : 'iso8859_3',
'iso_ir_109' : 'iso8859_3',
'l3' : 'iso8859_3',
'latin3' : 'iso8859_3',
# iso8859_4 codec
'csisolatin4' : 'iso8859_4',
'iso_8859_4' : 'iso8859_4',
'iso_8859_4_1988' : 'iso8859_4',
'iso_ir_110' : 'iso8859_4',
'l4' : 'iso8859_4',
'latin4' : 'iso8859_4',
# iso8859_5 codec
'csisolatincyrillic' : 'iso8859_5',
'cyrillic' : 'iso8859_5',
'iso_8859_5' : 'iso8859_5',
'iso_8859_5_1988' : 'iso8859_5',
'iso_ir_144' : 'iso8859_5',
# iso8859_6 codec
'arabic' : 'iso8859_6',
'asmo_708' : 'iso8859_6',
'csisolatinarabic' : 'iso8859_6',
'ecma_114' : 'iso8859_6',
'iso_8859_6' : 'iso8859_6',
'iso_8859_6_1987' : 'iso8859_6',
'iso_ir_127' : 'iso8859_6',
# iso8859_7 codec
'csisolatingreek' : 'iso8859_7',
'ecma_118' : 'iso8859_7',
'elot_928' : 'iso8859_7',
'greek' : 'iso8859_7',
'greek8' : 'iso8859_7',
'iso_8859_7' : 'iso8859_7',
'iso_8859_7_1987' : 'iso8859_7',
'iso_ir_126' : 'iso8859_7',
# iso8859_8 codec
'csisolatinhebrew' : 'iso8859_8',
'hebrew' : 'iso8859_8',
'iso_8859_8' : 'iso8859_8',
'iso_8859_8_1988' : 'iso8859_8',
'iso_ir_138' : 'iso8859_8',
# iso8859_9 codec
'csisolatin5' : 'iso8859_9',
'iso_8859_9' : 'iso8859_9',
'iso_8859_9_1989' : 'iso8859_9',
'iso_ir_148' : 'iso8859_9',
'l5' : 'iso8859_9',
'latin5' : 'iso8859_9',
# johab codec
'cp1361' : 'johab',
'ms1361' : 'johab',
# koi8_r codec
'cskoi8r' : 'koi8_r',
# latin_1 codec
#
# Note that the latin_1 codec is implemented internally in C and a
# lot faster than the charmap codec iso8859_1 which uses the same
# encoding. This is why we discourage the use of the iso8859_1
# codec and alias it to latin_1 instead.
#
'8859' : 'latin_1',
'cp819' : 'latin_1',
'csisolatin1' : 'latin_1',
'ibm819' : 'latin_1',
'iso8859' : 'latin_1',
'iso8859_1' : 'latin_1',
'iso_8859_1' : 'latin_1',
'iso_8859_1_1987' : 'latin_1',
'iso_ir_100' : 'latin_1',
'l1' : 'latin_1',
'latin' : 'latin_1',
'latin1' : 'latin_1',
# mac_cyrillic codec
'maccyrillic' : 'mac_cyrillic',
# mac_greek codec
'macgreek' : 'mac_greek',
# mac_iceland codec
'maciceland' : 'mac_iceland',
# mac_latin2 codec
'maccentraleurope' : 'mac_latin2',
'maclatin2' : 'mac_latin2',
# mac_roman codec
'macintosh' : 'mac_roman',
'macroman' : 'mac_roman',
# mac_turkish codec
'macturkish' : 'mac_turkish',
# mbcs codec
'dbcs' : 'mbcs',
# ptcp154 codec
'csptcp154' : 'ptcp154',
'pt154' : 'ptcp154',
'cp154' : 'ptcp154',
'cyrillic_asian' : 'ptcp154',
# quopri_codec codec
'quopri' : 'quopri_codec',
'quoted_printable' : 'quopri_codec',
'quotedprintable' : 'quopri_codec',
# rot_13 codec
'rot13' : 'rot_13',
# shift_jis codec
'csshiftjis' : 'shift_jis',
'shiftjis' : 'shift_jis',
'sjis' : 'shift_jis',
's_jis' : 'shift_jis',
# shift_jis_2004 codec
'shiftjis2004' : 'shift_jis_2004',
'sjis_2004' : 'shift_jis_2004',
's_jis_2004' : 'shift_jis_2004',
# shift_jisx0213 codec
'shiftjisx0213' : 'shift_jisx0213',
'sjisx0213' : 'shift_jisx0213',
's_jisx0213' : 'shift_jisx0213',
# tactis codec
'tis260' : 'tactis',
# tis_620 codec
'tis620' : 'tis_620',
'tis_620_0' : 'tis_620',
'tis_620_2529_0' : 'tis_620',
'tis_620_2529_1' : 'tis_620',
'iso_ir_166' : 'tis_620',
# utf_16 codec
'u16' : 'utf_16',
'utf16' : 'utf_16',
# utf_16_be codec
'unicodebigunmarked' : 'utf_16_be',
'utf_16be' : 'utf_16_be',
# utf_16_le codec
'unicodelittleunmarked' : 'utf_16_le',
'utf_16le' : 'utf_16_le',
# utf_32 codec
'u32' : 'utf_32',
'utf32' : 'utf_32',
# utf_32_be codec
'utf_32be' : 'utf_32_be',
# utf_32_le codec
'utf_32le' : 'utf_32_le',
# utf_7 codec
'u7' : 'utf_7',
'utf7' : 'utf_7',
'unicode_1_1_utf_7' : 'utf_7',
# utf_8 codec
'u8' : 'utf_8',
'utf' : 'utf_8',
'utf8' : 'utf_8',
'utf8_ucs2' : 'utf_8',
'utf8_ucs4' : 'utf_8',
# uu_codec codec
'uu' : 'uu_codec',
# zlib_codec codec
'zip' : 'zlib_codec',
'zlib' : 'zlib_codec',
# temporary mac CJK aliases, will be replaced by proper codecs in 3.1
'x_mac_japanese' : 'shift_jis',
'x_mac_korean' : 'euc_kr',
'x_mac_simp_chinese' : 'gb2312',
'x_mac_trad_chinese' : 'big5',
}
| gpl-3.0 |
CoolDevelopment/MoshKernel-amami | scripts/rt-tester/rt-tester.py | 11005 | 5307 | #!/usr/bin/python
#
# rt-mutex tester
#
# (C) 2006 Thomas Gleixner <[email protected]>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
import os
import sys
import getopt
import shutil
import string
# Globals
quiet = 0
test = 0
comments = 0
sysfsprefix = "/sys/devices/system/rttest/rttest"
statusfile = "/status"
commandfile = "/command"
# Command opcodes
cmd_opcodes = {
"schedother" : "1",
"schedfifo" : "2",
"lock" : "3",
"locknowait" : "4",
"lockint" : "5",
"lockintnowait" : "6",
"lockcont" : "7",
"unlock" : "8",
"signal" : "11",
"resetevent" : "98",
"reset" : "99",
}
test_opcodes = {
"prioeq" : ["P" , "eq" , None],
"priolt" : ["P" , "lt" , None],
"priogt" : ["P" , "gt" , None],
"nprioeq" : ["N" , "eq" , None],
"npriolt" : ["N" , "lt" , None],
"npriogt" : ["N" , "gt" , None],
"unlocked" : ["M" , "eq" , 0],
"trylock" : ["M" , "eq" , 1],
"blocked" : ["M" , "eq" , 2],
"blockedwake" : ["M" , "eq" , 3],
"locked" : ["M" , "eq" , 4],
"opcodeeq" : ["O" , "eq" , None],
"opcodelt" : ["O" , "lt" , None],
"opcodegt" : ["O" , "gt" , None],
"eventeq" : ["E" , "eq" , None],
"eventlt" : ["E" , "lt" , None],
"eventgt" : ["E" , "gt" , None],
}
# Print usage information
def usage():
print "rt-tester.py <-c -h -q -t> <testfile>"
print " -c display comments after first command"
print " -h help"
print " -q quiet mode"
print " -t test mode (syntax check)"
print " testfile: read test specification from testfile"
print " otherwise from stdin"
return
# Print progress when not in quiet mode
def progress(str):
if not quiet:
print str
# Analyse a status value
def analyse(val, top, arg):
intval = int(val)
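    # For mutex-state checks ("M") the status value appears to pack one
    # decimal digit per lock; the arithmetic below selects digit number
    # 'arg' (the lock index from the test line) and compares it against
    # the expected state from test_opcodes.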
if top[0] == "M":
intval = intval / (10 ** int(arg))
intval = intval % 10
argval = top[2]
elif top[0] == "O":
argval = int(cmd_opcodes.get(arg, arg))
else:
argval = int(arg)
# progress("%d %s %d" %(intval, top[1], argval))
if top[1] == "eq" and intval == argval:
return 1
if top[1] == "lt" and intval < argval:
return 1
if top[1] == "gt" and intval > argval:
return 1
return 0
# Parse the commandline
try:
(options, arguments) = getopt.getopt(sys.argv[1:],'chqt')
except getopt.GetoptError, ex:
usage()
sys.exit(1)
# Parse commandline options
for option, value in options:
if option == "-c":
comments = 1
elif option == "-q":
quiet = 1
elif option == "-t":
test = 1
elif option == '-h':
usage()
sys.exit(0)
# Select the input source
if arguments:
try:
fd = open(arguments[0])
except Exception,ex:
sys.stderr.write("File not found %s\n" %(arguments[0]))
sys.exit(1)
else:
fd = sys.stdin
linenr = 0
# Read the test patterns
while 1:
linenr = linenr + 1
line = fd.readline()
if not len(line):
break
line = line.strip()
parts = line.split(":")
if not parts or len(parts) < 1:
continue
if len(parts[0]) == 0:
continue
if parts[0].startswith("#"):
if comments > 1:
progress(line)
continue
if comments == 1:
comments = 2
progress(line)
cmd = parts[0].strip().lower()
opc = parts[1].strip().lower()
tid = parts[2].strip()
dat = parts[3].strip()
try:
# Test or wait for a status value
if cmd == "t" or cmd == "w":
testop = test_opcodes[opc]
fname = "%s%s%s" %(sysfsprefix, tid, statusfile)
if test:
print fname
continue
while 1:
query = 1
fsta = open(fname, 'r')
status = fsta.readline().strip()
fsta.close()
stat = status.split(",")
for s in stat:
s = s.strip()
if s.startswith(testop[0]):
# Separate status value
val = s[2:].strip()
query = analyse(val, testop, dat)
break
if query or cmd == "t":
break
progress(" " + status)
if not query:
sys.stderr.write("Test failed in line %d\n" %(linenr))
sys.exit(1)
# Issue a command to the tester
elif cmd == "c":
cmdnr = cmd_opcodes[opc]
# Build command string and sys filename
cmdstr = "%s:%s" %(cmdnr, dat)
fname = "%s%s%s" %(sysfsprefix, tid, commandfile)
if test:
print fname
continue
fcmd = open(fname, 'w')
fcmd.write(cmdstr)
fcmd.close()
except Exception,ex:
sys.stderr.write(str(ex))
sys.stderr.write("\nSyntax error in line %d\n" %(linenr))
if not test:
fd.close()
sys.exit(1)
# Normal exit pass
print "Pass"
sys.exit(0)
| gpl-2.0 |
thjashin/tensorflow | tensorflow/contrib/distributions/python/kernel_tests/beta_test.py | 34 | 13683 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from scipy import special
from scipy import stats
from tensorflow.contrib.distributions.python.ops import beta as beta_lib
from tensorflow.contrib.distributions.python.ops import kullback_leibler
from tensorflow.python.client import session
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import random_seed
from tensorflow.python.framework import tensor_shape
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import nn_ops
from tensorflow.python.platform import test
class BetaTest(test.TestCase):
def testSimpleShapes(self):
with self.test_session():
a = np.random.rand(3)
b = np.random.rand(3)
dist = beta_lib.Beta(a, b)
self.assertAllEqual([], dist.event_shape_tensor().eval())
self.assertAllEqual([3], dist.batch_shape_tensor().eval())
self.assertEqual(tensor_shape.TensorShape([]), dist.event_shape)
self.assertEqual(tensor_shape.TensorShape([3]), dist.batch_shape)
def testComplexShapes(self):
with self.test_session():
a = np.random.rand(3, 2, 2)
b = np.random.rand(3, 2, 2)
dist = beta_lib.Beta(a, b)
self.assertAllEqual([], dist.event_shape_tensor().eval())
self.assertAllEqual([3, 2, 2], dist.batch_shape_tensor().eval())
self.assertEqual(tensor_shape.TensorShape([]), dist.event_shape)
self.assertEqual(
tensor_shape.TensorShape([3, 2, 2]), dist.batch_shape)
def testComplexShapesBroadcast(self):
with self.test_session():
a = np.random.rand(3, 2, 2)
b = np.random.rand(2, 2)
dist = beta_lib.Beta(a, b)
self.assertAllEqual([], dist.event_shape_tensor().eval())
self.assertAllEqual([3, 2, 2], dist.batch_shape_tensor().eval())
self.assertEqual(tensor_shape.TensorShape([]), dist.event_shape)
self.assertEqual(
tensor_shape.TensorShape([3, 2, 2]), dist.batch_shape)
def testAlphaProperty(self):
a = [[1., 2, 3]]
b = [[2., 4, 3]]
with self.test_session():
dist = beta_lib.Beta(a, b)
self.assertEqual([1, 3], dist.concentration1.get_shape())
self.assertAllClose(a, dist.concentration1.eval())
def testBetaProperty(self):
a = [[1., 2, 3]]
b = [[2., 4, 3]]
with self.test_session():
dist = beta_lib.Beta(a, b)
self.assertEqual([1, 3], dist.concentration0.get_shape())
self.assertAllClose(b, dist.concentration0.eval())
def testPdfXProper(self):
a = [[1., 2, 3]]
b = [[2., 4, 3]]
with self.test_session():
dist = beta_lib.Beta(a, b, validate_args=True)
dist.prob([.1, .3, .6]).eval()
dist.prob([.2, .3, .5]).eval()
# Either condition can trigger.
with self.assertRaisesOpError("sample must be positive"):
dist.prob([-1., 0.1, 0.5]).eval()
with self.assertRaisesOpError("sample must be positive"):
dist.prob([0., 0.1, 0.5]).eval()
with self.assertRaisesOpError("sample must be no larger than `1`"):
dist.prob([.1, .2, 1.2]).eval()
def testPdfTwoBatches(self):
with self.test_session():
a = [1., 2]
b = [1., 2]
x = [.5, .5]
dist = beta_lib.Beta(a, b)
pdf = dist.prob(x)
self.assertAllClose([1., 3. / 2], pdf.eval())
self.assertEqual((2,), pdf.get_shape())
def testPdfTwoBatchesNontrivialX(self):
with self.test_session():
a = [1., 2]
b = [1., 2]
x = [.3, .7]
dist = beta_lib.Beta(a, b)
pdf = dist.prob(x)
self.assertAllClose([1, 63. / 50], pdf.eval())
self.assertEqual((2,), pdf.get_shape())
def testPdfUniformZeroBatch(self):
with self.test_session():
# This is equivalent to a uniform distribution
a = 1.
b = 1.
x = np.array([.1, .2, .3, .5, .8], dtype=np.float32)
dist = beta_lib.Beta(a, b)
pdf = dist.prob(x)
self.assertAllClose([1.] * 5, pdf.eval())
self.assertEqual((5,), pdf.get_shape())
def testPdfAlphaStretchedInBroadcastWhenSameRank(self):
with self.test_session():
a = [[1., 2]]
b = [[1., 2]]
x = [[.5, .5], [.3, .7]]
dist = beta_lib.Beta(a, b)
pdf = dist.prob(x)
self.assertAllClose([[1., 3. / 2], [1., 63. / 50]], pdf.eval())
self.assertEqual((2, 2), pdf.get_shape())
def testPdfAlphaStretchedInBroadcastWhenLowerRank(self):
with self.test_session():
a = [1., 2]
b = [1., 2]
x = [[.5, .5], [.2, .8]]
pdf = beta_lib.Beta(a, b).prob(x)
self.assertAllClose([[1., 3. / 2], [1., 24. / 25]], pdf.eval())
self.assertEqual((2, 2), pdf.get_shape())
def testPdfXStretchedInBroadcastWhenSameRank(self):
with self.test_session():
a = [[1., 2], [2., 3]]
b = [[1., 2], [2., 3]]
x = [[.5, .5]]
pdf = beta_lib.Beta(a, b).prob(x)
self.assertAllClose([[1., 3. / 2], [3. / 2, 15. / 8]], pdf.eval())
self.assertEqual((2, 2), pdf.get_shape())
def testPdfXStretchedInBroadcastWhenLowerRank(self):
with self.test_session():
a = [[1., 2], [2., 3]]
b = [[1., 2], [2., 3]]
x = [.5, .5]
pdf = beta_lib.Beta(a, b).prob(x)
self.assertAllClose([[1., 3. / 2], [3. / 2, 15. / 8]], pdf.eval())
self.assertEqual((2, 2), pdf.get_shape())
def testBetaMean(self):
with session.Session():
a = [1., 2, 3]
b = [2., 4, 1.2]
expected_mean = stats.beta.mean(a, b)
dist = beta_lib.Beta(a, b)
self.assertEqual(dist.mean().get_shape(), (3,))
self.assertAllClose(expected_mean, dist.mean().eval())
def testBetaVariance(self):
with session.Session():
a = [1., 2, 3]
b = [2., 4, 1.2]
expected_variance = stats.beta.var(a, b)
dist = beta_lib.Beta(a, b)
self.assertEqual(dist.variance().get_shape(), (3,))
self.assertAllClose(expected_variance, dist.variance().eval())
def testBetaMode(self):
with session.Session():
a = np.array([1.1, 2, 3])
b = np.array([2., 4, 1.2])
expected_mode = (a - 1) / (a + b - 2)
dist = beta_lib.Beta(a, b)
self.assertEqual(dist.mode().get_shape(), (3,))
self.assertAllClose(expected_mode, dist.mode().eval())
def testBetaModeInvalid(self):
with session.Session():
a = np.array([1., 2, 3])
b = np.array([2., 4, 1.2])
dist = beta_lib.Beta(a, b, allow_nan_stats=False)
with self.assertRaisesOpError("Condition x < y.*"):
dist.mode().eval()
a = np.array([2., 2, 3])
b = np.array([1., 4, 1.2])
dist = beta_lib.Beta(a, b, allow_nan_stats=False)
with self.assertRaisesOpError("Condition x < y.*"):
dist.mode().eval()
def testBetaModeEnableAllowNanStats(self):
with session.Session():
a = np.array([1., 2, 3])
b = np.array([2., 4, 1.2])
dist = beta_lib.Beta(a, b, allow_nan_stats=True)
expected_mode = (a - 1) / (a + b - 2)
expected_mode[0] = np.nan
self.assertEqual((3,), dist.mode().get_shape())
self.assertAllClose(expected_mode, dist.mode().eval())
a = np.array([2., 2, 3])
b = np.array([1., 4, 1.2])
dist = beta_lib.Beta(a, b, allow_nan_stats=True)
expected_mode = (a - 1) / (a + b - 2)
expected_mode[0] = np.nan
self.assertEqual((3,), dist.mode().get_shape())
self.assertAllClose(expected_mode, dist.mode().eval())
def testBetaEntropy(self):
with session.Session():
a = [1., 2, 3]
b = [2., 4, 1.2]
expected_entropy = stats.beta.entropy(a, b)
dist = beta_lib.Beta(a, b)
self.assertEqual(dist.entropy().get_shape(), (3,))
self.assertAllClose(expected_entropy, dist.entropy().eval())
def testBetaSample(self):
with self.test_session():
a = 1.
b = 2.
beta = beta_lib.Beta(a, b)
n = constant_op.constant(100000)
samples = beta.sample(n)
sample_values = samples.eval()
self.assertEqual(sample_values.shape, (100000,))
self.assertFalse(np.any(sample_values < 0.0))
self.assertLess(
stats.kstest(
# Beta is a univariate distribution.
sample_values,
stats.beta(a=1., b=2.).cdf)[0],
0.01)
# The standard error of the sample mean is 1 / (sqrt(18 * n))
self.assertAllClose(
sample_values.mean(axis=0), stats.beta.mean(a, b), atol=1e-2)
self.assertAllClose(
np.cov(sample_values, rowvar=0), stats.beta.var(a, b), atol=1e-1)
# Test that sampling with the same seed twice gives the same results.
def testBetaSampleMultipleTimes(self):
with self.test_session():
a_val = 1.
b_val = 2.
n_val = 100
random_seed.set_random_seed(654321)
beta1 = beta_lib.Beta(concentration1=a_val,
concentration0=b_val,
name="beta1")
samples1 = beta1.sample(n_val, seed=123456).eval()
random_seed.set_random_seed(654321)
beta2 = beta_lib.Beta(concentration1=a_val,
concentration0=b_val,
name="beta2")
samples2 = beta2.sample(n_val, seed=123456).eval()
self.assertAllClose(samples1, samples2)
def testBetaSampleMultidimensional(self):
with self.test_session():
a = np.random.rand(3, 2, 2).astype(np.float32)
b = np.random.rand(3, 2, 2).astype(np.float32)
beta = beta_lib.Beta(a, b)
n = constant_op.constant(100000)
samples = beta.sample(n)
sample_values = samples.eval()
self.assertEqual(sample_values.shape, (100000, 3, 2, 2))
self.assertFalse(np.any(sample_values < 0.0))
self.assertAllClose(
sample_values[:, 1, :].mean(axis=0),
stats.beta.mean(a, b)[1, :],
atol=1e-1)
def testBetaCdf(self):
with self.test_session():
shape = (30, 40, 50)
for dt in (np.float32, np.float64):
a = 10. * np.random.random(shape).astype(dt)
b = 10. * np.random.random(shape).astype(dt)
x = np.random.random(shape).astype(dt)
actual = beta_lib.Beta(a, b).cdf(x).eval()
self.assertAllEqual(np.ones(shape, dtype=np.bool), 0. <= x)
self.assertAllEqual(np.ones(shape, dtype=np.bool), 1. >= x)
self.assertAllClose(stats.beta.cdf(x, a, b), actual, rtol=1e-4, atol=0)
def testBetaLogCdf(self):
with self.test_session():
shape = (30, 40, 50)
for dt in (np.float32, np.float64):
a = 10. * np.random.random(shape).astype(dt)
b = 10. * np.random.random(shape).astype(dt)
x = np.random.random(shape).astype(dt)
actual = math_ops.exp(beta_lib.Beta(a, b).log_cdf(x)).eval()
self.assertAllEqual(np.ones(shape, dtype=np.bool), 0. <= x)
self.assertAllEqual(np.ones(shape, dtype=np.bool), 1. >= x)
self.assertAllClose(stats.beta.cdf(x, a, b), actual, rtol=1e-4, atol=0)
def testBetaWithSoftplusConcentration(self):
with self.test_session():
a, b = -4.2, -9.1
dist = beta_lib.BetaWithSoftplusConcentration(a, b)
self.assertAllClose(nn_ops.softplus(a).eval(), dist.concentration1.eval())
self.assertAllClose(nn_ops.softplus(b).eval(), dist.concentration0.eval())
def testBetaBetaKL(self):
with self.test_session() as sess:
for shape in [(10,), (4, 5)]:
a1 = 6.0 * np.random.random(size=shape) + 1e-4
b1 = 6.0 * np.random.random(size=shape) + 1e-4
a2 = 6.0 * np.random.random(size=shape) + 1e-4
b2 = 6.0 * np.random.random(size=shape) + 1e-4
# Take inverse softplus of values to test BetaWithSoftplusConcentration
a1_sp = np.log(np.exp(a1) - 1.0)
b1_sp = np.log(np.exp(b1) - 1.0)
a2_sp = np.log(np.exp(a2) - 1.0)
b2_sp = np.log(np.exp(b2) - 1.0)
d1 = beta_lib.Beta(concentration1=a1, concentration0=b1)
d2 = beta_lib.Beta(concentration1=a2, concentration0=b2)
d1_sp = beta_lib.BetaWithSoftplusConcentration(concentration1=a1_sp,
concentration0=b1_sp)
d2_sp = beta_lib.BetaWithSoftplusConcentration(concentration1=a2_sp,
concentration0=b2_sp)
kl_expected = (special.betaln(a2, b2) - special.betaln(a1, b1) +
(a1 - a2) * special.digamma(a1) +
(b1 - b2) * special.digamma(b1) +
(a2 - a1 + b2 - b1) * special.digamma(a1 + b1))
for dist1 in [d1, d1_sp]:
for dist2 in [d2, d2_sp]:
kl = kullback_leibler.kl(dist1, dist2)
kl_val = sess.run(kl)
self.assertEqual(kl.get_shape(), shape)
self.assertAllClose(kl_val, kl_expected)
# Make sure KL(d1||d1) is 0
kl_same = sess.run(kullback_leibler.kl(d1, d1))
self.assertAllClose(kl_same, np.zeros_like(kl_expected))
if __name__ == "__main__":
test.main()
| apache-2.0 |
balloob/github3.py | github3/__init__.py | 1 | 1966 | # -*- coding: utf-8 -*-
"""
github3
=======
See https://github3.readthedocs.io/ for documentation.
:copyright: (c) 2012-2016 by Ian Cordasco
:license: Modified BSD, see LICENSE for more details
"""
from .__about__ import (
__package_name__, __title__, __author__, __author_email__,
__license__, __copyright__, __version__, __version_info__,
__url__,
)
from .api import (
all_events,
all_repositories,
all_users,
authorize,
create_gist,
emojis,
enterprise_login,
followed_by,
followers_of,
gist,
gists_by,
gitignore_template,
gitignore_templates,
issue,
issues_on,
login,
markdown,
octocat,
organization,
organizations_with,
public_gists,
pull_request,
rate_limit,
repositories_by,
repository,
search_code,
search_issues,
search_repositories,
search_users,
starred_by,
subscriptions_for,
user,
zen
)
from .github import GitHub, GitHubEnterprise, GitHubStatus
from .exceptions import GitHubError
__all__ = (
'GitHub',
'GitHubEnterprise',
'GitHubError',
'GitHubStatus',
'authorize',
'login',
'enterprise_login',
'emojis',
'gist',
'gitignore_template',
'create_gist',
'issue',
'markdown',
'octocat',
'organization',
'pull_request',
'followers_of',
'followed_by',
'public_gists',
'gists_by',
'issues_on',
'gitignore_templates',
'all_repositories',
'all_users',
'all_events',
'organizations_with',
'repositories_by',
'starred_by',
'subscriptions_for',
'rate_limit',
'repository',
'search_code',
'search_repositories',
'search_users',
'search_issues',
'user',
'zen',
# Metadata attributes
'__package_name__',
'__title__',
'__author__',
'__author_email__',
'__license__',
'__copyright__',
'__version__',
'__version_info__',
'__url__',
)
| bsd-3-clause |
nuagenetworks/vspk-python | vspk/v6/fetchers/nuvmresyncs_fetcher.py | 2 | 2107 | # -*- coding: utf-8 -*-
#
# Copyright (c) 2015, Alcatel-Lucent Inc, 2017 Nokia
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from bambou import NURESTFetcher
class NUVMResyncsFetcher(NURESTFetcher):
""" Represents a NUVMResyncs fetcher
Notes:
This fetcher enables to fetch NUVMResync objects.
See:
bambou.NURESTFetcher
"""
@classmethod
def managed_class(cls):
""" Return NUVMResync class that is managed.
Returns:
.NUVMResync: the managed class
"""
from .. import NUVMResync
return NUVMResync
| bsd-3-clause |
infant-cognition-tampere/gazelib-fixtures | tools/visualize.py | 2 | 1937 | # -*- coding: utf-8 -*-
'''
We use this file as an ad-hoc tool to visualize gazedata.
DEPRECATED: Use gazelib.visualization instead.
'''
import os
import gazelib
import bokeh.plotting as plotting
this_dir = os.path.dirname(os.path.realpath(__file__))
root_dir = os.path.dirname(this_dir)
source_dir = os.path.join(root_dir, 'original-recordings')
filename = os.path.join(source_dir, 'recording-001-trials.gazedata')
g0 = gazelib.load_csv_as_dictlist(filename, delimit='\t')
xs, ys, vals = gazelib.combine_coordinates(g0, ['0', '1'],
'XGazePosRightEye', 'YGazePosRightEye', 'ValidityRightEye',
'XGazePosLeftEye', 'YGazePosLeftEye', 'ValidityLeftEye')
g1 = gazelib.add_key(g0, 'comb_x', list(xs))
g2 = gazelib.add_key(g1, 'comb_y', list(ys))
g3 = gazelib.add_key(g2, 'comb_val', list(vals))
g4 = gazelib.interpolate_using_last_good_value(g3, 'comb_x', 'comb_val', ['0', '1'])
g5 = gazelib.interpolate_using_last_good_value(g4, 'comb_y', 'comb_val', ['0', '1'])
#g6 = gazelib.median_filter_data(g5, 7, 'comb_x')
#g7 = gazelib.median_filter_data(g6, 7, 'comb_y')
g8 = gazelib.gazepoints_containing_value(g5, 'tag', ['Target'])
g9 = gazelib.split_at_change_in_value(g8, 'trialnumber')
g10 = list(map(lambda trial: gazelib.first_gazepoints(trial, 1000), g9))
#g6 = g5[600:2500]
#print(len(g5))
#x = list(map(float, xs))
#xs0 = list(range(0, len(x))) # gazelib.get_key(, 'comb_x')
#ys0 = x # gazelib.get_key(g1, 'comb_x')
#print(len(xs0))
#print(len(ys0))
for index, trial in enumerate(g10):
t_x = list(map(float, gazelib.get_key(trial, 'comb_x')))
t_y = list(map(float, gazelib.get_key(trial, 'comb_y')))
title = 'trial-' + str(index).zfill(2)
p = plotting.figure(title=title, x_axis_label='gaze X', y_axis_label='gaze Y')
p.cross(t_x, t_y, size=10)
p.line(t_x, t_y, line_width=1)
output_file_name = title + '.html'
plotting.output_file(output_file_name, title)
plotting.save(p)
| mit |
benjaminwhite/Gtasks | gtasks/task.py | 1 | 3569 | from __future__ import absolute_import
import re
from contextlib import contextmanager
import gtasks.timeconversion as tc
from gtasks.gtaskobject import GtaskObject
from gtasks.misc import raise_for_type
from gtasks.tasklist import TaskList
class Task(GtaskObject):
    LIST_REGEX = re.compile(r'lists/(\w+)/tasks')
def __init__(self, task_dict, gtasks):
GtaskObject.__init__(self, task_dict, gtasks)
list_id = Task.LIST_REGEX.search(task_dict['selfLink']).group(1)
if list_id in gtasks._list_index:
self.task_list = gtasks._list_index[list_id]
else:
list_dict = {'id': list_id, 'selfLink': gtasks.LISTS_URL+'/'+list_id}
self.task_list = TaskList(list_dict, gtasks)
task_id = task_dict['id']
self.task_list._task_index[task_id] = self
gtasks._task_index[task_id] = self
self._parent_settings = self.task_list
self._update_params = {'task': task_id, 'tasklist': list_id}
def unhide(self):
self._set_property('hidden', False)
@contextmanager
def batch_edit(self):
old_value = self._auto_push
self._auto_push = False
yield
self.push_updates()
self._auto_push = old_value
# hidden property (read-only)
@property
def hidden(self):
return self._get_property('hidden') is True
# notes property
@property
def notes(self):
return self._get_property('notes')
@notes.setter
def notes(self, value):
self._set_property('notes', value, str)
# complete property
@property
def complete(self):
return self._get_property('status') == 'completed'
@complete.setter
def complete(self, value):
raise_for_type(value, bool)
if value:
self._set_property('status', 'completed')
else:
self._set_property('completed', None, push_override=False)
self._set_property('status', 'needsAction')
# due_date property
@property
def due_date(self):
date = self._get_property('due')
if date is not None:
date = tc.from_date_rfc3339(date)
return date
@due_date.setter
def due_date(self, value):
if value is None:
self._set_property('due', None)
else:
self._set_property('due', tc.to_date_rfc3339(value))
# completion_date property
@property
def completion_date(self):
date = self._get_property('completed')
if date is not None:
date = tc.from_rfc3339(date)
return date
@completion_date.setter
def completion_date(self, value):
if value is None:
self._set_property('status', 'needsAction', push_override=False)
self._set_property('completed', None)
else:
self._set_property('status', 'completed', push_override=False)
self._set_property('completed', tc.to_rfc3339(value))
# deleted property
@property
def deleted(self):
return self._get_property('deleted') is True
@deleted.setter
def deleted(self, value):
self._set_property('deleted', value, bool)
# parent proprty
@property
def parent(self):
parent_id = self._get_property('parent')
if parent_id:
return self._gtasks.get_task(parent_id)
else:
return None
def __unicode__(self):
mark = u'\u2713' if self.complete else u' ' # u2713 is a checkmark
return u'({}) {}'.format(mark, self.title)
| mit |
DHI-GRAS/processing_SWAT | SWATAlgorithm.py | 2 | 2557 | """
***************************************************************************
WG9HMAlgorithm.py
-------------------------------------
Copyright (C) 2014 TIGER-NET (www.tiger-net.org)
***************************************************************************
* This plugin is part of the Water Observation Information System (WOIS) *
* developed under the TIGER-NET project funded by the European Space *
* Agency as part of the long-term TIGER initiative aiming at promoting *
* the use of Earth Observation (EO) for improved Integrated Water *
* Resources Management (IWRM) in Africa. *
* *
* WOIS is a free software i.e. you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published *
* by the Free Software Foundation, either version 3 of the License, *
* or (at your option) any later version. *
* *
* WOIS is distributed in the hope that it will be useful, but WITHOUT ANY *
* WARRANTY; without even the implied warranty of MERCHANTABILITY or *
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License *
* for more details. *
* *
* You should have received a copy of the GNU General Public License along *
* with this program. If not, see <http://www.gnu.org/licenses/>. *
***************************************************************************
"""
import os
from PyQt4.QtGui import *
from processing.core.GeoAlgorithm import GeoAlgorithm
from processing.gui.Help2Html import getHtmlFromRstFile
from processing.core.ProcessingLog import ProcessingLog
class SWATAlgorithm(GeoAlgorithm, object):
def __init__(self, descriptionFile):
super(SWATAlgorithm, self).__init__()
self.descriptionFile = descriptionFile
def help(self):
[folder, filename] = os.path.split(self.descriptionFile)
[filename, _] = os.path.splitext(filename)
helpfile = os.path.join(folder, "doc", filename+".html")
if os.path.exists(helpfile):
return True, getHtmlFromRstFile(helpfile)
else:
return False, None
def getIcon(self):
return QIcon(os.path.dirname(__file__) + "/images/tigerNET.png")
| gpl-3.0 |
hchen1202/django-react | virtualenv/lib/python3.6/site-packages/pip/_vendor/cachecontrol/controller.py | 327 | 13024 | """
The httplib2 algorithms ported for use with requests.
"""
import logging
import re
import calendar
import time
from email.utils import parsedate_tz
from pip._vendor.requests.structures import CaseInsensitiveDict
from .cache import DictCache
from .serialize import Serializer
logger = logging.getLogger(__name__)
URI = re.compile(r"^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?")
def parse_uri(uri):
"""Parses a URI using the regex given in Appendix B of RFC 3986.
(scheme, authority, path, query, fragment) = parse_uri(uri)
"""
groups = URI.match(uri).groups()
return (groups[1], groups[3], groups[4], groups[6], groups[8])
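# Illustrative example (not in the original source) of what parse_uri returns
# for a typical absolute URI:
#
#     parse_uri("http://example.com/pub/index.html?key=value#section")
#     # -> ('http', 'example.com', '/pub/index.html', 'key=value', 'section')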
class CacheController(object):
"""An interface to see if request should cached or not.
"""
def __init__(self, cache=None, cache_etags=True, serializer=None):
self.cache = cache or DictCache()
self.cache_etags = cache_etags
self.serializer = serializer or Serializer()
@classmethod
def _urlnorm(cls, uri):
"""Normalize the URL to create a safe key for the cache"""
(scheme, authority, path, query, fragment) = parse_uri(uri)
if not scheme or not authority:
raise Exception("Only absolute URIs are allowed. uri = %s" % uri)
scheme = scheme.lower()
authority = authority.lower()
if not path:
path = "/"
# Could do syntax based normalization of the URI before
# computing the digest. See Section 6.2.2 of Std 66.
request_uri = query and "?".join([path, query]) or path
defrag_uri = scheme + "://" + authority + request_uri
return defrag_uri
@classmethod
def cache_url(cls, uri):
return cls._urlnorm(uri)
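# Illustrative example (not in the original source): _urlnorm lower-cases the
# scheme and authority and defaults an empty path to "/", so
#
#     CacheController.cache_url("HTTP://Example.COM/Some/Path?a=1")
#     # -> 'http://example.com/Some/Path?a=1'
#     CacheController.cache_url("http://example.com")
#     # -> 'http://example.com/'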
def parse_cache_control(self, headers):
"""
Parse the cache control headers returning a dictionary with values
for the different directives.
"""
retval = {}
cc_header = 'cache-control'
if 'Cache-Control' in headers:
cc_header = 'Cache-Control'
if cc_header in headers:
parts = headers[cc_header].split(',')
parts_with_args = [
tuple([x.strip().lower() for x in part.split("=", 1)])
for part in parts if -1 != part.find("=")
]
parts_wo_args = [
(name.strip().lower(), 1)
for name in parts if -1 == name.find("=")
]
retval = dict(parts_with_args + parts_wo_args)
return retval
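# Illustrative example (not in the original source): directives with a value
# map to their (string) value, while bare directives map to 1, e.g.
#
#     self.parse_cache_control({'Cache-Control': 'max-age=3600, no-cache'})
#     # -> {'max-age': '3600', 'no-cache': 1}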
def cached_request(self, request):
"""
Return a cached response if it exists in the cache, otherwise
return False.
"""
cache_url = self.cache_url(request.url)
logger.debug('Looking up "%s" in the cache', cache_url)
cc = self.parse_cache_control(request.headers)
# Bail out if the request insists on fresh data
if 'no-cache' in cc:
logger.debug('Request header has "no-cache", cache bypassed')
return False
if 'max-age' in cc and cc['max-age'] == 0:
logger.debug('Request header has "max_age" as 0, cache bypassed')
return False
# Request allows serving from the cache, let's see if we find something
cache_data = self.cache.get(cache_url)
if cache_data is None:
logger.debug('No cache entry available')
return False
# Check whether it can be deserialized
resp = self.serializer.loads(request, cache_data)
if not resp:
logger.warning('Cache entry deserialization failed, entry ignored')
return False
# If we have a cached 301, return it immediately. We don't
# need to test our response for other headers b/c it is
# intrinsically "cacheable" as it is Permanent.
# See:
# https://tools.ietf.org/html/rfc7231#section-6.4.2
#
# Client can try to refresh the value by repeating the request
# with cache busting headers as usual (ie no-cache).
if resp.status == 301:
msg = ('Returning cached "301 Moved Permanently" response '
'(ignoring date and etag information)')
logger.debug(msg)
return resp
headers = CaseInsensitiveDict(resp.headers)
if not headers or 'date' not in headers:
if 'etag' not in headers:
# Without date or etag, the cached response can never be used
# and should be deleted.
logger.debug('Purging cached response: no date or etag')
self.cache.delete(cache_url)
logger.debug('Ignoring cached response: no date')
return False
now = time.time()
date = calendar.timegm(
parsedate_tz(headers['date'])
)
current_age = max(0, now - date)
logger.debug('Current age based on date: %i', current_age)
# TODO: There is an assumption that the result will be a
# urllib3 response object. This may not be best since we
# could probably avoid instantiating or constructing the
# response until we know we need it.
resp_cc = self.parse_cache_control(headers)
# determine freshness
freshness_lifetime = 0
# Check the max-age pragma in the cache control header
if 'max-age' in resp_cc and resp_cc['max-age'].isdigit():
freshness_lifetime = int(resp_cc['max-age'])
logger.debug('Freshness lifetime from max-age: %i',
freshness_lifetime)
# If there isn't a max-age, check for an expires header
elif 'expires' in headers:
expires = parsedate_tz(headers['expires'])
if expires is not None:
expire_time = calendar.timegm(expires) - date
freshness_lifetime = max(0, expire_time)
logger.debug("Freshness lifetime from expires: %i",
freshness_lifetime)
# Determine if we are setting freshness limit in the
# request. Note, this overrides what was in the response.
if 'max-age' in cc:
try:
freshness_lifetime = int(cc['max-age'])
logger.debug('Freshness lifetime from request max-age: %i',
freshness_lifetime)
except ValueError:
freshness_lifetime = 0
if 'min-fresh' in cc:
try:
min_fresh = int(cc['min-fresh'])
except ValueError:
min_fresh = 0
# adjust our current age by our min fresh
current_age += min_fresh
logger.debug('Adjusted current age from min-fresh: %i',
current_age)
# Return entry if it is fresh enough
if freshness_lifetime > current_age:
logger.debug('The response is "fresh", returning cached response')
logger.debug('%i > %i', freshness_lifetime, current_age)
return resp
# we're not fresh. If we don't have an Etag, clear it out
if 'etag' not in headers:
logger.debug(
'The cached response is "stale" with no etag, purging'
)
self.cache.delete(cache_url)
# return the original handler
return False
def conditional_headers(self, request):
cache_url = self.cache_url(request.url)
resp = self.serializer.loads(request, self.cache.get(cache_url))
new_headers = {}
if resp:
headers = CaseInsensitiveDict(resp.headers)
if 'etag' in headers:
new_headers['If-None-Match'] = headers['ETag']
if 'last-modified' in headers:
new_headers['If-Modified-Since'] = headers['Last-Modified']
return new_headers
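# Illustrative sketch (not in the original source; header values are made up):
# if the cached response carried ETag and Last-Modified headers, the returned
# dict holds the matching validators for a revalidation request, e.g.
#
#     {'If-None-Match': '"abc123"',
#      'If-Modified-Since': 'Tue, 15 Nov 1994 12:45:26 GMT'}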
def cache_response(self, request, response, body=None):
"""
Algorithm for caching requests.
This assumes a requests Response object.
"""
# From httplib2: Don't cache 206's since we aren't going to
# handle byte range requests
cacheable_status_codes = [200, 203, 300, 301]
if response.status not in cacheable_status_codes:
logger.debug(
'Status code %s not in %s',
response.status,
cacheable_status_codes
)
return
response_headers = CaseInsensitiveDict(response.headers)
# If we've been given a body, our response has a Content-Length, and that
# Content-Length is valid, then we can check to see whether the body we've
# been given matches the expected size, and if it doesn't we'll just
# skip trying to cache it.
if (body is not None and
"content-length" in response_headers and
response_headers["content-length"].isdigit() and
int(response_headers["content-length"]) != len(body)):
return
cc_req = self.parse_cache_control(request.headers)
cc = self.parse_cache_control(response_headers)
cache_url = self.cache_url(request.url)
logger.debug('Updating cache with response from "%s"', cache_url)
# Delete it from the cache if we happen to have it stored there
no_store = False
if cc.get('no-store'):
no_store = True
logger.debug('Response header has "no-store"')
if cc_req.get('no-store'):
no_store = True
logger.debug('Request header has "no-store"')
if no_store and self.cache.get(cache_url):
logger.debug('Purging existing cache entry to honor "no-store"')
self.cache.delete(cache_url)
# If we've been given an etag, then keep the response
if self.cache_etags and 'etag' in response_headers:
logger.debug('Caching due to etag')
self.cache.set(
cache_url,
self.serializer.dumps(request, response, body=body),
)
# Add to the cache any 301s. We do this before looking at
# the Date headers.
elif response.status == 301:
logger.debug('Caching permanent redirect')
self.cache.set(
cache_url,
self.serializer.dumps(request, response)
)
# Add to the cache if the response headers demand it. If there
# is no date header then we can't do anything about expiring
# the cache.
elif 'date' in response_headers:
# cache when there is a max-age > 0
if cc and cc.get('max-age'):
if cc['max-age'].isdigit() and int(cc['max-age']) > 0:
logger.debug('Caching b/c date exists and max-age > 0')
self.cache.set(
cache_url,
self.serializer.dumps(request, response, body=body),
)
# If the request can expire, it means we should cache it
# in the meantime.
elif 'expires' in response_headers:
if response_headers['expires']:
logger.debug('Caching b/c of expires header')
self.cache.set(
cache_url,
self.serializer.dumps(request, response, body=body),
)
def update_cached_response(self, request, response):
"""On a 304 we will get a new set of headers that we want to
update our cached value with, assuming we have one.
This should only ever be called when we've sent an ETag and
gotten a 304 as the response.
"""
cache_url = self.cache_url(request.url)
cached_response = self.serializer.loads(
request,
self.cache.get(cache_url)
)
if not cached_response:
# we didn't have a cached response
return response
# Lets update our headers with the headers from the new request:
# http://tools.ietf.org/html/draft-ietf-httpbis-p4-conditional-26#section-4.1
#
# The server isn't supposed to send headers that would make
# the cached body invalid. But... just in case, we'll be sure
# to strip out ones we know that might be problematic due to
# typical assumptions.
excluded_headers = [
"content-length",
]
cached_response.headers.update(
dict((k, v) for k, v in response.headers.items()
if k.lower() not in excluded_headers)
)
# we want a 200 b/c we have content via the cache
cached_response.status = 200
# update our cache
self.cache.set(
cache_url,
self.serializer.dumps(request, cached_response),
)
return cached_response
| mit |
alexanderturner/ansible | lib/ansible/modules/cloud/openstack/os_project_facts.py | 5 | 4919 | #!/usr/bin/python
# Copyright (c) 2016 Hewlett-Packard Enterprise Corporation
#
# This module is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this software. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'version': '1.0'}
DOCUMENTATION = '''
---
module: os_project_facts
short_description: Retrieve facts about one or more OpenStack projects
extends_documentation_fragment: openstack
version_added: "2.1"
author: "Ricardo Carrillo Cruz (@rcarrillocruz)"
description:
- Retrieve facts about one or more OpenStack projects
requirements:
- "python >= 2.6"
- "shade"
options:
name:
description:
- Name or ID of the project
required: true
domain:
description:
- Name or ID of the domain containing the project if the cloud supports domains
required: false
default: None
filters:
description:
- A dictionary of meta data to use for further filtering. Elements of
this dictionary may be additional dictionaries.
required: false
default: None
'''
EXAMPLES = '''
# Gather facts about previously created projects
- os_project_facts:
cloud: awesomecloud
- debug:
var: openstack_projects
# Gather facts about a previously created project by name
- os_project_facts:
cloud: awesomecloud
name: demoproject
- debug:
var: openstack_projects
# Gather facts about a previously created project in a specific domain
- os_project_facts:
cloud: awesomecloud
name: demoproject
domain: admindomain
- debug:
var: openstack_projects
# Gather facts about a previously created project in a specific domain
# with filter
- os_project_facts:
cloud: awesomecloud
name: demoproject
domain: admindomain
filters:
enabled: False
- debug:
var: openstack_projects
'''
RETURN = '''
openstack_projects:
description: has all the OpenStack facts about projects
returned: always, but can be null
type: complex
contains:
id:
description: Unique UUID.
returned: success
type: string
name:
description: Name given to the project.
returned: success
type: string
description:
description: Description of the project
returned: success
type: string
enabled:
description: Flag to indicate if the project is enabled
returned: success
type: bool
domain_id:
description: Domain ID containing the project (keystone v3 clouds only)
returned: success
type: bool
'''
try:
import shade
HAS_SHADE = True
except ImportError:
HAS_SHADE = False
def main():
argument_spec = openstack_full_argument_spec(
name=dict(required=False, default=None),
domain=dict(required=False, default=None),
filters=dict(required=False, type='dict', default=None),
)
module = AnsibleModule(argument_spec)
if not HAS_SHADE:
module.fail_json(msg='shade is required for this module')
try:
name = module.params['name']
domain = module.params['domain']
filters = module.params['filters']
opcloud = shade.operator_cloud(**module.params)
if domain:
try:
# We assume admin is passing domain id
dom = opcloud.get_domain(domain)['id']
domain = dom
except:
# If we fail, maybe admin is passing a domain name.
# Note that domains have unique names, just like id.
dom = opcloud.search_domains(filters={'name': domain})
if dom:
domain = dom[0]['id']
else:
module.fail_json(msg='Domain name or ID does not exist')
if not filters:
filters = {}
filters['domain_id'] = domain
projects = opcloud.search_projects(name, filters)
module.exit_json(changed=False, ansible_facts=dict(
openstack_projects=projects))
except shade.OpenStackCloudException as e:
module.fail_json(msg=str(e))
from ansible.module_utils.basic import *
from ansible.module_utils.openstack import *
if __name__ == '__main__':
main()
| gpl-3.0 |
frumiousbandersnatch/supybot-code | plugins/Todo/__init__.py | 15 | 2466 | ###
# Copyright (c) 2003-2005, Daniel DiPaolo
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions, and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions, and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the author of this software nor the name of
# contributors to this software may be used to endorse or promote products
# derived from this software without specific prior written consent.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
###
"""
The Todo plugin allows registered users to keep their own personal list of
tasks to do, with an optional priority for each.
"""
import supybot
import supybot.world as world
# Use this for the version of this plugin. You may wish to put a CVS keyword
# in here if you're keeping the plugin in CVS or some similar system.
__version__ = "%%VERSION%%"
__author__ = supybot.authors.strike
# This is a dictionary mapping supybot.Author instances to lists of
# contributions.
__contributors__ = {}
import config
import plugin
reload(plugin) # In case we're being reloaded.
# Add more reloads here if you add third-party modules and want them to be
# reloaded when this plugin is reloaded. Don't forget to import them as well!
if world.testing:
import test
Class = plugin.Class
configure = config.configure
# vim:set shiftwidth=4 softtabstop=4 expandtab textwidth=79:
| bsd-3-clause |
mattrobenolt/django | django/db/migrations/executor.py | 48 | 9843 | from __future__ import unicode_literals
from django.apps.registry import apps as global_apps
from django.db import migrations
from .loader import MigrationLoader
from .recorder import MigrationRecorder
from .state import ProjectState
class MigrationExecutor(object):
"""
End-to-end migration execution - loads migrations, and runs them
up or down to a specified set of targets.
"""
def __init__(self, connection, progress_callback=None):
self.connection = connection
self.loader = MigrationLoader(self.connection)
self.recorder = MigrationRecorder(self.connection)
self.progress_callback = progress_callback
def migration_plan(self, targets, clean_start=False):
"""
Given a set of targets, returns a list of (Migration instance, backwards?).
"""
plan = []
if clean_start:
applied = set()
else:
applied = set(self.loader.applied_migrations)
for target in targets:
# If the target is (app_label, None), that means unmigrate everything
if target[1] is None:
for root in self.loader.graph.root_nodes():
if root[0] == target[0]:
for migration in self.loader.graph.backwards_plan(root):
if migration in applied:
plan.append((self.loader.graph.nodes[migration], True))
applied.remove(migration)
# If the migration is already applied, do backwards mode,
# otherwise do forwards mode.
elif target in applied:
# Don't migrate backwards all the way to the target node (that
# may roll back dependencies in other apps that don't need to
# be rolled back); instead roll back through target's immediate
# child(ren) in the same app, and no further.
next_in_app = sorted(
n for n in
self.loader.graph.node_map[target].children
if n[0] == target[0]
)
for node in next_in_app:
for migration in self.loader.graph.backwards_plan(node):
if migration in applied:
plan.append((self.loader.graph.nodes[migration], True))
applied.remove(migration)
else:
for migration in self.loader.graph.forwards_plan(target):
if migration not in applied:
plan.append((self.loader.graph.nodes[migration], False))
applied.add(migration)
return plan
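# Illustrative example (not in the original source; app and migration names
# are hypothetical): with only ('blog', '0001_initial') applied, asking for
# targets=[('blog', '0002_add_field')] yields the forwards plan
#
#     [(<Migration blog.0002_add_field>, False)]
#
# while targets=[('blog', None)] yields the backwards plan that unapplies
# 0001_initial, i.e. [(<Migration blog.0001_initial>, True)].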
def migrate(self, targets, plan=None, fake=False, fake_initial=False):
"""
Migrates the database up to the given targets.
Django first needs to create all project states before a migration is
(un)applied and in a second step run all the database operations.
"""
if plan is None:
plan = self.migration_plan(targets)
migrations_to_run = {m[0] for m in plan}
# Create the forwards plan Django would follow on an empty database
full_plan = self.migration_plan(self.loader.graph.leaf_nodes(), clean_start=True)
# Holds all states right before a migration is applied
# if the migration is being run.
states = {}
state = ProjectState(real_apps=list(self.loader.unmigrated_apps))
if self.progress_callback:
self.progress_callback("render_start")
# Phase 1 -- Store all project states of migrations right before they
# are applied. The first migration that will be applied in phase 2 will
# trigger the rendering of the initial project state. From this time on
# models will be recursively reloaded as explained in
# `django.db.migrations.state.get_related_models_recursive()`.
for migration, _ in full_plan:
if not migrations_to_run:
# We remove every migration whose state was already computed
# from the set below (`migrations_to_run.remove(migration)`).
# If no states for migrations must be computed, we can exit
# this loop. Migrations that occur after the latest migration
# that is about to be applied would only trigger unneeded
# mutate_state() calls.
break
do_run = migration in migrations_to_run
if do_run:
if 'apps' not in state.__dict__:
state.apps # Render all real_apps -- performance critical
states[migration] = state.clone()
migrations_to_run.remove(migration)
# Only preserve the state if the migration is being run later
state = migration.mutate_state(state, preserve=do_run)
if self.progress_callback:
self.progress_callback("render_success")
# Phase 2 -- Run the migrations
for migration, backwards in plan:
if not backwards:
self.apply_migration(states[migration], migration, fake=fake, fake_initial=fake_initial)
else:
self.unapply_migration(states[migration], migration, fake=fake)
def collect_sql(self, plan):
"""
Takes a migration plan and returns a list of collected SQL
statements that represent the best-efforts version of that plan.
"""
statements = []
state = None
for migration, backwards in plan:
with self.connection.schema_editor(collect_sql=True) as schema_editor:
if state is None:
state = self.loader.project_state((migration.app_label, migration.name), at_end=False)
if not backwards:
state = migration.apply(state, schema_editor, collect_sql=True)
else:
state = migration.unapply(state, schema_editor, collect_sql=True)
statements.extend(schema_editor.collected_sql)
return statements
def apply_migration(self, state, migration, fake=False, fake_initial=False):
"""
Runs a migration forwards.
"""
if self.progress_callback:
self.progress_callback("apply_start", migration, fake)
if not fake:
if fake_initial:
# Test to see if this is an already-applied initial migration
applied, state = self.detect_soft_applied(state, migration)
if applied:
fake = True
if not fake:
# Alright, do it normally
with self.connection.schema_editor() as schema_editor:
state = migration.apply(state, schema_editor)
# For replacement migrations, record individual statuses
if migration.replaces:
for app_label, name in migration.replaces:
self.recorder.record_applied(app_label, name)
else:
self.recorder.record_applied(migration.app_label, migration.name)
# Report progress
if self.progress_callback:
self.progress_callback("apply_success", migration, fake)
return state
def unapply_migration(self, state, migration, fake=False):
"""
Runs a migration backwards.
"""
if self.progress_callback:
self.progress_callback("unapply_start", migration, fake)
if not fake:
with self.connection.schema_editor() as schema_editor:
state = migration.unapply(state, schema_editor)
# For replacement migrations, record individual statuses
if migration.replaces:
for app_label, name in migration.replaces:
self.recorder.record_unapplied(app_label, name)
else:
self.recorder.record_unapplied(migration.app_label, migration.name)
# Report progress
if self.progress_callback:
self.progress_callback("unapply_success", migration, fake)
return state
def detect_soft_applied(self, project_state, migration):
"""
Tests whether a migration has been implicitly applied - that the
tables it would create exist. This is intended only for use
on initial migrations (as it only looks for CreateModel).
"""
# Bail if the migration isn't the first one in its app
if [name for app, name in migration.dependencies if app == migration.app_label]:
return False, project_state
if project_state is None:
after_state = self.loader.project_state((migration.app_label, migration.name), at_end=True)
else:
after_state = migration.mutate_state(project_state)
apps = after_state.apps
found_create_migration = False
# Make sure all CreateModel operations are done
for operation in migration.operations:
if isinstance(operation, migrations.CreateModel):
model = apps.get_model(migration.app_label, operation.name)
if model._meta.swapped:
# We have to fetch the model to test with from the
# main app cache, as it's not a direct dependency.
model = global_apps.get_model(model._meta.swapped)
if model._meta.db_table not in self.connection.introspection.table_names(self.connection.cursor()):
return False, project_state
found_create_migration = True
# If we get this far and we found at least one CreateModel migration,
# the migration is considered implicitly applied.
return found_create_migration, after_state
| bsd-3-clause |
brianwoo/django-tutorial | build/Django/django/contrib/postgres/forms/hstore.py | 86 | 1422 | import json
from django import forms
from django.core.exceptions import ValidationError
from django.utils import six
from django.utils.translation import ugettext_lazy as _
__all__ = ['HStoreField']
class HStoreField(forms.CharField):
"""A field for HStore data which accepts JSON input."""
widget = forms.Textarea
default_error_messages = {
'invalid_json': _('Could not load JSON data.'),
}
def prepare_value(self, value):
if isinstance(value, dict):
return json.dumps(value)
return value
def to_python(self, value):
if not value:
return {}
try:
value = json.loads(value)
except ValueError:
raise ValidationError(
self.error_messages['invalid_json'],
code='invalid_json',
)
# Cast everything to strings for ease.
for key, val in value.items():
value[key] = six.text_type(val)
return value
def has_changed(self, initial, data):
"""
Return True if data differs from initial.
"""
# For purposes of seeing whether something has changed, None is
# the same as an empty dict; if the data or initial value we get
# is None, replace it with {}.
initial_value = self.to_python(initial)
return super(forms.HStoreField, self).has_changed(initial_value, data)
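# Illustrative usage sketch (not part of the original file): the field accepts
# JSON text and returns a dict whose values are coerced to strings.
#
#     field = HStoreField()
#     field.clean('{"pk": 1, "name": "apple"}')
#     # -> {'pk': '1', 'name': 'apple'}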
| gpl-3.0 |
Mj258/weiboapi | srapyDemo/envs/Lib/site-packages/scrapy/utils/url.py | 17 | 3915 | """
This module contains general purpose URL functions not found in the standard
library.
Some of the functions that used to be imported from this module have been moved
to the w3lib.url module. Always import those from there instead.
"""
import posixpath
from six.moves.urllib.parse import (ParseResult, urlunparse, urldefrag,
urlparse, parse_qsl, urlencode,
unquote)
# scrapy.utils.url was moved to w3lib.url and import * ensures this move doesn't break old code
from w3lib.url import *
from scrapy.utils.python import unicode_to_str
def url_is_from_any_domain(url, domains):
"""Return True if the url belongs to any of the given domains"""
host = parse_url(url).netloc.lower()
if host:
return any(((host == d.lower()) or (host.endswith('.%s' % d.lower())) for d in domains))
else:
return False
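# Illustrative example (not in the original source): the check matches the host
# exactly or as a subdomain of any listed domain, e.g.
#
#     url_is_from_any_domain('http://www.example.com/page.html', ['example.com'])
#     # -> True ('www.example.com' ends with '.example.com')
#     url_is_from_any_domain('http://example.org/page.html', ['example.com'])
#     # -> False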
def url_is_from_spider(url, spider):
"""Return True if the url belongs to the given spider"""
return url_is_from_any_domain(url,
[spider.name] + list(getattr(spider, 'allowed_domains', [])))
def url_has_any_extension(url, extensions):
return posixpath.splitext(parse_url(url).path)[1].lower() in extensions
def canonicalize_url(url, keep_blank_values=True, keep_fragments=False,
encoding=None):
"""Canonicalize the given url by applying the following procedures:
- sort query arguments, first by key, then by value
- percent encode paths and query arguments. non-ASCII characters are
percent-encoded using UTF-8 (RFC-3986)
- normalize all spaces (in query arguments) to '+' (plus symbol)
- normalize percent encodings case (%2f -> %2F)
- remove query arguments with blank values (unless keep_blank_values is True)
- remove fragments (unless keep_fragments is True)
The url passed can be a str or unicode, while the url returned is always a
str.
For examples see the tests in tests/test_utils_url.py
"""
scheme, netloc, path, params, query, fragment = parse_url(url)
keyvals = parse_qsl(query, keep_blank_values)
keyvals.sort()
query = urlencode(keyvals)
path = safe_url_string(_unquotepath(path)) or '/'
fragment = '' if not keep_fragments else fragment
return urlunparse((scheme, netloc.lower(), path, params, query, fragment))
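# Illustrative example (not in the original source): query arguments are sorted
# and the netloc is lower-cased, e.g.
#
#     canonicalize_url('http://www.Example.com/do?b=2&a=1')
#     # -> 'http://www.example.com/do?a=1&b=2'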
def _unquotepath(path):
for reserved in ('2f', '2F', '3f', '3F'):
path = path.replace('%' + reserved, '%25' + reserved.upper())
return unquote(path)
def parse_url(url, encoding=None):
"""Return urlparsed url from the given argument (which could be an already
parsed url)
"""
return url if isinstance(url, ParseResult) else \
urlparse(unicode_to_str(url, encoding))
def escape_ajax(url):
"""
Return the crawleable url according to:
http://code.google.com/web/ajaxcrawling/docs/getting-started.html
>>> escape_ajax("www.example.com/ajax.html#!key=value")
'www.example.com/ajax.html?_escaped_fragment_=key%3Dvalue'
>>> escape_ajax("www.example.com/ajax.html?k1=v1&k2=v2#!key=value")
'www.example.com/ajax.html?k1=v1&k2=v2&_escaped_fragment_=key%3Dvalue'
>>> escape_ajax("www.example.com/ajax.html?#!key=value")
'www.example.com/ajax.html?_escaped_fragment_=key%3Dvalue'
>>> escape_ajax("www.example.com/ajax.html#!")
'www.example.com/ajax.html?_escaped_fragment_='
URLs that are not "AJAX crawlable" (according to Google) returned as-is:
>>> escape_ajax("www.example.com/ajax.html#key=value")
'www.example.com/ajax.html#key=value'
>>> escape_ajax("www.example.com/ajax.html#")
'www.example.com/ajax.html#'
>>> escape_ajax("www.example.com/ajax.html")
'www.example.com/ajax.html'
"""
defrag, frag = urldefrag(url)
if not frag.startswith('!'):
return url
return add_or_replace_parameter(defrag, '_escaped_fragment_', frag[1:])
| mit |
gauribhoite/personfinder | env/google_appengine/lib/requests/requests/packages/chardet/codingstatemachine.py | 2931 | 2318 | ######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
from .constants import eStart
from .compat import wrap_ord
class CodingStateMachine:
def __init__(self, sm):
self._mModel = sm
self._mCurrentBytePos = 0
self._mCurrentCharLen = 0
self.reset()
def reset(self):
self._mCurrentState = eStart
def next_state(self, c):
# for each byte we get its class
# if it is first byte, we also get byte length
# PY3K: aBuf is a byte stream, so c is an int, not a byte
byteCls = self._mModel['classTable'][wrap_ord(c)]
if self._mCurrentState == eStart:
self._mCurrentBytePos = 0
self._mCurrentCharLen = self._mModel['charLenTable'][byteCls]
# from byte's class and stateTable, we get its next state
curr_state = (self._mCurrentState * self._mModel['classFactor']
+ byteCls)
self._mCurrentState = self._mModel['stateTable'][curr_state]
self._mCurrentBytePos += 1
return self._mCurrentState
def get_current_charlen(self):
return self._mCurrentCharLen
def get_coding_state_machine(self):
return self._mModel['name']
| apache-2.0 |
smartdj/chrome-sync-server | google/protobuf/internal/type_checkers.py | 527 | 12163 | # Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Provides type checking routines.
This module defines type checking utilities in the form of dictionaries:
VALUE_CHECKERS: A dictionary of field types and a value validation object.
TYPE_TO_BYTE_SIZE_FN: A dictionary with field types and a size computing
function.
TYPE_TO_SERIALIZE_METHOD: A dictionary with field types and serialization
function.
FIELD_TYPE_TO_WIRE_TYPE: A dictionary with field types and their
coresponding wire types.
TYPE_TO_DESERIALIZE_METHOD: A dictionary with field types and deserialization
function.
"""
__author__ = '[email protected] (Will Robinson)'
from google.protobuf.internal import decoder
from google.protobuf.internal import encoder
from google.protobuf.internal import wire_format
from google.protobuf import descriptor
_FieldDescriptor = descriptor.FieldDescriptor
def GetTypeChecker(cpp_type, field_type):
"""Returns a type checker for a message field of the specified types.
Args:
cpp_type: C++ type of the field (see descriptor.py).
field_type: Protocol message field type (see descriptor.py).
Returns:
An instance of TypeChecker which can be used to verify the types
of values assigned to a field of the specified type.
"""
if (cpp_type == _FieldDescriptor.CPPTYPE_STRING and
field_type == _FieldDescriptor.TYPE_STRING):
return UnicodeValueChecker()
return _VALUE_CHECKERS[cpp_type]
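# Illustrative example (not in the original source): for an int32 field the
# returned checker enforces both the type and the 32-bit signed range, e.g.
#
#     checker = GetTypeChecker(_FieldDescriptor.CPPTYPE_INT32,
#                              _FieldDescriptor.TYPE_INT32)
#     checker.CheckValue(42)       # OK
#     checker.CheckValue(2 ** 31)  # raises ValueError (out of range)
#     checker.CheckValue("42")     # raises TypeError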
# None of the typecheckers below make any attempt to guard against people
# subclassing builtin types and doing weird things. We're not trying to
# protect against malicious clients here, just people accidentally shooting
# themselves in the foot in obvious ways.
class TypeChecker(object):
"""Type checker used to catch type errors as early as possible
when the client is setting scalar fields in protocol messages.
"""
def __init__(self, *acceptable_types):
self._acceptable_types = acceptable_types
def CheckValue(self, proposed_value):
if not isinstance(proposed_value, self._acceptable_types):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), self._acceptable_types))
raise TypeError(message)
# IntValueChecker and its subclasses perform integer type-checks
# and bounds-checks.
class IntValueChecker(object):
"""Checker used for integer fields. Performs type-check and range check."""
def CheckValue(self, proposed_value):
if not isinstance(proposed_value, (int, long)):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), (int, long)))
raise TypeError(message)
if not self._MIN <= proposed_value <= self._MAX:
raise ValueError('Value out of range: %d' % proposed_value)
class UnicodeValueChecker(object):
"""Checker used for string fields."""
def CheckValue(self, proposed_value):
if not isinstance(proposed_value, (str, unicode)):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), (str, unicode)))
raise TypeError(message)
# If the value is of type 'str' make sure that it is in 7-bit ASCII
# encoding.
if isinstance(proposed_value, str):
try:
unicode(proposed_value, 'ascii')
except UnicodeDecodeError:
raise ValueError('%.1024r has type str, but isn\'t in 7-bit ASCII '
'encoding. Non-ASCII strings must be converted to '
'unicode objects before being added.' %
(proposed_value))
class Int32ValueChecker(IntValueChecker):
# We're sure to use ints instead of longs here since comparison may be more
# efficient.
_MIN = -2147483648
_MAX = 2147483647
class Uint32ValueChecker(IntValueChecker):
_MIN = 0
_MAX = (1 << 32) - 1
class Int64ValueChecker(IntValueChecker):
_MIN = -(1 << 63)
_MAX = (1 << 63) - 1
class Uint64ValueChecker(IntValueChecker):
_MIN = 0
_MAX = (1 << 64) - 1
# Type-checkers for all scalar CPPTYPEs.
_VALUE_CHECKERS = {
_FieldDescriptor.CPPTYPE_INT32: Int32ValueChecker(),
_FieldDescriptor.CPPTYPE_INT64: Int64ValueChecker(),
_FieldDescriptor.CPPTYPE_UINT32: Uint32ValueChecker(),
_FieldDescriptor.CPPTYPE_UINT64: Uint64ValueChecker(),
_FieldDescriptor.CPPTYPE_DOUBLE: TypeChecker(
float, int, long),
_FieldDescriptor.CPPTYPE_FLOAT: TypeChecker(
float, int, long),
_FieldDescriptor.CPPTYPE_BOOL: TypeChecker(bool, int),
_FieldDescriptor.CPPTYPE_ENUM: Int32ValueChecker(),
_FieldDescriptor.CPPTYPE_STRING: TypeChecker(str),
}
# Map from field type to a function F, such that F(field_num, value)
# gives the total byte size for a value of the given type. This
# byte size includes tag information and any other additional space
# associated with serializing "value".
TYPE_TO_BYTE_SIZE_FN = {
_FieldDescriptor.TYPE_DOUBLE: wire_format.DoubleByteSize,
_FieldDescriptor.TYPE_FLOAT: wire_format.FloatByteSize,
_FieldDescriptor.TYPE_INT64: wire_format.Int64ByteSize,
_FieldDescriptor.TYPE_UINT64: wire_format.UInt64ByteSize,
_FieldDescriptor.TYPE_INT32: wire_format.Int32ByteSize,
_FieldDescriptor.TYPE_FIXED64: wire_format.Fixed64ByteSize,
_FieldDescriptor.TYPE_FIXED32: wire_format.Fixed32ByteSize,
_FieldDescriptor.TYPE_BOOL: wire_format.BoolByteSize,
_FieldDescriptor.TYPE_STRING: wire_format.StringByteSize,
_FieldDescriptor.TYPE_GROUP: wire_format.GroupByteSize,
_FieldDescriptor.TYPE_MESSAGE: wire_format.MessageByteSize,
_FieldDescriptor.TYPE_BYTES: wire_format.BytesByteSize,
_FieldDescriptor.TYPE_UINT32: wire_format.UInt32ByteSize,
_FieldDescriptor.TYPE_ENUM: wire_format.EnumByteSize,
_FieldDescriptor.TYPE_SFIXED32: wire_format.SFixed32ByteSize,
_FieldDescriptor.TYPE_SFIXED64: wire_format.SFixed64ByteSize,
_FieldDescriptor.TYPE_SINT32: wire_format.SInt32ByteSize,
_FieldDescriptor.TYPE_SINT64: wire_format.SInt64ByteSize
}
# Maps from field types to encoder constructors.
TYPE_TO_ENCODER = {
_FieldDescriptor.TYPE_DOUBLE: encoder.DoubleEncoder,
_FieldDescriptor.TYPE_FLOAT: encoder.FloatEncoder,
_FieldDescriptor.TYPE_INT64: encoder.Int64Encoder,
_FieldDescriptor.TYPE_UINT64: encoder.UInt64Encoder,
_FieldDescriptor.TYPE_INT32: encoder.Int32Encoder,
_FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Encoder,
_FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Encoder,
_FieldDescriptor.TYPE_BOOL: encoder.BoolEncoder,
_FieldDescriptor.TYPE_STRING: encoder.StringEncoder,
_FieldDescriptor.TYPE_GROUP: encoder.GroupEncoder,
_FieldDescriptor.TYPE_MESSAGE: encoder.MessageEncoder,
_FieldDescriptor.TYPE_BYTES: encoder.BytesEncoder,
_FieldDescriptor.TYPE_UINT32: encoder.UInt32Encoder,
_FieldDescriptor.TYPE_ENUM: encoder.EnumEncoder,
_FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Encoder,
_FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Encoder,
_FieldDescriptor.TYPE_SINT32: encoder.SInt32Encoder,
_FieldDescriptor.TYPE_SINT64: encoder.SInt64Encoder,
}
# Maps from field types to sizer constructors.
TYPE_TO_SIZER = {
_FieldDescriptor.TYPE_DOUBLE: encoder.DoubleSizer,
_FieldDescriptor.TYPE_FLOAT: encoder.FloatSizer,
_FieldDescriptor.TYPE_INT64: encoder.Int64Sizer,
_FieldDescriptor.TYPE_UINT64: encoder.UInt64Sizer,
_FieldDescriptor.TYPE_INT32: encoder.Int32Sizer,
_FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Sizer,
_FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Sizer,
_FieldDescriptor.TYPE_BOOL: encoder.BoolSizer,
_FieldDescriptor.TYPE_STRING: encoder.StringSizer,
_FieldDescriptor.TYPE_GROUP: encoder.GroupSizer,
_FieldDescriptor.TYPE_MESSAGE: encoder.MessageSizer,
_FieldDescriptor.TYPE_BYTES: encoder.BytesSizer,
_FieldDescriptor.TYPE_UINT32: encoder.UInt32Sizer,
_FieldDescriptor.TYPE_ENUM: encoder.EnumSizer,
_FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Sizer,
_FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Sizer,
_FieldDescriptor.TYPE_SINT32: encoder.SInt32Sizer,
_FieldDescriptor.TYPE_SINT64: encoder.SInt64Sizer,
}
# Maps from field type to a decoder constructor.
TYPE_TO_DECODER = {
_FieldDescriptor.TYPE_DOUBLE: decoder.DoubleDecoder,
_FieldDescriptor.TYPE_FLOAT: decoder.FloatDecoder,
_FieldDescriptor.TYPE_INT64: decoder.Int64Decoder,
_FieldDescriptor.TYPE_UINT64: decoder.UInt64Decoder,
_FieldDescriptor.TYPE_INT32: decoder.Int32Decoder,
_FieldDescriptor.TYPE_FIXED64: decoder.Fixed64Decoder,
_FieldDescriptor.TYPE_FIXED32: decoder.Fixed32Decoder,
_FieldDescriptor.TYPE_BOOL: decoder.BoolDecoder,
_FieldDescriptor.TYPE_STRING: decoder.StringDecoder,
_FieldDescriptor.TYPE_GROUP: decoder.GroupDecoder,
_FieldDescriptor.TYPE_MESSAGE: decoder.MessageDecoder,
_FieldDescriptor.TYPE_BYTES: decoder.BytesDecoder,
_FieldDescriptor.TYPE_UINT32: decoder.UInt32Decoder,
_FieldDescriptor.TYPE_ENUM: decoder.EnumDecoder,
_FieldDescriptor.TYPE_SFIXED32: decoder.SFixed32Decoder,
_FieldDescriptor.TYPE_SFIXED64: decoder.SFixed64Decoder,
_FieldDescriptor.TYPE_SINT32: decoder.SInt32Decoder,
_FieldDescriptor.TYPE_SINT64: decoder.SInt64Decoder,
}
# Maps from field type to expected wiretype.
FIELD_TYPE_TO_WIRE_TYPE = {
_FieldDescriptor.TYPE_DOUBLE: wire_format.WIRETYPE_FIXED64,
_FieldDescriptor.TYPE_FLOAT: wire_format.WIRETYPE_FIXED32,
_FieldDescriptor.TYPE_INT64: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_UINT64: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_INT32: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_FIXED64: wire_format.WIRETYPE_FIXED64,
_FieldDescriptor.TYPE_FIXED32: wire_format.WIRETYPE_FIXED32,
_FieldDescriptor.TYPE_BOOL: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_STRING:
wire_format.WIRETYPE_LENGTH_DELIMITED,
_FieldDescriptor.TYPE_GROUP: wire_format.WIRETYPE_START_GROUP,
_FieldDescriptor.TYPE_MESSAGE:
wire_format.WIRETYPE_LENGTH_DELIMITED,
_FieldDescriptor.TYPE_BYTES:
wire_format.WIRETYPE_LENGTH_DELIMITED,
_FieldDescriptor.TYPE_UINT32: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_ENUM: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_SFIXED32: wire_format.WIRETYPE_FIXED32,
_FieldDescriptor.TYPE_SFIXED64: wire_format.WIRETYPE_FIXED64,
_FieldDescriptor.TYPE_SINT32: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_SINT64: wire_format.WIRETYPE_VARINT,
}
| mit |
gaurav38/QosRouting | tests/unit/lib/mock_socket_test.py | 45 | 2309 | #!/usr/bin/env python
#
# Copyright 2011-2012 Andreas Wundsam
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import sys
import os.path
from copy import copy
sys.path.append(os.path.dirname(__file__) + "/../../..")
from pox.lib.mock_socket import MockSocket
class MockSocketTest(unittest.TestCase):
def setUp(self):
pass
def test_simple_send(self):
(a, b) = MockSocket.pair()
a.send("Hallo")
self.assertEquals(b.recv(), "Hallo")
b.send("Servus")
self.assertEquals(a.recv(), "Servus")
def test_ready_to_recv(self):
(a, b) = MockSocket.pair()
a.send("Hallo")
self.assertFalse(a.ready_to_recv())
self.assertTrue(b.ready_to_recv())
self.assertEquals(b.recv(), "Hallo")
self.assertFalse(b.ready_to_recv())
self.assertFalse(a.ready_to_recv())
b.send("Servus")
self.assertTrue(a.ready_to_recv())
self.assertEquals(a.recv(), "Servus")
self.assertFalse(a.ready_to_recv())
def test_on_ready_to_recv(self):
self.seen_size = -1
self.called = 0
def ready(socket, size):
self.called += 1
self.seen_size = size
(a, b) = MockSocket.pair()
b.set_on_ready_to_recv(ready)
self.assertEquals(self.called, 0)
a.send("Hallo")
self.assertEquals(self.called, 1)
self.assertEquals(self.seen_size, 5)
# check that it doesn't get called on the other sockets data
b.send("Huhu")
self.assertEquals(self.called, 1)
def test_empty_recv(self):
""" test_empty_recv: Check that empty reads on socket return ""
Note that this is actually non-sockety behavior and should probably be changed. This
test documents it as intended for now, though
"""
(a, b) = MockSocket.pair()
self.assertEquals(a.recv(), "")
if __name__ == '__main__':
unittest.main()
| apache-2.0 |
ioram7/keystone-federado-pgid2013 | build/sqlalchemy-migrate/build/lib.linux-x86_64-2.7/migrate/tests/versioning/test_util.py | 29 | 3964 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
from sqlalchemy import *
from migrate.exceptions import MigrateDeprecationWarning
from migrate.tests import fixture
from migrate.tests.fixture.warnings import catch_warnings
from migrate.versioning.util import *
import warnings
class TestUtil(fixture.Pathed):
def test_construct_engine(self):
"""Construct engine the smart way"""
url = 'sqlite://'
engine = construct_engine(url)
self.assert_(engine.name == 'sqlite')
# keyword arg
engine = construct_engine(url, engine_arg_encoding='utf-8')
self.assertEquals(engine.dialect.encoding, 'utf-8')
# dict
engine = construct_engine(url, engine_dict={'encoding': 'utf-8'})
self.assertEquals(engine.dialect.encoding, 'utf-8')
# engine parameter
engine_orig = create_engine('sqlite://')
engine = construct_engine(engine_orig)
self.assertEqual(engine, engine_orig)
# test precedence
engine = construct_engine(url, engine_dict={'encoding': 'iso-8859-1'},
engine_arg_encoding='utf-8')
self.assertEquals(engine.dialect.encoding, 'utf-8')
# deprecated echo=True parameter
try:
# py 2.4 compatibility :-/
cw = catch_warnings(record=True)
w = cw.__enter__()
warnings.simplefilter("always")
engine = construct_engine(url, echo='True')
self.assertTrue(engine.echo)
self.assertEqual(len(w),1)
self.assertTrue(issubclass(w[-1].category,
MigrateDeprecationWarning))
self.assertEqual(
'echo=True parameter is deprecated, pass '
'engine_arg_echo=True or engine_dict={"echo": True}',
str(w[-1].message))
finally:
cw.__exit__()
# unsupported argument
self.assertRaises(ValueError, construct_engine, 1)
def test_asbool(self):
"""test asbool parsing"""
result = asbool(True)
self.assertEqual(result, True)
result = asbool(False)
self.assertEqual(result, False)
result = asbool('y')
self.assertEqual(result, True)
result = asbool('n')
self.assertEqual(result, False)
self.assertRaises(ValueError, asbool, 'test')
self.assertRaises(ValueError, asbool, object)
def test_load_model(self):
"""load model from dotted name"""
model_path = os.path.join(self.temp_usable_dir, 'test_load_model.py')
f = open(model_path, 'w')
f.write("class FakeFloat(int): pass")
f.close()
try:
# py 2.4 compatibility :-/
cw = catch_warnings(record=True)
w = cw.__enter__()
warnings.simplefilter("always")
# deprecated spelling
FakeFloat = load_model('test_load_model.FakeFloat')
self.assert_(isinstance(FakeFloat(), int))
self.assertEqual(len(w),1)
self.assertTrue(issubclass(w[-1].category,
MigrateDeprecationWarning))
self.assertEqual(
'model should be in form of module.model:User '
'and not module.model.User',
str(w[-1].message))
finally:
cw.__exit__()
FakeFloat = load_model('test_load_model:FakeFloat')
self.assert_(isinstance(FakeFloat(), int))
FakeFloat = load_model(FakeFloat)
self.assert_(isinstance(FakeFloat(), int))
def test_guess_obj_type(self):
"""guess object type from string"""
result = guess_obj_type('7')
self.assertEqual(result, 7)
result = guess_obj_type('y')
self.assertEqual(result, True)
result = guess_obj_type('test')
self.assertEqual(result, 'test')
| apache-2.0 |
Swind/pure-python-adb | test_async/test_connection_async.py | 1 | 1662 | """Unit tests for `ConnectionAsync` class.
"""
import asyncio
import sys
import unittest
from unittest.mock import patch
sys.path.insert(0, '..')
from ppadb.connection_async import ConnectionAsync
from .async_wrapper import awaiter
from .patchers import FakeStreamReader, FakeStreamWriter, async_patch
class TestConnectionAsync(unittest.TestCase):
@awaiter
async def test_connect_close(self):
with async_patch('asyncio.open_connection', return_value=(FakeStreamReader(), FakeStreamWriter())):
conn = ConnectionAsync()
await conn.connect()
self.assertIsNotNone(conn.reader)
self.assertIsNotNone(conn.writer)
await conn.close()
self.assertIsNone(conn.reader)
self.assertIsNone(conn.writer)
@awaiter
async def test_connect_close_catch_oserror(self):
with async_patch('asyncio.open_connection', return_value=(FakeStreamReader(), FakeStreamWriter())):
conn = ConnectionAsync()
await conn.connect()
self.assertIsNotNone(conn.reader)
self.assertIsNotNone(conn.writer)
with patch('{}.FakeStreamWriter.close'.format(__name__), side_effect=OSError):
await conn.close()
self.assertIsNone(conn.reader)
self.assertIsNone(conn.writer)
@awaiter
async def test_connect_with_timeout(self):
with self.assertRaises(RuntimeError):
with async_patch('asyncio.open_connection', side_effect=asyncio.TimeoutError):
conn = ConnectionAsync(timeout=1)
await conn.connect()
if __name__ == '__main__':
unittest.main()
| mit |
kevinmel2000/brython | www/src/Lib/test/unittests/test_codeccallbacks.py | 28 | 33967 | import codecs
import html.entities
import sys
import test.support
import unicodedata
import unittest
import warnings
try:
import ctypes
except ImportError:
ctypes = None
SIZEOF_WCHAR_T = -1
else:
SIZEOF_WCHAR_T = ctypes.sizeof(ctypes.c_wchar)
class PosReturn:
# this can be used for configurable callbacks
def __init__(self):
self.pos = 0
def handle(self, exc):
oldpos = self.pos
realpos = oldpos
if realpos<0:
realpos = len(exc.object) + realpos
# if we don't advance this time, terminate on the next call
# otherwise we'd get an endless loop
if realpos <= exc.start:
self.pos = len(exc.object)
return ("<?>", oldpos)
# A UnicodeEncodeError object with a bad start attribute
class BadStartUnicodeEncodeError(UnicodeEncodeError):
def __init__(self):
UnicodeEncodeError.__init__(self, "ascii", "", 0, 1, "bad")
self.start = []
# A UnicodeEncodeError object with a bad object attribute
class BadObjectUnicodeEncodeError(UnicodeEncodeError):
def __init__(self):
UnicodeEncodeError.__init__(self, "ascii", "", 0, 1, "bad")
self.object = []
# A UnicodeDecodeError object without an end attribute
class NoEndUnicodeDecodeError(UnicodeDecodeError):
def __init__(self):
UnicodeDecodeError.__init__(self, "ascii", bytearray(b""), 0, 1, "bad")
del self.end
# A UnicodeDecodeError object with a bad object attribute
class BadObjectUnicodeDecodeError(UnicodeDecodeError):
def __init__(self):
UnicodeDecodeError.__init__(self, "ascii", bytearray(b""), 0, 1, "bad")
self.object = []
# A UnicodeTranslateError object without a start attribute
class NoStartUnicodeTranslateError(UnicodeTranslateError):
def __init__(self):
UnicodeTranslateError.__init__(self, "", 0, 1, "bad")
del self.start
# A UnicodeTranslateError object without an end attribute
class NoEndUnicodeTranslateError(UnicodeTranslateError):
def __init__(self):
UnicodeTranslateError.__init__(self, "", 0, 1, "bad")
del self.end
# A UnicodeTranslateError object without an object attribute
class NoObjectUnicodeTranslateError(UnicodeTranslateError):
def __init__(self):
UnicodeTranslateError.__init__(self, "", 0, 1, "bad")
del self.object
class CodecCallbackTest(unittest.TestCase):
def test_xmlcharrefreplace(self):
# replace unencodable characters with numeric character entities.
# For ascii, latin-1 and charmaps this is completely implemented
# in C and should be reasonably fast.
s = "\u30b9\u30d1\u30e2 \xe4nd eggs"
self.assertEqual(
s.encode("ascii", "xmlcharrefreplace"),
b"スパモ änd eggs"
)
self.assertEqual(
s.encode("latin-1", "xmlcharrefreplace"),
b"スパモ \xe4nd eggs"
)
def test_xmlcharnamereplace(self):
# This time use a named character entity for unencodable
# characters, if one is available.
def xmlcharnamereplace(exc):
if not isinstance(exc, UnicodeEncodeError):
raise TypeError("don't know how to handle %r" % exc)
l = []
for c in exc.object[exc.start:exc.end]:
try:
l.append("&%s;" % html.entities.codepoint2name[ord(c)])
except KeyError:
l.append("&#%d;" % ord(c))
return ("".join(l), exc.end)
codecs.register_error(
"test.xmlcharnamereplace", xmlcharnamereplace)
sin = "\xab\u211c\xbb = \u2329\u1234\u20ac\u232a"
sout = b"«ℜ» = ⟨ሴ€⟩"
self.assertEqual(sin.encode("ascii", "test.xmlcharnamereplace"), sout)
sout = b"\xabℜ\xbb = ⟨ሴ€⟩"
self.assertEqual(sin.encode("latin-1", "test.xmlcharnamereplace"), sout)
sout = b"\xabℜ\xbb = ⟨ሴ\xa4⟩"
self.assertEqual(sin.encode("iso-8859-15", "test.xmlcharnamereplace"), sout)
def test_uninamereplace(self):
# We're using the names from the unicode database this time,
# and we're doing "syntax highlighting" here, i.e. we include
# the replaced text in ANSI escape sequences. For this it is
# useful that the error handler is not called for every single
# unencodable character, but for a complete sequence of
# unencodable characters, otherwise we would output many
# unnecessary escape sequences.
def uninamereplace(exc):
if not isinstance(exc, UnicodeEncodeError):
raise TypeError("don't know how to handle %r" % exc)
l = []
for c in exc.object[exc.start:exc.end]:
l.append(unicodedata.name(c, "0x%x" % ord(c)))
return ("\033[1m%s\033[0m" % ", ".join(l), exc.end)
codecs.register_error(
"test.uninamereplace", uninamereplace)
sin = "\xac\u1234\u20ac\u8000"
sout = b"\033[1mNOT SIGN, ETHIOPIC SYLLABLE SEE, EURO SIGN, CJK UNIFIED IDEOGRAPH-8000\033[0m"
self.assertEqual(sin.encode("ascii", "test.uninamereplace"), sout)
sout = b"\xac\033[1mETHIOPIC SYLLABLE SEE, EURO SIGN, CJK UNIFIED IDEOGRAPH-8000\033[0m"
self.assertEqual(sin.encode("latin-1", "test.uninamereplace"), sout)
sout = b"\xac\033[1mETHIOPIC SYLLABLE SEE\033[0m\xa4\033[1mCJK UNIFIED IDEOGRAPH-8000\033[0m"
self.assertEqual(sin.encode("iso-8859-15", "test.uninamereplace"), sout)
def test_backslashescape(self):
# Does the same as the "unicode-escape" encoding, but with different
# base encodings.
sin = "a\xac\u1234\u20ac\u8000\U0010ffff"
sout = b"a\\xac\\u1234\\u20ac\\u8000\\U0010ffff"
self.assertEqual(sin.encode("ascii", "backslashreplace"), sout)
sout = b"a\xac\\u1234\\u20ac\\u8000\\U0010ffff"
self.assertEqual(sin.encode("latin-1", "backslashreplace"), sout)
sout = b"a\xac\\u1234\xa4\\u8000\\U0010ffff"
self.assertEqual(sin.encode("iso-8859-15", "backslashreplace"), sout)
def test_decoding_callbacks(self):
# This is a test for a decoding callback handler
# that allows the decoding of the invalid sequence
# "\xc0\x80" and returns "\x00" instead of raising an error.
# All other illegal sequences will be handled strictly.
def relaxedutf8(exc):
if not isinstance(exc, UnicodeDecodeError):
raise TypeError("don't know how to handle %r" % exc)
if exc.object[exc.start:exc.start+2] == b"\xc0\x80":
return ("\x00", exc.start+2) # retry after two bytes
else:
raise exc
codecs.register_error("test.relaxedutf8", relaxedutf8)
# all the "\xc0\x80" will be decoded to "\x00"
sin = b"a\x00b\xc0\x80c\xc3\xbc\xc0\x80\xc0\x80"
sout = "a\x00b\x00c\xfc\x00\x00"
self.assertEqual(sin.decode("utf-8", "test.relaxedutf8"), sout)
# "\xc0\x81" is not valid and a UnicodeDecodeError will be raised
sin = b"\xc0\x80\xc0\x81"
self.assertRaises(UnicodeDecodeError, sin.decode,
"utf-8", "test.relaxedutf8")
def test_charmapencode(self):
# For charmap encodings the replacement string will be
# mapped through the encoding again. This means, that
# to be able to use e.g. the "replace" handler, the
# charmap has to have a mapping for "?".
charmap = dict((ord(c), bytes(2*c.upper(), 'ascii')) for c in "abcdefgh")
sin = "abc"
sout = b"AABBCC"
self.assertEqual(codecs.charmap_encode(sin, "strict", charmap)[0], sout)
sin = "abcA"
self.assertRaises(UnicodeError, codecs.charmap_encode, sin, "strict", charmap)
charmap[ord("?")] = b"XYZ"
sin = "abcDEF"
sout = b"AABBCCXYZXYZXYZ"
self.assertEqual(codecs.charmap_encode(sin, "replace", charmap)[0], sout)
charmap[ord("?")] = "XYZ" # wrong type in mapping
self.assertRaises(TypeError, codecs.charmap_encode, sin, "replace", charmap)
def test_decodeunicodeinternal(self):
with test.support.check_warnings(('unicode_internal codec has been '
'deprecated', DeprecationWarning)):
self.assertRaises(
UnicodeDecodeError,
b"\x00\x00\x00\x00\x00".decode,
"unicode-internal",
)
if SIZEOF_WCHAR_T == 4:
def handler_unicodeinternal(exc):
if not isinstance(exc, UnicodeDecodeError):
raise TypeError("don't know how to handle %r" % exc)
return ("\x01", 1)
with test.support.check_warnings(('unicode_internal codec has been '
'deprecated', DeprecationWarning)):
self.assertEqual(
b"\x00\x00\x00\x00\x00".decode("unicode-internal", "ignore"),
"\u0000"
)
self.assertEqual(
b"\x00\x00\x00\x00\x00".decode("unicode-internal", "replace"),
"\u0000\ufffd"
)
codecs.register_error("test.hui", handler_unicodeinternal)
self.assertEqual(
b"\x00\x00\x00\x00\x00".decode("unicode-internal", "test.hui"),
"\u0000\u0001\u0000"
)
def test_callbacks(self):
def handler1(exc):
r = range(exc.start, exc.end)
if isinstance(exc, UnicodeEncodeError):
l = ["<%d>" % ord(exc.object[pos]) for pos in r]
elif isinstance(exc, UnicodeDecodeError):
l = ["<%d>" % exc.object[pos] for pos in r]
else:
raise TypeError("don't know how to handle %r" % exc)
return ("[%s]" % "".join(l), exc.end)
codecs.register_error("test.handler1", handler1)
def handler2(exc):
if not isinstance(exc, UnicodeDecodeError):
raise TypeError("don't know how to handle %r" % exc)
l = ["<%d>" % exc.object[pos] for pos in range(exc.start, exc.end)]
return ("[%s]" % "".join(l), exc.end+1) # skip one character
codecs.register_error("test.handler2", handler2)
s = b"\x00\x81\x7f\x80\xff"
self.assertEqual(
s.decode("ascii", "test.handler1"),
"\x00[<129>]\x7f[<128>][<255>]"
)
self.assertEqual(
s.decode("ascii", "test.handler2"),
"\x00[<129>][<128>]"
)
self.assertEqual(
b"\\u3042\u3xxx".decode("unicode-escape", "test.handler1"),
"\u3042[<92><117><51>]xxx"
)
self.assertEqual(
b"\\u3042\u3xx".decode("unicode-escape", "test.handler1"),
"\u3042[<92><117><51>]xx"
)
self.assertEqual(
codecs.charmap_decode(b"abc", "test.handler1", {ord("a"): "z"})[0],
"z[<98>][<99>]"
)
self.assertEqual(
"g\xfc\xdfrk".encode("ascii", "test.handler1"),
b"g[<252><223>]rk"
)
self.assertEqual(
"g\xfc\xdf".encode("ascii", "test.handler1"),
b"g[<252><223>]"
)
def test_longstrings(self):
# test long strings to check for memory overflow problems
errors = [ "strict", "ignore", "replace", "xmlcharrefreplace",
"backslashreplace"]
# register the handlers under different names,
# to prevent the codec from recognizing the name
for err in errors:
codecs.register_error("test." + err, codecs.lookup_error(err))
l = 1000
errors += [ "test." + err for err in errors ]
for uni in [ s*l for s in ("x", "\u3042", "a\xe4") ]:
for enc in ("ascii", "latin-1", "iso-8859-1", "iso-8859-15",
"utf-8", "utf-7", "utf-16", "utf-32"):
for err in errors:
try:
uni.encode(enc, err)
except UnicodeError:
pass
def check_exceptionobjectargs(self, exctype, args, msg):
# Test UnicodeError subclasses: construction, attribute assignment and __str__ conversion
# check with one missing argument
self.assertRaises(TypeError, exctype, *args[:-1])
# check with one argument too much
self.assertRaises(TypeError, exctype, *(args + ["too much"]))
# check with one argument of the wrong type
wrongargs = [ "spam", b"eggs", b"spam", 42, 1.0, None ]
for i in range(len(args)):
for wrongarg in wrongargs:
if type(wrongarg) is type(args[i]):
continue
# build argument array
callargs = []
for j in range(len(args)):
if i==j:
callargs.append(wrongarg)
else:
                        callargs.append(args[j])
self.assertRaises(TypeError, exctype, *callargs)
# check with the correct number and type of arguments
exc = exctype(*args)
self.assertEqual(str(exc), msg)
def test_unicodeencodeerror(self):
self.check_exceptionobjectargs(
UnicodeEncodeError,
["ascii", "g\xfcrk", 1, 2, "ouch"],
"'ascii' codec can't encode character '\\xfc' in position 1: ouch"
)
self.check_exceptionobjectargs(
UnicodeEncodeError,
["ascii", "g\xfcrk", 1, 4, "ouch"],
"'ascii' codec can't encode characters in position 1-3: ouch"
)
self.check_exceptionobjectargs(
UnicodeEncodeError,
["ascii", "\xfcx", 0, 1, "ouch"],
"'ascii' codec can't encode character '\\xfc' in position 0: ouch"
)
self.check_exceptionobjectargs(
UnicodeEncodeError,
["ascii", "\u0100x", 0, 1, "ouch"],
"'ascii' codec can't encode character '\\u0100' in position 0: ouch"
)
self.check_exceptionobjectargs(
UnicodeEncodeError,
["ascii", "\uffffx", 0, 1, "ouch"],
"'ascii' codec can't encode character '\\uffff' in position 0: ouch"
)
if SIZEOF_WCHAR_T == 4:
self.check_exceptionobjectargs(
UnicodeEncodeError,
["ascii", "\U00010000x", 0, 1, "ouch"],
"'ascii' codec can't encode character '\\U00010000' in position 0: ouch"
)
def test_unicodedecodeerror(self):
self.check_exceptionobjectargs(
UnicodeDecodeError,
["ascii", bytearray(b"g\xfcrk"), 1, 2, "ouch"],
"'ascii' codec can't decode byte 0xfc in position 1: ouch"
)
self.check_exceptionobjectargs(
UnicodeDecodeError,
["ascii", bytearray(b"g\xfcrk"), 1, 3, "ouch"],
"'ascii' codec can't decode bytes in position 1-2: ouch"
)
def test_unicodetranslateerror(self):
self.check_exceptionobjectargs(
UnicodeTranslateError,
["g\xfcrk", 1, 2, "ouch"],
"can't translate character '\\xfc' in position 1: ouch"
)
self.check_exceptionobjectargs(
UnicodeTranslateError,
["g\u0100rk", 1, 2, "ouch"],
"can't translate character '\\u0100' in position 1: ouch"
)
self.check_exceptionobjectargs(
UnicodeTranslateError,
["g\uffffrk", 1, 2, "ouch"],
"can't translate character '\\uffff' in position 1: ouch"
)
if SIZEOF_WCHAR_T == 4:
self.check_exceptionobjectargs(
UnicodeTranslateError,
["g\U00010000rk", 1, 2, "ouch"],
"can't translate character '\\U00010000' in position 1: ouch"
)
self.check_exceptionobjectargs(
UnicodeTranslateError,
["g\xfcrk", 1, 3, "ouch"],
"can't translate characters in position 1-2: ouch"
)
def test_badandgoodstrictexceptions(self):
# "strict" complains about a non-exception passed in
self.assertRaises(
TypeError,
codecs.strict_errors,
42
)
# "strict" complains about the wrong exception type
self.assertRaises(
Exception,
codecs.strict_errors,
Exception("ouch")
)
# If the correct exception is passed in, "strict" raises it
self.assertRaises(
UnicodeEncodeError,
codecs.strict_errors,
UnicodeEncodeError("ascii", "\u3042", 0, 1, "ouch")
)
def test_badandgoodignoreexceptions(self):
# "ignore" complains about a non-exception passed in
self.assertRaises(
TypeError,
codecs.ignore_errors,
42
)
# "ignore" complains about the wrong exception type
self.assertRaises(
TypeError,
codecs.ignore_errors,
UnicodeError("ouch")
)
# If the correct exception is passed in, "ignore" returns an empty replacement
self.assertEqual(
codecs.ignore_errors(
UnicodeEncodeError("ascii", "\u3042", 0, 1, "ouch")),
("", 1)
)
self.assertEqual(
codecs.ignore_errors(
UnicodeDecodeError("ascii", bytearray(b"\xff"), 0, 1, "ouch")),
("", 1)
)
self.assertEqual(
codecs.ignore_errors(
UnicodeTranslateError("\u3042", 0, 1, "ouch")),
("", 1)
)
def test_badandgoodreplaceexceptions(self):
# "replace" complains about a non-exception passed in
self.assertRaises(
TypeError,
codecs.replace_errors,
42
)
# "replace" complains about the wrong exception type
self.assertRaises(
TypeError,
codecs.replace_errors,
UnicodeError("ouch")
)
self.assertRaises(
TypeError,
codecs.replace_errors,
BadObjectUnicodeEncodeError()
)
self.assertRaises(
TypeError,
codecs.replace_errors,
BadObjectUnicodeDecodeError()
)
# With the correct exception, "replace" returns an "?" or "\ufffd" replacement
self.assertEqual(
codecs.replace_errors(
UnicodeEncodeError("ascii", "\u3042", 0, 1, "ouch")),
("?", 1)
)
self.assertEqual(
codecs.replace_errors(
UnicodeDecodeError("ascii", bytearray(b"\xff"), 0, 1, "ouch")),
("\ufffd", 1)
)
self.assertEqual(
codecs.replace_errors(
UnicodeTranslateError("\u3042", 0, 1, "ouch")),
("\ufffd", 1)
)
def test_badandgoodxmlcharrefreplaceexceptions(self):
# "xmlcharrefreplace" complains about a non-exception passed in
self.assertRaises(
TypeError,
codecs.xmlcharrefreplace_errors,
42
)
# "xmlcharrefreplace" complains about the wrong exception types
self.assertRaises(
TypeError,
codecs.xmlcharrefreplace_errors,
UnicodeError("ouch")
)
# "xmlcharrefreplace" can only be used for encoding
self.assertRaises(
TypeError,
codecs.xmlcharrefreplace_errors,
UnicodeDecodeError("ascii", bytearray(b"\xff"), 0, 1, "ouch")
)
self.assertRaises(
TypeError,
codecs.xmlcharrefreplace_errors,
UnicodeTranslateError("\u3042", 0, 1, "ouch")
)
# Use the correct exception
cs = (0, 1, 9, 10, 99, 100, 999, 1000, 9999, 10000, 0x3042)
s = "".join(chr(c) for c in cs)
self.assertEqual(
codecs.xmlcharrefreplace_errors(
UnicodeEncodeError("ascii", s, 0, len(s), "ouch")
),
("".join("&#%d;" % ord(c) for c in s), len(s))
)
def test_badandgoodbackslashreplaceexceptions(self):
# "backslashreplace" complains about a non-exception passed in
self.assertRaises(
TypeError,
codecs.backslashreplace_errors,
42
)
# "backslashreplace" complains about the wrong exception types
self.assertRaises(
TypeError,
codecs.backslashreplace_errors,
UnicodeError("ouch")
)
# "backslashreplace" can only be used for encoding
self.assertRaises(
TypeError,
codecs.backslashreplace_errors,
UnicodeDecodeError("ascii", bytearray(b"\xff"), 0, 1, "ouch")
)
self.assertRaises(
TypeError,
codecs.backslashreplace_errors,
UnicodeTranslateError("\u3042", 0, 1, "ouch")
)
# Use the correct exception
self.assertEqual(
codecs.backslashreplace_errors(
UnicodeEncodeError("ascii", "\u3042", 0, 1, "ouch")),
("\\u3042", 1)
)
self.assertEqual(
codecs.backslashreplace_errors(
UnicodeEncodeError("ascii", "\x00", 0, 1, "ouch")),
("\\x00", 1)
)
self.assertEqual(
codecs.backslashreplace_errors(
UnicodeEncodeError("ascii", "\xff", 0, 1, "ouch")),
("\\xff", 1)
)
self.assertEqual(
codecs.backslashreplace_errors(
UnicodeEncodeError("ascii", "\u0100", 0, 1, "ouch")),
("\\u0100", 1)
)
self.assertEqual(
codecs.backslashreplace_errors(
UnicodeEncodeError("ascii", "\uffff", 0, 1, "ouch")),
("\\uffff", 1)
)
if SIZEOF_WCHAR_T > 0:
self.assertEqual(
codecs.backslashreplace_errors(
UnicodeEncodeError("ascii", "\U00010000",
0, 1, "ouch")),
("\\U00010000", 1)
)
self.assertEqual(
codecs.backslashreplace_errors(
UnicodeEncodeError("ascii", "\U0010ffff",
0, 1, "ouch")),
("\\U0010ffff", 1)
)
# Lone surrogates (regardless of unicode width)
self.assertEqual(
codecs.backslashreplace_errors(
UnicodeEncodeError("ascii", "\ud800", 0, 1, "ouch")),
("\\ud800", 1)
)
self.assertEqual(
codecs.backslashreplace_errors(
UnicodeEncodeError("ascii", "\udfff", 0, 1, "ouch")),
("\\udfff", 1)
)
def test_badhandlerresults(self):
results = ( 42, "foo", (1,2,3), ("foo", 1, 3), ("foo", None), ("foo",), ("foo", 1, 3), ("foo", None), ("foo",) )
encs = ("ascii", "latin-1", "iso-8859-1", "iso-8859-15")
for res in results:
codecs.register_error("test.badhandler", lambda x: res)
for enc in encs:
self.assertRaises(
TypeError,
"\u3042".encode,
enc,
"test.badhandler"
)
for (enc, bytes) in (
("ascii", b"\xff"),
("utf-8", b"\xff"),
("utf-7", b"+x-"),
("unicode-internal", b"\x00"),
):
with test.support.check_warnings():
# unicode-internal has been deprecated
self.assertRaises(
TypeError,
bytes.decode,
enc,
"test.badhandler"
)
def test_lookup(self):
self.assertEqual(codecs.strict_errors, codecs.lookup_error("strict"))
self.assertEqual(codecs.ignore_errors, codecs.lookup_error("ignore"))
self.assertEqual(codecs.strict_errors, codecs.lookup_error("strict"))
self.assertEqual(
codecs.xmlcharrefreplace_errors,
codecs.lookup_error("xmlcharrefreplace")
)
self.assertEqual(
codecs.backslashreplace_errors,
codecs.lookup_error("backslashreplace")
)
def test_unencodablereplacement(self):
def unencrepl(exc):
if isinstance(exc, UnicodeEncodeError):
return ("\u4242", exc.end)
else:
raise TypeError("don't know how to handle %r" % exc)
codecs.register_error("test.unencreplhandler", unencrepl)
for enc in ("ascii", "iso-8859-1", "iso-8859-15"):
self.assertRaises(
UnicodeEncodeError,
"\u4242".encode,
enc,
"test.unencreplhandler"
)
def test_badregistercall(self):
# enhance coverage of:
# Modules/_codecsmodule.c::register_error()
# Python/codecs.c::PyCodec_RegisterError()
self.assertRaises(TypeError, codecs.register_error, 42)
self.assertRaises(TypeError, codecs.register_error, "test.dummy", 42)
def test_badlookupcall(self):
# enhance coverage of:
# Modules/_codecsmodule.c::lookup_error()
self.assertRaises(TypeError, codecs.lookup_error)
def test_unknownhandler(self):
# enhance coverage of:
# Modules/_codecsmodule.c::lookup_error()
self.assertRaises(LookupError, codecs.lookup_error, "test.unknown")
def test_xmlcharrefvalues(self):
# enhance coverage of:
# Python/codecs.c::PyCodec_XMLCharRefReplaceErrors()
# and inline implementations
v = (1, 5, 10, 50, 100, 500, 1000, 5000, 10000, 50000)
if SIZEOF_WCHAR_T == 4:
v += (100000, 500000, 1000000)
s = "".join([chr(x) for x in v])
codecs.register_error("test.xmlcharrefreplace", codecs.xmlcharrefreplace_errors)
for enc in ("ascii", "iso-8859-15"):
for err in ("xmlcharrefreplace", "test.xmlcharrefreplace"):
s.encode(enc, err)
def test_decodehelper(self):
# enhance coverage of:
# Objects/unicodeobject.c::unicode_decode_call_errorhandler()
# and callers
self.assertRaises(LookupError, b"\xff".decode, "ascii", "test.unknown")
def baddecodereturn1(exc):
return 42
codecs.register_error("test.baddecodereturn1", baddecodereturn1)
self.assertRaises(TypeError, b"\xff".decode, "ascii", "test.baddecodereturn1")
self.assertRaises(TypeError, b"\\".decode, "unicode-escape", "test.baddecodereturn1")
self.assertRaises(TypeError, b"\\x0".decode, "unicode-escape", "test.baddecodereturn1")
self.assertRaises(TypeError, b"\\x0y".decode, "unicode-escape", "test.baddecodereturn1")
self.assertRaises(TypeError, b"\\Uffffeeee".decode, "unicode-escape", "test.baddecodereturn1")
self.assertRaises(TypeError, b"\\uyyyy".decode, "raw-unicode-escape", "test.baddecodereturn1")
def baddecodereturn2(exc):
return ("?", None)
codecs.register_error("test.baddecodereturn2", baddecodereturn2)
self.assertRaises(TypeError, b"\xff".decode, "ascii", "test.baddecodereturn2")
handler = PosReturn()
codecs.register_error("test.posreturn", handler.handle)
# Valid negative position
handler.pos = -1
self.assertEqual(b"\xff0".decode("ascii", "test.posreturn"), "<?>0")
# Valid negative position
handler.pos = -2
self.assertEqual(b"\xff0".decode("ascii", "test.posreturn"), "<?><?>")
# Negative position out of bounds
handler.pos = -3
self.assertRaises(IndexError, b"\xff0".decode, "ascii", "test.posreturn")
# Valid positive position
handler.pos = 1
self.assertEqual(b"\xff0".decode("ascii", "test.posreturn"), "<?>0")
# Largest valid positive position (one beyond end of input)
handler.pos = 2
self.assertEqual(b"\xff0".decode("ascii", "test.posreturn"), "<?>")
# Invalid positive position
handler.pos = 3
self.assertRaises(IndexError, b"\xff0".decode, "ascii", "test.posreturn")
# Restart at the "0"
handler.pos = 6
self.assertEqual(b"\\uyyyy0".decode("raw-unicode-escape", "test.posreturn"), "<?>0")
class D(dict):
def __getitem__(self, key):
raise ValueError
self.assertRaises(UnicodeError, codecs.charmap_decode, b"\xff", "strict", {0xff: None})
self.assertRaises(ValueError, codecs.charmap_decode, b"\xff", "strict", D())
self.assertRaises(TypeError, codecs.charmap_decode, b"\xff", "strict", {0xff: sys.maxunicode+1})
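    # Editor's note -- a minimal sketch (hypothetical handler name, not part of
    # the original test suite) of the (replacement, new_position) protocol that
    # the PosReturn checks above exercise: decoding resumes at new_position, so
    # a handler can skip ahead or re-scan input.
    #
    #     def skip_two(exc):
    #         if not isinstance(exc, UnicodeDecodeError):
    #             raise TypeError("don't know how to handle %r" % exc)
    #         return ("?", exc.start + 2)          # replace, then skip two bytes
    #     codecs.register_error("example.skiptwo", skip_two)
    #     b"a\xff\xffb".decode("ascii", "example.skiptwo")  # -> 'a?b'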
def test_encodehelper(self):
# enhance coverage of:
# Objects/unicodeobject.c::unicode_encode_call_errorhandler()
# and callers
self.assertRaises(LookupError, "\xff".encode, "ascii", "test.unknown")
def badencodereturn1(exc):
return 42
codecs.register_error("test.badencodereturn1", badencodereturn1)
self.assertRaises(TypeError, "\xff".encode, "ascii", "test.badencodereturn1")
def badencodereturn2(exc):
return ("?", None)
codecs.register_error("test.badencodereturn2", badencodereturn2)
self.assertRaises(TypeError, "\xff".encode, "ascii", "test.badencodereturn2")
handler = PosReturn()
codecs.register_error("test.posreturn", handler.handle)
# Valid negative position
handler.pos = -1
self.assertEqual("\xff0".encode("ascii", "test.posreturn"), b"<?>0")
# Valid negative position
handler.pos = -2
self.assertEqual("\xff0".encode("ascii", "test.posreturn"), b"<?><?>")
# Negative position out of bounds
handler.pos = -3
self.assertRaises(IndexError, "\xff0".encode, "ascii", "test.posreturn")
# Valid positive position
handler.pos = 1
self.assertEqual("\xff0".encode("ascii", "test.posreturn"), b"<?>0")
        # Largest valid positive position (one beyond end of input)
handler.pos = 2
self.assertEqual("\xff0".encode("ascii", "test.posreturn"), b"<?>")
# Invalid positive position
handler.pos = 3
self.assertRaises(IndexError, "\xff0".encode, "ascii", "test.posreturn")
handler.pos = 0
class D(dict):
def __getitem__(self, key):
raise ValueError
for err in ("strict", "replace", "xmlcharrefreplace", "backslashreplace", "test.posreturn"):
self.assertRaises(UnicodeError, codecs.charmap_encode, "\xff", err, {0xff: None})
self.assertRaises(ValueError, codecs.charmap_encode, "\xff", err, D())
self.assertRaises(TypeError, codecs.charmap_encode, "\xff", err, {0xff: 300})
def test_translatehelper(self):
# enhance coverage of:
# Objects/unicodeobject.c::unicode_encode_call_errorhandler()
# and callers
# (Unfortunately the errors argument is not directly accessible
# from Python, so we can't test that much)
class D(dict):
def __getitem__(self, key):
raise ValueError
#self.assertRaises(ValueError, "\xff".translate, D())
self.assertRaises(TypeError, "\xff".translate, {0xff: sys.maxunicode+1})
self.assertRaises(TypeError, "\xff".translate, {0xff: ()})
def test_bug828737(self):
charmap = {
ord("&"): "&",
ord("<"): "<",
ord(">"): ">",
ord('"'): """,
}
for n in (1, 10, 100, 1000):
text = 'abc<def>ghi'*n
text.translate(charmap)
def test_mutatingdecodehandler(self):
baddata = [
("ascii", b"\xff"),
("utf-7", b"++"),
("utf-8", b"\xff"),
("utf-16", b"\xff"),
("utf-32", b"\xff"),
("unicode-escape", b"\\u123g"),
("raw-unicode-escape", b"\\u123g"),
("unicode-internal", b"\xff"),
]
def replacing(exc):
if isinstance(exc, UnicodeDecodeError):
exc.object = 42
return ("\u4242", 0)
else:
raise TypeError("don't know how to handle %r" % exc)
codecs.register_error("test.replacing", replacing)
with test.support.check_warnings():
# unicode-internal has been deprecated
for (encoding, data) in baddata:
with self.assertRaises(TypeError):
data.decode(encoding, "test.replacing")
def mutating(exc):
if isinstance(exc, UnicodeDecodeError):
exc.object[:] = b""
return ("\u4242", 0)
else:
raise TypeError("don't know how to handle %r" % exc)
codecs.register_error("test.mutating", mutating)
# If the decoder doesn't pick up the modified input the following
# will lead to an endless loop
with test.support.check_warnings():
# unicode-internal has been deprecated
for (encoding, data) in baddata:
with self.assertRaises(TypeError):
                    data.decode(encoding, "test.mutating")
def test_main():
test.support.run_unittest(CodecCallbackTest)
if __name__ == "__main__":
test_main()
| bsd-3-clause |
patrickwind/My_Blog | venv/lib/python2.7/site-packages/flask/ctx.py | 4 | 14199 | # -*- coding: utf-8 -*-
"""
flask.ctx
~~~~~~~~~
Implements the objects required to keep the context.
:copyright: (c) 2015 by Armin Ronacher.
:license: BSD, see LICENSE for more details.
"""
from __future__ import with_statement
import sys
from functools import update_wrapper
from werkzeug.exceptions import HTTPException
from .globals import _request_ctx_stack, _app_ctx_stack
from .signals import appcontext_pushed, appcontext_popped
from ._compat import BROKEN_PYPY_CTXMGR_EXIT, reraise
class _AppCtxGlobals(object):
"""A plain object."""
def get(self, name, default=None):
return self.__dict__.get(name, default)
def __contains__(self, item):
return item in self.__dict__
def __iter__(self):
return iter(self.__dict__)
def __repr__(self):
top = _app_ctx_stack.top
if top is not None:
return '<flask.g of %r>' % top.app.name
return object.__repr__(self)
def after_this_request(f):
"""Executes a function after this request. This is useful to modify
response objects. The function is passed the response object and has
to return the same or a new one.
Example::
@app.route('/')
def index():
@after_this_request
def add_header(response):
response.headers['X-Foo'] = 'Parachute'
return response
return 'Hello World!'
This is more useful if a function other than the view function wants to
modify a response. For instance think of a decorator that wants to add
some headers without converting the return value into a response object.
.. versionadded:: 0.9
"""
_request_ctx_stack.top._after_request_functions.append(f)
return f
def copy_current_request_context(f):
"""A helper function that decorates a function to retain the current
request context. This is useful when working with greenlets. The moment
the function is decorated a copy of the request context is created and
then pushed when the function is called.
Example::
import gevent
from flask import copy_current_request_context
@app.route('/')
def index():
@copy_current_request_context
def do_some_work():
# do some work here, it can access flask.request like you
# would otherwise in the view function.
...
gevent.spawn(do_some_work)
return 'Regular response'
.. versionadded:: 0.10
"""
top = _request_ctx_stack.top
if top is None:
raise RuntimeError('This decorator can only be used at local scopes '
'when a request context is on the stack. For instance within '
'view functions.')
reqctx = top.copy()
def wrapper(*args, **kwargs):
with reqctx:
return f(*args, **kwargs)
return update_wrapper(wrapper, f)
def has_request_context():
"""If you have code that wants to test if a request context is there or
not this function can be used. For instance, you may want to take advantage
of request information if the request object is available, but fail
silently if it is unavailable.
::
class User(db.Model):
def __init__(self, username, remote_addr=None):
self.username = username
if remote_addr is None and has_request_context():
remote_addr = request.remote_addr
self.remote_addr = remote_addr
Alternatively you can also just test any of the context bound objects
    (such as :class:`request` or :class:`g` for truthiness)::
class User(db.Model):
def __init__(self, username, remote_addr=None):
self.username = username
if remote_addr is None and request:
remote_addr = request.remote_addr
self.remote_addr = remote_addr
.. versionadded:: 0.7
"""
return _request_ctx_stack.top is not None
def has_app_context():
"""Works like :func:`has_request_context` but for the application
context. You can also just do a boolean check on the
:data:`current_app` object instead.
.. versionadded:: 0.9
"""
return _app_ctx_stack.top is not None
class AppContext(object):
"""The application context binds an application object implicitly
to the current thread or greenlet, similar to how the
:class:`RequestContext` binds request information. The application
context is also implicitly created if a request context is created
but the application is not on top of the individual application
context.
"""
def __init__(self, app):
self.app = app
self.url_adapter = app.create_url_adapter(None)
self.g = app.app_ctx_globals_class()
# Like request context, app contexts can be pushed multiple times
        # but there, a basic "refcount" is enough to track them.
self._refcnt = 0
def push(self):
"""Binds the app context to the current context."""
self._refcnt += 1
if hasattr(sys, 'exc_clear'):
sys.exc_clear()
_app_ctx_stack.push(self)
appcontext_pushed.send(self.app)
def pop(self, exc=None):
"""Pops the app context."""
self._refcnt -= 1
if self._refcnt <= 0:
if exc is None:
exc = sys.exc_info()[1]
self.app.do_teardown_appcontext(exc)
rv = _app_ctx_stack.pop()
assert rv is self, 'Popped wrong app context. (%r instead of %r)' \
% (rv, self)
appcontext_popped.send(self.app)
def __enter__(self):
self.push()
return self
def __exit__(self, exc_type, exc_value, tb):
self.pop(exc_value)
if BROKEN_PYPY_CTXMGR_EXIT and exc_type is not None:
reraise(exc_type, exc_value, tb)
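# Editor's note -- minimal usage sketch (assumes an existing Flask application
# object ``app``; not part of the original module). An AppContext is normally
# entered through ``Flask.app_context()`` rather than constructed directly:
#
#     with app.app_context():
#         # current_app and g are bound to ``app`` for the duration of the block
#         debug = current_app.config.get('DEBUG')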
class RequestContext(object):
"""The request context contains all request relevant information. It is
created at the beginning of the request and pushed to the
`_request_ctx_stack` and removed at the end of it. It will create the
URL adapter and request object for the WSGI environment provided.
Do not attempt to use this class directly, instead use
:meth:`~flask.Flask.test_request_context` and
:meth:`~flask.Flask.request_context` to create this object.
When the request context is popped, it will evaluate all the
functions registered on the application for teardown execution
(:meth:`~flask.Flask.teardown_request`).
The request context is automatically popped at the end of the request
for you. In debug mode the request context is kept around if
exceptions happen so that interactive debuggers have a chance to
introspect the data. With 0.4 this can also be forced for requests
that did not fail and outside of ``DEBUG`` mode. By setting
``'flask._preserve_context'`` to ``True`` on the WSGI environment the
context will not pop itself at the end of the request. This is used by
the :meth:`~flask.Flask.test_client` for example to implement the
deferred cleanup functionality.
You might find this helpful for unittests where you need the
information from the context local around for a little longer. Make
sure to properly :meth:`~werkzeug.LocalStack.pop` the stack yourself in
that situation, otherwise your unittests will leak memory.
"""
def __init__(self, app, environ, request=None):
self.app = app
if request is None:
request = app.request_class(environ)
self.request = request
self.url_adapter = app.create_url_adapter(self.request)
self.flashes = None
self.session = None
# Request contexts can be pushed multiple times and interleaved with
# other request contexts. Now only if the last level is popped we
# get rid of them. Additionally if an application context is missing
# one is created implicitly so for each level we add this information
self._implicit_app_ctx_stack = []
# indicator if the context was preserved. Next time another context
# is pushed the preserved context is popped.
self.preserved = False
# remembers the exception for pop if there is one in case the context
# preservation kicks in.
self._preserved_exc = None
# Functions that should be executed after the request on the response
# object. These will be called before the regular "after_request"
# functions.
self._after_request_functions = []
self.match_request()
def _get_g(self):
return _app_ctx_stack.top.g
def _set_g(self, value):
_app_ctx_stack.top.g = value
g = property(_get_g, _set_g)
del _get_g, _set_g
def copy(self):
"""Creates a copy of this request context with the same request object.
This can be used to move a request context to a different greenlet.
Because the actual request object is the same this cannot be used to
move a request context to a different thread unless access to the
request object is locked.
.. versionadded:: 0.10
"""
return self.__class__(self.app,
environ=self.request.environ,
request=self.request
)
def match_request(self):
"""Can be overridden by a subclass to hook into the matching
of the request.
"""
try:
url_rule, self.request.view_args = \
self.url_adapter.match(return_rule=True)
self.request.url_rule = url_rule
except HTTPException as e:
self.request.routing_exception = e
def push(self):
"""Binds the request context to the current context."""
# If an exception occurs in debug mode or if context preservation is
# activated under exception situations exactly one context stays
# on the stack. The rationale is that you want to access that
# information under debug situations. However if someone forgets to
# pop that context again we want to make sure that on the next push
# it's invalidated, otherwise we run at risk that something leaks
# memory. This is usually only a problem in test suite since this
# functionality is not active in production environments.
top = _request_ctx_stack.top
if top is not None and top.preserved:
top.pop(top._preserved_exc)
# Before we push the request context we have to ensure that there
# is an application context.
app_ctx = _app_ctx_stack.top
if app_ctx is None or app_ctx.app != self.app:
app_ctx = self.app.app_context()
app_ctx.push()
self._implicit_app_ctx_stack.append(app_ctx)
else:
self._implicit_app_ctx_stack.append(None)
if hasattr(sys, 'exc_clear'):
sys.exc_clear()
_request_ctx_stack.push(self)
# Open the session at the moment that the request context is
# available. This allows a custom open_session method to use the
# request context (e.g. code that access database information
# stored on `g` instead of the appcontext).
self.session = self.app.open_session(self.request)
if self.session is None:
self.session = self.app.make_null_session()
def pop(self, exc=None):
"""Pops the request context and unbinds it by doing that. This will
also trigger the execution of functions registered by the
:meth:`~flask.Flask.teardown_request` decorator.
.. versionchanged:: 0.9
Added the `exc` argument.
"""
app_ctx = self._implicit_app_ctx_stack.pop()
clear_request = False
if not self._implicit_app_ctx_stack:
self.preserved = False
self._preserved_exc = None
if exc is None:
exc = sys.exc_info()[1]
self.app.do_teardown_request(exc)
# If this interpreter supports clearing the exception information
# we do that now. This will only go into effect on Python 2.x,
# on 3.x it disappears automatically at the end of the exception
# stack.
if hasattr(sys, 'exc_clear'):
sys.exc_clear()
request_close = getattr(self.request, 'close', None)
if request_close is not None:
request_close()
clear_request = True
rv = _request_ctx_stack.pop()
assert rv is self, 'Popped wrong request context. (%r instead of %r)' \
% (rv, self)
# get rid of circular dependencies at the end of the request
# so that we don't require the GC to be active.
if clear_request:
rv.request.environ['werkzeug.request'] = None
# Get rid of the app as well if necessary.
if app_ctx is not None:
app_ctx.pop(exc)
def auto_pop(self, exc):
if self.request.environ.get('flask._preserve_context') or \
(exc is not None and self.app.preserve_context_on_exception):
self.preserved = True
self._preserved_exc = exc
else:
self.pop(exc)
def __enter__(self):
self.push()
return self
def __exit__(self, exc_type, exc_value, tb):
# do not pop the request stack if we are in debug mode and an
# exception happened. This will allow the debugger to still
# access the request object in the interactive shell. Furthermore
# the context can be force kept alive for the test client.
# See flask.testing for how this works.
self.auto_pop(exc_value)
if BROKEN_PYPY_CTXMGR_EXIT and exc_type is not None:
reraise(exc_type, exc_value, tb)
def __repr__(self):
return '<%s \'%s\' [%s] of %s>' % (
self.__class__.__name__,
self.request.url,
self.request.method,
self.app.name,
)
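# Editor's note -- hedged usage sketch (assumes an existing Flask application
# object ``app``; not part of the original module). A RequestContext is usually
# obtained through ``Flask.test_request_context()`` or pushed by the WSGI entry
# point, not instantiated directly:
#
#     with app.test_request_context('/hello', method='POST'):
#         assert request.path == '/hello'
#         assert request.method == 'POST'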
| gpl-2.0 |
nyaruka/smartmin | smartmin/users/views.py | 1 | 20916 | import random
import string
from datetime import timedelta
from django import forms
from django.conf import settings
from django.contrib import messages, auth
from django.contrib.auth import get_user_model
from django.contrib.auth.models import Group
from django.contrib.auth.views import LoginView
from django.core.mail import send_mail
from django.urls import reverse
from django.http import HttpResponseRedirect
from django.template import loader
from django.utils import timezone
from django.utils.translation import ugettext_lazy as _
from django.views.generic import TemplateView
from smartmin.email import build_email_context
from smartmin.views import SmartCRUDL, SmartView, SmartFormView, SmartListView, SmartCreateView, SmartUpdateView
from .models import RecoveryToken, PasswordHistory, FailedLogin, is_password_complex
class UserForm(forms.ModelForm):
new_password = forms.CharField(label=_("New Password"), widget=forms.PasswordInput, strip=False)
groups = forms.ModelMultipleChoiceField(widget=forms.CheckboxSelectMultiple,
queryset=Group.objects.all(), required=False)
def clean_new_password(self):
password = self.cleaned_data['new_password']
# if they specified a new password
if password and not is_password_complex(password):
raise forms.ValidationError(_("Passwords must have at least 8 characters, including one uppercase, "
"one lowercase and one number"))
return password
def save(self, commit=True):
"""
Overloaded so we can save any new password that is included.
"""
is_new_user = self.instance.pk is None
user = super(UserForm, self).save(commit)
# new users should be made active by default
if is_new_user:
user.is_active = True
# if we had a new password set, use it
new_pass = self.cleaned_data['new_password']
if new_pass:
user.set_password(new_pass)
if commit:
user.save()
return user
class Meta:
model = get_user_model()
fields = ('username', 'new_password', 'first_name', 'last_name', 'email', 'groups', 'is_active')
class UserUpdateForm(UserForm):
new_password = forms.CharField(label=_("New Password"), widget=forms.PasswordInput, required=False, strip=False)
def clean_new_password(self):
password = self.cleaned_data['new_password']
if password and not is_password_complex(password):
raise forms.ValidationError(_("Passwords must have at least 8 characters, including one uppercase, "
"one lowercase and one number"))
if password and PasswordHistory.is_password_repeat(self.instance, password):
raise forms.ValidationError(_("You have used this password before in the past year, "
"please use a new password."))
return password
class UserProfileForm(UserForm):
old_password = forms.CharField(label=_("Password"), widget=forms.PasswordInput, required=False, strip=False)
new_password = forms.CharField(label=_("New Password"), widget=forms.PasswordInput, required=False, strip=False)
confirm_new_password = forms.CharField(
label=_("Confirm Password"), widget=forms.PasswordInput, required=False, strip=False
)
def clean_old_password(self):
user = self.instance
        if not user.check_password(self.cleaned_data['old_password']):
raise forms.ValidationError(_("Please enter your password to save changes."))
return self.cleaned_data['old_password']
def clean_confirm_new_password(self):
if 'new_password' not in self.cleaned_data:
return None
if not self.cleaned_data['confirm_new_password'] and self.cleaned_data['new_password']:
raise forms.ValidationError(_("Confirm the new password by filling the this field"))
if self.cleaned_data['new_password'] != self.cleaned_data['confirm_new_password']:
raise forms.ValidationError(_("New password doesn't match with its confirmation"))
password = self.cleaned_data['new_password']
if password and not is_password_complex(password):
raise forms.ValidationError(_("Passwords must have at least 8 characters, including one uppercase, "
"one lowercase and one number"))
if password and PasswordHistory.is_password_repeat(self.instance, password):
raise forms.ValidationError(_("You have used this password before in the past year, "
"please use a new password."))
return self.cleaned_data['new_password']
class UserForgetForm(forms.Form):
email = forms.EmailField(label=_("Your Email"),)
def clean_email(self):
email = self.cleaned_data['email'].strip()
allow_email_recovery = getattr(settings, 'USER_ALLOW_EMAIL_RECOVERY', True)
if not allow_email_recovery:
raise forms.ValidationError(_("E-mail recovery is not supported, "
"please contact the website administrator to reset your password manually."))
return email
class SetPasswordForm(UserForm):
old_password = forms.CharField(label=_("Current Password"), widget=forms.PasswordInput, required=True, strip=False,
help_text=_("Your current password"))
new_password = forms.CharField(label=_("New Password"), widget=forms.PasswordInput, required=True,
help_text=_("Your new password."), strip=False)
confirm_new_password = forms.CharField(label=_("Confirm new Password"), widget=forms.PasswordInput, required=True,
help_text=_("Confirm your new password."), strip=False)
def clean_old_password(self):
user = self.instance
if not user.check_password(self.cleaned_data['old_password']):
raise forms.ValidationError(_("Please enter your password to save changes"))
return self.cleaned_data['old_password']
def clean_confirm_new_password(self):
if 'new_password' not in self.cleaned_data:
return None
if not self.cleaned_data['confirm_new_password'] and self.cleaned_data['new_password']:
raise forms.ValidationError(_("Confirm your new password by entering it here"))
if self.cleaned_data['new_password'] != self.cleaned_data['confirm_new_password']:
raise forms.ValidationError(_("Mismatch between your new password and confirmation, try again"))
password = self.cleaned_data['new_password']
if password and not is_password_complex(password):
raise forms.ValidationError(_("Passwords must have at least 8 characters, including one uppercase, "
"one lowercase and one number"))
if password and PasswordHistory.is_password_repeat(self.instance, password):
raise forms.ValidationError(_("You have used this password before in the past year, "
"please use a new password."))
return self.cleaned_data['new_password']
class UserCRUDL(SmartCRUDL):
model = get_user_model()
permissions = True
actions = ('create', 'list', 'update', 'profile', 'forget', 'recover', 'expired', 'failed', 'newpassword', 'mimic')
class List(SmartListView):
search_fields = ('username__icontains', 'first_name__icontains', 'last_name__icontains')
fields = ('username', 'name', 'group', 'last_login')
link_fields = ('username', 'name')
default_order = 'username'
add_button = True
template_name = "smartmin/users/user_list.html"
def get_context_data(self, **kwargs):
context = super(UserCRUDL.List, self).get_context_data(**kwargs)
context['groups'] = Group.objects.all()
group_id = self.request.POST.get('group_id', self.request.GET.get('group_id', 0))
context['group_id'] = int(group_id)
return context
def get_group(self, obj):
return ", ".join([group.name for group in obj.groups.all()])
def get_queryset(self, **kwargs):
queryset = super(UserCRUDL.List, self).get_queryset(**kwargs)
group_id = self.request.POST.get('group_id', self.request.GET.get('group_id', 0))
group_id = int(group_id)
# filter by the group
if group_id:
queryset = queryset.filter(groups=group_id)
# ignore superusers and staff users
return queryset.exclude(is_staff=True).exclude(is_superuser=True).exclude(password=None)
def get_name(self, obj):
return obj.get_full_name()
class Create(SmartCreateView):
form_class = UserForm
fields = ('username', 'new_password', 'first_name', 'last_name', 'email', 'groups')
success_message = _("New user created successfully.")
field_config = {
'groups': dict(label=_("Groups"),
help=_("Users will only get those permissions that are allowed for their group.")),
'new_password': dict(label=_("Password"), help=_("Set the user's initial password here.")),
}
def post_save(self, obj):
"""
Make sure our groups are up to date
"""
if 'groups' in self.form.cleaned_data:
for group in self.form.cleaned_data['groups']:
obj.groups.add(group)
return obj
class Update(SmartUpdateView):
form_class = UserUpdateForm
template_name = "smartmin/users/user_update.html"
success_message = "User saved successfully."
fields = ('username', 'new_password', 'first_name', 'last_name', 'email', 'groups', 'is_active', 'last_login')
field_config = {
'last_login': dict(readonly=True, label=_("Last Login")),
'is_active': dict(label=_("Is Active"), help=_("Whether this user is allowed to log into the site")),
'groups': dict(label=_("Groups"),
help=_("Users will only get those permissions that are allowed for their group")),
'new_password': dict(label=_("New Password"),
help=_("You can reset the user's password by entering a new password here")),
}
def post_save(self, obj):
"""
Make sure our groups are up to date
"""
if 'groups' in self.form.cleaned_data:
obj.groups.clear()
for group in self.form.cleaned_data['groups']:
obj.groups.add(group)
# if a new password was set, reset our failed logins
if 'new_password' in self.form.cleaned_data and self.form.cleaned_data['new_password']:
FailedLogin.objects.filter(username__iexact=self.object.username).delete()
PasswordHistory.objects.create(user=obj, password=obj.password)
return obj
class Profile(SmartUpdateView):
form_class = UserProfileForm
success_message = "User profile saved successfully."
fields = ('username', 'old_password', 'new_password', 'confirm_new_password',
'first_name', 'last_name', 'email')
field_config = {
'username': dict(readonly=True, label=_("Username")),
'old_password': dict(label=_("Password"), help=_("Your password")),
'new_password': dict(label=_("New Password"), help=_("If you want to set a new password, enter it here")),
'confirm_new_password': dict(label=_("Confirm New Password"), help=_("Confirm your new password")),
}
def post_save(self, obj):
obj = super(UserCRUDL.Profile, self).post_save(obj)
if 'new_password' in self.form.cleaned_data and self.form.cleaned_data['new_password']:
FailedLogin.objects.filter(username__iexact=self.object.username).delete()
PasswordHistory.objects.create(user=obj, password=obj.password)
return obj
def get_object(self, queryset=None):
return self.request.user
def derive_title(self):
return _("Edit your profile")
class Forget(SmartFormView):
title = _("Password Recovery")
template_name = 'smartmin/users/user_forget.html'
form_class = UserForgetForm
permission = None
success_message = _("An Email has been sent to your account with further instructions.")
success_url = "@users.user_login"
fields = ('email', )
def form_valid(self, form):
email = form.cleaned_data['email']
hostname = getattr(settings, 'HOSTNAME', self.request.get_host())
col_index = hostname.find(':')
domain = hostname[:col_index] if col_index > 0 else hostname
from_email = getattr(settings, 'DEFAULT_FROM_EMAIL', 'website@%s' % domain)
user_email_template = getattr(settings, "USER_FORGET_EMAIL_TEMPLATE", "smartmin/users/user_email.txt")
user = get_user_model().objects.filter(email__iexact=email).first()
context = build_email_context(self.request, user)
if user:
token = ''.join(random.choice(string.ascii_uppercase + string.digits) for x in range(32))
RecoveryToken.objects.create(token=token, user=user)
email_template = loader.get_template(user_email_template)
FailedLogin.objects.filter(username__iexact=user.username).delete()
context['user'] = user
context['path'] = "%s" % reverse('users.user_recover', args=[token])
send_mail(_('Password Recovery Request'), email_template.render(context), from_email,
[email], fail_silently=False)
response = super(UserCRUDL.Forget, self).form_valid(form)
return response
class Newpassword(SmartUpdateView):
form_class = SetPasswordForm
fields = ('old_password', 'new_password', 'confirm_new_password')
title = _("Pick a new password")
template_name = 'smartmin/users/user_newpassword.html'
success_message = _("Your password has successfully been updated, thank you.")
def get_context_data(self, *args, **kwargs):
context_data = super(UserCRUDL.Newpassword, self).get_context_data(*args, **kwargs)
context_data['expire_days'] = getattr(settings, 'USER_PASSWORD_EXPIRATION', -1)
context_data['window_days'] = getattr(settings, 'USER_PASSWORD_REPEAT_WINDOW', -1)
return context_data
def has_permission(self, request, *args, **kwargs):
return request.user.is_authenticated
def get_object(self, queryset=None):
return self.request.user
def post_save(self, obj):
obj = super(UserCRUDL.Newpassword, self).post_save(obj)
PasswordHistory.objects.create(user=obj, password=obj.password)
return obj
def get_success_url(self):
return settings.LOGIN_REDIRECT_URL
class Mimic(SmartUpdateView):
fields = ('id',)
def derive_success_message(self):
return _("You are now logged in as %s") % self.object.username
def pre_process(self, request, *args, **kwargs):
user = self.get_object()
Login.as_view()(request)
# After logging in it is important to change the user stored in the session
# otherwise the user will remain the same
request.session[auth.SESSION_KEY] = user.id
request.session[auth.HASH_SESSION_KEY] = user.get_session_auth_hash()
return HttpResponseRedirect(settings.LOGIN_REDIRECT_URL)
class Recover(SmartUpdateView):
form_class = SetPasswordForm
permission = None
success_message = _("Password Updated Successfully. Now you can log in using your new password.")
success_url = '@users.user_login'
fields = ('new_password', 'confirm_new_password')
title = _("Reset your Password")
template_name = 'smartmin/users/user_recover.html'
@classmethod
def derive_url_pattern(cls, path, action):
return r'^%s/%s/(?P<token>\w+)/$' % (path, action)
def pre_process(self, request, *args, **kwargs):
token = self.kwargs.get('token')
validity_time = timezone.now() - timedelta(hours=48)
recovery_token = RecoveryToken.objects.filter(created_on__gt=validity_time, token=token)
if not recovery_token:
messages.info(request, _("Your link has expired for security reasons. "
"Please reinitiate the process by entering your email here."))
return HttpResponseRedirect(reverse("users.user_forget"))
return super(UserCRUDL.Recover, self).pre_process(request, args, kwargs)
def get_object(self, queryset=None):
token = self.kwargs.get('token')
recovery_token = RecoveryToken.objects.get(token=token)
return recovery_token.user
def post_save(self, obj):
obj = super(UserCRUDL.Recover, self).post_save(obj)
validity_time = timezone.now() - timedelta(hours=48)
RecoveryToken.objects.filter(user=obj).delete()
RecoveryToken.objects.filter(created_on__lt=validity_time).delete()
PasswordHistory.objects.create(user=obj, password=obj.password)
return obj
class Expired(SmartView, TemplateView):
permission = None
template_name = 'smartmin/users/user_expired.html'
class Failed(SmartView, TemplateView):
permission = None
template_name = 'smartmin/users/user_failed.html'
def get_context_data(self, *args, **kwargs):
context = super(UserCRUDL.Failed, self).get_context_data(*args, **kwargs)
lockout_timeout = getattr(settings, 'USER_LOCKOUT_TIMEOUT', 10)
failed_login_limit = getattr(settings, 'USER_FAILED_LOGIN_LIMIT', 5)
allow_email_recovery = getattr(settings, 'USER_ALLOW_EMAIL_RECOVERY', True)
context['lockout_timeout'] = lockout_timeout
context['failed_login_limit'] = failed_login_limit
context['allow_email_recovery'] = allow_email_recovery
return context
class Login(LoginView):
template_name = 'smartmin/users/login.html'
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['allow_email_recovery'] = getattr(settings, 'USER_ALLOW_EMAIL_RECOVERY', True)
return context
def post(self, request, *args, **kwargs):
form = self.get_form()
# clean form data
form_is_valid = form.is_valid()
lockout_timeout = getattr(settings, 'USER_LOCKOUT_TIMEOUT', 10)
failed_login_limit = getattr(settings, 'USER_FAILED_LOGIN_LIMIT', 5)
username = self.get_username(form)
if not username:
return self.form_invalid(form)
user = get_user_model().objects.filter(username__iexact=username).first()
valid_password = False
# this could be a valid login by a user
if user:
# incorrect password? create a failed login token
valid_password = user.check_password(form.cleaned_data.get('password'))
if not user or not valid_password:
FailedLogin.objects.create(username=username)
bad_interval = timezone.now() - timedelta(minutes=lockout_timeout)
failures = FailedLogin.objects.filter(username__iexact=username)
# if the failures reset after a period of time, then limit our query to that interval
if lockout_timeout > 0:
failures = failures.filter(failed_on__gt=bad_interval)
# if there are too many failed logins, take them to the failed page
if len(failures) >= failed_login_limit:
return HttpResponseRedirect(reverse('users.user_failed'))
# pass through the normal login process
if form_is_valid:
return self.form_valid(form)
else:
return self.form_invalid(form)
def form_valid(self, form):
# clean up any failed logins for this user
FailedLogin.objects.filter(username__iexact=self.get_username(form)).delete()
return super().form_valid(form)
def get_username(self, form):
return form.cleaned_data.get('username')
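# Editor's note -- the login throttling in Login.post() above is driven by
# optional Django settings; the values shown are the fallbacks used by the
# getattr() calls in this module (adjust per deployment):
#
#     USER_FAILED_LOGIN_LIMIT = 5        # failures before redirecting to users.user_failed
#     USER_LOCKOUT_TIMEOUT = 10          # minutes a failed attempt counts against the limit
#     USER_ALLOW_EMAIL_RECOVERY = True   # expose the password-recovery-by-email flow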
| bsd-3-clause |
arpitgoyaiitkgp/django-mongoengine | tests/views/urls.py | 4 | 5147 | from __future__ import absolute_import
from django.conf.urls import patterns, url
from django.views.decorators.cache import cache_page
from django_mongoengine.views import TemplateView
from . import views
urlpatterns = patterns('',
# TemplateView
(r'^template/no_template/$',
TemplateView.as_view()),
(r'^template/simple/(?P<foo>\w+)/$',
TemplateView.as_view(template_name='views/about.html')),
(r'^template/custom/(?P<foo>\w+)/$',
views.CustomTemplateView.as_view(template_name='views/about.html')),
(r'^template/cached/(?P<foo>\w+)/$',
cache_page(2.0)(TemplateView.as_view(template_name='views/about.html'))),
# DetailView
(r'^detail/obj/$',
views.ObjectDetail.as_view()),
url(r'^detail/artist/(?P<pk>\d+)/$',
views.ArtistDetail.as_view(),
name="artist_detail"),
url(r'^detail/author/(?P<pk>\d+)/$',
views.AuthorDetail.as_view(),
name="author_detail"),
(r'^detail/author/bycustompk/(?P<foo>\d+)/$',
views.AuthorDetail.as_view(pk_url_kwarg='foo')),
(r'^detail/author/byslug/(?P<slug>[\w-]+)/$',
views.AuthorDetail.as_view()),
(r'^detail/author/bycustomslug/(?P<foo>[\w-]+)/$',
views.AuthorDetail.as_view(slug_url_kwarg='foo')),
(r'^detail/author/(?P<pk>\d+)/template_name_suffix/$',
views.AuthorDetail.as_view(template_name_suffix='_view')),
(r'^detail/author/(?P<pk>\d+)/template_name/$',
views.AuthorDetail.as_view(template_name='views/about.html')),
(r'^detail/author/(?P<pk>\d+)/context_object_name/$',
views.AuthorDetail.as_view(context_object_name='thingy')),
(r'^detail/author/(?P<pk>\d+)/dupe_context_object_name/$',
views.AuthorDetail.as_view(context_object_name='object')),
(r'^detail/page/(?P<pk>\d+)/field/$',
views.PageDetail.as_view()),
(r'^detail/author/invalid/url/$',
views.AuthorDetail.as_view()),
(r'^detail/author/invalid/qs/$',
views.AuthorDetail.as_view(queryset=None)),
# Create/UpdateView
(r'^edit/artists/create/$',
views.ArtistCreate.as_view()),
(r'^edit/artists/(?P<pk>\d+)/update/$',
views.ArtistUpdate.as_view()),
(r'^edit/authors/create/naive/$',
views.NaiveAuthorCreate.as_view()),
(r'^edit/authors/create/redirect/$',
views.NaiveAuthorCreate.as_view(success_url='/edit/authors/create/')),
(r'^edit/authors/create/interpolate_redirect/$',
views.NaiveAuthorCreate.as_view(success_url='/edit/author/%(id)s/update/')),
(r'^edit/authors/create/$',
views.AuthorCreate.as_view()),
(r'^edit/authors/create/special/$',
views.SpecializedAuthorCreate.as_view()),
(r'^edit/author/(?P<pk>\d+)/update/naive/$',
views.NaiveAuthorUpdate.as_view()),
(r'^edit/author/(?P<pk>\d+)/update/redirect/$',
views.NaiveAuthorUpdate.as_view(success_url='/edit/authors/create/')),
(r'^edit/author/(?P<pk>\d+)/update/interpolate_redirect/$',
views.NaiveAuthorUpdate.as_view(success_url='/edit/author/%(id)s/update/')),
(r'^edit/author/(?P<pk>\d+)/update/$',
views.AuthorUpdate.as_view()),
(r'^edit/author/update/$',
views.OneAuthorUpdate.as_view()),
(r'^edit/author/(?P<pk>\d+)/update/special/$',
views.SpecializedAuthorUpdate.as_view()),
(r'^edit/author/(?P<pk>\d+)/delete/naive/$',
views.NaiveAuthorDelete.as_view()),
(r'^edit/author/(?P<pk>\d+)/delete/redirect/$',
views.NaiveAuthorDelete.as_view(success_url='/edit/authors/create/')),
(r'^edit/author/(?P<pk>\d+)/delete/$',
views.AuthorDelete.as_view()),
(r'^edit/author/(?P<pk>\d+)/delete/special/$',
views.SpecializedAuthorDelete.as_view()),
# ListView
(r'^list/dict/$',
views.DictList.as_view()),
(r'^list/dict/paginated/$',
views.DictList.as_view(paginate_by=1)),
url(r'^list/artists/$',
views.ArtistList.as_view(),
name="artists_list"),
url(r'^list/authors/$',
views.AuthorList.as_view(),
name="authors_list"),
(r'^list/authors/paginated/$',
views.AuthorList.as_view(paginate_by=30)),
(r'^list/authors/paginated/(?P<page>\d+)/$',
views.AuthorList.as_view(paginate_by=30)),
(r'^list/authors/notempty/$',
views.AuthorList.as_view(allow_empty=False)),
(r'^list/authors/template_name/$',
views.AuthorList.as_view(template_name='views/list.html')),
(r'^list/authors/template_name_suffix/$',
views.AuthorList.as_view(template_name_suffix='_objects')),
(r'^list/authors/context_object_name/$',
views.AuthorList.as_view(context_object_name='author_list')),
(r'^list/authors/dupe_context_object_name/$',
views.AuthorList.as_view(context_object_name='object_list')),
(r'^list/authors/invalid/$',
views.AuthorList.as_view(queryset=None)),
(r'^list/authors/paginated/custom_class/$',
views.AuthorList.as_view(paginate_by=5, paginator_class=views.CustomPaginator)),
(r'^list/authors/paginated/custom_constructor/$',
views.AuthorListCustomPaginator.as_view()),
)
| bsd-3-clause |
sodo13/openpli-gls | lib/python/Plugins/SystemPlugins/NFIFlash/downloader.py | 32 | 30705 | # -*- coding: utf-8 -*-
from Plugins.SystemPlugins.Hotplug.plugin import hotplugNotifier
from Screens.Screen import Screen
from Screens.MessageBox import MessageBox
from Screens.ChoiceBox import ChoiceBox
from Screens.HelpMenu import HelpableScreen
from Screens.TaskView import JobView
from Components.About import about
from Components.ActionMap import ActionMap
from Components.Sources.StaticText import StaticText
from Components.Sources.List import List
from Components.Label import Label
from Components.FileList import FileList
from Components.MenuList import MenuList
from Components.MultiContent import MultiContentEntryText
from Components.ScrollLabel import ScrollLabel
from Components.Harddisk import harddiskmanager
from Components.Task import Task, Job, job_manager, Condition
from Tools.Directories import fileExists, isMount, resolveFilename, SCOPE_HDD, SCOPE_MEDIA
from Tools.HardwareInfo import HardwareInfo
from Tools.Downloader import downloadWithProgress
from enigma import eConsoleAppContainer, gFont, RT_HALIGN_LEFT, RT_HALIGN_CENTER, RT_VALIGN_CENTER, RT_WRAP, eTimer
from os import system, path, access, stat, remove, W_OK, R_OK
from twisted.web import client
from twisted.internet import reactor, defer
from twisted.python import failure
import re
class ImageDownloadJob(Job):
def __init__(self, url, filename, device=None, mountpoint="/"):
Job.__init__(self, _("Download .NFI-files for USB-flasher"))
if device:
if isMount(mountpoint):
UmountTask(self, mountpoint)
MountTask(self, device, mountpoint)
ImageDownloadTask(self, url, mountpoint+filename)
ImageDownloadTask(self, url[:-4]+".nfo", mountpoint+filename[:-4]+".nfo")
#if device:
#UmountTask(self, mountpoint)
def retry(self):
self.tasks[0].args += self.tasks[0].retryargs
Job.retry(self)
class MountTask(Task):
def __init__(self, job, device, mountpoint):
Task.__init__(self, job, ("mount"))
self.setTool("mount")
options = "rw,sync"
self.mountpoint = mountpoint
self.args += [ device, mountpoint, "-o"+options ]
self.weighting = 1
def processOutput(self, data):
print "[MountTask] output:", data
class UmountTask(Task):
def __init__(self, job, mountpoint):
Task.__init__(self, job, ("mount"))
self.setTool("umount")
self.args += [mountpoint]
self.weighting = 1
class DownloaderPostcondition(Condition):
def check(self, task):
return task.returncode == 0
def getErrorMessage(self, task):
return self.error_message
class ImageDownloadTask(Task):
def __init__(self, job, url, path):
Task.__init__(self, job, _("Downloading"))
self.postconditions.append(DownloaderPostcondition())
self.job = job
self.url = url
self.path = path
self.error_message = ""
self.last_recvbytes = 0
self.error_message = None
self.download = None
self.aborted = False
def run(self, callback):
self.callback = callback
self.download = downloadWithProgress(self.url,self.path)
self.download.addProgress(self.download_progress)
self.download.start().addCallback(self.download_finished).addErrback(self.download_failed)
print "[ImageDownloadTask] downloading", self.url, "to", self.path
def abort(self):
print "[ImageDownloadTask] aborting", self.url
if self.download:
self.download.stop()
self.aborted = True
def download_progress(self, recvbytes, totalbytes):
#print "[update_progress] recvbytes=%d, totalbytes=%d" % (recvbytes, totalbytes)
if ( recvbytes - self.last_recvbytes ) > 10000: # anti-flicker
self.progress = int(100*(float(recvbytes)/float(totalbytes)))
self.name = _("Downloading") + ' ' + "%d of %d kBytes" % (recvbytes/1024, totalbytes/1024)
self.last_recvbytes = recvbytes
def download_failed(self, failure_instance=None, error_message=""):
self.error_message = error_message
if error_message == "" and failure_instance is not None:
self.error_message = failure_instance.getErrorMessage()
Task.processFinished(self, 1)
def download_finished(self, string=""):
if self.aborted:
self.finish(aborted = True)
else:
Task.processFinished(self, 0)
class StickWizardJob(Job):
def __init__(self, path):
Job.__init__(self, _("USB stick wizard"))
self.path = path
self.device = path
while self.device[-1:] == "/" or self.device[-1:].isdigit():
self.device = self.device[:-1]
box = HardwareInfo().get_device_name()
url = "http://www.dreamboxupdate.com/download/opendreambox/dreambox-nfiflasher-%s.tar.bz2" % box
self.downloadfilename = "/tmp/dreambox-nfiflasher-%s.tar.bz2" % box
self.imagefilename = "/tmp/nfiflash_%s.img" % box
#UmountTask(self, device)
PartitionTask(self)
ImageDownloadTask(self, url, self.downloadfilename)
UnpackTask(self)
CopyTask(self)
class PartitionTaskPostcondition(Condition):
def check(self, task):
return task.returncode == 0
def getErrorMessage(self, task):
return {
task.ERROR_BLKRRPART: ("Device or resource busy"),
task.ERROR_UNKNOWN: (task.errormsg)
}[task.error]
class PartitionTask(Task):
ERROR_UNKNOWN, ERROR_BLKRRPART = range(2)
def __init__(self, job):
Task.__init__(self, job, ("partitioning"))
self.postconditions.append(PartitionTaskPostcondition())
self.job = job
self.setTool("sfdisk")
self.args += [self.job.device]
self.weighting = 10
self.initial_input = "0 - 0x6 *\n;\n;\n;\ny"
self.errormsg = ""
def run(self, callback):
Task.run(self, callback)
def processOutput(self, data):
print "[PartitionTask] output:", data
if data.startswith("BLKRRPART:"):
self.error = self.ERROR_BLKRRPART
else:
self.error = self.ERROR_UNKNOWN
self.errormsg = data
class UnpackTask(Task):
def __init__(self, job):
Task.__init__(self, job, ("Unpacking USB flasher image..."))
self.job = job
self.setTool("tar")
self.args += ["-xjvf", self.job.downloadfilename]
self.weighting = 80
self.end = 80
self.delayTimer = eTimer()
self.delayTimer.callback.append(self.progress_increment)
def run(self, callback):
Task.run(self, callback)
self.delayTimer.start(950, False)
def progress_increment(self):
self.progress += 1
def processOutput(self, data):
print "[UnpackTask] output: \'%s\'" % data
self.job.imagefilename = data
def afterRun(self):
self.delayTimer.callback.remove(self.progress_increment)
class CopyTask(Task):
def __init__(self, job):
Task.__init__(self, job, ("Copying USB flasher boot image to stick..."))
self.job = job
self.setTool("dd")
self.args += ["if=%s" % self.job.imagefilename, "of=%s1" % self.job.device]
self.weighting = 20
self.end = 20
self.delayTimer = eTimer()
self.delayTimer.callback.append(self.progress_increment)
def run(self, callback):
Task.run(self, callback)
self.delayTimer.start(100, False)
def progress_increment(self):
self.progress += 1
def processOutput(self, data):
print "[CopyTask] output:", data
def afterRun(self):
self.delayTimer.callback.remove(self.progress_increment)
class NFOViewer(Screen):
skin = """
<screen name="NFOViewer" position="center,center" size="610,410" title="Changelog" >
<widget name="changelog" position="10,10" size="590,380" font="Regular;16" />
</screen>"""
def __init__(self, session, nfo):
Screen.__init__(self, session)
self["changelog"] = ScrollLabel(nfo)
self["ViewerActions"] = ActionMap(["SetupActions", "ColorActions", "DirectionActions"],
{
"green": self.exit,
"red": self.exit,
"ok": self.exit,
"cancel": self.exit,
"down": self.pageDown,
"up": self.pageUp
})
def pageUp(self):
self["changelog"].pageUp()
def pageDown(self):
self["changelog"].pageDown()
def exit(self):
self.close(False)
class feedDownloader:
def __init__(self, feed_base, box, OE_vers):
print "[feedDownloader::init] feed_base=%s, box=%s" % (feed_base, box)
self.feed_base = feed_base
self.OE_vers = OE_vers
self.box = box
def getList(self, callback, errback):
self.urlbase = "%s/%s/%s/images/" % (self.feed_base, self.OE_vers, self.box)
print "[getList]", self.urlbase
self.callback = callback
self.errback = errback
client.getPage(self.urlbase).addCallback(self.feed_finished).addErrback(self.feed_failed)
def feed_failed(self, failure_instance):
print "[feed_failed]", str(failure_instance)
self.errback(failure_instance.getErrorMessage())
def feed_finished(self, feedhtml):
print "[feed_finished]"
fileresultmask = re.compile("<a class=[\'\"]nfi[\'\"] href=[\'\"](?P<url>.*?)[\'\"]>(?P<name>.*?.nfi)</a>", re.DOTALL)
searchresults = fileresultmask.finditer(feedhtml)
fileresultlist = []
if searchresults:
for x in searchresults:
url = x.group("url")
if url[0:7] != "http://":
url = self.urlbase + x.group("url")
name = x.group("name")
entry = (name, url)
fileresultlist.append(entry)
self.callback(fileresultlist, self.OE_vers)
class DeviceBrowser(Screen, HelpableScreen):
skin = """
<screen name="DeviceBrowser" position="center,center" size="520,430" title="Please select target medium" >
<ePixmap pixmap="skin_default/buttons/red.png" position="0,0" size="140,40" alphatest="on" />
<ePixmap pixmap="skin_default/buttons/green.png" position="140,0" size="140,40" alphatest="on" />
<widget source="key_red" render="Label" position="0,0" zPosition="1" size="140,40" font="Regular;20" halign="center" valign="center" backgroundColor="#9f1313" transparent="1" />
<widget source="key_green" render="Label" position="140,0" zPosition="1" size="140,40" font="Regular;20" halign="center" valign="center" backgroundColor="#1f771f" transparent="1" />
<widget source="message" render="Label" position="5,50" size="510,150" font="Regular;16" />
<widget name="filelist" position="5,210" size="510,220" scrollbarMode="showOnDemand" />
</screen>"""
def __init__(self, session, startdir, message="", showDirectories = True, showFiles = True, showMountpoints = True, matchingPattern = "", useServiceRef = False, inhibitDirs = False, inhibitMounts = False, isTop = False, enableWrapAround = False, additionalExtensions = None):
Screen.__init__(self, session)
HelpableScreen.__init__(self)
self["key_red"] = StaticText(_("Cancel"))
self["key_green"] = StaticText()
self["message"] = StaticText(message)
self.filelist = FileList(startdir, showDirectories = showDirectories, showFiles = showFiles, showMountpoints = showMountpoints, matchingPattern = matchingPattern, useServiceRef = useServiceRef, inhibitDirs = inhibitDirs, inhibitMounts = inhibitMounts, isTop = isTop, enableWrapAround = enableWrapAround, additionalExtensions = additionalExtensions)
self["filelist"] = self.filelist
self["FilelistActions"] = ActionMap(["SetupActions", "ColorActions"],
{
"green": self.use,
"red": self.exit,
"ok": self.ok,
"cancel": self.exit
})
hotplugNotifier.append(self.hotplugCB)
self.onShown.append(self.updateButton)
self.onClose.append(self.removeHotplug)
def hotplugCB(self, dev, action):
print "[hotplugCB]", dev, action
self.updateButton()
def updateButton(self):
if self["filelist"].getFilename() or self["filelist"].getCurrentDirectory():
self["key_green"].text = _("Use")
else:
self["key_green"].text = ""
def removeHotplug(self):
print "[removeHotplug]"
hotplugNotifier.remove(self.hotplugCB)
def ok(self):
if self.filelist.canDescent():
if self["filelist"].showMountpoints == True and self["filelist"].showDirectories == False:
self.use()
else:
self.filelist.descent()
def use(self):
print "[use]", self["filelist"].getCurrentDirectory(), self["filelist"].getFilename()
if self["filelist"].getCurrentDirectory() is not None:
if self.filelist.canDescent() and self["filelist"].getFilename() and len(self["filelist"].getFilename()) > len(self["filelist"].getCurrentDirectory()):
self.filelist.descent()
self.close(self["filelist"].getCurrentDirectory())
elif self["filelist"].getFilename():
self.close(self["filelist"].getFilename())
def exit(self):
self.close(False)
(ALLIMAGES, RELEASE, EXPERIMENTAL, STICK_WIZARD, START) = range(5)
class NFIDownload(Screen):
skin = """
<screen name="NFIDownload" position="center,center" size="610,410" title="NFIDownload" >
<ePixmap pixmap="skin_default/buttons/red.png" position="0,0" size="140,40" alphatest="on" />
<ePixmap pixmap="skin_default/buttons/green.png" position="140,0" size="140,40" alphatest="on" />
<ePixmap pixmap="skin_default/buttons/yellow.png" position="280,0" size="140,40" alphatest="on" />
<ePixmap pixmap="skin_default/buttons/blue.png" position="420,0" size="140,40" alphatest="on" />
<widget source="key_red" render="Label" position="0,0" zPosition="1" size="140,40" font="Regular;20" valign="center" halign="center" backgroundColor="#9f1313" transparent="1" />
<widget source="key_green" render="Label" position="140,0" zPosition="1" size="140,40" font="Regular;20" valign="center" halign="center" backgroundColor="#1f771f" transparent="1" />
<widget source="key_yellow" render="Label" position="280,0" zPosition="1" size="140,40" font="Regular;20" valign="center" halign="center" backgroundColor="#a08500" transparent="1" />
<widget source="key_blue" render="Label" position="420,0" zPosition="1" size="140,40" font="Regular;20" valign="center" halign="center" backgroundColor="#18188b" transparent="1" />
<ePixmap pixmap="skin_default/border_menu_350.png" position="5,50" zPosition="1" size="350,300" transparent="1" alphatest="on" />
<widget source="menu" render="Listbox" position="15,60" size="330,290" scrollbarMode="showOnDemand">
<convert type="TemplatedMultiContent">
{"templates":
{"default": (25, [
MultiContentEntryText(pos = (2, 2), size = (330, 24), flags = RT_HALIGN_LEFT, text = 1), # index 0 is the MenuText,
], True, "showOnDemand")
},
"fonts": [gFont("Regular", 22)],
"itemHeight": 25
}
</convert>
</widget>
<widget source="menu" render="Listbox" position="360,50" size="240,300" scrollbarMode="showNever" selectionDisabled="1">
<convert type="TemplatedMultiContent">
{"templates":
{"default": (300, [
MultiContentEntryText(pos = (2, 2), size = (240, 300), flags = RT_HALIGN_CENTER|RT_VALIGN_CENTER|RT_WRAP, text = 2), # index 2 is the Description,
], False, "showNever")
},
"fonts": [gFont("Regular", 22)],
"itemHeight": 300
}
</convert>
</widget>
<widget source="status" render="Label" position="5,360" zPosition="10" size="600,50" halign="center" valign="center" font="Regular;22" transparent="1" shadowColor="black" shadowOffset="-1,-1" />
</screen>"""
def __init__(self, session, destdir=None):
Screen.__init__(self, session)
#self.skin_path = plugin_path
#self.menu = args
self.box = HardwareInfo().get_device_name()
self.feed_base = "http://www.dreamboxupdate.com/opendreambox" #/1.5/%s/images/" % self.box
self.usbmountpoint = resolveFilename(SCOPE_MEDIA)+"usb/"
self.menulist = []
self["menu"] = List(self.menulist)
self["key_red"] = StaticText(_("Close"))
self["key_green"] = StaticText()
self["key_yellow"] = StaticText()
self["key_blue"] = StaticText()
self["status"] = StaticText(_("Please wait... Loading list..."))
self["shortcuts"] = ActionMap(["OkCancelActions", "ColorActions", "ShortcutActions", "DirectionActions"],
{
"ok": self.keyOk,
"green": self.keyOk,
"red": self.keyRed,
"blue": self.keyBlue,
"up": self.keyUp,
"upRepeated": self.keyUp,
"downRepeated": self.keyDown,
"down": self.keyDown,
"cancel": self.close,
}, -1)
self.onShown.append(self.go)
self.feedlists = [[],[],[]]
self.branch = START
self.container = eConsoleAppContainer()
self.container.dataAvail.append(self.tool_avail)
self.taskstring = ""
self.image_idx = 0
self.nfofilename = ""
self.nfo = ""
self.target_dir = None
def tool_avail(self, string):
print "[tool_avail]" + string
self.taskstring += string
def go(self):
self.onShown.remove(self.go)
self.umountCallback = self.getMD5
self.umount()
def getMD5(self):
url = "http://www.dreamboxupdate.com/download/opendreambox/dreambox-nfiflasher-%s-md5sums" % self.box
client.getPage(url).addCallback(self.md5sums_finished).addErrback(self.feed_failed)
def md5sums_finished(self, data):
print "[md5sums_finished]", data
self.stickimage_md5 = data
self.checkUSBStick()
def keyRed(self):
if self.branch == START:
self.close()
else:
self.branch = START
self["menu"].setList(self.menulist)
#elif self.branch == ALLIMAGES or self.branch == STICK_WIZARD:
def keyBlue(self):
if self.nfo != "":
self.session.open(NFOViewer, self.nfo)
def keyOk(self):
print "[keyOk]", self["menu"].getCurrent()
current = self["menu"].getCurrent()
if current:
if self.branch == START:
currentEntry = current[0]
if currentEntry == RELEASE:
self.image_idx = 0
self.branch = RELEASE
self.askDestination()
elif currentEntry == EXPERIMENTAL:
self.image_idx = 0
self.branch = EXPERIMENTAL
self.askDestination()
elif currentEntry == ALLIMAGES:
self.branch = ALLIMAGES
self.listImages()
elif currentEntry == STICK_WIZARD:
self.askStartWizard()
elif self.branch == ALLIMAGES:
self.image_idx = self["menu"].getIndex()
self.askDestination()
self.updateButtons()
def keyUp(self):
self["menu"].selectPrevious()
self.updateButtons()
def keyDown(self):
self["menu"].selectNext()
self.updateButtons()
def updateButtons(self):
current = self["menu"].getCurrent()
if current:
if self.branch == START:
self["key_red"].text = _("Close")
currentEntry = current[0]
if currentEntry in (RELEASE, EXPERIMENTAL):
self.nfo_download(currentEntry, 0)
self["key_green"].text = _("Download")
else:
self.nfofilename = ""
self.nfo = ""
self["key_blue"].text = ""
self["key_green"].text = _("continue")
elif self.branch == ALLIMAGES:
self["key_red"].text = _("Back")
self["key_green"].text = _("Download")
self.nfo_download(ALLIMAGES, self["menu"].getIndex())
def listImages(self):
print "[listImages]"
imagelist = []
mask = re.compile("%s/(?P<OE_vers>1\.\d)/%s/images/(?P<branch>.*?)-%s_(?P<version>.*?).nfi" % (self.feed_base, self.box, self.box), re.DOTALL)
for name, url in self.feedlists[ALLIMAGES]:
result = mask.match(url)
if result:
if result.group("version").startswith("20"):
version = ( result.group("version")[:4]+'-'+result.group("version")[4:6]+'-'+result.group("version")[6:8] )
else:
version = result.group("version")
description = "\nOpendreambox %s\n%s image\n%s\n" % (result.group("OE_vers"), result.group("branch"), version)
imagelist.append((url, name, _("Download %s from server" ) % description, None))
self["menu"].setList(imagelist)
def getUSBPartitions(self):
allpartitions = [ (r.description, r.mountpoint) for r in harddiskmanager.getMountedPartitions(onlyhotplug = True)]
print "[getUSBPartitions]", allpartitions
usbpartition = []
for x in allpartitions:
print x, x[1] == '/', x[0].find("USB"), access(x[1], R_OK)
if x[1] != '/' and x[0].find("USB") > -1: # and access(x[1], R_OK) is True:
usbpartition.append(x)
return usbpartition
def askDestination(self):
usbpartition = self.getUSBPartitions()
if len(usbpartition) == 1:
self.target_dir = usbpartition[0][1]
self.ackDestinationDevice(device_description=usbpartition[0][0])
else:
self.openDeviceBrowser()
def openDeviceBrowser(self):
self.session.openWithCallback(self.DeviceBrowserClosed, DeviceBrowser, None, showDirectories=True, showMountpoints=True, inhibitMounts=["/autofs/sr0/"])
def DeviceBrowserClosed(self, path):
print "[DeviceBrowserClosed]", str(path)
self.target_dir = path
if path:
self.ackDestinationDevice()
else:
self.keyRed()
def ackDestinationDevice(self, device_description=None):
if device_description == None:
dev = self.target_dir
else:
dev = device_description
message = _("Do you want to download the image to %s ?") % (dev)
choices = [(_("Yes"), self.ackedDestination), (_("List of storage devices"),self.openDeviceBrowser), (_("Cancel"),self.keyRed)]
self.session.openWithCallback(self.ackDestination_query, ChoiceBox, title=message, list=choices)
def ackDestination_query(self, choice):
print "[ackDestination_query]", choice
if isinstance(choice, tuple):
choice[1]()
else:
self.keyRed()
def ackedDestination(self):
print "[ackedDestination]", self.branch, self.target_dir
self.container.setCWD(resolveFilename(SCOPE_MEDIA)+"usb/")
if self.target_dir[:8] == "/autofs/":
self.target_dir = "/dev/" + self.target_dir[8:-1]
if self.branch == STICK_WIZARD:
job = StickWizardJob(self.target_dir)
job.afterEvent = "close"
job_manager.AddJob(job)
job_manager.failed_jobs = []
self.session.openWithCallback(self.StickWizardCB, JobView, job, afterEventChangeable = False)
elif self.branch != STICK_WIZARD:
url = self.feedlists[self.branch][self.image_idx][1]
filename = self.feedlists[self.branch][self.image_idx][0]
print "[getImage] start downloading %s to %s" % (url, filename)
if self.target_dir.startswith("/dev/"):
job = ImageDownloadJob(url, filename, self.target_dir, self.usbmountpoint)
else:
job = ImageDownloadJob(url, filename, None, self.target_dir)
job.afterEvent = "close"
job_manager.AddJob(job)
job_manager.failed_jobs = []
self.session.openWithCallback(self.ImageDownloadCB, JobView, job, afterEventChangeable = False)
def StickWizardCB(self, ret=None):
print "[StickWizardCB]", ret
# print job_manager.active_jobs, job_manager.failed_jobs, job_manager.job_classes, job_manager.in_background, job_manager.active_job
if len(job_manager.failed_jobs) == 0:
self.session.open(MessageBox, _("The USB stick was prepared to be bootable.\nNow you can download an NFI image file!"), type = MessageBox.TYPE_INFO)
if len(self.feedlists[ALLIMAGES]) == 0:
self.getFeed()
else:
self.setMenu()
else:
self.umountCallback = self.checkUSBStick
self.umount()
def ImageDownloadCB(self, ret):
print "[ImageDownloadCB]", ret
# print job_manager.active_jobs, job_manager.failed_jobs, job_manager.job_classes, job_manager.in_background, job_manager.active_job
if len(job_manager.failed_jobs) == 0:
self.session.openWithCallback(self.askBackupCB, MessageBox, _("The wizard can backup your current settings. Do you want to do a backup now?"), MessageBox.TYPE_YESNO)
else:
self.umountCallback = self.keyRed
self.umount()
def askBackupCB(self, ret):
if ret:
from Plugins.SystemPlugins.SoftwareManager.BackupRestore import BackupScreen
class USBBackupScreen(BackupScreen):
def __init__(self, session, usbmountpoint):
BackupScreen.__init__(self, session, runBackup = True)
self.backuppath = usbmountpoint
self.fullbackupfilename = self.backuppath + "/" + self.backupfile
self.session.openWithCallback(self.showHint, USBBackupScreen, self.usbmountpoint)
else:
self.showHint()
def showHint(self, ret=None):
self.session.open(MessageBox, _("To update your receiver firmware, please follow these steps:\n1) Turn off your box with the rear power switch and make sure the bootable USB stick is plugged in.\n2) Turn mains back on and hold the DOWN button on the front panel pressed for 10 seconds.\n3) Wait for bootup and follow instructions of the wizard."), type = MessageBox.TYPE_INFO)
self.umountCallback = self.keyRed
self.umount()
def getFeed(self):
self.feedDownloader15 = feedDownloader(self.feed_base, self.box, OE_vers="1.5")
self.feedDownloader16 = feedDownloader(self.feed_base, self.box, OE_vers="1.6")
self.feedlists = [[],[],[]]
self.feedDownloader15.getList(self.gotFeed, self.feed_failed)
self.feedDownloader16.getList(self.gotFeed, self.feed_failed)
def feed_failed(self, message=""):
self["status"].text = _("Could not connect to receiver .NFI image feed server:") + "\n" + str(message) + "\n" + _("Please check your network settings!")
def gotFeed(self, feedlist, OE_vers):
print "[gotFeed]", OE_vers
releaselist = []
experimentallist = []
for name, url in feedlist:
if name.find("release") > -1:
releaselist.append((name, url))
if name.find("experimental") > -1:
experimentallist.append((name, url))
self.feedlists[ALLIMAGES].append((name, url))
if OE_vers == "1.6":
self.feedlists[RELEASE] = releaselist + self.feedlists[RELEASE]
self.feedlists[EXPERIMENTAL] = experimentallist + self.feedlists[EXPERIMENTAL]
elif OE_vers == "1.5":
self.feedlists[RELEASE] = self.feedlists[RELEASE] + releaselist
self.feedlists[EXPERIMENTAL] = self.feedlists[EXPERIMENTAL] + experimentallist
self.setMenu()
def checkUSBStick(self):
self.target_dir = None
allpartitions = [ (r.description, r.mountpoint) for r in harddiskmanager.getMountedPartitions(onlyhotplug = True)]
print "[checkUSBStick] found partitions:", allpartitions
usbpartition = []
for x in allpartitions:
print x, x[1] == '/', x[0].find("USB"), access(x[1], R_OK)
if x[1] != '/' and x[0].find("USB") > -1: # and access(x[1], R_OK) is True:
usbpartition.append(x)
print usbpartition
if len(usbpartition) == 1:
self.target_dir = usbpartition[0][1]
self.md5_passback = self.getFeed
self.md5_failback = self.askStartWizard
self.md5verify(self.stickimage_md5, self.target_dir)
elif usbpartition == []:
print "[NFIFlash] needs to create usb flasher stick first!"
self.askStartWizard()
else:
self.askStartWizard()
def askStartWizard(self):
self.branch = STICK_WIZARD
message = _("""This plugin creates a USB stick which can be used to update the firmware of your receiver without the need for a network or WLAN connection.
First, a USB stick needs to be prepared so that it becomes bootable.
In the next step, an NFI image file can be downloaded from the update server and saved on the USB stick.
If you already have a prepared bootable USB stick, please insert it now. Otherwise plug in a USB stick with a minimum size of 64 MB!""")
self.session.openWithCallback(self.wizardDeviceBrowserClosed, DeviceBrowser, None, message, showDirectories=True, showMountpoints=True, inhibitMounts=["/","/autofs/sr0/","/autofs/sda1/","/media/hdd/","/media/net/",self.usbmountpoint,"/media/dvd/"])
def wizardDeviceBrowserClosed(self, path):
print "[wizardDeviceBrowserClosed]", path
self.target_dir = path
if path:
self.md5_passback = self.getFeed
self.md5_failback = self.wizardQuery
self.md5verify(self.stickimage_md5, self.target_dir)
else:
self.close()
def wizardQuery(self):
print "[wizardQuery]"
description = self.target_dir
for name, dev in self.getUSBPartitions():
if dev == self.target_dir:
description = name
message = _("You have chosen to create a new .NFI flasher bootable USB stick. This will repartition the USB stick and therefore all data on it will be erased.") + "\n"
message += _("The following device was found:\n\n%s\n\nDo you want to write the USB flasher to this stick?") % description
choices = [(_("Yes"), self.ackedDestination), (_("List of storage devices"),self.askStartWizard), (_("Cancel"),self.close)]
self.session.openWithCallback(self.ackDestination_query, ChoiceBox, title=message, list=choices)
def setMenu(self):
self.menulist = []
try:
latest_release = "Release %s (Opendreambox 1.5)" % self.feedlists[RELEASE][0][0][-9:-4]
self.menulist.append((RELEASE, _("Get latest release image"), _("Download %s from server" ) % latest_release, None))
except IndexError:
pass
try:
dat = self.feedlists[EXPERIMENTAL][0][0][-12:-4]
latest_experimental = "Experimental %s-%s-%s (Opendreambox 1.6)" % (dat[:4], dat[4:6], dat[6:])
self.menulist.append((EXPERIMENTAL, _("Get latest experimental image"), _("Download %s from server") % latest_experimental, None))
except IndexError:
pass
self.menulist.append((ALLIMAGES, _("Select an image to be downloaded"), _("Select desired image from feed list" ), None))
self.menulist.append((STICK_WIZARD, _("USB stick wizard"), _("Prepare another USB stick for image flashing" ), None))
self["menu"].setList(self.menulist)
self["status"].text = _("Currently installed image") + ": %s" % (about.getImageVersionString())
self.branch = START
self.updateButtons()
def nfo_download(self, branch, idx):
nfourl = (self.feedlists[branch][idx][1])[:-4]+".nfo"
self.nfofilename = (self.feedlists[branch][idx][0])[:-4]+".nfo"
print "[check_for_NFO]", nfourl
client.getPage(nfourl).addCallback(self.nfo_finished).addErrback(self.nfo_failed)
def nfo_failed(self, failure_instance):
print "[nfo_failed] " + str(failure_instance)
self["key_blue"].text = ""
self.nfofilename = ""
self.nfo = ""
def nfo_finished(self,nfodata=""):
print "[nfo_finished] " + str(nfodata)
self["key_blue"].text = _("Changelog")
self.nfo = nfodata
def md5verify(self, md5, path):
cmd = "md5sum -c -s"
print "[verify_md5]", md5, path, cmd
self.container.setCWD(path)
self.container.appClosed.append(self.md5finished)
self.container.execute(cmd)
self.container.write(md5)
self.container.dataSent.append(self.md5ready)
def md5ready(self, retval):
self.container.sendEOF()
def md5finished(self, retval):
print "[md5finished]", str(retval)
self.container.appClosed.remove(self.md5finished)
self.container.dataSent.remove(self.md5ready)
if retval==0:
print "check passed! calling", repr(self.md5_passback)
self.md5_passback()
else:
print "check failed! calling", repr(self.md5_failback)
self.md5_failback()
def umount(self):
cmd = "umount " + self.usbmountpoint
print "[umount]", cmd
self.container.setCWD('/')
self.container.appClosed.append(self.umountFinished)
self.container.execute(cmd)
def umountFinished(self, retval):
print "[umountFinished]", str(retval)
self.container.appClosed.remove(self.umountFinished)
self.umountCallback()
def main(session, **kwargs):
session.open(NFIDownload,resolveFilename(SCOPE_HDD))
def filescan_open(list, session, **kwargs):
dev = "/dev/" + (list[0].path).rsplit('/',1)[0][7:]
print "mounting device " + dev + " to /media/usb..."
usbmountpoint = resolveFilename(SCOPE_MEDIA)+"usb/"
system("mount %s %s -o rw,sync" % (dev, usbmountpoint))
session.open(NFIDownload,usbmountpoint)
def filescan(**kwargs):
from Components.Scanner import Scanner, ScanPath
return \
Scanner(mimetypes = ["application/x-dream-image"],
paths_to_scan =
[
ScanPath(path = "", with_subdirs = False),
],
name = "NFI",
description = (_("Download .NFI-files for USB-flasher")+"..."),
openfnc = filescan_open, )
| gpl-2.0 |
NINAnor/QGIS | python/ext-libs/pygments/lexers/parsers.py | 363 | 25835 | # -*- coding: utf-8 -*-
"""
pygments.lexers.parsers
~~~~~~~~~~~~~~~~~~~~~~~
Lexers for parser generators.
:copyright: Copyright 2006-2013 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
from pygments.lexer import RegexLexer, DelegatingLexer, \
include, bygroups, using
from pygments.token import Punctuation, Other, Text, Comment, Operator, \
Keyword, Name, String, Number, Whitespace
from pygments.lexers.compiled import JavaLexer, CLexer, CppLexer, \
ObjectiveCLexer, DLexer
from pygments.lexers.dotnet import CSharpLexer
from pygments.lexers.agile import RubyLexer, PythonLexer, PerlLexer
from pygments.lexers.web import ActionScriptLexer
__all__ = ['RagelLexer', 'RagelEmbeddedLexer', 'RagelCLexer', 'RagelDLexer',
'RagelCppLexer', 'RagelObjectiveCLexer', 'RagelRubyLexer',
'RagelJavaLexer', 'AntlrLexer', 'AntlrPythonLexer',
'AntlrPerlLexer', 'AntlrRubyLexer', 'AntlrCppLexer',
#'AntlrCLexer',
'AntlrCSharpLexer', 'AntlrObjectiveCLexer',
'AntlrJavaLexer', "AntlrActionScriptLexer",
'TreetopLexer']
class RagelLexer(RegexLexer):
"""
A pure `Ragel <http://www.complang.org/ragel/>`_ lexer. Use this for
fragments of Ragel. For ``.rl`` files, use RagelEmbeddedLexer instead
(or one of the language-specific subclasses).
*New in Pygments 1.1.*
"""
name = 'Ragel'
aliases = ['ragel']
filenames = []
tokens = {
'whitespace': [
(r'\s+', Whitespace)
],
'comments': [
(r'\#.*$', Comment),
],
'keywords': [
(r'(access|action|alphtype)\b', Keyword),
(r'(getkey|write|machine|include)\b', Keyword),
(r'(any|ascii|extend|alpha|digit|alnum|lower|upper)\b', Keyword),
(r'(xdigit|cntrl|graph|print|punct|space|zlen|empty)\b', Keyword)
],
'numbers': [
(r'0x[0-9A-Fa-f]+', Number.Hex),
(r'[+-]?[0-9]+', Number.Integer),
],
'literals': [
(r'"(\\\\|\\"|[^"])*"', String), # double quote string
(r"'(\\\\|\\'|[^'])*'", String), # single quote string
(r'\[(\\\\|\\\]|[^\]])*\]', String), # square bracket literals
(r'/(?!\*)(\\\\|\\/|[^/])*/', String.Regex), # regular expressions
],
'identifiers': [
(r'[a-zA-Z_][a-zA-Z_0-9]*', Name.Variable),
],
'operators': [
(r',', Operator), # Join
(r'\||&|--?', Operator), # Union, Intersection and Subtraction
(r'\.|<:|:>>?', Operator), # Concatenation
(r':', Operator), # Label
(r'->', Operator), # Epsilon Transition
(r'(>|\$|%|<|@|<>)(/|eof\b)', Operator), # EOF Actions
(r'(>|\$|%|<|@|<>)(!|err\b)', Operator), # Global Error Actions
(r'(>|\$|%|<|@|<>)(\^|lerr\b)', Operator), # Local Error Actions
(r'(>|\$|%|<|@|<>)(~|to\b)', Operator), # To-State Actions
(r'(>|\$|%|<|@|<>)(\*|from\b)', Operator), # From-State Actions
(r'>|@|\$|%', Operator), # Transition Actions and Priorities
(r'\*|\?|\+|{[0-9]*,[0-9]*}', Operator), # Repetition
(r'!|\^', Operator), # Negation
(r'\(|\)', Operator), # Grouping
],
'root': [
include('literals'),
include('whitespace'),
include('comments'),
include('keywords'),
include('numbers'),
include('identifiers'),
include('operators'),
(r'{', Punctuation, 'host'),
(r'=', Operator),
(r';', Punctuation),
],
'host': [
(r'(' + r'|'.join(( # keep host code in largest possible chunks
r'[^{}\'"/#]+', # exclude unsafe characters
r'[^\\][\\][{}]', # allow escaped { or }
# strings and comments may safely contain unsafe characters
r'"(\\\\|\\"|[^"])*"', # double quote string
r"'(\\\\|\\'|[^'])*'", # single quote string
r'//.*$\n?', # single line comment
r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment
r'\#.*$\n?', # ruby comment
# regular expression: There's no reason for it to start
# with a * and this stops confusion with comments.
r'/(?!\*)(\\\\|\\/|[^/])*/',
# / is safe now that we've handled regex and javadoc comments
r'/',
)) + r')+', Other),
(r'{', Punctuation, '#push'),
(r'}', Punctuation, '#pop'),
],
}
class RagelEmbeddedLexer(RegexLexer):
"""
A lexer for `Ragel`_ embedded in a host language file.
This will only highlight Ragel statements. If you want host language
highlighting then call the language-specific Ragel lexer.
*New in Pygments 1.1.*
"""
name = 'Embedded Ragel'
aliases = ['ragel-em']
filenames = ['*.rl']
tokens = {
'root': [
(r'(' + r'|'.join(( # keep host code in largest possible chunks
r'[^%\'"/#]+', # exclude unsafe characters
r'%(?=[^%]|$)', # a single % sign is okay, just not 2 of them
# strings and comments may safely contain unsafe characters
r'"(\\\\|\\"|[^"])*"', # double quote string
r"'(\\\\|\\'|[^'])*'", # single quote string
r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment
r'//.*$\n?', # single line comment
r'\#.*$\n?', # ruby/ragel comment
r'/(?!\*)(\\\\|\\/|[^/])*/', # regular expression
# / is safe now that we've handled regex and javadoc comments
r'/',
)) + r')+', Other),
# Single Line FSM.
# Please don't put a quoted newline in a single line FSM.
# That's just mean. It will break this.
(r'(%%)(?![{%])(.*)($|;)(\n?)', bygroups(Punctuation,
using(RagelLexer),
Punctuation, Text)),
# Multi Line FSM.
(r'(%%%%|%%){', Punctuation, 'multi-line-fsm'),
],
'multi-line-fsm': [
(r'(' + r'|'.join(( # keep ragel code in largest possible chunks.
r'(' + r'|'.join((
r'[^}\'"\[/#]', # exclude unsafe characters
r'}(?=[^%]|$)', # } is okay as long as it's not followed by %
r'}%(?=[^%]|$)', # ...well, one %'s okay, just not two...
r'[^\\][\\][{}]', # ...and } is okay if it's escaped
# allow / if it's preceded with one of these symbols
# (ragel EOF actions)
r'(>|\$|%|<|@|<>)/',
# specifically allow regex followed immediately by *
# so it doesn't get mistaken for a comment
r'/(?!\*)(\\\\|\\/|[^/])*/\*',
# allow / as long as it's not followed by another / or by a *
r'/(?=[^/\*]|$)',
# We want to match as many of these as we can in one block.
# Not sure if we need the + sign here,
# does it help performance?
)) + r')+',
# strings and comments may safely contain unsafe characters
r'"(\\\\|\\"|[^"])*"', # double quote string
r"'(\\\\|\\'|[^'])*'", # single quote string
r"\[(\\\\|\\\]|[^\]])*\]", # square bracket literal
r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment
r'//.*$\n?', # single line comment
r'\#.*$\n?', # ruby/ragel comment
)) + r')+', using(RagelLexer)),
(r'}%%', Punctuation, '#pop'),
]
}
def analyse_text(text):
return '@LANG: indep' in text or 0.1
class RagelRubyLexer(DelegatingLexer):
"""
A lexer for `Ragel`_ in a Ruby host file.
*New in Pygments 1.1.*
"""
name = 'Ragel in Ruby Host'
aliases = ['ragel-ruby', 'ragel-rb']
filenames = ['*.rl']
def __init__(self, **options):
super(RagelRubyLexer, self).__init__(RubyLexer, RagelEmbeddedLexer,
**options)
def analyse_text(text):
return '@LANG: ruby' in text
class RagelCLexer(DelegatingLexer):
"""
A lexer for `Ragel`_ in a C host file.
*New in Pygments 1.1.*
"""
name = 'Ragel in C Host'
aliases = ['ragel-c']
filenames = ['*.rl']
def __init__(self, **options):
super(RagelCLexer, self).__init__(CLexer, RagelEmbeddedLexer,
**options)
def analyse_text(text):
return '@LANG: c' in text
class RagelDLexer(DelegatingLexer):
"""
A lexer for `Ragel`_ in a D host file.
*New in Pygments 1.1.*
"""
name = 'Ragel in D Host'
aliases = ['ragel-d']
filenames = ['*.rl']
def __init__(self, **options):
super(RagelDLexer, self).__init__(DLexer, RagelEmbeddedLexer, **options)
def analyse_text(text):
return '@LANG: d' in text
class RagelCppLexer(DelegatingLexer):
"""
A lexer for `Ragel`_ in a CPP host file.
*New in Pygments 1.1.*
"""
name = 'Ragel in CPP Host'
aliases = ['ragel-cpp']
filenames = ['*.rl']
def __init__(self, **options):
super(RagelCppLexer, self).__init__(CppLexer, RagelEmbeddedLexer, **options)
def analyse_text(text):
return '@LANG: c++' in text
class RagelObjectiveCLexer(DelegatingLexer):
"""
A lexer for `Ragel`_ in an Objective C host file.
*New in Pygments 1.1.*
"""
name = 'Ragel in Objective C Host'
aliases = ['ragel-objc']
filenames = ['*.rl']
def __init__(self, **options):
super(RagelObjectiveCLexer, self).__init__(ObjectiveCLexer,
RagelEmbeddedLexer,
**options)
def analyse_text(text):
return '@LANG: objc' in text
class RagelJavaLexer(DelegatingLexer):
"""
A lexer for `Ragel`_ in a Java host file.
*New in Pygments 1.1.*
"""
name = 'Ragel in Java Host'
aliases = ['ragel-java']
filenames = ['*.rl']
def __init__(self, **options):
super(RagelJavaLexer, self).__init__(JavaLexer, RagelEmbeddedLexer,
**options)
def analyse_text(text):
return '@LANG: java' in text
class AntlrLexer(RegexLexer):
"""
Generic `ANTLR`_ Lexer.
Should not be called directly, instead
use DelegatingLexer for your target language.
*New in Pygments 1.1.*
.. _ANTLR: http://www.antlr.org/
"""
name = 'ANTLR'
aliases = ['antlr']
filenames = []
_id = r'[A-Za-z][A-Za-z_0-9]*'
_TOKEN_REF = r'[A-Z][A-Za-z_0-9]*'
_RULE_REF = r'[a-z][A-Za-z_0-9]*'
_STRING_LITERAL = r'\'(?:\\\\|\\\'|[^\']*)\''
_INT = r'[0-9]+'
tokens = {
'whitespace': [
(r'\s+', Whitespace),
],
'comments': [
(r'//.*$', Comment),
(r'/\*(.|\n)*?\*/', Comment),
],
'root': [
include('whitespace'),
include('comments'),
(r'(lexer|parser|tree)?(\s*)(grammar\b)(\s*)(' + _id + ')(;)',
bygroups(Keyword, Whitespace, Keyword, Whitespace, Name.Class,
Punctuation)),
# optionsSpec
(r'options\b', Keyword, 'options'),
# tokensSpec
(r'tokens\b', Keyword, 'tokens'),
# attrScope
(r'(scope)(\s*)(' + _id + ')(\s*)({)',
bygroups(Keyword, Whitespace, Name.Variable, Whitespace,
Punctuation), 'action'),
# exception
(r'(catch|finally)\b', Keyword, 'exception'),
# action
(r'(@' + _id + ')(\s*)(::)?(\s*)(' + _id + ')(\s*)({)',
bygroups(Name.Label, Whitespace, Punctuation, Whitespace,
Name.Label, Whitespace, Punctuation), 'action'),
# rule
(r'((?:protected|private|public|fragment)\b)?(\s*)(' + _id + ')(!)?', \
bygroups(Keyword, Whitespace, Name.Label, Punctuation),
('rule-alts', 'rule-prelims')),
],
'exception': [
(r'\n', Whitespace, '#pop'),
(r'\s', Whitespace),
include('comments'),
(r'\[', Punctuation, 'nested-arg-action'),
(r'\{', Punctuation, 'action'),
],
'rule-prelims': [
include('whitespace'),
include('comments'),
(r'returns\b', Keyword),
(r'\[', Punctuation, 'nested-arg-action'),
(r'\{', Punctuation, 'action'),
# throwsSpec
(r'(throws)(\s+)(' + _id + ')',
bygroups(Keyword, Whitespace, Name.Label)),
(r'(,)(\s*)(' + _id + ')',
bygroups(Punctuation, Whitespace, Name.Label)), # Additional throws
# optionsSpec
(r'options\b', Keyword, 'options'),
# ruleScopeSpec - scope followed by target language code or name of action
# TODO finish implementing other possibilities for scope
# L173 ANTLRv3.g from ANTLR book
(r'(scope)(\s+)({)', bygroups(Keyword, Whitespace, Punctuation),
'action'),
(r'(scope)(\s+)(' + _id + ')(\s*)(;)',
bygroups(Keyword, Whitespace, Name.Label, Whitespace, Punctuation)),
# ruleAction
(r'(@' + _id + ')(\s*)({)',
bygroups(Name.Label, Whitespace, Punctuation), 'action'),
# finished prelims, go to rule alts!
(r':', Punctuation, '#pop')
],
'rule-alts': [
include('whitespace'),
include('comments'),
# These might need to go in a separate 'block' state triggered by (
(r'options\b', Keyword, 'options'),
(r':', Punctuation),
# literals
(r"'(\\\\|\\'|[^'])*'", String),
(r'"(\\\\|\\"|[^"])*"', String),
(r'<<([^>]|>[^>])>>', String),
# identifiers
# Tokens start with capital letter.
(r'\$?[A-Z_][A-Za-z_0-9]*', Name.Constant),
# Rules start with small letter.
(r'\$?[a-z_][A-Za-z_0-9]*', Name.Variable),
# operators
(r'(\+|\||->|=>|=|\(|\)|\.\.|\.|\?|\*|\^|!|\#|~)', Operator),
(r',', Punctuation),
(r'\[', Punctuation, 'nested-arg-action'),
(r'\{', Punctuation, 'action'),
(r';', Punctuation, '#pop')
],
'tokens': [
include('whitespace'),
include('comments'),
(r'{', Punctuation),
(r'(' + _TOKEN_REF + r')(\s*)(=)?(\s*)(' + _STRING_LITERAL
+ ')?(\s*)(;)',
bygroups(Name.Label, Whitespace, Punctuation, Whitespace,
String, Whitespace, Punctuation)),
(r'}', Punctuation, '#pop'),
],
'options': [
include('whitespace'),
include('comments'),
(r'{', Punctuation),
(r'(' + _id + r')(\s*)(=)(\s*)(' +
'|'.join((_id, _STRING_LITERAL, _INT, '\*'))+ ')(\s*)(;)',
bygroups(Name.Variable, Whitespace, Punctuation, Whitespace,
Text, Whitespace, Punctuation)),
(r'}', Punctuation, '#pop'),
],
'action': [
(r'(' + r'|'.join(( # keep host code in largest possible chunks
r'[^\${}\'"/\\]+', # exclude unsafe characters
# strings and comments may safely contain unsafe characters
r'"(\\\\|\\"|[^"])*"', # double quote string
r"'(\\\\|\\'|[^'])*'", # single quote string
r'//.*$\n?', # single line comment
r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment
# regular expression: There's no reason for it to start
# with a * and this stops confusion with comments.
r'/(?!\*)(\\\\|\\/|[^/])*/',
# backslashes are okay, as long as we are not backslashing a %
r'\\(?!%)',
# Now that we've handled regex and javadoc comments
# it's safe to let / through.
r'/',
)) + r')+', Other),
(r'(\\)(%)', bygroups(Punctuation, Other)),
(r'(\$[a-zA-Z]+)(\.?)(text|value)?',
bygroups(Name.Variable, Punctuation, Name.Property)),
(r'{', Punctuation, '#push'),
(r'}', Punctuation, '#pop'),
],
'nested-arg-action': [
(r'(' + r'|'.join(( # keep host code in largest possible chunks.
r'[^\$\[\]\'"/]+', # exclude unsafe characters
# strings and comments may safely contain unsafe characters
r'"(\\\\|\\"|[^"])*"', # double quote string
r"'(\\\\|\\'|[^'])*'", # single quote string
r'//.*$\n?', # single line comment
r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment
# regular expression: There's no reason for it to start
# with a * and this stops confusion with comments.
r'/(?!\*)(\\\\|\\/|[^/])*/',
# Now that we've handled regex and javadoc comments
# it's safe to let / through.
r'/',
)) + r')+', Other),
(r'\[', Punctuation, '#push'),
(r'\]', Punctuation, '#pop'),
(r'(\$[a-zA-Z]+)(\.?)(text|value)?',
bygroups(Name.Variable, Punctuation, Name.Property)),
(r'(\\\\|\\\]|\\\[|[^\[\]])+', Other),
]
}
def analyse_text(text):
return re.search(r'^\s*grammar\s+[a-zA-Z0-9]+\s*;', text, re.M)
# http://www.antlr.org/wiki/display/ANTLR3/Code+Generation+Targets
# TH: I'm not aware of any language features of C++ that will cause
# incorrect lexing of C files. Antlr doesn't appear to make a distinction,
# so just assume they're C++. No idea how to make Objective C work in the
# future.
#class AntlrCLexer(DelegatingLexer):
# """
# ANTLR with C Target
#
# *New in Pygments 1.1*
# """
#
# name = 'ANTLR With C Target'
# aliases = ['antlr-c']
# filenames = ['*.G', '*.g']
#
# def __init__(self, **options):
# super(AntlrCLexer, self).__init__(CLexer, AntlrLexer, **options)
#
# def analyse_text(text):
# return re.match(r'^\s*language\s*=\s*C\s*;', text)
class AntlrCppLexer(DelegatingLexer):
"""
`ANTLR`_ with CPP Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With CPP Target'
aliases = ['antlr-cpp']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrCppLexer, self).__init__(CppLexer, AntlrLexer, **options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*C\s*;', text, re.M)
class AntlrObjectiveCLexer(DelegatingLexer):
"""
`ANTLR`_ with Objective-C Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With ObjectiveC Target'
aliases = ['antlr-objc']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrObjectiveCLexer, self).__init__(ObjectiveCLexer,
AntlrLexer, **options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*ObjC\s*;', text)
class AntlrCSharpLexer(DelegatingLexer):
"""
`ANTLR`_ with C# Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With C# Target'
aliases = ['antlr-csharp', 'antlr-c#']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrCSharpLexer, self).__init__(CSharpLexer, AntlrLexer,
**options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*CSharp2\s*;', text, re.M)
class AntlrPythonLexer(DelegatingLexer):
"""
`ANTLR`_ with Python Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With Python Target'
aliases = ['antlr-python']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrPythonLexer, self).__init__(PythonLexer, AntlrLexer,
**options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*Python\s*;', text, re.M)
class AntlrJavaLexer(DelegatingLexer):
"""
`ANTLR`_ with Java Target
*New in Pygments 1.1*
"""
name = 'ANTLR With Java Target'
aliases = ['antlr-java']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrJavaLexer, self).__init__(JavaLexer, AntlrLexer,
**options)
def analyse_text(text):
# Antlr language is Java by default
return AntlrLexer.analyse_text(text) and 0.9
class AntlrRubyLexer(DelegatingLexer):
"""
`ANTLR`_ with Ruby Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With Ruby Target'
aliases = ['antlr-ruby', 'antlr-rb']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrRubyLexer, self).__init__(RubyLexer, AntlrLexer,
**options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*Ruby\s*;', text, re.M)
class AntlrPerlLexer(DelegatingLexer):
"""
`ANTLR`_ with Perl Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With Perl Target'
aliases = ['antlr-perl']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrPerlLexer, self).__init__(PerlLexer, AntlrLexer,
**options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*Perl5\s*;', text, re.M)
class AntlrActionScriptLexer(DelegatingLexer):
"""
`ANTLR`_ with ActionScript Target
*New in Pygments 1.1.*
"""
name = 'ANTLR With ActionScript Target'
aliases = ['antlr-as', 'antlr-actionscript']
filenames = ['*.G', '*.g']
def __init__(self, **options):
super(AntlrActionScriptLexer, self).__init__(ActionScriptLexer,
AntlrLexer, **options)
def analyse_text(text):
return AntlrLexer.analyse_text(text) and \
re.search(r'^\s*language\s*=\s*ActionScript\s*;', text, re.M)
class TreetopBaseLexer(RegexLexer):
"""
A base lexer for `Treetop <http://treetop.rubyforge.org/>`_ grammars.
Not for direct use; use TreetopLexer instead.
*New in Pygments 1.6.*
"""
tokens = {
'root': [
include('space'),
(r'require[ \t]+[^\n\r]+[\n\r]', Other),
(r'module\b', Keyword.Namespace, 'module'),
(r'grammar\b', Keyword, 'grammar'),
],
'module': [
include('space'),
include('end'),
(r'module\b', Keyword, '#push'),
(r'grammar\b', Keyword, 'grammar'),
(r'[A-Z][A-Za-z_0-9]*(?:::[A-Z][A-Za-z_0-9]*)*', Name.Namespace),
],
'grammar': [
include('space'),
include('end'),
(r'rule\b', Keyword, 'rule'),
(r'include\b', Keyword, 'include'),
(r'[A-Z][A-Za-z_0-9]*', Name),
],
'include': [
include('space'),
(r'[A-Z][A-Za-z_0-9]*(?:::[A-Z][A-Za-z_0-9]*)*', Name.Class, '#pop'),
],
'rule': [
include('space'),
include('end'),
(r'"(\\\\|\\"|[^"])*"', String.Double),
(r"'(\\\\|\\'|[^'])*'", String.Single),
(r'([A-Za-z_][A-Za-z_0-9]*)(:)', bygroups(Name.Label, Punctuation)),
(r'[A-Za-z_][A-Za-z_0-9]*', Name),
(r'[()]', Punctuation),
(r'[?+*/&!~]', Operator),
(r'\[(?:\\.|\[:\^?[a-z]+:\]|[^\\\]])+\]', String.Regex),
(r'([0-9]*)(\.\.)([0-9]*)',
bygroups(Number.Integer, Operator, Number.Integer)),
(r'(<)([^>]+)(>)', bygroups(Punctuation, Name.Class, Punctuation)),
(r'{', Punctuation, 'inline_module'),
(r'\.', String.Regex),
],
'inline_module': [
(r'{', Other, 'ruby'),
(r'}', Punctuation, '#pop'),
(r'[^{}]+', Other),
],
'ruby': [
(r'{', Other, '#push'),
(r'}', Other, '#pop'),
(r'[^{}]+', Other),
],
'space': [
(r'[ \t\n\r]+', Whitespace),
(r'#[^\n]*', Comment.Single),
],
'end': [
(r'end\b', Keyword, '#pop'),
],
}
class TreetopLexer(DelegatingLexer):
"""
A lexer for `Treetop <http://treetop.rubyforge.org/>`_ grammars.
*New in Pygments 1.6.*
"""
name = 'Treetop'
aliases = ['treetop']
filenames = ['*.treetop', '*.tt']
def __init__(self, **options):
super(TreetopLexer, self).__init__(RubyLexer, TreetopBaseLexer, **options)
| gpl-2.0 |
Jeff-Tian/mybnb | Python27/Lib/site-packages/pip/_vendor/distlib/metadata.py | 427 | 38314 | # -*- coding: utf-8 -*-
#
# Copyright (C) 2012 The Python Software Foundation.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
"""Implementation of the Metadata for Python packages PEPs.
Supports all metadata formats (1.0, 1.1, 1.2, and 2.0 experimental).
"""
from __future__ import unicode_literals
import codecs
from email import message_from_file
import json
import logging
import re
from . import DistlibException, __version__
from .compat import StringIO, string_types, text_type
from .markers import interpret
from .util import extract_by_key, get_extras
from .version import get_scheme, PEP440_VERSION_RE
logger = logging.getLogger(__name__)
class MetadataMissingError(DistlibException):
"""A required metadata is missing"""
class MetadataConflictError(DistlibException):
"""Attempt to read or write metadata fields that are conflictual."""
class MetadataUnrecognizedVersionError(DistlibException):
"""Unknown metadata version number."""
class MetadataInvalidError(DistlibException):
"""A metadata value is invalid"""
# public API of this module
__all__ = ['Metadata', 'PKG_INFO_ENCODING', 'PKG_INFO_PREFERRED_VERSION']
# Encoding used for the PKG-INFO files
PKG_INFO_ENCODING = 'utf-8'
# preferred version. Hopefully will be changed
# to 1.2 once PEP 345 is supported everywhere
PKG_INFO_PREFERRED_VERSION = '1.1'
_LINE_PREFIX = re.compile('\n \|')
_241_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
'Summary', 'Description',
'Keywords', 'Home-page', 'Author', 'Author-email',
'License')
_314_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
'Supported-Platform', 'Summary', 'Description',
'Keywords', 'Home-page', 'Author', 'Author-email',
'License', 'Classifier', 'Download-URL', 'Obsoletes',
'Provides', 'Requires')
_314_MARKERS = ('Obsoletes', 'Provides', 'Requires', 'Classifier',
'Download-URL')
_345_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
'Supported-Platform', 'Summary', 'Description',
'Keywords', 'Home-page', 'Author', 'Author-email',
'Maintainer', 'Maintainer-email', 'License',
'Classifier', 'Download-URL', 'Obsoletes-Dist',
'Project-URL', 'Provides-Dist', 'Requires-Dist',
'Requires-Python', 'Requires-External')
_345_MARKERS = ('Provides-Dist', 'Requires-Dist', 'Requires-Python',
'Obsoletes-Dist', 'Requires-External', 'Maintainer',
'Maintainer-email', 'Project-URL')
_426_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
'Supported-Platform', 'Summary', 'Description',
'Keywords', 'Home-page', 'Author', 'Author-email',
'Maintainer', 'Maintainer-email', 'License',
'Classifier', 'Download-URL', 'Obsoletes-Dist',
'Project-URL', 'Provides-Dist', 'Requires-Dist',
'Requires-Python', 'Requires-External', 'Private-Version',
'Obsoleted-By', 'Setup-Requires-Dist', 'Extension',
'Provides-Extra')
_426_MARKERS = ('Private-Version', 'Provides-Extra', 'Obsoleted-By',
'Setup-Requires-Dist', 'Extension')
_ALL_FIELDS = set()
_ALL_FIELDS.update(_241_FIELDS)
_ALL_FIELDS.update(_314_FIELDS)
_ALL_FIELDS.update(_345_FIELDS)
_ALL_FIELDS.update(_426_FIELDS)
EXTRA_RE = re.compile(r'''extra\s*==\s*("([^"]+)"|'([^']+)')''')
def _version2fieldlist(version):
if version == '1.0':
return _241_FIELDS
elif version == '1.1':
return _314_FIELDS
elif version == '1.2':
return _345_FIELDS
elif version == '2.0':
return _426_FIELDS
raise MetadataUnrecognizedVersionError(version)
def _best_version(fields):
"""Detect the best version depending on the fields used."""
def _has_marker(keys, markers):
for marker in markers:
if marker in keys:
return True
return False
keys = []
for key, value in fields.items():
if value in ([], 'UNKNOWN', None):
continue
keys.append(key)
possible_versions = ['1.0', '1.1', '1.2', '2.0']
# first let's try to see if a field is not part of one of the version
for key in keys:
if key not in _241_FIELDS and '1.0' in possible_versions:
possible_versions.remove('1.0')
if key not in _314_FIELDS and '1.1' in possible_versions:
possible_versions.remove('1.1')
if key not in _345_FIELDS and '1.2' in possible_versions:
possible_versions.remove('1.2')
if key not in _426_FIELDS and '2.0' in possible_versions:
possible_versions.remove('2.0')
# possible_versions contains qualified versions
if len(possible_versions) == 1:
return possible_versions[0] # found !
elif len(possible_versions) == 0:
raise MetadataConflictError('Unknown metadata set')
# let's see if one unique marker is found
is_1_1 = '1.1' in possible_versions and _has_marker(keys, _314_MARKERS)
is_1_2 = '1.2' in possible_versions and _has_marker(keys, _345_MARKERS)
is_2_0 = '2.0' in possible_versions and _has_marker(keys, _426_MARKERS)
if int(is_1_1) + int(is_1_2) + int(is_2_0) > 1:
raise MetadataConflictError('You used incompatible 1.1/1.2/2.0 fields')
# we have the choice, 1.0, or 1.2, or 2.0
# - 1.0 has a broken Summary field but works with all tools
# - 1.1 is to avoid
# - 1.2 fixes Summary but has little adoption
# - 2.0 adds more features and is very new
if not is_1_1 and not is_1_2 and not is_2_0:
# we couldn't find any specific marker
if PKG_INFO_PREFERRED_VERSION in possible_versions:
return PKG_INFO_PREFERRED_VERSION
if is_1_1:
return '1.1'
if is_1_2:
return '1.2'
return '2.0'
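# A small worked sketch of the resolution above, traced by hand and shown as
# comments; the field dictionaries are assumed inputs, not taken from a real
# PKG-INFO file.
#
#   _best_version({'Name': 'foo', 'Version': '1.0'})
#       -> '1.1' (no version-specific marker, so PKG_INFO_PREFERRED_VERSION wins)
#   _best_version({'Name': 'foo', 'Version': '1.0', 'Requires-Dist': ['bar']})
#       -> '1.2' ('Requires-Dist' is a 1.2 marker and rules out 1.0/1.1)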
_ATTR2FIELD = {
'metadata_version': 'Metadata-Version',
'name': 'Name',
'version': 'Version',
'platform': 'Platform',
'supported_platform': 'Supported-Platform',
'summary': 'Summary',
'description': 'Description',
'keywords': 'Keywords',
'home_page': 'Home-page',
'author': 'Author',
'author_email': 'Author-email',
'maintainer': 'Maintainer',
'maintainer_email': 'Maintainer-email',
'license': 'License',
'classifier': 'Classifier',
'download_url': 'Download-URL',
'obsoletes_dist': 'Obsoletes-Dist',
'provides_dist': 'Provides-Dist',
'requires_dist': 'Requires-Dist',
'setup_requires_dist': 'Setup-Requires-Dist',
'requires_python': 'Requires-Python',
'requires_external': 'Requires-External',
'requires': 'Requires',
'provides': 'Provides',
'obsoletes': 'Obsoletes',
'project_url': 'Project-URL',
'private_version': 'Private-Version',
'obsoleted_by': 'Obsoleted-By',
'extension': 'Extension',
'provides_extra': 'Provides-Extra',
}
_PREDICATE_FIELDS = ('Requires-Dist', 'Obsoletes-Dist', 'Provides-Dist')
_VERSIONS_FIELDS = ('Requires-Python',)
_VERSION_FIELDS = ('Version',)
_LISTFIELDS = ('Platform', 'Classifier', 'Obsoletes',
'Requires', 'Provides', 'Obsoletes-Dist',
'Provides-Dist', 'Requires-Dist', 'Requires-External',
'Project-URL', 'Supported-Platform', 'Setup-Requires-Dist',
'Provides-Extra', 'Extension')
_LISTTUPLEFIELDS = ('Project-URL',)
_ELEMENTSFIELD = ('Keywords',)
_UNICODEFIELDS = ('Author', 'Maintainer', 'Summary', 'Description')
_MISSING = object()
_FILESAFE = re.compile('[^A-Za-z0-9.]+')
def _get_name_and_version(name, version, for_filename=False):
"""Return the distribution name with version.
If for_filename is true, return a filename-escaped form."""
if for_filename:
# For both name and version, any run of characters that is neither
# alphanumeric nor '.' is replaced with a single '-'. Additionally any
# spaces in the version string become '.'
name = _FILESAFE.sub('-', name)
version = _FILESAFE.sub('-', version.replace(' ', '.'))
return '%s-%s' % (name, version)
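# A quick sketch of the escaping performed above (assumed example values,
# traced by hand):
#
#   _get_name_and_version('my pkg', '1.0 beta')                     -> 'my pkg-1.0 beta'
#   _get_name_and_version('my pkg', '1.0 beta', for_filename=True)  -> 'my-pkg-1.0.beta'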
class LegacyMetadata(object):
"""The legacy metadata of a release.
Supports versions 1.0, 1.1 and 1.2 (auto-detected). You can
instantiate the class with one of these arguments (or none):
- *path*, the path to a metadata file
- *fileobj* give a file-like object with metadata as content
- *mapping* is a dict-like object
- *scheme* is a version scheme name
"""
# TODO document the mapping API and UNKNOWN default key
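# A minimal usage sketch, shown as comments; the mapping keys use the
# attribute-style spellings from _ATTR2FIELD, and the values are only
# assumed examples.
#
#   md = LegacyMetadata(mapping={'name': 'example', 'version': '0.1'})
#   md['Summary'] = 'An example distribution'
#   md.get_fullname()        # -> 'example-0.1'
#   md.check()               # -> (missing, warnings), e.g. missing Home-page/Author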
def __init__(self, path=None, fileobj=None, mapping=None,
scheme='default'):
if [path, fileobj, mapping].count(None) < 2:
raise TypeError('path, fileobj and mapping are exclusive')
self._fields = {}
self.requires_files = []
self._dependencies = None
self.scheme = scheme
if path is not None:
self.read(path)
elif fileobj is not None:
self.read_file(fileobj)
elif mapping is not None:
self.update(mapping)
self.set_metadata_version()
def set_metadata_version(self):
self._fields['Metadata-Version'] = _best_version(self._fields)
def _write_field(self, fileobj, name, value):
fileobj.write('%s: %s\n' % (name, value))
def __getitem__(self, name):
return self.get(name)
def __setitem__(self, name, value):
return self.set(name, value)
def __delitem__(self, name):
field_name = self._convert_name(name)
try:
del self._fields[field_name]
except KeyError:
raise KeyError(name)
def __contains__(self, name):
return (name in self._fields or
self._convert_name(name) in self._fields)
def _convert_name(self, name):
if name in _ALL_FIELDS:
return name
name = name.replace('-', '_').lower()
return _ATTR2FIELD.get(name, name)
def _default_value(self, name):
if name in _LISTFIELDS or name in _ELEMENTSFIELD:
return []
return 'UNKNOWN'
def _remove_line_prefix(self, value):
return _LINE_PREFIX.sub('\n', value)
def __getattr__(self, name):
if name in _ATTR2FIELD:
return self[name]
raise AttributeError(name)
#
# Public API
#
# dependencies = property(_get_dependencies, _set_dependencies)
def get_fullname(self, filesafe=False):
"""Return the distribution name with version.
If filesafe is true, return a filename-escaped form."""
return _get_name_and_version(self['Name'], self['Version'], filesafe)
def is_field(self, name):
"""return True if name is a valid metadata key"""
name = self._convert_name(name)
return name in _ALL_FIELDS
def is_multi_field(self, name):
name = self._convert_name(name)
return name in _LISTFIELDS
def read(self, filepath):
"""Read the metadata values from a file path."""
fp = codecs.open(filepath, 'r', encoding='utf-8')
try:
self.read_file(fp)
finally:
fp.close()
def read_file(self, fileob):
"""Read the metadata values from a file object."""
msg = message_from_file(fileob)
self._fields['Metadata-Version'] = msg['metadata-version']
# When reading, get all the fields we can
for field in _ALL_FIELDS:
if field not in msg:
continue
if field in _LISTFIELDS:
# we can have multiple lines
values = msg.get_all(field)
if field in _LISTTUPLEFIELDS and values is not None:
values = [tuple(value.split(',')) for value in values]
self.set(field, values)
else:
# single line
value = msg[field]
if value is not None and value != 'UNKNOWN':
self.set(field, value)
self.set_metadata_version()
def write(self, filepath, skip_unknown=False):
"""Write the metadata fields to filepath."""
fp = codecs.open(filepath, 'w', encoding='utf-8')
try:
self.write_file(fp, skip_unknown)
finally:
fp.close()
def write_file(self, fileobject, skip_unknown=False):
"""Write the PKG-INFO format data to a file object."""
self.set_metadata_version()
for field in _version2fieldlist(self['Metadata-Version']):
values = self.get(field)
if skip_unknown and values in ('UNKNOWN', [], ['UNKNOWN']):
continue
if field in _ELEMENTSFIELD:
self._write_field(fileobject, field, ','.join(values))
continue
if field not in _LISTFIELDS:
if field == 'Description':
values = values.replace('\n', '\n |')
values = [values]
if field in _LISTTUPLEFIELDS:
values = [','.join(value) for value in values]
for value in values:
self._write_field(fileobject, field, value)
def update(self, other=None, **kwargs):
"""Set metadata values from the given iterable `other` and kwargs.
Behavior is like `dict.update`: If `other` has a ``keys`` method,
they are looped over and ``self[key]`` is assigned ``other[key]``.
Else, ``other`` is an iterable of ``(key, value)`` iterables.
Keys that don't match a metadata field or that have an empty value are
dropped.
"""
def _set(key, value):
if key in _ATTR2FIELD and value:
self.set(self._convert_name(key), value)
if not other:
# other is None or empty container
pass
elif hasattr(other, 'keys'):
for k in other.keys():
_set(k, other[k])
else:
for k, v in other:
_set(k, v)
if kwargs:
for k, v in kwargs.items():
_set(k, v)
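    # Illustrative usage sketch (not part of the original module): update()
    # accepts a mapping, an iterable of (key, value) pairs, or keyword
    # arguments, mirroring dict.update. The field values below are
    # hypothetical.
    #
    #   md = LegacyMetadata()
    #   md.update({'name': 'example-dist', 'version': '0.1'})
    #   md.update([('summary', 'An example distribution')], license='MIT')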
def set(self, name, value):
"""Control then set a metadata field."""
name = self._convert_name(name)
if ((name in _ELEMENTSFIELD or name == 'Platform') and
not isinstance(value, (list, tuple))):
if isinstance(value, string_types):
value = [v.strip() for v in value.split(',')]
else:
value = []
elif (name in _LISTFIELDS and
not isinstance(value, (list, tuple))):
if isinstance(value, string_types):
value = [value]
else:
value = []
if logger.isEnabledFor(logging.WARNING):
project_name = self['Name']
scheme = get_scheme(self.scheme)
if name in _PREDICATE_FIELDS and value is not None:
for v in value:
# check that the values are valid
if not scheme.is_valid_matcher(v.split(';')[0]):
logger.warning(
'%r: %r is not valid (field %r)',
project_name, v, name)
# FIXME this rejects UNKNOWN, is that right?
elif name in _VERSIONS_FIELDS and value is not None:
if not scheme.is_valid_constraint_list(value):
logger.warning('%r: %r is not a valid version (field %r)',
project_name, value, name)
elif name in _VERSION_FIELDS and value is not None:
if not scheme.is_valid_version(value):
logger.warning('%r: %r is not a valid version (field %r)',
project_name, value, name)
if name in _UNICODEFIELDS:
if name == 'Description':
value = self._remove_line_prefix(value)
self._fields[name] = value
def get(self, name, default=_MISSING):
"""Get a metadata field."""
name = self._convert_name(name)
if name not in self._fields:
if default is _MISSING:
default = self._default_value(name)
return default
if name in _UNICODEFIELDS:
value = self._fields[name]
return value
elif name in _LISTFIELDS:
value = self._fields[name]
if value is None:
return []
res = []
for val in value:
if name not in _LISTTUPLEFIELDS:
res.append(val)
else:
# That's for Project-URL
res.append((val[0], val[1]))
return res
elif name in _ELEMENTSFIELD:
value = self._fields[name]
if isinstance(value, string_types):
return value.split(',')
return self._fields[name]
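    # Illustrative sketch (not part of the original module): set() coerces
    # comma-separated strings for element fields such as Platform, and get()
    # falls back to a per-field default when a field is missing. Values are
    # hypothetical.
    #
    #   md.set('Platform', 'linux, win32')   # stored as ['linux', 'win32']
    #   md.get('Requires-Dist')              # -> [] when unset
    #   md.get('Summary')                    # -> 'UNKNOWN' when unset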
def check(self, strict=False):
"""Check if the metadata is compliant. If strict is True then raise if
no Name or Version are provided"""
self.set_metadata_version()
# XXX should check the versions (if the file was loaded)
missing, warnings = [], []
for attr in ('Name', 'Version'): # required by PEP 345
if attr not in self:
missing.append(attr)
if strict and missing != []:
msg = 'missing required metadata: %s' % ', '.join(missing)
raise MetadataMissingError(msg)
for attr in ('Home-page', 'Author'):
if attr not in self:
missing.append(attr)
# checking metadata 1.2 (XXX needs to check 1.1, 1.0)
if self['Metadata-Version'] != '1.2':
return missing, warnings
scheme = get_scheme(self.scheme)
def are_valid_constraints(value):
for v in value:
if not scheme.is_valid_matcher(v.split(';')[0]):
return False
return True
for fields, controller in ((_PREDICATE_FIELDS, are_valid_constraints),
(_VERSIONS_FIELDS,
scheme.is_valid_constraint_list),
(_VERSION_FIELDS,
scheme.is_valid_version)):
for field in fields:
value = self.get(field, None)
if value is not None and not controller(value):
warnings.append('Wrong value for %r: %s' % (field, value))
return missing, warnings
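    # Illustrative sketch (not part of the original module): check() returns a
    # (missing, warnings) pair, and with strict=True it raises
    # MetadataMissingError when Name or Version is absent. The values shown
    # are hypothetical.
    #
    #   missing, warnings = md.check()
    #   # e.g. missing == ['Home-page', 'Author'], warnings == []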
def todict(self, skip_missing=False):
"""Return fields as a dict.
Field names will be converted to use the underscore-lowercase style
instead of hyphen-mixed case (i.e. home_page instead of Home-page).
"""
self.set_metadata_version()
mapping_1_0 = (
('metadata_version', 'Metadata-Version'),
('name', 'Name'),
('version', 'Version'),
('summary', 'Summary'),
('home_page', 'Home-page'),
('author', 'Author'),
('author_email', 'Author-email'),
('license', 'License'),
('description', 'Description'),
('keywords', 'Keywords'),
('platform', 'Platform'),
('classifier', 'Classifier'),
('download_url', 'Download-URL'),
)
data = {}
for key, field_name in mapping_1_0:
if not skip_missing or field_name in self._fields:
data[key] = self[field_name]
if self['Metadata-Version'] == '1.2':
mapping_1_2 = (
('requires_dist', 'Requires-Dist'),
('requires_python', 'Requires-Python'),
('requires_external', 'Requires-External'),
('provides_dist', 'Provides-Dist'),
('obsoletes_dist', 'Obsoletes-Dist'),
('project_url', 'Project-URL'),
('maintainer', 'Maintainer'),
('maintainer_email', 'Maintainer-email'),
)
for key, field_name in mapping_1_2:
if not skip_missing or field_name in self._fields:
if key != 'project_url':
data[key] = self[field_name]
else:
data[key] = [','.join(u) for u in self[field_name]]
elif self['Metadata-Version'] == '1.1':
mapping_1_1 = (
('provides', 'Provides'),
('requires', 'Requires'),
('obsoletes', 'Obsoletes'),
)
for key, field_name in mapping_1_1:
if not skip_missing or field_name in self._fields:
data[key] = self[field_name]
return data
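    # Illustrative sketch (not part of the original module): todict() converts
    # hyphenated field names to attribute-style keys. Hypothetical output:
    #
    #   md.todict(skip_missing=True)
    #   # -> {'metadata_version': '1.0', 'name': 'example-dist',
    #   #     'version': '0.1', 'summary': 'An example distribution'}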
def add_requirements(self, requirements):
if self['Metadata-Version'] == '1.1':
# we can't have 1.1 metadata *and* Setuptools requires
for field in ('Obsoletes', 'Requires', 'Provides'):
if field in self:
del self[field]
self['Requires-Dist'] += requirements
# Mapping API
# TODO could add iter* variants
def keys(self):
return list(_version2fieldlist(self['Metadata-Version']))
def __iter__(self):
for key in self.keys():
yield key
def values(self):
return [self[key] for key in self.keys()]
def items(self):
return [(key, self[key]) for key in self.keys()]
def __repr__(self):
return '<%s %s %s>' % (self.__class__.__name__, self.name,
self.version)
METADATA_FILENAME = 'pydist.json'
class Metadata(object):
"""
The metadata of a release. This implementation uses 2.0 (JSON)
metadata where possible. If not possible, it wraps a LegacyMetadata
instance which handles the key-value metadata format.
"""
    METADATA_VERSION_MATCHER = re.compile(r'^\d+(\.\d+)*$')
NAME_MATCHER = re.compile('^[0-9A-Z]([0-9A-Z_.-]*[0-9A-Z])?$', re.I)
VERSION_MATCHER = PEP440_VERSION_RE
SUMMARY_MATCHER = re.compile('.{1,2047}')
METADATA_VERSION = '2.0'
GENERATOR = 'distlib (%s)' % __version__
MANDATORY_KEYS = {
'name': (),
'version': (),
'summary': ('legacy',),
}
INDEX_KEYS = ('name version license summary description author '
'author_email keywords platform home_page classifiers '
'download_url')
DEPENDENCY_KEYS = ('extras run_requires test_requires build_requires '
'dev_requires provides meta_requires obsoleted_by '
'supports_environments')
SYNTAX_VALIDATORS = {
'metadata_version': (METADATA_VERSION_MATCHER, ()),
'name': (NAME_MATCHER, ('legacy',)),
'version': (VERSION_MATCHER, ('legacy',)),
'summary': (SUMMARY_MATCHER, ('legacy',)),
}
__slots__ = ('_legacy', '_data', 'scheme')
def __init__(self, path=None, fileobj=None, mapping=None,
scheme='default'):
if [path, fileobj, mapping].count(None) < 2:
raise TypeError('path, fileobj and mapping are exclusive')
self._legacy = None
self._data = None
self.scheme = scheme
#import pdb; pdb.set_trace()
if mapping is not None:
try:
self._validate_mapping(mapping, scheme)
self._data = mapping
except MetadataUnrecognizedVersionError:
self._legacy = LegacyMetadata(mapping=mapping, scheme=scheme)
self.validate()
else:
data = None
if path:
with open(path, 'rb') as f:
data = f.read()
elif fileobj:
data = fileobj.read()
if data is None:
# Initialised with no args - to be added
self._data = {
'metadata_version': self.METADATA_VERSION,
'generator': self.GENERATOR,
}
else:
if not isinstance(data, text_type):
data = data.decode('utf-8')
try:
self._data = json.loads(data)
self._validate_mapping(self._data, scheme)
except ValueError:
# Note: MetadataUnrecognizedVersionError does not
# inherit from ValueError (it's a DistlibException,
# which should not inherit from ValueError).
# The ValueError comes from the json.load - if that
# succeeds and we get a validation error, we want
# that to propagate
self._legacy = LegacyMetadata(fileobj=StringIO(data),
scheme=scheme)
self.validate()
common_keys = set(('name', 'version', 'license', 'keywords', 'summary'))
none_list = (None, list)
none_dict = (None, dict)
mapped_keys = {
'run_requires': ('Requires-Dist', list),
'build_requires': ('Setup-Requires-Dist', list),
'dev_requires': none_list,
'test_requires': none_list,
'meta_requires': none_list,
'extras': ('Provides-Extra', list),
'modules': none_list,
'namespaces': none_list,
'exports': none_dict,
'commands': none_dict,
'classifiers': ('Classifier', list),
'source_url': ('Download-URL', None),
'metadata_version': ('Metadata-Version', None),
}
del none_list, none_dict
def __getattribute__(self, key):
common = object.__getattribute__(self, 'common_keys')
mapped = object.__getattribute__(self, 'mapped_keys')
if key in mapped:
lk, maker = mapped[key]
if self._legacy:
if lk is None:
result = None if maker is None else maker()
else:
result = self._legacy.get(lk)
else:
value = None if maker is None else maker()
if key not in ('commands', 'exports', 'modules', 'namespaces',
'classifiers'):
result = self._data.get(key, value)
else:
# special cases for PEP 459
sentinel = object()
result = sentinel
d = self._data.get('extensions')
if d:
if key == 'commands':
result = d.get('python.commands', value)
elif key == 'classifiers':
d = d.get('python.details')
if d:
result = d.get(key, value)
else:
d = d.get('python.exports')
if d:
result = d.get(key, value)
if result is sentinel:
result = value
elif key not in common:
result = object.__getattribute__(self, key)
elif self._legacy:
result = self._legacy.get(key)
else:
result = self._data.get(key)
return result
def _validate_value(self, key, value, scheme=None):
if key in self.SYNTAX_VALIDATORS:
pattern, exclusions = self.SYNTAX_VALIDATORS[key]
if (scheme or self.scheme) not in exclusions:
m = pattern.match(value)
if not m:
raise MetadataInvalidError('%r is an invalid value for '
'the %r property' % (value,
key))
def __setattr__(self, key, value):
self._validate_value(key, value)
common = object.__getattribute__(self, 'common_keys')
mapped = object.__getattribute__(self, 'mapped_keys')
if key in mapped:
lk, _ = mapped[key]
if self._legacy:
if lk is None:
raise NotImplementedError
self._legacy[lk] = value
elif key not in ('commands', 'exports', 'modules', 'namespaces',
'classifiers'):
self._data[key] = value
else:
# special cases for PEP 459
d = self._data.setdefault('extensions', {})
if key == 'commands':
d['python.commands'] = value
elif key == 'classifiers':
d = d.setdefault('python.details', {})
d[key] = value
else:
d = d.setdefault('python.exports', {})
d[key] = value
elif key not in common:
object.__setattr__(self, key, value)
else:
if key == 'keywords':
if isinstance(value, string_types):
value = value.strip()
if value:
value = value.split()
else:
value = []
if self._legacy:
self._legacy[key] = value
else:
self._data[key] = value
@property
def name_and_version(self):
return _get_name_and_version(self.name, self.version, True)
@property
def provides(self):
if self._legacy:
result = self._legacy['Provides-Dist']
else:
result = self._data.setdefault('provides', [])
s = '%s (%s)' % (self.name, self.version)
if s not in result:
result.append(s)
return result
@provides.setter
def provides(self, value):
if self._legacy:
self._legacy['Provides-Dist'] = value
else:
self._data['provides'] = value
def get_requirements(self, reqts, extras=None, env=None):
"""
Base method to get dependencies, given a set of extras
to satisfy and an optional environment context.
:param reqts: A list of sometimes-wanted dependencies,
perhaps dependent on extras and environment.
:param extras: A list of optional components being requested.
:param env: An optional environment for marker evaluation.
"""
if self._legacy:
result = reqts
else:
result = []
extras = get_extras(extras or [], self.extras)
for d in reqts:
if 'extra' not in d and 'environment' not in d:
# unconditional
include = True
else:
if 'extra' not in d:
# Not extra-dependent - only environment-dependent
include = True
else:
include = d.get('extra') in extras
if include:
# Not excluded because of extras, check environment
marker = d.get('environment')
if marker:
include = interpret(marker, env)
if include:
result.extend(d['requires'])
for key in ('build', 'dev', 'test'):
e = ':%s:' % key
if e in extras:
extras.remove(e)
# A recursive call, but it should terminate since 'test'
# has been removed from the extras
reqts = self._data.get('%s_requires' % key, [])
result.extend(self.get_requirements(reqts, extras=extras,
env=env))
return result
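    # Illustrative sketch (not part of the original module): requirement
    # entries may be unconditional, gated on an extra, or gated on an
    # environment marker. The entries and package names below are
    # hypothetical.
    #
    #   reqts = [
    #       {'requires': ['requests']},                      # unconditional
    #       {'extra': 'ssl', 'requires': ['pyOpenSSL']},     # extra-gated
    #       {'environment': 'python_version < "3"',
    #        'requires': ['futures']},                       # marker-gated
    #   ]
    #   md.get_requirements(reqts, extras=['ssl'], env={'python_version': '2.7'})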
@property
def dictionary(self):
if self._legacy:
return self._from_legacy()
return self._data
@property
def dependencies(self):
if self._legacy:
raise NotImplementedError
else:
return extract_by_key(self._data, self.DEPENDENCY_KEYS)
@dependencies.setter
def dependencies(self, value):
if self._legacy:
raise NotImplementedError
else:
self._data.update(value)
def _validate_mapping(self, mapping, scheme):
if mapping.get('metadata_version') != self.METADATA_VERSION:
raise MetadataUnrecognizedVersionError()
missing = []
for key, exclusions in self.MANDATORY_KEYS.items():
if key not in mapping:
if scheme not in exclusions:
missing.append(key)
if missing:
msg = 'Missing metadata items: %s' % ', '.join(missing)
raise MetadataMissingError(msg)
for k, v in mapping.items():
self._validate_value(k, v, scheme)
def validate(self):
if self._legacy:
missing, warnings = self._legacy.check(True)
if missing or warnings:
logger.warning('Metadata: missing: %s, warnings: %s',
missing, warnings)
else:
self._validate_mapping(self._data, self.scheme)
def todict(self):
if self._legacy:
return self._legacy.todict(True)
else:
result = extract_by_key(self._data, self.INDEX_KEYS)
return result
def _from_legacy(self):
assert self._legacy and not self._data
result = {
'metadata_version': self.METADATA_VERSION,
'generator': self.GENERATOR,
}
lmd = self._legacy.todict(True) # skip missing ones
for k in ('name', 'version', 'license', 'summary', 'description',
'classifier'):
if k in lmd:
if k == 'classifier':
nk = 'classifiers'
else:
nk = k
result[nk] = lmd[k]
kw = lmd.get('Keywords', [])
if kw == ['']:
kw = []
result['keywords'] = kw
keys = (('requires_dist', 'run_requires'),
('setup_requires_dist', 'build_requires'))
for ok, nk in keys:
if ok in lmd and lmd[ok]:
result[nk] = [{'requires': lmd[ok]}]
result['provides'] = self.provides
author = {}
maintainer = {}
return result
LEGACY_MAPPING = {
'name': 'Name',
'version': 'Version',
'license': 'License',
'summary': 'Summary',
'description': 'Description',
'classifiers': 'Classifier',
}
def _to_legacy(self):
def process_entries(entries):
reqts = set()
for e in entries:
extra = e.get('extra')
env = e.get('environment')
rlist = e['requires']
for r in rlist:
if not env and not extra:
reqts.add(r)
else:
marker = ''
if extra:
marker = 'extra == "%s"' % extra
if env:
if marker:
marker = '(%s) and %s' % (env, marker)
else:
marker = env
reqts.add(';'.join((r, marker)))
return reqts
assert self._data and not self._legacy
result = LegacyMetadata()
nmd = self._data
for nk, ok in self.LEGACY_MAPPING.items():
if nk in nmd:
result[ok] = nmd[nk]
r1 = process_entries(self.run_requires + self.meta_requires)
r2 = process_entries(self.build_requires + self.dev_requires)
if self.extras:
result['Provides-Extra'] = sorted(self.extras)
result['Requires-Dist'] = sorted(r1)
result['Setup-Requires-Dist'] = sorted(r2)
# TODO: other fields such as contacts
return result
def write(self, path=None, fileobj=None, legacy=False, skip_unknown=True):
if [path, fileobj].count(None) != 1:
raise ValueError('Exactly one of path and fileobj is needed')
self.validate()
if legacy:
if self._legacy:
legacy_md = self._legacy
else:
legacy_md = self._to_legacy()
if path:
legacy_md.write(path, skip_unknown=skip_unknown)
else:
legacy_md.write_file(fileobj, skip_unknown=skip_unknown)
else:
if self._legacy:
d = self._from_legacy()
else:
d = self._data
if fileobj:
json.dump(d, fileobj, ensure_ascii=True, indent=2,
sort_keys=True)
else:
with codecs.open(path, 'w', 'utf-8') as f:
json.dump(d, f, ensure_ascii=True, indent=2,
sort_keys=True)
def add_requirements(self, requirements):
if self._legacy:
self._legacy.add_requirements(requirements)
else:
run_requires = self._data.setdefault('run_requires', [])
always = None
for entry in run_requires:
if 'environment' not in entry and 'extra' not in entry:
always = entry
break
if always is None:
always = { 'requires': requirements }
run_requires.insert(0, always)
else:
rset = set(always['requires']) | set(requirements)
always['requires'] = sorted(rset)
def __repr__(self):
name = self.name or '(no name)'
version = self.version or 'no version'
return '<%s %s %s (%s)>' % (self.__class__.__name__,
self.metadata_version, name, version)
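# Illustrative usage sketch (not part of the original module): a Metadata
# instance can be serialised either as JSON (pydist.json) or in the legacy
# key-value format. The values and paths below are hypothetical.
#
#   md = Metadata(mapping={'metadata_version': '2.0',
#                          'name': 'example-dist',
#                          'version': '0.1',
#                          'summary': 'An example distribution'})
#   md.write(path='pydist.json')               # JSON (2.0) form
#   md.write(path='PKG-INFO', legacy=True)     # legacy key-value form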
| apache-2.0 |
yifanzh/ohmypaw | red.py | 1 | 22850 | import asyncio
import os
import sys
sys.path.insert(0, "lib")
import logging
import logging.handlers
import traceback
import datetime
import subprocess
try:
from discord.ext import commands
import discord
except ImportError:
print("Discord.py is not installed.\n"
"Consult the guide for your operating system "
"and do ALL the steps in order.\n"
"https://twentysix26.github.io/Red-Docs/\n")
sys.exit(1)
from cogs.utils.settings import Settings
from cogs.utils.dataIO import dataIO
from cogs.utils.chat_formatting import inline
from collections import Counter
from io import TextIOWrapper
#
# Red, a Discord bot by Twentysix, based on discord.py and its command
# extension.
#
# https://github.com/Twentysix26/
#
#
# red.py and cogs/utils/checks.py both contain some modified functions
# originally made by Rapptz.
#
# https://github.com/Rapptz/RoboDanny/
#
description = "Red - A multifunction Discord bot by Twentysix"
class Bot(commands.Bot):
def __init__(self, *args, **kwargs):
def prefix_manager(bot, message):
"""
Returns prefixes of the message's server if set.
If none are set or if the message's server is None
it will return the global prefixes instead.
Requires a Bot instance and a Message object to be
passed as arguments.
"""
return bot.settings.get_prefixes(message.server)
self.counter = Counter()
self.uptime = datetime.datetime.utcnow() # Refreshed before login
self._message_modifiers = []
self.settings = Settings()
self._intro_displayed = False
self._shutdown_mode = None
self.logger = set_logger(self)
self._last_exception = None
self.oauth_url = ""
if 'self_bot' in kwargs:
self.settings.self_bot = kwargs['self_bot']
else:
kwargs['self_bot'] = self.settings.self_bot
if self.settings.self_bot:
kwargs['pm_help'] = False
super().__init__(*args, command_prefix=prefix_manager, **kwargs)
async def send_message(self, *args, **kwargs):
if self._message_modifiers:
if "content" in kwargs:
pass
elif len(args) == 2:
args = list(args)
kwargs["content"] = args.pop()
else:
return await super().send_message(*args, **kwargs)
content = kwargs['content']
for m in self._message_modifiers:
try:
content = str(m(content))
except: # Faulty modifiers should not
pass # break send_message
kwargs['content'] = content
return await super().send_message(*args, **kwargs)
async def shutdown(self, *, restart=False):
"""Gracefully quits Red with exit code 0
If restart is True, the exit code will be 26 instead
The launcher automatically restarts Red when that happens"""
self._shutdown_mode = not restart
await self.logout()
def add_message_modifier(self, func):
"""
Adds a message modifier to the bot
A message modifier is a callable that accepts a message's
content as the first positional argument.
Before a message gets sent, func will get called with
the message's content as the only argument. The message's
content will then be modified to be the func's return
value.
        Exceptions thrown by the callable will be caught and
silenced.
"""
if not callable(func):
raise TypeError("The message modifier function "
"must be a callable.")
self._message_modifiers.append(func)
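    # Illustrative sketch (not part of the original source): any callable that
    # takes the outgoing content and returns a string can act as a modifier.
    # The function below is hypothetical.
    #
    #   def shout(content):
    #       return content.upper()
    #
    #   bot.add_message_modifier(shout)     # every send_message is upper-cased
    #   bot.remove_message_modifier(shout)  # back to normal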
def remove_message_modifier(self, func):
"""Removes a message modifier from the bot"""
if func not in self._message_modifiers:
raise RuntimeError("Function not present in the message "
"modifiers.")
self._message_modifiers.remove(func)
def clear_message_modifiers(self):
"""Removes all message modifiers from the bot"""
self._message_modifiers.clear()
async def send_cmd_help(self, ctx):
if ctx.invoked_subcommand:
pages = self.formatter.format_help_for(ctx, ctx.invoked_subcommand)
for page in pages:
await self.send_message(ctx.message.channel, page)
else:
pages = self.formatter.format_help_for(ctx, ctx.command)
for page in pages:
await self.send_message(ctx.message.channel, page)
def user_allowed(self, message):
author = message.author
if author.bot:
return False
if author == self.user:
return self.settings.self_bot
mod_cog = self.get_cog('Mod')
global_ignores = self.get_cog('Owner').global_ignores
if self.settings.owner == author.id:
return True
if author.id in global_ignores["blacklist"]:
return False
if global_ignores["whitelist"]:
if author.id not in global_ignores["whitelist"]:
return False
if not message.channel.is_private:
server = message.server
names = (self.settings.get_server_admin(
server), self.settings.get_server_mod(server))
results = map(
lambda name: discord.utils.get(author.roles, name=name),
names)
for r in results:
if r is not None:
return True
if mod_cog is not None:
if not message.channel.is_private:
if message.server.id in mod_cog.ignore_list["SERVERS"]:
return False
if message.channel.id in mod_cog.ignore_list["CHANNELS"]:
return False
return True
async def pip_install(self, name, *, timeout=None):
"""
Installs a pip package in the local 'lib' folder in a thread safe
way. On Mac systems the 'lib' folder is not used.
Can specify the max seconds to wait for the task to complete
Returns a bool indicating if the installation was successful
"""
IS_MAC = sys.platform == "darwin"
interpreter = sys.executable
if interpreter is None:
raise RuntimeError("Couldn't find Python's interpreter")
args = [
interpreter, "-m",
"pip", "install",
"--upgrade",
"--target", "lib",
name
]
if IS_MAC: # --target is a problem on Homebrew. See PR #552
args.remove("--target")
args.remove("lib")
def install():
code = subprocess.call(args)
sys.path_importer_cache = {}
return not bool(code)
response = self.loop.run_in_executor(None, install)
return await asyncio.wait_for(response, timeout=timeout)
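    # Illustrative sketch (not part of the original source): pip_install is a
    # coroutine and must be awaited; the package name and timeout below are
    # hypothetical.
    #
    #   ok = await bot.pip_install("pillow", timeout=300)
    #   if not ok:
    #       print("Installation failed")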
class Formatter(commands.HelpFormatter):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def _add_subcommands_to_page(self, max_width, commands):
for name, command in sorted(commands, key=lambda t: t[0]):
if name in command.aliases:
# skip aliases
continue
entry = ' {0:<{width}} {1}'.format(name, command.short_doc,
width=max_width)
shortened = self.shorten(entry)
self._paginator.add_line(shortened)
def initialize(bot_class=Bot, formatter_class=Formatter):
formatter = formatter_class(show_check_failure=False)
bot = bot_class(formatter=formatter, description=description, pm_help=None)
import __main__
__main__.send_cmd_help = bot.send_cmd_help # Backwards
__main__.user_allowed = bot.user_allowed # compatibility
__main__.settings = bot.settings # sucks
async def get_oauth_url():
try:
data = await bot.application_info()
except Exception as e:
return "Couldn't retrieve invite link.Error: {}".format(e)
return discord.utils.oauth_url(data.id)
async def set_bot_owner():
if bot.settings.self_bot:
bot.settings.owner = bot.user.id
return "[Selfbot mode]"
if bot.settings.owner:
owner = discord.utils.get(bot.get_all_members(),
id=bot.settings.owner)
if not owner:
try:
owner = await bot.get_user_info(bot.settings.owner)
except:
owner = None
if not owner:
owner = bot.settings.owner # Just the ID then
return owner
how_to = "Do `[p]set owner` in chat to set it"
if bot.user.bot: # Can fetch owner
try:
data = await bot.application_info()
bot.settings.owner = data.owner.id
bot.settings.save_settings()
return data.owner
except:
return "Failed to fetch owner. " + how_to
else:
return "Yet to be set. " + how_to
@bot.event
async def on_ready():
if bot._intro_displayed:
return
bot._intro_displayed = True
owner_cog = bot.get_cog('Owner')
total_cogs = len(owner_cog._list_cogs())
users = len(set(bot.get_all_members()))
servers = len(bot.servers)
channels = len([c for c in bot.get_all_channels()])
login_time = datetime.datetime.utcnow() - bot.uptime
login_time = login_time.seconds + login_time.microseconds/1E6
print("Login successful. ({}ms)\n".format(login_time))
owner = await set_bot_owner()
print("-----------------")
print("Red - Discord Bot")
print("-----------------")
print(str(bot.user))
print("\nConnected to:")
print("{} servers".format(servers))
print("{} channels".format(channels))
print("{} users\n".format(users))
prefix_label = 'Prefix'
if len(bot.settings.prefixes) > 1:
prefix_label += 'es'
print("{}: {}".format(prefix_label, " ".join(bot.settings.prefixes)))
print("Owner: " + str(owner))
print("{}/{} active cogs with {} commands".format(
len(bot.cogs), total_cogs, len(bot.commands)))
print("-----------------")
if bot.settings.token and not bot.settings.self_bot:
print("\nUse this url to bring your bot to a server:")
url = await get_oauth_url()
bot.oauth_url = url
print(url)
print("\nOfficial server: https://discord.gg/red")
print("Make sure to keep your bot updated. Select the 'Update' "
"option from the launcher.")
await bot.get_cog('Owner').disable_commands()
@bot.event
async def on_resumed():
bot.counter["session_resumed"] += 1
@bot.event
async def on_command(command, ctx):
bot.counter["processed_commands"] += 1
@bot.event
async def on_message(message):
bot.counter["messages_read"] += 1
if bot.user_allowed(message):
await bot.process_commands(message)
@bot.event
async def on_command_error(error, ctx):
channel = ctx.message.channel
if isinstance(error, commands.MissingRequiredArgument):
await bot.send_cmd_help(ctx)
elif isinstance(error, commands.BadArgument):
await bot.send_cmd_help(ctx)
elif isinstance(error, commands.DisabledCommand):
await bot.send_message(channel, "That command is disabled.")
elif isinstance(error, commands.CommandInvokeError):
# A bit hacky, couldn't find a better way
no_dms = "Cannot send messages to this user"
is_help_cmd = ctx.command.qualified_name == "help"
is_forbidden = isinstance(error.original, discord.Forbidden)
if is_help_cmd and is_forbidden and error.original.text == no_dms:
msg = ("I couldn't send the help message to you in DM. Either"
" you blocked me or you disabled DMs in this server.")
await bot.send_message(channel, msg)
return
bot.logger.exception("Exception in command '{}'".format(
ctx.command.qualified_name), exc_info=error.original)
message = ("Error in command '{}'. Check your console or "
"logs for details."
"".format(ctx.command.qualified_name))
log = ("Exception in command '{}'\n"
"".format(ctx.command.qualified_name))
log += "".join(traceback.format_exception(type(error), error,
error.__traceback__))
bot._last_exception = log
await ctx.bot.send_message(channel, inline(message))
elif isinstance(error, commands.CommandNotFound):
pass
elif isinstance(error, commands.CheckFailure):
pass
elif isinstance(error, commands.NoPrivateMessage):
await bot.send_message(channel, "That command is not "
"available in DMs.")
elif isinstance(error, commands.CommandOnCooldown):
await bot.send_message(channel, "This command is on cooldown. "
"Try again in {:.2f}s"
"".format(error.retry_after))
else:
bot.logger.exception(type(error).__name__, exc_info=error)
return bot
def check_folders():
folders = ("data", "data/red", "cogs", "cogs/utils")
for folder in folders:
if not os.path.exists(folder):
print("Creating " + folder + " folder...")
os.makedirs(folder)
def interactive_setup(settings):
first_run = settings.bot_settings == settings.default_settings
if first_run:
print("Red - First run configuration\n")
print("If you haven't already, create a new account:\n"
"https://twentysix26.github.io/Red-Docs/red_guide_bot_accounts/"
"#creating-a-new-bot-account")
print("and obtain your bot's token like described.")
if not settings.login_credentials:
print("\nInsert your bot's token:")
while settings.token is None and settings.email is None:
choice = input("> ")
if "@" not in choice and len(choice) >= 50: # Assuming token
settings.token = choice
elif "@" in choice:
settings.email = choice
settings.password = input("\nPassword> ")
else:
print("That doesn't look like a valid token.")
settings.save_settings()
if not settings.prefixes:
print("\nChoose a prefix. A prefix is what you type before a command."
"\nA typical prefix would be the exclamation mark.\n"
"Can be multiple characters. You will be able to change it "
"later and add more of them.\nChoose your prefix:")
confirmation = False
while confirmation is False:
new_prefix = ensure_reply("\nPrefix> ").strip()
print("\nAre you sure you want {0} as your prefix?\nYou "
"will be able to issue commands like this: {0}help"
"\nType yes to confirm or no to change it".format(
new_prefix))
confirmation = get_answer()
settings.prefixes = [new_prefix]
settings.save_settings()
if first_run:
print("\nInput the admin role's name. Anyone with this role in Discord"
" will be able to use the bot's admin commands")
print("Leave blank for default name (Transistor)")
settings.default_admin = input("\nAdmin role> ")
if settings.default_admin == "":
settings.default_admin = "Transistor"
settings.save_settings()
print("\nInput the moderator role's name. Anyone with this role in"
" Discord will be able to use the bot's mod commands")
print("Leave blank for default name (Process)")
settings.default_mod = input("\nModerator role> ")
if settings.default_mod == "":
settings.default_mod = "Process"
settings.save_settings()
print("\nThe configuration is done. Leave this window always open to"
" keep Red online.\nAll commands will have to be issued through"
" Discord's chat, *this window will now be read only*.\n"
"Please read this guide for a good overview on how Red works:\n"
"https://twentysix26.github.io/Red-Docs/red_getting_started/\n"
"Press enter to continue")
input("\n")
def set_logger(bot):
logger = logging.getLogger("red")
logger.setLevel(logging.DEBUG)
red_format = logging.Formatter(
'%(asctime)s %(levelname)s %(module)s %(funcName)s %(lineno)d: '
'%(message)s',
datefmt="[%d/%m/%Y %H:%M]")
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setFormatter(red_format)
if bot.settings.debug:
stdout_handler.setLevel(logging.DEBUG)
logger.setLevel(logging.DEBUG)
else:
stdout_handler.setLevel(logging.INFO)
logger.setLevel(logging.DEBUG)
fhandler = logging.handlers.RotatingFileHandler(
filename='data/red/red.log', encoding='utf-8', mode='a',
maxBytes=10**7, backupCount=5)
fhandler.setFormatter(red_format)
logger.addHandler(fhandler)
logger.addHandler(stdout_handler)
dpy_logger = logging.getLogger("discord")
if bot.settings.debug:
dpy_logger.setLevel(logging.DEBUG)
else:
dpy_logger.setLevel(logging.WARNING)
handler = logging.FileHandler(
filename='data/red/discord.log', encoding='utf-8', mode='a')
handler.setFormatter(logging.Formatter(
'%(asctime)s %(levelname)s %(module)s %(funcName)s %(lineno)d: '
'%(message)s',
datefmt="[%d/%m/%Y %H:%M]"))
dpy_logger.addHandler(handler)
return logger
def ensure_reply(msg):
choice = ""
while choice == "":
choice = input(msg)
return choice
def get_answer():
choices = ("yes", "y", "no", "n")
c = ""
while c not in choices:
c = input(">").lower()
if c.startswith("y"):
return True
else:
return False
def set_cog(cog, value): # TODO: move this out of red.py
data = dataIO.load_json("data/red/cogs.json")
data[cog] = value
dataIO.save_json("data/red/cogs.json", data)
def load_cogs(bot):
defaults = ("alias", "audio", "customcom", "downloader", "economy",
"general", "image", "mod", "streams", "trivia")
try:
registry = dataIO.load_json("data/red/cogs.json")
except:
registry = {}
bot.load_extension('cogs.owner')
owner_cog = bot.get_cog('Owner')
if owner_cog is None:
print("The owner cog is missing. It contains core functions without "
"which Red cannot function. Reinstall.")
exit(1)
if bot.settings._no_cogs:
bot.logger.debug("Skipping initial cogs loading (--no-cogs)")
if not os.path.isfile("data/red/cogs.json"):
dataIO.save_json("data/red/cogs.json", {})
return
failed = []
extensions = owner_cog._list_cogs()
if not registry: # All default cogs enabled by default
for ext in defaults:
registry["cogs." + ext] = True
for extension in extensions:
if extension.lower() == "cogs.owner":
continue
to_load = registry.get(extension, False)
if to_load:
try:
owner_cog._load_cog(extension)
except Exception as e:
print("{}: {}".format(e.__class__.__name__, str(e)))
bot.logger.exception(e)
failed.append(extension)
registry[extension] = False
dataIO.save_json("data/red/cogs.json", registry)
if failed:
print("\nFailed to load: {}\n".format(" ".join(failed)))
def main(bot):
check_folders()
if not bot.settings.no_prompt:
interactive_setup(bot.settings)
load_cogs(bot)
if bot.settings._dry_run:
print("Quitting: dry run")
bot._shutdown_mode = True
exit(0)
print("Logging into Discord...")
bot.uptime = datetime.datetime.utcnow()
if bot.settings.login_credentials:
yield from bot.login(*bot.settings.login_credentials,
bot=not bot.settings.self_bot)
else:
print("No credentials available to login.")
raise RuntimeError()
yield from bot.connect()
if __name__ == '__main__':
sys.stdout = TextIOWrapper(sys.stdout.detach(),
encoding=sys.stdout.encoding,
errors="replace",
line_buffering=True)
bot = initialize()
loop = asyncio.get_event_loop()
try:
loop.run_until_complete(main(bot))
except discord.LoginFailure:
bot.logger.error(traceback.format_exc())
if not bot.settings.no_prompt:
choice = input("Invalid login credentials. If they worked before "
"Discord might be having temporary technical "
"issues.\nIn this case, press enter and try again "
"later.\nOtherwise you can type 'reset' to reset "
"the current credentials and set them again the "
"next start.\n> ")
if choice.lower().strip() == "reset":
bot.settings.token = None
bot.settings.email = None
bot.settings.password = None
bot.settings.save_settings()
print("Login credentials have been reset.")
except KeyboardInterrupt:
loop.run_until_complete(bot.logout())
except Exception as e:
bot.logger.exception("Fatal exception, attempting graceful logout",
exc_info=e)
loop.run_until_complete(bot.logout())
finally:
loop.close()
if bot._shutdown_mode is True:
exit(0)
elif bot._shutdown_mode is False:
exit(26) # Restart
else:
exit(1)
| gpl-3.0 |
CloudVLab/professional-services | examples/tensorflow-unit-testing/example.py | 2 | 2853 | """Classes to demonstrate how to write unit tests for TensorFlow code."""
# Copyright 2020 Google Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from absl import logging
import tensorflow as tf
@tf.keras.utils.register_keras_serializable(package='Custom')
class LinearBlockFull(tf.keras.layers.Layer):
"""Custom keras liner Layer (serializable)."""
def __init__(self, units=32, **kwargs):
super(LinearBlockFull, self).__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='zeros',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
config = super(LinearBlockFull, self).get_config()
custom_config = {'units': self.units}
config.update(custom_config)
return config
class LinearBlock(tf.keras.layers.Layer):
"""Custom keras liner Layer."""
def __init__(self, units=32):
super(LinearBlock, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='zeros',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_model(dim):
"""Creates a keras model.
Args:
        dim: the dimension of the input vector
Returns:
        A compiled keras model used in the tutorial.
"""
model = tf.keras.Sequential([
tf.keras.layers.Dense(64, activation='relu', input_shape=[dim]),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(1)
])
model.summary(print_fn=logging.info)
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse', optimizer=optimizer, metrics=['mae', 'mse'])
return model
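# Illustrative sketch (not part of the original example): a minimal
# tf.test.TestCase for the layers above, checking output shapes only. The
# sizes used are hypothetical.
#
#   class LinearBlockTest(tf.test.TestCase):
#
#       def test_output_shape(self):
#           layer = LinearBlockFull(units=8)
#           outputs = layer(tf.ones((4, 16)))
#           self.assertAllEqual(outputs.shape, (4, 8))
#
#   if __name__ == '__main__':
#       tf.test.main()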
| apache-2.0 |
nedlowe/amaas-core-sdk-python | amaascore/assets/private_investment.py | 2 | 6874 | from datetime import datetime, date
from dateutil.parser import parse
from decimal import Decimal
import sys
from amaascore.assets.asset import Asset
from amaascore.assets.enums import PRIVATE_INVESTMENT_CATEGORY, PRIVATE_INVESTMENT_SHARE_TYPE,\
PRIVATE_INVESTMENT_SUBCATEGORY
# This extremely ugly hack is due to the whole Python 2 vs 3 debacle.
type_check = str if sys.version_info >= (3, 0, 0) else (str, unicode)
class PrivateInvestment(Asset):
def __init__(self, asset_manager_id, asset_id, client_id, asset_issuer_id=None, asset_status='Active',
display_name='', roll_price=True,
description='', country_id=None, venue_id=None, currency=None, additional=None,
comments=None, links=None, references=None,
category=None, sub_category=None, investment_date=None, num_shares=None,
price_share=None, share_class=None, series=None, share_type=None, coupon=None, coupon_freq=None,
upfront_fee=None, exit_fee=None, management_fee=None, performance_fee=None,
hurdle=None, margin=None, high_water_mark=None, maturity_date=None,
lock_up_period=None, investment_term=None,
*args, **kwargs):
if not hasattr(self, 'asset_class'): # A more specific child class may have already set this
self.asset_class = 'PrivateInvestment'
super(PrivateInvestment, self).__init__(asset_manager_id=asset_manager_id, asset_id=asset_id,
fungible=False, asset_issuer_id=asset_issuer_id,
asset_status=asset_status, display_name=display_name,
roll_price=roll_price, description=description,
country_id=country_id, venue_id=venue_id,
currency=currency,
comments=comments, links=links, references=references,
client_id=client_id, additional=additional, *args, **kwargs)
self.category = category
self.sub_category = sub_category
self.investment_date = investment_date
self.num_shares = num_shares
self.price_share = price_share
self.share_class = share_class
self.series = series
self.share_type = share_type
self.coupon = coupon
self.coupon_freq = coupon_freq
self.upfront_fee = upfront_fee # These fees should probably be on the Transaction. TODO.
self.exit_fee = exit_fee # These fees should probably be on the Transaction. TODO.
self.management_fee = management_fee # These fees should probably be on the Transaction. TODO.
self.performance_fee = performance_fee # These fees should probably be on the Transaction. TODO.
self.hurdle = hurdle
self.margin = margin
self.high_water_mark = high_water_mark
self.maturity_date = maturity_date
self.lock_up_period = lock_up_period
self.investment_term = investment_term
@property
def category(self):
return self._category
@category.setter
def category(self, category):
if category in PRIVATE_INVESTMENT_CATEGORY:
self._category=category
else:
            raise ValueError('Invalid input of category, please indicate Others if %s is not in our list' % category)
@property
def sub_category(self):
return self._sub_category
@sub_category.setter
def sub_category(self, sub_category):
category = self._category
if category in PRIVATE_INVESTMENT_SUBCATEGORY.keys():
if sub_category in PRIVATE_INVESTMENT_SUBCATEGORY[category]:
self._sub_category = sub_category
else:
raise ValueError('Invalid input of sub_category: %s' % sub_category)
else:
raise ValueError('please set up category correctly')
@property
def investment_date(self):
return self._investment_date
@investment_date.setter
def investment_date(self, investment_date):
if investment_date:
self._investment_date = parse(investment_date).date() if isinstance(investment_date, type_check)\
else investment_date
@property
def num_shares(self):
return self._num_shares
@num_shares.setter
def num_shares(self, num_shares):
if isinstance(num_shares, (str, int)):
self._num_shares = int(num_shares)
else:
raise ValueError("num_shares should be an integer :%s" % num_shares)
@property
def price_share(self):
return self._price_share
@price_share.setter
def price_share(self, price_share):
if price_share:
self._price_share = Decimal(price_share)
@property
def share_class(self):
return self._share_class
@share_class.setter
def share_class(self, share_class):
self._share_class = share_class
@property
def share_type(self):
return self._share_type
@share_type.setter
def share_type(self, share_type):
if share_type in PRIVATE_INVESTMENT_SHARE_TYPE:
self._share_type = share_type
else:
raise ValueError('Invalid input of share_type %s not in our list' % share_type)
@property
def maturity_date(self):
return self._maturity_date
@maturity_date.setter
def maturity_date(self, maturity_date):
if maturity_date:
self._maturity_date = parse(maturity_date).date() if isinstance(maturity_date, type_check)\
else maturity_date
@property
def lock_up_period(self):
return self._lock_up_period
@lock_up_period.setter
def lock_up_period(self, lock_up_period):
""" This lockup period is in months. This might change to a relative delta."""
try:
if isinstance(lock_up_period, (str, int)):
self._lock_up_period = int(lock_up_period)
except Exception:
raise ValueError('invalid input of lock up period %s, cannot be converted to an int' %
lock_up_period)
@property
def investment_term(self):
return self._investment_term
@investment_term.setter
def investment_term(self, investment_term):
""" This investment term is in months. This might change to a relative delta."""
try:
if isinstance(investment_term, (str, int)):
self._investment_term = int(investment_term)
except Exception:
raise ValueError('invalid input of investment type %s, cannot be converted to an int' %
investment_term)
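# Illustrative construction sketch (not part of the original module): category,
# sub_category and share_type must be valid members of the enums imported at
# the top of this module; every identifier and value below is hypothetical.
#
#   pi = PrivateInvestment(asset_manager_id=1, asset_id='PI-001', client_id=1,
#                          category='Private Equity',
#                          sub_category='Leveraged Buyout',
#                          investment_date='2017-01-01', num_shares=1000,
#                          price_share='10.50', share_class='A',
#                          share_type='Ordinary Shares',
#                          maturity_date='2027-01-01',
#                          lock_up_period=12, investment_term=120)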
| apache-2.0 |
dkodnik/arp | addons/base_report_designer/plugin/openerp_report_designer/bin/script/lib/functions.py | 89 | 11250 | ##########################################################################
#
# Copyright (c) 2003-2004 Danny Brewer [email protected]
# Copyright (C) 2004-2010 OpenERP SA (<http://openerp.com>).
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
# See: http://www.gnu.org/licenses/lgpl.html
#
##############################################################################
import uno
import xmlrpclib
import re
import socket
import cPickle
import marshal
import tempfile
if __name__<>"package":
from gui import *
from logreport import *
from rpc import *
database="test"
uid = 1
def genTree(object, aList, insField, host, level=3, ending=None, ending_excl=None, recur=None, root='', actualroot=""):
if ending is None:
ending = []
if ending_excl is None:
ending_excl = []
if recur is None:
recur = []
try:
global url
sock=RPCSession(url)
global passwd
res = sock.execute(database, uid, passwd, object , 'fields_get')
key = res.keys()
key.sort()
for k in key:
if (not ending or res[k]['type'] in ending) and ((not ending_excl) or not (res[k]['type'] in ending_excl)):
insField.addItem(root+'/'+res[k]["string"],len(aList))
aList.append(actualroot+'/'+k)
if (res[k]['type'] in recur) and (level>0):
genTree(res[k]['relation'],aList,insField,host ,level-1, ending, ending_excl, recur,root+'/'+res[k]["string"],actualroot+'/'+k)
except:
obj=Logger()
import traceback,sys
info = reduce(lambda x, y: x+y, traceback.format_exception(sys.exc_type, sys.exc_value, sys.exc_traceback))
obj.log_write('Function', LOG_ERROR, info)
def VariableScope(oTcur, insVariable, aObjectList, aComponentAdd, aItemList, sTableName=""):
if sTableName.find(".") != -1:
for i in range(len(aItemList)):
if aComponentAdd[i]==sTableName:
sLVal=aItemList[i][1][aItemList[i][1].find(",'")+2:aItemList[i][1].find("')")]
for j in range(len(aObjectList)):
if aObjectList[j][:aObjectList[j].find("(")] == sLVal:
insVariable.append(aObjectList[j])
VariableScope(oTcur,insVariable,aObjectList,aComponentAdd,aItemList, sTableName[:sTableName.rfind(".")])
else:
for i in range(len(aItemList)):
if aComponentAdd[i]==sTableName:
sLVal=aItemList[i][1][aItemList[i][1].find(",'")+2:aItemList[i][1].find("')")]
for j in range(len(aObjectList)):
if aObjectList[j][:aObjectList[j].find("(")] == sLVal and sLVal!="":
insVariable.append(aObjectList[j])
def getList(aObjectList, host, count):
desktop=getDesktop()
doc =desktop.getCurrentComponent()
docinfo=doc.getDocumentInfo()
sMain=""
if not count == 0:
if count >= 1:
oParEnum = doc.getTextFields().createEnumeration()
while oParEnum.hasMoreElements():
oPar = oParEnum.nextElement()
if oPar.supportsService("com.sun.star.text.TextField.DropDown"):
sItem=oPar.Items[1]
if sItem[sItem.find("(")+1:sItem.find(",")]=="objects":
sMain = sItem[sItem.find(",'")+2:sItem.find("')")]
oParEnum = doc.getTextFields().createEnumeration()
while oParEnum.hasMoreElements():
oPar = oParEnum.nextElement()
if oPar.supportsService("com.sun.star.text.TextField.DropDown"):
sItem=oPar.Items[1]
if sItem[sItem.find("[[ ")+3:sItem.find("(")]=="repeatIn":
if sItem[sItem.find("(")+1:sItem.find(",")]=="objects":
aObjectList.append(sItem[sItem.rfind(",'")+2:sItem.rfind("')")] + "(" + docinfo.getUserFieldValue(3) + ")")
else:
sTemp=sItem[sItem.find("(")+1:sItem.find(",")]
if sMain == sTemp[:sTemp.find(".")]:
getRelation(docinfo.getUserFieldValue(3), sItem[sItem.find(".")+1:sItem.find(",")], sItem[sItem.find(",'")+2:sItem.find("')")],aObjectList,host)
else:
sPath=getPath(sItem[sItem.find("(")+1:sItem.find(",")], sMain)
getRelation(docinfo.getUserFieldValue(3), sPath, sItem[sItem.find(",'")+2:sItem.find("')")],aObjectList,host)
else:
aObjectList.append("List of " + docinfo.getUserFieldValue(3))
def getRelation(sRelName, sItem, sObjName, aObjectList, host):
global url
sock=RPCSession(url)
global passwd
res = sock.execute(database, uid, passwd, sRelName , 'fields_get')
key = res.keys()
for k in key:
if sItem.find(".") == -1:
if k == sItem:
aObjectList.append(sObjName + "(" + res[k]['relation'] + ")")
return 0
if k == sItem[:sItem.find(".")]:
getRelation(res[k]['relation'], sItem[sItem.find(".")+1:], sObjName,aObjectList,host)
def getPath(sPath, sMain):
desktop=getDesktop()
doc =desktop.getCurrentComponent()
oParEnum = doc.getTextFields().createEnumeration()
while oParEnum.hasMoreElements():
oPar = oParEnum.nextElement()
if oPar.supportsService("com.sun.star.text.TextField.DropDown"):
sItem=oPar.Items[1]
if sPath[:sPath.find(".")] == sMain:
break;
else:
                res = re.findall(r'\[\[ *([a-zA-Z0-9_.]+) *\]\]', sPath)
                if len(res) != 0:
if sItem[sItem.find(",'")+2:sItem.find("')")] == sPath[:sPath.find(".")]:
sPath = sItem[sItem.find("(")+1:sItem.find(",")] + sPath[sPath.find("."):]
getPath(sPath, sMain)
return sPath
def EnumDocument(aItemList, aComponentAdd):
desktop = getDesktop()
parent=""
bFlag = False
Doc =desktop.getCurrentComponent()
#oVC = Doc.CurrentController.getViewCursor()
oParEnum = Doc.getTextFields().createEnumeration()
while oParEnum.hasMoreElements():
oPar = oParEnum.nextElement()
if oPar.Anchor.TextTable:
#parent = oPar.Anchor.TextTable.Name
getChildTable(oPar.Anchor.TextTable,aItemList,aComponentAdd)
elif oPar.Anchor.TextSection:
parent = oPar.Anchor.TextSection.Name
elif oPar.Anchor.Text:
parent = "Document"
sItem=oPar.Items[1].replace(' ',"")
if sItem[sItem.find("[[ ")+3:sItem.find("(")]=="repeatIn" and not oPar.Items in aItemList:
templist=oPar.Items[0],sItem
aItemList.append( templist )
aComponentAdd.append( parent )
def getChildTable(oPar, aItemList, aComponentAdd, sTableName=""):
sNames = oPar.getCellNames()
bEmptyTableFlag=True
for val in sNames:
oCell = oPar.getCellByName(val)
oCurEnum = oCell.createEnumeration()
while oCurEnum.hasMoreElements():
try:
oCur = oCurEnum.nextElement()
if oCur.supportsService("com.sun.star.text.TextTable"):
if sTableName=="":
getChildTable(oCur,aItemList,aComponentAdd,oPar.Name)
else:
getChildTable(oCur,aItemList,aComponentAdd,sTableName+"."+oPar.Name)
else:
oSecEnum = oCur.createEnumeration()
while oSecEnum.hasMoreElements():
oSubSection = oSecEnum.nextElement()
if oSubSection.supportsService("com.sun.star.text.TextField"):
bEmptyTableFlag=False
sItem=oSubSection.TextField.Items[1]
if sItem[sItem.find("[[ ")+3:sItem.find("(")]=="repeatIn":
if aItemList.__contains__(oSubSection.TextField.Items)==False:
aItemList.append(oSubSection.TextField.Items)
if sTableName=="":
if aComponentAdd.__contains__(oPar.Name)==False:
aComponentAdd.append(oPar.Name)
else:
if aComponentAdd.__contains__(sTableName+"."+oPar.Name)==False:
aComponentAdd.append(sTableName+"."+oPar.Name)
except:
obj=Logger()
import traceback,sys
info = reduce(lambda x, y: x+y, traceback.format_exception(sys.exc_type, sys.exc_value, sys.exc_traceback))
obj.log_write('Function', LOG_ERROR, info)
if bEmptyTableFlag==True:
aItemList.append((u'',u''))
if sTableName=="":
if aComponentAdd.__contains__(oPar.Name)==False:
aComponentAdd.append(oPar.Name)
else:
if aComponentAdd.__contains__(sTableName+"."+oPar.Name)==False:
aComponentAdd.append(sTableName+"."+oPar.Name)
return 0
def getRecersiveSection(oCurrentSection, aSectionList):
desktop=getDesktop()
doc =desktop.getCurrentComponent()
oParEnum=doc.getText().createEnumeration()
aSectionList.append(oCurrentSection.Name)
if oCurrentSection.ParentSection:
getRecersiveSection(oCurrentSection.ParentSection,aSectionList)
else:
return
def GetAFileName():
oFileDialog=None
iAccept=None
sPath=""
InitPath=""
oUcb=None
oFileDialog = createUnoService("com.sun.star.ui.dialogs.FilePicker")
oUcb = createUnoService("com.sun.star.ucb.SimpleFileAccess")
oFileDialog.appendFilter("OpenERP Report File","*.sxw")
oFileDialog.setCurrentFilter("OpenERP Report File")
if InitPath == "":
InitPath =tempfile.gettempdir()
#End If
if oUcb.exists(InitPath):
oFileDialog.setDisplayDirectory(InitPath)
#End If
iAccept = oFileDialog.execute()
if iAccept == 1:
sPath = oFileDialog.Files[0]
oFileDialog.dispose()
return sPath
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
SnabbCo/neutron | neutron/plugins/cisco/nexus/cisco_nexus_plugin_v2.py | 3 | 15240 | # vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2012 Cisco Systems, Inc.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Sumit Naiksatam, Cisco Systems, Inc.
# @author: Edgar Magana, Cisco Systems, Inc.
# @author: Arvind Somya, Cisco Systems, Inc. ([email protected])
#
"""
PlugIn for Nexus OS driver
"""
import logging
from neutron.openstack.common import excutils
from neutron.openstack.common import importutils
from neutron.plugins.cisco.common import cisco_constants as const
from neutron.plugins.cisco.common import cisco_exceptions as cisco_exc
from neutron.plugins.cisco.common import config as conf
from neutron.plugins.cisco.db import network_db_v2 as cdb
from neutron.plugins.cisco.db import nexus_db_v2 as nxos_db
from neutron.plugins.cisco import l2device_plugin_base
LOG = logging.getLogger(__name__)
class NexusPlugin(l2device_plugin_base.L2DevicePluginBase):
"""Nexus PlugIn Main Class."""
_networks = {}
def __init__(self):
"""Extract configuration parameters from the configuration file."""
self._client = importutils.import_object(conf.CISCO.nexus_driver)
LOG.debug(_("Loaded driver %s"), conf.CISCO.nexus_driver)
self._nexus_switches = conf.get_device_dictionary()
def create_network(self, network, attachment):
"""Create or update a network when an attachment is changed.
This method is not invoked at the usual plugin create_network() time.
Instead, it is invoked on create/update port.
:param network: Network on which the port operation is happening
:param attachment: Details about the owner of the port
Create a VLAN in the appropriate switch/port, and configure the
appropriate interfaces for this VLAN.
"""
LOG.debug(_("NexusPlugin:create_network() called"))
# Grab the switch IPs and ports for this host
host_connections = []
host = attachment['host_name']
for switch_type, switch_ip, attr in self._nexus_switches:
if str(attr) == str(host):
port = self._nexus_switches[switch_type, switch_ip, attr]
# Get ether type for port, assume an ethernet type
# if none specified.
if ':' in port:
etype, port_id = port.split(':')
else:
etype, port_id = 'ethernet', port
host_connections.append((switch_ip, etype, port_id))
if not host_connections:
raise cisco_exc.NexusComputeHostNotConfigured(host=host)
vlan_id = network[const.NET_VLAN_ID]
vlan_name = network[const.NET_VLAN_NAME]
auto_create = True
auto_trunk = True
if cdb.is_provider_vlan(vlan_id):
vlan_name = ''.join([conf.CISCO.provider_vlan_name_prefix,
str(vlan_id)])
auto_create = conf.CISCO.provider_vlan_auto_create
auto_trunk = conf.CISCO.provider_vlan_auto_trunk
# Check if this network is already in the DB
for switch_ip, etype, port_id in host_connections:
vlan_created = False
vlan_trunked = False
eport_id = '%s:%s' % (etype, port_id)
# Check for switch vlan bindings
try:
# This vlan has already been created on this switch
# via another operation, like SVI bindings.
nxos_db.get_nexusvlan_binding(vlan_id, switch_ip)
vlan_created = True
auto_create = False
except cisco_exc.NexusPortBindingNotFound:
# No changes, proceed as normal
pass
try:
nxos_db.get_port_vlan_switch_binding(eport_id, vlan_id,
switch_ip)
except cisco_exc.NexusPortBindingNotFound:
if auto_create and auto_trunk:
# Create vlan and trunk vlan on the port
LOG.debug(_("Nexus: create & trunk vlan %s"), vlan_name)
self._client.create_and_trunk_vlan(
switch_ip, vlan_id, vlan_name, etype, port_id)
vlan_created = True
vlan_trunked = True
elif auto_create:
# Create vlan but do not trunk it on the port
LOG.debug(_("Nexus: create vlan %s"), vlan_name)
self._client.create_vlan(switch_ip, vlan_id, vlan_name)
vlan_created = True
elif auto_trunk:
# Only trunk vlan on the port
LOG.debug(_("Nexus: trunk vlan %s"), vlan_name)
self._client.enable_vlan_on_trunk_int(
switch_ip, vlan_id, etype, port_id)
vlan_trunked = True
try:
instance = attachment[const.INSTANCE_ID]
nxos_db.add_nexusport_binding(eport_id, str(vlan_id),
switch_ip, instance)
except Exception:
with excutils.save_and_reraise_exception():
# Add binding failed, roll back any vlan creation/enabling
if vlan_created and vlan_trunked:
LOG.debug(_("Nexus: delete & untrunk vlan %s"),
vlan_name)
self._client.delete_and_untrunk_vlan(switch_ip,
vlan_id,
etype, port_id)
elif vlan_created:
LOG.debug(_("Nexus: delete vlan %s"), vlan_name)
self._client.delete_vlan(switch_ip, vlan_id)
elif vlan_trunked:
LOG.debug(_("Nexus: untrunk vlan %s"), vlan_name)
self._client.disable_vlan_on_trunk_int(switch_ip,
vlan_id,
etype,
port_id)
net_id = network[const.NET_ID]
new_net_dict = {const.NET_ID: net_id,
const.NET_NAME: network[const.NET_NAME],
const.NET_PORTS: {},
const.NET_VLAN_NAME: vlan_name,
const.NET_VLAN_ID: vlan_id}
self._networks[net_id] = new_net_dict
return new_net_dict
def add_router_interface(self, vlan_name, vlan_id, subnet_id,
gateway_ip, router_id):
"""Create VLAN SVI on the Nexus switch."""
# Find a switch to create the SVI on
switch_ip = self._find_switch_for_svi()
if not switch_ip:
raise cisco_exc.NoNexusSviSwitch()
# Check if this vlan exists on the switch already
try:
nxos_db.get_nexusvlan_binding(vlan_id, switch_ip)
except cisco_exc.NexusPortBindingNotFound:
# Create vlan and trunk vlan on the port
self._client.create_and_trunk_vlan(
switch_ip, vlan_id, vlan_name, etype=None, nexus_port=None)
# Check if a router interface has already been created
try:
nxos_db.get_nexusvm_bindings(vlan_id, router_id)
raise cisco_exc.SubnetInterfacePresent(subnet_id=subnet_id,
router_id=router_id)
except cisco_exc.NexusPortBindingNotFound:
self._client.create_vlan_svi(switch_ip, vlan_id, gateway_ip)
nxos_db.add_nexusport_binding('router', str(vlan_id),
switch_ip, router_id)
return True
def remove_router_interface(self, vlan_id, router_id):
"""Remove VLAN SVI from the Nexus Switch."""
# Grab switch_ip from database
switch_ip = nxos_db.get_nexusvm_bindings(vlan_id,
router_id)[0].switch_ip
# Delete the SVI interface from the switch
self._client.delete_vlan_svi(switch_ip, vlan_id)
# Invoke delete_port to delete this row
# And delete vlan if required
return self.delete_port(router_id, vlan_id)
def _find_switch_for_svi(self):
"""Get a switch to create the SVI on."""
LOG.debug(_("Grabbing a switch to create SVI"))
nexus_switches = self._client.nexus_switches
if conf.CISCO.svi_round_robin:
LOG.debug(_("Using round robin to create SVI"))
switch_dict = dict(
(switch_ip, 0) for switch_ip, _ in nexus_switches)
try:
bindings = nxos_db.get_nexussvi_bindings()
# Build a switch dictionary with weights
for binding in bindings:
switch_ip = binding.switch_ip
if switch_ip not in switch_dict:
switch_dict[switch_ip] = 1
else:
switch_dict[switch_ip] += 1
# Search for the lowest value in the dict
if switch_dict:
switch_ip = min(switch_dict, key=switch_dict.get)
return switch_ip
except cisco_exc.NexusPortBindingNotFound:
pass
LOG.debug(_("No round robin or zero weights, using first switch"))
# Return the first switch in the config
return conf.first_device_ip
def delete_network(self, tenant_id, net_id, **kwargs):
"""Delete network.
Not applicable to Nexus plugin. Defined here to satisfy abstract
method requirements.
"""
LOG.debug(_("NexusPlugin:delete_network() called")) # pragma no cover
def update_network(self, tenant_id, net_id, **kwargs):
"""Update the properties of a particular Virtual Network.
Not applicable to Nexus plugin. Defined here to satisfy abstract
method requirements.
"""
LOG.debug(_("NexusPlugin:update_network() called")) # pragma no cover
def create_port(self, tenant_id, net_id, port_state, port_id, **kwargs):
"""Create port.
Not applicable to Nexus plugin. Defined here to satisfy abstract
method requirements.
"""
LOG.debug(_("NexusPlugin:create_port() called")) # pragma no cover
def delete_port(self, device_id, vlan_id):
"""Delete port.
        Delete port bindings from the database and check whether the network
        is still required on the trunked interfaces.
"""
LOG.debug(_("NexusPlugin:delete_port() called"))
# Delete DB row(s) for this port
try:
rows = nxos_db.get_nexusvm_bindings(vlan_id, device_id)
except cisco_exc.NexusPortBindingNotFound:
return
auto_delete = True
auto_untrunk = True
if cdb.is_provider_vlan(vlan_id):
auto_delete = conf.CISCO.provider_vlan_auto_create
auto_untrunk = conf.CISCO.provider_vlan_auto_trunk
LOG.debug(_("delete_network(): provider vlan %s"), vlan_id)
instance_id = False
for row in rows:
instance_id = row['instance_id']
switch_ip = row.switch_ip
etype, nexus_port = '', ''
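            # Bindings store the port as 'etype:port_id' except for router
            # SVIs, which use the literal port_id 'router'.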
if row['port_id'] == 'router':
etype, nexus_port = 'vlan', row['port_id']
auto_untrunk = False
else:
etype, nexus_port = row['port_id'].split(':')
nxos_db.remove_nexusport_binding(row.port_id, row.vlan_id,
row.switch_ip,
row.instance_id)
# Check whether there are any remaining instances using this
# vlan on this Nexus port.
try:
nxos_db.get_port_vlan_switch_binding(row.port_id,
row.vlan_id,
row.switch_ip)
except cisco_exc.NexusPortBindingNotFound:
try:
if nexus_port and auto_untrunk:
# Untrunk the vlan from this Nexus interface
self._client.disable_vlan_on_trunk_int(
switch_ip, row.vlan_id, etype, nexus_port)
# Check whether there are any remaining instances
# using this vlan on the Nexus switch.
if auto_delete:
try:
nxos_db.get_nexusvlan_binding(row.vlan_id,
row.switch_ip)
except cisco_exc.NexusPortBindingNotFound:
# Delete this vlan from this switch
self._client.delete_vlan(switch_ip, row.vlan_id)
except Exception:
# The delete vlan operation on the Nexus failed,
# so this delete_port request has failed. For
# consistency, roll back the Nexus database to what
# it was before this request.
with excutils.save_and_reraise_exception():
nxos_db.add_nexusport_binding(row.port_id,
row.vlan_id,
row.switch_ip,
row.instance_id)
return instance_id
def update_port(self, tenant_id, net_id, port_id, port_state, **kwargs):
"""Update port.
Not applicable to Nexus plugin. Defined here to satisfy abstract
method requirements.
"""
LOG.debug(_("NexusPlugin:update_port() called")) # pragma no cover
def plug_interface(self, tenant_id, net_id, port_id, remote_interface_id,
**kwargs):
"""Plug interfaces.
Not applicable to Nexus plugin. Defined here to satisfy abstract
method requirements.
"""
LOG.debug(_("NexusPlugin:plug_interface() called")) # pragma no cover
def unplug_interface(self, tenant_id, net_id, port_id, **kwargs):
"""Unplug interface.
Not applicable to Nexus plugin. Defined here to satisfy abstract
method requirements.
"""
LOG.debug(_("NexusPlugin:unplug_interface() called")
) # pragma no cover
| apache-2.0 |
simdugas/childcare | languages/cs.py | 52 | 23580 | # coding: utf8
{
'!langcode!': 'cs-cz',
'!langname!': 'čeština',
'"update" is an optional expression like "field1=\'newvalue\'". You cannot update or delete the results of a JOIN': 'Kolonka "Upravit" je nepovinný výraz, například "pole1=\'nováhodnota\'". Výsledky databázového JOINu nemůžete mazat ani upravovat.',
'"User Exception" debug mode. An error ticket could be issued!': '"User Exception" debug mode. An error ticket could be issued!',
'%%{Row} in Table': '%%{řádek} v tabulce',
'%%{Row} selected': 'označených %%{řádek}',
'%s %%{row} deleted': '%s smazaných %%{záznam}',
'%s %%{row} updated': '%s upravených %%{záznam}',
'%s selected': '%s označených',
'%Y-%m-%d': '%d.%m.%Y',
'%Y-%m-%d %H:%M:%S': '%d.%m.%Y %H:%M:%S',
'(requires internet access)': '(vyžaduje připojení k internetu)',
'(requires internet access, experimental)': '(requires internet access, experimental)',
'(something like "it-it")': '(například "cs-cs")',
'@markmin\x01(file **gluon/contrib/plural_rules/%s.py** is not found)': '(soubor **gluon/contrib/plural_rules/%s.py** nenalezen)',
'@markmin\x01Searching: **%s** %%{file}': 'Hledání: **%s** %%{soubor}',
'About': 'O programu',
'About application': 'O aplikaci',
'Access Control': 'Řízení přístupu',
'Add breakpoint': 'Přidat bod přerušení',
'Additional code for your application': 'Další kód pro Vaši aplikaci',
'Admin design page': 'Admin design page',
'Admin language': 'jazyk rozhraní',
'Administrative interface': 'pro administrátorské rozhraní klikněte sem',
'Administrative Interface': 'Administrátorské rozhraní',
'administrative interface': 'rozhraní pro správu',
'Administrator Password:': 'Administrátorské heslo:',
'Ajax Recipes': 'Recepty s ajaxem',
'An error occured, please %s the page': 'An error occured, please %s the page',
'and rename it:': 'a přejmenovat na:',
'appadmin': 'appadmin',
'appadmin is disabled because insecure channel': 'appadmin je zakázaná bez zabezpečeného spojení',
'Application': 'Application',
'application "%s" uninstalled': 'application "%s" odinstalována',
'application compiled': 'aplikace zkompilována',
'Application name:': 'Název aplikace:',
'are not used': 'nepoužita',
'are not used yet': 'ještě nepoužita',
'Are you sure you want to delete this object?': 'Opravdu chcete odstranit tento objekt?',
'Are you sure you want to uninstall application "%s"?': 'Opravdu chcete odinstalovat aplikaci "%s"?',
'arguments': 'arguments',
'at char %s': 'at char %s',
'at line %s': 'at line %s',
'ATTENTION:': 'ATTENTION:',
'ATTENTION: TESTING IS NOT THREAD SAFE SO DO NOT PERFORM MULTIPLE TESTS CONCURRENTLY.': 'ATTENTION: TESTING IS NOT THREAD SAFE SO DO NOT PERFORM MULTIPLE TESTS CONCURRENTLY.',
'Available Databases and Tables': 'Dostupné databáze a tabulky',
'back': 'zpět',
'Back to wizard': 'Back to wizard',
'Basics': 'Basics',
'Begin': 'Začít',
'breakpoint': 'bod přerušení',
'Breakpoints': 'Body přerušení',
'breakpoints': 'body přerušení',
'Buy this book': 'Koupit web2py knihu',
'Cache': 'Cache',
'cache': 'cache',
'Cache Keys': 'Klíče cache',
'cache, errors and sessions cleaned': 'cache, chyby a relace byly pročištěny',
'can be a git repo': 'může to být git repo',
'Cancel': 'Storno',
'Cannot be empty': 'Nemůže být prázdné',
'Change Admin Password': 'Změnit heslo pro správu',
'Change admin password': 'Změnit heslo pro správu aplikací',
'Change password': 'Změna hesla',
'check all': 'vše označit',
'Check for upgrades': 'Zkusit aktualizovat',
'Check to delete': 'Označit ke smazání',
'Check to delete:': 'Označit ke smazání:',
'Checking for upgrades...': 'Zjišťuji, zda jsou k dispozici aktualizace...',
'Clean': 'Pročistit',
'Clear CACHE?': 'Vymazat CACHE?',
'Clear DISK': 'Vymazat DISK',
'Clear RAM': 'Vymazat RAM',
'Click row to expand traceback': 'Pro rozbalení stopy, klikněte na řádek',
'Click row to view a ticket': 'Pro zobrazení chyby (ticketu), klikněte na řádku...',
'Client IP': 'IP adresa klienta',
'code': 'code',
'Code listing': 'Code listing',
'collapse/expand all': 'vše sbalit/rozbalit',
'Community': 'Komunita',
'Compile': 'Zkompilovat',
'compiled application removed': 'zkompilovaná aplikace smazána',
'Components and Plugins': 'Komponenty a zásuvné moduly',
'Condition': 'Podmínka',
'continue': 'continue',
'Controller': 'Kontrolér (Controller)',
'Controllers': 'Kontroléry',
'controllers': 'kontroléry',
'Copyright': 'Copyright',
'Count': 'Počet',
'Create': 'Vytvořit',
'create file with filename:': 'vytvořit soubor s názvem:',
'created by': 'vytvořil',
'Created By': 'Vytvořeno - kým',
'Created On': 'Vytvořeno - kdy',
'crontab': 'crontab',
'Current request': 'Aktuální požadavek',
'Current response': 'Aktuální odpověď',
'Current session': 'Aktuální relace',
'currently running': 'právě běží',
'currently saved or': 'uloženo nebo',
'customize me!': 'upravte mě!',
'data uploaded': 'data nahrána',
'Database': 'Rozhraní databáze',
'Database %s select': 'databáze %s výběr',
'Database administration': 'Database administration',
'database administration': 'správa databáze',
'Date and Time': 'Datum a čas',
'day': 'den',
'db': 'db',
'DB Model': 'Databázový model',
'Debug': 'Ladění',
'defines tables': 'defines tables',
'Delete': 'Smazat',
'delete': 'smazat',
'delete all checked': 'smazat vše označené',
'delete plugin': 'delete plugin',
'Delete this file (you will be asked to confirm deletion)': 'Smazat tento soubor (budete požádán o potvrzení mazání)',
'Delete:': 'Smazat:',
'deleted after first hit': 'smazat po prvním dosažení',
'Demo': 'Demo',
'Deploy': 'Nahrát',
'Deploy on Google App Engine': 'Nahrát na Google App Engine',
'Deploy to OpenShift': 'Nahrát na OpenShift',
'Deployment Recipes': 'Postupy pro deployment',
'Description': 'Popis',
'design': 'návrh',
'Detailed traceback description': 'Podrobný výpis prostředí',
'details': 'podrobnosti',
'direction: ltr': 'směr: ltr',
'Disable': 'Zablokovat',
'DISK': 'DISK',
'Disk Cache Keys': 'Klíče diskové cache',
'Disk Cleared': 'Disk smazán',
'docs': 'dokumentace',
'Documentation': 'Dokumentace',
"Don't know what to do?": 'Nevíte kudy kam?',
'done!': 'hotovo!',
'Download': 'Stáhnout',
'download layouts': 'stáhnout moduly rozvržení stránky',
'download plugins': 'stáhnout zásuvné moduly',
'E-mail': 'E-mail',
'Edit': 'Upravit',
'edit all': 'edit all',
'Edit application': 'Správa aplikace',
'edit controller': 'edit controller',
'Edit current record': 'Upravit aktuální záznam',
'Edit Profile': 'Upravit profil',
'edit views:': 'upravit pohled:',
'Editing file "%s"': 'Úprava souboru "%s"',
'Editing Language file': 'Úprava jazykového souboru',
'Editing Plural Forms File': 'Editing Plural Forms File',
'Email and SMS': 'Email a SMS',
'Enable': 'Odblokovat',
'enter a number between %(min)g and %(max)g': 'zadejte číslo mezi %(min)g a %(max)g',
'enter an integer between %(min)g and %(max)g': 'zadejte celé číslo mezi %(min)g a %(max)g',
'Error': 'Chyba',
'Error logs for "%(app)s"': 'Seznam výskytu chyb pro aplikaci "%(app)s"',
'Error snapshot': 'Snapshot chyby',
'Error ticket': 'Ticket chyby',
'Errors': 'Chyby',
'Exception %(extype)s: %(exvalue)s': 'Exception %(extype)s: %(exvalue)s',
'Exception %s': 'Exception %s',
'Exception instance attributes': 'Prvky instance výjimky',
'Expand Abbreviation': 'Expand Abbreviation',
'export as csv file': 'exportovat do .csv souboru',
'exposes': 'vystavuje',
'exposes:': 'vystavuje funkce:',
'extends': 'rozšiřuje',
'failed to compile file because:': 'soubor se nepodařilo zkompilovat, protože:',
'FAQ': 'Často kladené dotazy',
'File': 'Soubor',
'file': 'soubor',
'file "%(filename)s" created': 'file "%(filename)s" created',
'file saved on %(time)s': 'soubor uložen %(time)s',
'file saved on %s': 'soubor uložen %s',
'Filename': 'Název souboru',
'filter': 'filtr',
'Find Next': 'Najít další',
'Find Previous': 'Najít předchozí',
'First name': 'Křestní jméno',
'Forgot username?': 'Zapomněl jste svoje přihlašovací jméno?',
'forgot username?': 'zapomněl jste svoje přihlašovací jméno?',
'Forms and Validators': 'Formuláře a validátory',
'Frames': 'Frames',
'Free Applications': 'Aplikace zdarma',
'Functions with no doctests will result in [passed] tests.': 'Functions with no doctests will result in [passed] tests.',
'Generate': 'Vytvořit',
'Get from URL:': 'Stáhnout z internetu:',
'Git Pull': 'Git Pull',
'Git Push': 'Git Push',
'Globals##debug': 'Globální proměnné',
'go!': 'OK!',
'Goto': 'Goto',
'graph model': 'graph model',
'Group %(group_id)s created': 'Skupina %(group_id)s vytvořena',
'Group ID': 'ID skupiny',
'Groups': 'Skupiny',
'Hello World': 'Ahoj světe',
'Help': 'Nápověda',
'Hide/Show Translated strings': 'Skrýt/Zobrazit přeložené texty',
'Hits': 'Kolikrát dosaženo',
'Home': 'Domovská stránka',
'honored only if the expression evaluates to true': 'brát v potaz jen když se tato podmínka vyhodnotí kladně',
'How did you get here?': 'Jak jste se sem vlastně dostal?',
'If start the upgrade, be patient, it may take a while to download': 'If start the upgrade, be patient, it may take a while to download',
'If the report above contains a ticket number it indicates a failure in executing the controller, before any attempt to execute the doctests. This is usually due to an indentation error or an error outside function code.\nA green title indicates that all tests (if defined) passed. In this case test results are not shown.': 'If the report above contains a ticket number it indicates a failure in executing the controller, before any attempt to execute the doctests. This is usually due to an indentation error or an error outside function code.\nA green title indicates that all tests (if defined) passed. In this case test results are not shown.',
'import': 'import',
'Import/Export': 'Import/Export',
'includes': 'zahrnuje',
'Index': 'Index',
'insert new': 'vložit nový záznam ',
'insert new %s': 'vložit nový záznam %s',
'inspect attributes': 'inspect attributes',
'Install': 'Instalovat',
'Installed applications': 'Nainstalované aplikace',
'Interaction at %s line %s': 'Interakce v %s, na řádce %s',
'Interactive console': 'Interaktivní příkazová řádka',
'Internal State': 'Vnitřní stav',
'Introduction': 'Úvod',
'Invalid email': 'Neplatný email',
'Invalid password': 'Nesprávné heslo',
'invalid password.': 'neplatné heslo',
'Invalid Query': 'Neplatný dotaz',
'invalid request': 'Neplatný požadavek',
'Is Active': 'Je aktivní',
'It is %s %%{day} today.': 'Dnes je to %s %%{den}.',
'Key': 'Klíč',
'Key bindings': 'Vazby klíčů',
'Key bindings for ZenCoding Plugin': 'Key bindings for ZenCoding Plugin',
'languages': 'jazyky',
'Languages': 'Jazyky',
'Last name': 'Příjmení',
'Last saved on:': 'Naposledy uloženo:',
'Layout': 'Rozvržení stránky (layout)',
'Layout Plugins': 'Moduly rozvržení stránky (Layout Plugins)',
'Layouts': 'Rozvržení stránek',
'License for': 'Licence pro',
'Line number': 'Číslo řádku',
'LineNo': 'Č.řádku',
'Live Chat': 'Online pokec',
'loading...': 'nahrávám...',
'locals': 'locals',
'Locals##debug': 'Lokální proměnné',
'Logged in': 'Přihlášení proběhlo úspěšně',
'Logged out': 'Odhlášení proběhlo úspěšně',
'Login': 'Přihlásit se',
'login': 'přihlásit se',
'Login to the Administrative Interface': 'Přihlásit se do Správce aplikací',
'logout': 'odhlásit se',
'Logout': 'Odhlásit se',
'Lost Password': 'Zapomněl jste heslo',
'Lost password?': 'Zapomněl jste heslo?',
'lost password?': 'zapomněl jste heslo?',
'Manage': 'Manage',
'Manage Cache': 'Manage Cache',
'Menu Model': 'Model rozbalovací nabídky',
'Models': 'Modely',
'models': 'modely',
'Modified By': 'Změněno - kým',
'Modified On': 'Změněno - kdy',
'Modules': 'Moduly',
'modules': 'moduly',
'My Sites': 'Správa aplikací',
'Name': 'Jméno',
'new application "%s" created': 'nová aplikace "%s" vytvořena',
'New Application Wizard': 'Nový průvodce aplikací',
'New application wizard': 'Nový průvodce aplikací',
'New password': 'Nové heslo',
'New Record': 'Nový záznam',
'new record inserted': 'nový záznam byl založen',
'New simple application': 'Vytvořit primitivní aplikaci',
'next': 'next',
'next 100 rows': 'dalších 100 řádků',
'No databases in this application': 'V této aplikaci nejsou žádné databáze',
'No Interaction yet': 'Ještě žádná interakce nenastala',
'No ticket_storage.txt found under /private folder': 'Soubor ticket_storage.txt v adresáři /private nenalezen',
'Object or table name': 'Objekt či tabulka',
'Old password': 'Původní heslo',
'online designer': 'online návrhář',
'Online examples': 'Příklady online',
'Open new app in new window': 'Open new app in new window',
'or alternatively': 'or alternatively',
'Or Get from URL:': 'Or Get from URL:',
'or import from csv file': 'nebo importovat z .csv souboru',
'Origin': 'Původ',
'Original/Translation': 'Originál/Překlad',
'Other Plugins': 'Ostatní moduly',
'Other Recipes': 'Ostatní zásuvné moduly',
'Overview': 'Přehled',
'Overwrite installed app': 'Přepsat instalovanou aplikaci',
'Pack all': 'Zabalit',
'Pack compiled': 'Zabalit zkompilované',
'pack plugin': 'pack plugin',
'password': 'heslo',
'Password': 'Heslo',
"Password fields don't match": 'Hesla se neshodují',
'Peeking at file': 'Peeking at file',
'Please': 'Prosím',
'Plugin "%s" in application': 'Plugin "%s" in application',
'plugins': 'zásuvné moduly',
'Plugins': 'Zásuvné moduly',
'Plural Form #%s': 'Plural Form #%s',
'Plural-Forms:': 'Množná čísla:',
'Powered by': 'Poháněno',
'Preface': 'Předmluva',
'previous 100 rows': 'předchozích 100 řádků',
'Private files': 'Soukromé soubory',
'private files': 'soukromé soubory',
'profile': 'profil',
'Project Progress': 'Vývoj projektu',
'Python': 'Python',
'Query:': 'Dotaz:',
'Quick Examples': 'Krátké příklady',
'RAM': 'RAM',
'RAM Cache Keys': 'Klíče RAM Cache',
'Ram Cleared': 'RAM smazána',
'Readme': 'Nápověda',
'Recipes': 'Postupy jak na to',
'Record': 'Záznam',
'record does not exist': 'záznam neexistuje',
'Record ID': 'ID záznamu',
'Record id': 'id záznamu',
'refresh': 'obnovte',
'register': 'registrovat',
'Register': 'Zaregistrovat se',
'Registration identifier': 'Registrační identifikátor',
'Registration key': 'Registrační klíč',
'reload': 'reload',
'Reload routes': 'Znovu nahrát cesty',
'Remember me (for 30 days)': 'Zapamatovat na 30 dní',
'Remove compiled': 'Odstranit zkompilované',
'Removed Breakpoint on %s at line %s': 'Bod přerušení smazán - soubor %s na řádce %s',
'Replace': 'Zaměnit',
'Replace All': 'Zaměnit vše',
'request': 'request',
'Reset Password key': 'Reset registračního klíče',
'response': 'response',
'restart': 'restart',
'restore': 'obnovit',
'Retrieve username': 'Získat přihlašovací jméno',
'return': 'return',
'revert': 'vrátit se k původnímu',
'Role': 'Role',
'Rows in Table': 'Záznamy v tabulce',
'Rows selected': 'Záznamů zobrazeno',
'rules are not defined': 'pravidla nejsou definována',
"Run tests in this file (to run all files, you may also use the button labelled 'test')": "Spustí testy v tomto souboru (ke spuštění všech testů, použijte tlačítko 'test')",
'Running on %s': 'Běží na %s',
'Save': 'Uložit',
'Save file:': 'Save file:',
'Save via Ajax': 'Uložit pomocí Ajaxu',
'Saved file hash:': 'hash uloženého souboru:',
'Semantic': 'Modul semantic',
'Services': 'Služby',
'session': 'session',
'session expired': 'session expired',
'Set Breakpoint on %s at line %s: %s': 'Bod přerušení nastaven v souboru %s na řádce %s: %s',
'shell': 'příkazová řádka',
'Singular Form': 'Singular Form',
'Site': 'Správa aplikací',
'Size of cache:': 'Velikost cache:',
'skip to generate': 'skip to generate',
'Sorry, could not find mercurial installed': 'Bohužel mercurial není nainstalován.',
'Start a new app': 'Vytvořit novou aplikaci',
'Start searching': 'Začít hledání',
'Start wizard': 'Spustit průvodce',
'state': 'stav',
'Static': 'Static',
'static': 'statické soubory',
'Static files': 'Statické soubory',
'Statistics': 'Statistika',
'Step': 'Step',
'step': 'step',
'stop': 'stop',
'Stylesheet': 'CSS styly',
'submit': 'odeslat',
'Submit': 'Odeslat',
'successful': 'úspěšně',
'Support': 'Podpora',
'Sure you want to delete this object?': 'Opravdu chcete smazat tento objekt?',
'Table': 'tabulka',
'Table name': 'Název tabulky',
'Temporary': 'Dočasný',
'test': 'test',
'Testing application': 'Testing application',
'The "query" is a condition like "db.table1.field1==\'value\'". Something like "db.table1.field1==db.table2.field2" results in a SQL JOIN.': '"Dotaz" je podmínka, například "db.tabulka1.pole1==\'hodnota\'". Podmínka "db.tabulka1.pole1==db.tabulka2.pole2" pak vytvoří SQL JOIN.',
'The application logic, each URL path is mapped in one exposed function in the controller': 'Logika aplikace: každá URL je mapována na funkci vystavovanou kontrolérem.',
'The Core': 'Jádro (The Core)',
'The data representation, define database tables and sets': 'Reprezentace dat: definovat tabulky databáze a záznamy',
'The output of the file is a dictionary that was rendered by the view %s': 'Výstup ze souboru je slovník, který se zobrazil v pohledu %s.',
'The presentations layer, views are also known as templates': 'Prezentační vrstva: pohledy či templaty (šablony)',
'The Views': 'Pohledy (The Views)',
'There are no controllers': 'There are no controllers',
'There are no modules': 'There are no modules',
'There are no plugins': 'Žádné moduly nejsou instalovány.',
'There are no private files': 'Žádné soukromé soubory neexistují.',
'There are no static files': 'There are no static files',
'There are no translators, only default language is supported': 'There are no translators, only default language is supported',
'There are no views': 'There are no views',
'These files are not served, they are only available from within your app': 'Tyto soubory jsou klientům nepřístupné. K dispozici jsou pouze v rámci aplikace.',
'These files are served without processing, your images go here': 'Tyto soubory jsou servírovány bez přídavné logiky, sem patří např. obrázky.',
'This App': 'Tato aplikace',
'This is a copy of the scaffolding application': 'Toto je kopie aplikace skelet.',
'This is an experimental feature and it needs more testing. If you decide to upgrade you do it at your own risk': 'This is an experimental feature and it needs more testing. If you decide to upgrade you do it at your own risk',
'This is the %(filename)s template': 'This is the %(filename)s template',
'this page to see if a breakpoint was hit and debug interaction is required.': 'tuto stránku, abyste uviděli, zda se dosáhlo bodu přerušení.',
'Ticket': 'Ticket',
'Ticket ID': 'Ticket ID',
'Time in Cache (h:m:s)': 'Čas v Cache (h:m:s)',
'Timestamp': 'Časové razítko',
'to previous version.': 'k předchozí verzi.',
'To create a plugin, name a file/folder plugin_[name]': 'Zásuvný modul vytvoříte tak, že pojmenujete soubor/adresář plugin_[jméno modulu]',
'To emulate a breakpoint programatically, write:': 'K nastavení bodu přerušení v kódu programu, napište:',
'to use the debugger!': ', abyste mohli ladící program používat!',
'toggle breakpoint': 'vyp./zap. bod přerušení',
'Toggle Fullscreen': 'Na celou obrazovku a zpět',
'too short': 'Příliš krátké',
'Traceback': 'Traceback',
'Translation strings for the application': 'Překlad textů pro aplikaci',
'try something like': 'try something like',
'Try the mobile interface': 'Zkuste rozhraní pro mobilní zařízení',
'try view': 'try view',
'Twitter': 'Twitter',
'Type python statement in here and hit Return (Enter) to execute it.': 'Type python statement in here and hit Return (Enter) to execute it.',
'Type some Python code in here and hit Return (Enter) to execute it.': 'Type some Python code in here and hit Return (Enter) to execute it.',
'Unable to check for upgrades': 'Unable to check for upgrades',
'unable to parse csv file': 'csv soubor nedá sa zpracovat',
'uncheck all': 'vše odznačit',
'Uninstall': 'Odinstalovat',
'update': 'aktualizovat',
'update all languages': 'aktualizovat všechny jazyky',
'Update:': 'Upravit:',
'Upgrade': 'Upgrade',
'upgrade now': 'upgrade now',
'upgrade now to %s': 'upgrade now to %s',
'upload': 'nahrát',
'Upload': 'Upload',
'Upload a package:': 'Nahrát balík:',
'Upload and install packed application': 'Nahrát a instalovat zabalenou aplikaci',
'upload file:': 'nahrát soubor:',
'upload plugin file:': 'nahrát soubor modulu:',
'Use (...)&(...) for AND, (...)|(...) for OR, and ~(...) for NOT to build more complex queries.': 'Použijte (...)&(...) pro AND, (...)|(...) pro OR a ~(...) pro NOT pro sestavení složitějších dotazů.',
'User %(id)s Logged-in': 'Uživatel %(id)s přihlášen',
'User %(id)s Logged-out': 'Uživatel %(id)s odhlášen',
'User %(id)s Password changed': 'Uživatel %(id)s změnil heslo',
'User %(id)s Profile updated': 'Uživatel %(id)s upravil profil',
'User %(id)s Registered': 'Uživatel %(id)s se zaregistroval',
'User %(id)s Username retrieved': 'Uživatel %(id)s si nachal zaslat přihlašovací jméno',
'User ID': 'ID uživatele',
'Username': 'Přihlašovací jméno',
'variables': 'variables',
'Verify Password': 'Zopakujte heslo',
'Version': 'Verze',
'Version %s.%s.%s (%s) %s': 'Verze %s.%s.%s (%s) %s',
'Versioning': 'Verzování',
'Videos': 'Videa',
'View': 'Pohled (View)',
'Views': 'Pohledy',
'views': 'pohledy',
'Web Framework': 'Web Framework',
'web2py is up to date': 'Máte aktuální verzi web2py.',
'web2py online debugger': 'Ladící online web2py program',
'web2py Recent Tweets': 'Štěbetání na Twitteru o web2py',
'web2py upgrade': 'web2py upgrade',
'web2py upgraded; please restart it': 'web2py upgraded; please restart it',
'Welcome': 'Vítejte',
'Welcome to web2py': 'Vitejte ve web2py',
'Welcome to web2py!': 'Vítejte ve web2py!',
'Which called the function %s located in the file %s': 'která zavolala funkci %s v souboru (kontroléru) %s.',
'You are successfully running web2py': 'Úspěšně jste spustili web2py.',
'You can also set and remove breakpoint in the edit window, using the Toggle Breakpoint button': 'Nastavovat a mazat body přerušení je též možno v rámci editování zdrojového souboru přes tlačítko Vyp./Zap. bod přerušení',
'You can modify this application and adapt it to your needs': 'Tuto aplikaci si můžete upravit a přizpůsobit ji svým potřebám.',
'You need to set up and reach a': 'Je třeba nejprve nastavit a dojít až na',
'You visited the url %s': 'Navštívili jste stránku %s,',
'Your application will be blocked until you click an action button (next, step, continue, etc.)': 'Aplikace bude blokována než se klikne na jedno z tlačítek (další, krok, pokračovat, atd.)',
'Your can inspect variables using the console bellow': 'Níže pomocí příkazové řádky si můžete prohlédnout proměnné',
}
| gpl-2.0 |
Arcanemagus/SickRage | lib/validators/iban.py | 17 | 1167 | import re
from .utils import validator
regex = (
r'^[A-Z]{2}[0-9]{2}[A-Z0-9]{11,30}$'
)
pattern = re.compile(regex)
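# The pattern describes the generic IBAN shape: a two-letter country code,
# two check digits, and 11-30 alphanumeric BBAN characters.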
def char_value(char):
"""A=10, B=11, ..., Z=35
"""
if char.isdigit():
return int(char)
else:
return 10 + ord(char) - ord('A')
def modcheck(value):
"""Check if the value string passes the mod97-test.
"""
# move country code and check numbers to end
rearranged = value[4:] + value[:4]
# convert letters to numbers
converted = [char_value(char) for char in rearranged]
# interpret as integer
integerized = int(''.join([str(i) for i in converted]))
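    # Per ISO 7064 (mod 97-10), a valid IBAN leaves a remainder of 1.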
return (integerized % 97 == 1)
@validator
def iban(value):
"""
Return whether or not given value is a valid IBAN code.
If the value is a valid IBAN this function returns ``True``, otherwise
:class:`~validators.utils.ValidationFailure`.
Examples::
>>> iban('DE29100500001061045672')
True
>>> iban('123456')
ValidationFailure(func=iban, ...)
.. versionadded:: 0.8
:param value: IBAN string to validate
"""
return pattern.match(value) and modcheck(value)
| gpl-3.0 |
UUDigitalHumanitieslab/texcavator | services/management/commands/termvector_wordclouds.py | 2 | 3044 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Generate word clouds based on termvectors for random sets of documents.
"""
import logging
from django.core.management.base import BaseCommand
from collections import Counter
import time
from services.es import _es
from services.models import DocID
from texcavator import utils
logger = logging.getLogger(__name__)
class Command(BaseCommand):
    args = '<#-documents, #-repetitions, size-of-ES-chunks>'
    help = 'Generate word clouds using term vectors. #-documents is the ' \
           'number of documents the word cloud must be generated for. ' \
           '#-repetitions is the number of times word cloud generation is ' \
           'repeated (with a new random set of documents). ' \
           'size-of-ES-chunks is the number of documents that is retrieved ' \
           'in each ElasticSearch request.'
def handle(self, *args, **options):
query_size = 2500
n_repetitions = 10
es_retrieve = 2500
if len(args) > 0:
query_size = int(args[0])
if len(args) > 1:
n_repetitions = int(args[1])
if len(args) > 2:
es_retrieve = int(args[2])
response_times = []
for repetition in range(n_repetitions):
c1 = time.time()
es_time = []
wordcloud = Counter()
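            # Aggregates term frequencies over all documents in this run.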
# select random documents
document_set = DocID.objects.order_by('?')[0:query_size]
doc_ids = [doc.doc_id for doc in document_set]
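            # Fetch term vectors in chunks so each mtermvectors request stays
            # within the configured batch size.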
for ids in utils.chunks(doc_ids, es_retrieve):
bdy = {
'ids': ids,
'parameters': {
'fields': ['article_dc_title', 'text_content'],
'term_statistics': False,
'field_statistics': False,
'offsets': False,
'payloads': False,
'positions': False
}
}
c3 = time.time()
t_vectors = _es().mtermvectors(index='kb', doc_type='doc',
body=bdy)
c4 = time.time()
es_time.append((c4-c3)*1000)
for doc in t_vectors.get('docs'):
for field, data in doc.get('term_vectors').iteritems():
temp = {}
for term, details in data.get('terms').iteritems():
temp[term] = int(details['term_freq'])
wordcloud.update(temp)
c2 = time.time()
elapsed_c = (c2-c1)*1000
response_times.append(elapsed_c)
self.stdout.write(str(elapsed_c)+' ES: '+str(sum(es_time)))
self.stdout.flush()
avg = float(sum(response_times)/len(response_times))
        print 'Average response time for generating word clouds from {num} ' \
              'documents: {avg} milliseconds'.format(num=query_size, avg=avg)
| apache-2.0 |
onceuponatimeforever/oh-mainline | vendor/packages/scrapy/scrapy/tests/test_item.py | 24 | 3608 | import unittest
from scrapy.item import Item, Field
class ItemTest(unittest.TestCase):
def test_simple(self):
class TestItem(Item):
name = Field()
i = TestItem()
i['name'] = u'name'
self.assertEqual(i['name'], u'name')
def test_init(self):
class TestItem(Item):
name = Field()
i = TestItem()
self.assertRaises(KeyError, i.__getitem__, 'name')
i2 = TestItem(name=u'john doe')
self.assertEqual(i2['name'], u'john doe')
i3 = TestItem({'name': u'john doe'})
self.assertEqual(i3['name'], u'john doe')
i4 = TestItem(i3)
self.assertEqual(i4['name'], u'john doe')
self.assertRaises(KeyError, TestItem, {'name': u'john doe',
'other': u'foo'})
def test_invalid_field(self):
class TestItem(Item):
pass
i = TestItem()
self.assertRaises(KeyError, i.__setitem__, 'field', 'text')
self.assertRaises(KeyError, i.__getitem__, 'field')
def test_repr(self):
class TestItem(Item):
name = Field()
number = Field()
i = TestItem()
i['name'] = u'John Doe'
i['number'] = 123
itemrepr = repr(i)
self.assertEqual(itemrepr,
"{'name': u'John Doe', 'number': 123}")
i2 = eval(itemrepr)
self.assertEqual(i2['name'], 'John Doe')
self.assertEqual(i2['number'], 123)
def test_private_attr(self):
class TestItem(Item):
name = Field()
i = TestItem()
i._private = 'test'
self.assertEqual(i._private, 'test')
def test_raise_getattr(self):
class TestItem(Item):
name = Field()
i = TestItem()
self.assertRaises(AttributeError, getattr, i, 'name')
def test_raise_setattr(self):
class TestItem(Item):
name = Field()
i = TestItem()
self.assertRaises(AttributeError, setattr, i, 'name', 'john')
def test_custom_methods(self):
class TestItem(Item):
name = Field()
def get_name(self):
return self['name']
def change_name(self, name):
self['name'] = name
i = TestItem()
self.assertRaises(KeyError, i.get_name)
i['name'] = u'lala'
self.assertEqual(i.get_name(), u'lala')
i.change_name(u'other')
self.assertEqual(i.get_name(), 'other')
def test_metaclass(self):
class TestItem(Item):
name = Field()
keys = Field()
values = Field()
i = TestItem()
i['name'] = u'John'
self.assertEqual(i.keys(), ['name'])
self.assertEqual(i.values(), ['John'])
i['keys'] = u'Keys'
i['values'] = u'Values'
self.assertEqual(i.keys(), ['keys', 'values', 'name'])
self.assertEqual(i.values(), [u'Keys', u'Values', u'John'])
def test_metaclass_inheritance(self):
class BaseItem(Item):
name = Field()
keys = Field()
values = Field()
class TestItem(BaseItem):
keys = Field()
i = TestItem()
i['keys'] = 3
self.assertEqual(i.keys(), ['keys'])
self.assertEqual(i.values(), [3])
def test_to_dict(self):
class TestItem(Item):
name = Field()
i = TestItem()
i['name'] = u'John'
self.assertEqual(dict(i), {'name': u'John'})
if __name__ == "__main__":
unittest.main()
| agpl-3.0 |
onceuponatimeforever/oh-mainline | vendor/packages/sphinx/tests/test_websupport.py | 22 | 9602 | # -*- coding: utf-8 -*-
"""
test_websupport
~~~~~~~~~~~~~~~
Test the Web Support Package
:copyright: Copyright 2007-2013 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import os
from StringIO import StringIO
try:
from functools import wraps
except ImportError:
# functools is new in 2.5
wraps = lambda f: (lambda w: w)
from sphinx.websupport import WebSupport
from sphinx.websupport.errors import DocumentNotFoundError, \
CommentNotAllowedError, UserNotAuthorizedError
from sphinx.websupport.storage import StorageBackend
from sphinx.websupport.storage.differ import CombinedHtmlDiff
try:
from sphinx.websupport.storage.sqlalchemystorage import Session, \
Comment, CommentVote
from sphinx.websupport.storage.sqlalchemy_db import Node
sqlalchemy_missing = False
except ImportError:
sqlalchemy_missing = True
from util import test_root, raises, skip_if
default_settings = {'builddir': os.path.join(test_root, 'websupport'),
'status': StringIO(),
'warning': StringIO()}
def teardown_module():
(test_root / 'generated').rmtree(True)
(test_root / 'websupport').rmtree(True)
def with_support(*args, **kwargs):
"""Make a WebSupport object and pass it the test."""
settings = default_settings.copy()
settings.update(kwargs)
def generator(func):
@wraps(func)
def new_func(*args2, **kwargs2):
support = WebSupport(**settings)
func(support, *args2, **kwargs2)
return new_func
return generator
class NullStorage(StorageBackend):
pass
@with_support(storage=NullStorage())
def test_no_srcdir(support):
"""Make sure the correct exception is raised if srcdir is not given."""
raises(RuntimeError, support.build)
@skip_if(sqlalchemy_missing, 'needs sqlalchemy')
@with_support(srcdir=test_root)
def test_build(support):
support.build()
@skip_if(sqlalchemy_missing, 'needs sqlalchemy')
@with_support()
def test_get_document(support):
raises(DocumentNotFoundError, support.get_document, 'nonexisting')
contents = support.get_document('contents')
assert contents['title'] and contents['body'] \
and contents['sidebar'] and contents['relbar']
@skip_if(sqlalchemy_missing, 'needs sqlalchemy')
@with_support()
def test_comments(support):
session = Session()
nodes = session.query(Node).all()
first_node = nodes[0]
second_node = nodes[1]
# Create a displayed comment and a non displayed comment.
comment = support.add_comment('First test comment',
node_id=first_node.id,
username='user_one')
hidden_comment = support.add_comment('Hidden comment',
node_id=first_node.id,
displayed=False)
# Make sure that comments can't be added to a comment where
# displayed == False, since it could break the algorithm that
# converts a nodes comments to a tree.
raises(CommentNotAllowedError, support.add_comment, 'Not allowed',
parent_id=str(hidden_comment['id']))
# Add a displayed and not displayed child to the displayed comment.
support.add_comment('Child test comment', parent_id=str(comment['id']),
username='user_one')
support.add_comment('Hidden child test comment',
parent_id=str(comment['id']), displayed=False)
# Add a comment to another node to make sure it isn't returned later.
support.add_comment('Second test comment',
node_id=second_node.id,
username='user_two')
# Access the comments as a moderator.
data = support.get_data(first_node.id, moderator=True)
comments = data['comments']
children = comments[0]['children']
assert len(comments) == 2
assert comments[1]['text'] == '<p>Hidden comment</p>\n'
assert len(children) == 2
assert children[1]['text'] == '<p>Hidden child test comment</p>\n'
# Access the comments without being a moderator.
data = support.get_data(first_node.id)
comments = data['comments']
children = comments[0]['children']
assert len(comments) == 1
assert comments[0]['text'] == '<p>First test comment</p>\n'
assert len(children) == 1
assert children[0]['text'] == '<p>Child test comment</p>\n'
@skip_if(sqlalchemy_missing, 'needs sqlalchemy')
@with_support()
def test_voting(support):
session = Session()
nodes = session.query(Node).all()
node = nodes[0]
comment = support.get_data(node.id)['comments'][0]
def check_rating(val):
data = support.get_data(node.id)
comment = data['comments'][0]
assert comment['rating'] == val, '%s != %s' % (comment['rating'], val)
support.process_vote(comment['id'], 'user_one', '1')
support.process_vote(comment['id'], 'user_two', '1')
support.process_vote(comment['id'], 'user_three', '1')
check_rating(3)
support.process_vote(comment['id'], 'user_one', '-1')
check_rating(1)
support.process_vote(comment['id'], 'user_one', '0')
check_rating(2)
# Make sure a vote with value > 1 or < -1 can't be cast.
raises(ValueError, support.process_vote, comment['id'], 'user_one', '2')
raises(ValueError, support.process_vote, comment['id'], 'user_one', '-2')
# Make sure past voting data is associated with comments when they are
# fetched.
data = support.get_data(str(node.id), username='user_two')
comment = data['comments'][0]
assert comment['vote'] == 1, '%s != 1' % comment['vote']
@skip_if(sqlalchemy_missing, 'needs sqlalchemy')
@with_support()
def test_proposals(support):
session = Session()
node = session.query(Node).first()
data = support.get_data(node.id)
source = data['source']
proposal = source[:5] + source[10:15] + 'asdf' + source[15:]
comment = support.add_comment('Proposal comment',
node_id=node.id,
proposal=proposal)
@skip_if(sqlalchemy_missing, 'needs sqlalchemy')
@with_support()
def test_user_delete_comments(support):
def get_comment():
session = Session()
node = session.query(Node).first()
session.close()
return support.get_data(node.id)['comments'][0]
comment = get_comment()
assert comment['username'] == 'user_one'
# Make sure other normal users can't delete someone elses comments.
raises(UserNotAuthorizedError, support.delete_comment,
comment['id'], username='user_two')
# Now delete the comment using the correct username.
support.delete_comment(comment['id'], username='user_one')
comment = get_comment()
assert comment['username'] == '[deleted]'
assert comment['text'] == '[deleted]'
@skip_if(sqlalchemy_missing, 'needs sqlalchemy')
@with_support()
def test_moderator_delete_comments(support):
def get_comment():
session = Session()
node = session.query(Node).first()
session.close()
return support.get_data(node.id, moderator=True)['comments'][1]
comment = get_comment()
support.delete_comment(comment['id'], username='user_two',
moderator=True)
raises(IndexError, get_comment)
@skip_if(sqlalchemy_missing, 'needs sqlalchemy')
@with_support()
def test_update_username(support):
support.update_username('user_two', 'new_user_two')
session = Session()
comments = session.query(Comment).\
filter(Comment.username == 'user_two').all()
assert len(comments) == 0
votes = session.query(CommentVote).\
filter(CommentVote.username == 'user_two').all()
assert len(votes) == 0
comments = session.query(Comment).\
filter(Comment.username == 'new_user_two').all()
assert len(comments) == 1
votes = session.query(CommentVote).\
filter(CommentVote.username == 'new_user_two').all()
assert len(votes) == 0
called = False
def moderation_callback(comment):
global called
called = True
@skip_if(sqlalchemy_missing, 'needs sqlalchemy')
@with_support(moderation_callback=moderation_callback)
def test_moderation(support):
session = Session()
nodes = session.query(Node).all()
node = nodes[7]
session.close()
accepted = support.add_comment('Accepted Comment', node_id=node.id,
displayed=False)
deleted = support.add_comment('Comment to delete', node_id=node.id,
displayed=False)
# Make sure the moderation_callback is called.
assert called == True
# Make sure the user must be a moderator.
raises(UserNotAuthorizedError, support.accept_comment, accepted['id'])
raises(UserNotAuthorizedError, support.delete_comment, deleted['id'])
support.accept_comment(accepted['id'], moderator=True)
support.delete_comment(deleted['id'], moderator=True)
comments = support.get_data(node.id)['comments']
assert len(comments) == 1
comments = support.get_data(node.id, moderator=True)['comments']
assert len(comments) == 1
def test_differ():
source = 'Lorem ipsum dolor sit amet,\nconsectetur adipisicing elit,\n' \
'sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.'
prop = 'Lorem dolor sit amet,\nconsectetur nihil adipisicing elit,\n' \
'sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.'
differ = CombinedHtmlDiff(source, prop)
differ.make_html()
| agpl-3.0 |
Pablo126/SSBW | Entrega1/lib/python3.5/site-packages/rest_framework/relations.py | 5 | 19367 | # coding: utf-8
from __future__ import unicode_literals
from collections import OrderedDict
from django.core.exceptions import ImproperlyConfigured, ObjectDoesNotExist
from django.db.models import Manager
from django.db.models.query import QuerySet
from django.utils import six
from django.utils.encoding import (
python_2_unicode_compatible, smart_text, uri_to_iri
)
from django.utils.six.moves.urllib import parse as urlparse
from django.utils.translation import ugettext_lazy as _
from rest_framework.compat import (
NoReverseMatch, Resolver404, get_script_prefix, resolve
)
from rest_framework.fields import (
Field, empty, get_attribute, is_simple_callable, iter_options
)
from rest_framework.reverse import reverse
from rest_framework.settings import api_settings
from rest_framework.utils import html
def method_overridden(method_name, klass, instance):
"""
Determine if a method has been overridden.
"""
method = getattr(klass, method_name)
default_method = getattr(method, '__func__', method) # Python 3 compat
return default_method is not getattr(instance, method_name).__func__
class Hyperlink(six.text_type):
"""
A string like object that additionally has an associated name.
We use this for hyperlinked URLs that may render as a named link
in some contexts, or render as a plain URL in others.
"""
def __new__(self, url, obj):
ret = six.text_type.__new__(self, url)
ret.obj = obj
return ret
def __getnewargs__(self):
return(str(self), self.name,)
@property
def name(self):
# This ensures that we only called `__str__` lazily,
# as in some cases calling __str__ on a model instances *might*
# involve a database lookup.
return six.text_type(self.obj)
is_hyperlink = True
@python_2_unicode_compatible
class PKOnlyObject(object):
"""
This is a mock object, used for when we only need the pk of the object
instance, but still want to return an object with a .pk attribute,
in order to keep the same interface as a regular model instance.
"""
def __init__(self, pk):
self.pk = pk
def __str__(self):
return "%s" % self.pk
# We assume that 'validators' are intended for the child serializer,
# rather than the parent serializer.
MANY_RELATION_KWARGS = (
'read_only', 'write_only', 'required', 'default', 'initial', 'source',
'label', 'help_text', 'style', 'error_messages', 'allow_empty'
)
class RelatedField(Field):
queryset = None
html_cutoff = None
html_cutoff_text = None
def __init__(self, **kwargs):
self.queryset = kwargs.pop('queryset', self.queryset)
self.html_cutoff = kwargs.pop(
'html_cutoff',
self.html_cutoff or int(api_settings.HTML_SELECT_CUTOFF)
)
self.html_cutoff_text = kwargs.pop(
'html_cutoff_text',
self.html_cutoff_text or _(api_settings.HTML_SELECT_CUTOFF_TEXT)
)
if not method_overridden('get_queryset', RelatedField, self):
assert self.queryset is not None or kwargs.get('read_only', None), (
'Relational field must provide a `queryset` argument, '
'override `get_queryset`, or set read_only=`True`.'
)
assert not (self.queryset is not None and kwargs.get('read_only', None)), (
'Relational fields should not provide a `queryset` argument, '
'when setting read_only=`True`.'
)
kwargs.pop('many', None)
kwargs.pop('allow_empty', None)
super(RelatedField, self).__init__(**kwargs)
def __new__(cls, *args, **kwargs):
# We override this method in order to automagically create
# `ManyRelatedField` classes instead when `many=True` is set.
if kwargs.pop('many', False):
return cls.many_init(*args, **kwargs)
return super(RelatedField, cls).__new__(cls, *args, **kwargs)
@classmethod
def many_init(cls, *args, **kwargs):
"""
This method handles creating a parent `ManyRelatedField` instance
when the `many=True` keyword argument is passed.
Typically you won't need to override this method.
Note that we're over-cautious in passing most arguments to both parent
and child classes in order to try to cover the general case. If you're
overriding this method you'll probably want something much simpler, eg:
@classmethod
def many_init(cls, *args, **kwargs):
kwargs['child'] = cls()
return CustomManyRelatedField(*args, **kwargs)
"""
list_kwargs = {'child_relation': cls(*args, **kwargs)}
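        # Only forward kwargs that are meaningful on the parent relation.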
for key in kwargs.keys():
if key in MANY_RELATION_KWARGS:
list_kwargs[key] = kwargs[key]
return ManyRelatedField(**list_kwargs)
def run_validation(self, data=empty):
# We force empty strings to None values for relational fields.
if data == '':
data = None
return super(RelatedField, self).run_validation(data)
def get_queryset(self):
queryset = self.queryset
if isinstance(queryset, (QuerySet, Manager)):
# Ensure queryset is re-evaluated whenever used.
# Note that actually a `Manager` class may also be used as the
# queryset argument. This occurs on ModelSerializer fields,
# as it allows us to generate a more expressive 'repr' output
# for the field.
# Eg: 'MyRelationship(queryset=ExampleModel.objects.all())'
queryset = queryset.all()
return queryset
def use_pk_only_optimization(self):
return False
def get_attribute(self, instance):
if self.use_pk_only_optimization() and self.source_attrs:
# Optimized case, return a mock object only containing the pk attribute.
try:
instance = get_attribute(instance, self.source_attrs[:-1])
value = instance.serializable_value(self.source_attrs[-1])
if is_simple_callable(value):
# Handle edge case where the relationship `source` argument
# points to a `get_relationship()` method on the model
value = value().pk
return PKOnlyObject(pk=value)
except AttributeError:
pass
# Standard case, return the object instance.
return get_attribute(instance, self.source_attrs)
def get_choices(self, cutoff=None):
queryset = self.get_queryset()
if queryset is None:
# Ensure that field.choices returns something sensible
# even when accessed with a read-only field.
return {}
if cutoff is not None:
queryset = queryset[:cutoff]
return OrderedDict([
(
self.to_representation(item),
self.display_value(item)
)
for item in queryset
])
@property
def choices(self):
return self.get_choices()
@property
def grouped_choices(self):
return self.choices
def iter_options(self):
return iter_options(
self.get_choices(cutoff=self.html_cutoff),
cutoff=self.html_cutoff,
cutoff_text=self.html_cutoff_text
)
def display_value(self, instance):
return six.text_type(instance)
class StringRelatedField(RelatedField):
"""
A read only field that represents its targets using their
plain string representation.
"""
def __init__(self, **kwargs):
kwargs['read_only'] = True
super(StringRelatedField, self).__init__(**kwargs)
def to_representation(self, value):
return six.text_type(value)
class PrimaryKeyRelatedField(RelatedField):
default_error_messages = {
'required': _('This field is required.'),
'does_not_exist': _('Invalid pk "{pk_value}" - object does not exist.'),
'incorrect_type': _('Incorrect type. Expected pk value, received {data_type}.'),
}
def __init__(self, **kwargs):
self.pk_field = kwargs.pop('pk_field', None)
super(PrimaryKeyRelatedField, self).__init__(**kwargs)
def use_pk_only_optimization(self):
return True
def to_internal_value(self, data):
if self.pk_field is not None:
data = self.pk_field.to_internal_value(data)
try:
return self.get_queryset().get(pk=data)
except ObjectDoesNotExist:
self.fail('does_not_exist', pk_value=data)
except (TypeError, ValueError):
self.fail('incorrect_type', data_type=type(data).__name__)
def to_representation(self, value):
if self.pk_field is not None:
return self.pk_field.to_representation(value.pk)
return value.pk
class HyperlinkedRelatedField(RelatedField):
lookup_field = 'pk'
view_name = None
default_error_messages = {
'required': _('This field is required.'),
'no_match': _('Invalid hyperlink - No URL match.'),
'incorrect_match': _('Invalid hyperlink - Incorrect URL match.'),
'does_not_exist': _('Invalid hyperlink - Object does not exist.'),
'incorrect_type': _('Incorrect type. Expected URL string, received {data_type}.'),
}
def __init__(self, view_name=None, **kwargs):
if view_name is not None:
self.view_name = view_name
assert self.view_name is not None, 'The `view_name` argument is required.'
self.lookup_field = kwargs.pop('lookup_field', self.lookup_field)
self.lookup_url_kwarg = kwargs.pop('lookup_url_kwarg', self.lookup_field)
self.format = kwargs.pop('format', None)
# We include this simply for dependency injection in tests.
# We can't add it as a class attributes or it would expect an
# implicit `self` argument to be passed.
self.reverse = reverse
super(HyperlinkedRelatedField, self).__init__(**kwargs)
def use_pk_only_optimization(self):
return self.lookup_field == 'pk'
def get_object(self, view_name, view_args, view_kwargs):
"""
Return the object corresponding to a matched URL.
Takes the matched URL conf arguments, and should return an
object instance, or raise an `ObjectDoesNotExist` exception.
"""
lookup_value = view_kwargs[self.lookup_url_kwarg]
lookup_kwargs = {self.lookup_field: lookup_value}
return self.get_queryset().get(**lookup_kwargs)
def get_url(self, obj, view_name, request, format):
"""
Given an object, return the URL that hyperlinks to the object.
May raise a `NoReverseMatch` if the `view_name` and `lookup_field`
attributes are not configured to correctly match the URL conf.
"""
# Unsaved objects will not yet have a valid URL.
if hasattr(obj, 'pk') and obj.pk in (None, ''):
return None
lookup_value = getattr(obj, self.lookup_field)
kwargs = {self.lookup_url_kwarg: lookup_value}
return self.reverse(view_name, kwargs=kwargs, request=request, format=format)
def to_internal_value(self, data):
request = self.context.get('request', None)
try:
http_prefix = data.startswith(('http:', 'https:'))
except AttributeError:
self.fail('incorrect_type', data_type=type(data).__name__)
if http_prefix:
# If needed convert absolute URLs to relative path
data = urlparse.urlparse(data).path
prefix = get_script_prefix()
if data.startswith(prefix):
data = '/' + data[len(prefix):]
data = uri_to_iri(data)
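        # Resolve the path against the URL conf to find the matching view.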
try:
match = resolve(data)
except Resolver404:
self.fail('no_match')
try:
expected_viewname = request.versioning_scheme.get_versioned_viewname(
self.view_name, request
)
except AttributeError:
expected_viewname = self.view_name
if match.view_name != expected_viewname:
self.fail('incorrect_match')
try:
return self.get_object(match.view_name, match.args, match.kwargs)
except (ObjectDoesNotExist, TypeError, ValueError):
self.fail('does_not_exist')
def to_representation(self, value):
assert 'request' in self.context, (
"`%s` requires the request in the serializer"
" context. Add `context={'request': request}` when instantiating "
"the serializer." % self.__class__.__name__
)
request = self.context['request']
format = self.context.get('format', None)
# By default use whatever format is given for the current context
# unless the target is a different type to the source.
#
# Eg. Consider a HyperlinkedIdentityField pointing from a json
# representation to an html property of that representation...
#
# '/snippets/1/' should link to '/snippets/1/highlight/'
# ...but...
# '/snippets/1/.json' should link to '/snippets/1/highlight/.html'
if format and self.format and self.format != format:
format = self.format
# Return the hyperlink, or error if incorrectly configured.
try:
url = self.get_url(value, self.view_name, request, format)
except NoReverseMatch:
msg = (
'Could not resolve URL for hyperlinked relationship using '
'view name "%s". You may have failed to include the related '
'model in your API, or incorrectly configured the '
'`lookup_field` attribute on this field.'
)
if value in ('', None):
value_string = {'': 'the empty string', None: 'None'}[value]
msg += (
" WARNING: The value of the field on the model instance "
"was %s, which may be why it didn't match any "
"entries in your URL conf." % value_string
)
raise ImproperlyConfigured(msg % self.view_name)
if url is None:
return None
return Hyperlink(url, value)
class HyperlinkedIdentityField(HyperlinkedRelatedField):
"""
A read-only field that represents the identity URL for an object, itself.
This is in contrast to `HyperlinkedRelatedField` which represents the
URL of relationships to other objects.
"""
def __init__(self, view_name=None, **kwargs):
assert view_name is not None, 'The `view_name` argument is required.'
kwargs['read_only'] = True
kwargs['source'] = '*'
super(HyperlinkedIdentityField, self).__init__(view_name, **kwargs)
def use_pk_only_optimization(self):
# We have the complete object instance already. We don't need
# to run the 'only get the pk for this relationship' code.
return False
class SlugRelatedField(RelatedField):
"""
A read-write field that represents the target of the relationship
by a unique 'slug' attribute.
"""
default_error_messages = {
'does_not_exist': _('Object with {slug_name}={value} does not exist.'),
'invalid': _('Invalid value.'),
}
def __init__(self, slug_field=None, **kwargs):
assert slug_field is not None, 'The `slug_field` argument is required.'
self.slug_field = slug_field
super(SlugRelatedField, self).__init__(**kwargs)
def to_internal_value(self, data):
try:
return self.get_queryset().get(**{self.slug_field: data})
except ObjectDoesNotExist:
self.fail('does_not_exist', slug_name=self.slug_field, value=smart_text(data))
except (TypeError, ValueError):
self.fail('invalid')
def to_representation(self, obj):
return getattr(obj, self.slug_field)
class ManyRelatedField(Field):
"""
Relationships with `many=True` transparently get coerced into instead being
a ManyRelatedField with a child relationship.
The `ManyRelatedField` class is responsible for handling iterating through
the values and passing each one to the child relationship.
This class is treated as private API.
You shouldn't generally need to be using this class directly yourself,
and should instead simply set 'many=True' on the relationship.
"""
initial = []
default_empty_html = []
default_error_messages = {
'not_a_list': _('Expected a list of items but got type "{input_type}".'),
'empty': _('This list may not be empty.')
}
html_cutoff = None
html_cutoff_text = None
def __init__(self, child_relation=None, *args, **kwargs):
self.child_relation = child_relation
self.allow_empty = kwargs.pop('allow_empty', True)
self.html_cutoff = kwargs.pop(
'html_cutoff',
self.html_cutoff or int(api_settings.HTML_SELECT_CUTOFF)
)
self.html_cutoff_text = kwargs.pop(
'html_cutoff_text',
self.html_cutoff_text or _(api_settings.HTML_SELECT_CUTOFF_TEXT)
)
assert child_relation is not None, '`child_relation` is a required argument.'
super(ManyRelatedField, self).__init__(*args, **kwargs)
self.child_relation.bind(field_name='', parent=self)
def get_value(self, dictionary):
# We override the default field access in order to support
# lists in HTML forms.
if html.is_html_input(dictionary):
# Don't return [] if the update is partial
if self.field_name not in dictionary:
if getattr(self.root, 'partial', False):
return empty
return dictionary.getlist(self.field_name)
return dictionary.get(self.field_name, empty)
def to_internal_value(self, data):
if isinstance(data, type('')) or not hasattr(data, '__iter__'):
self.fail('not_a_list', input_type=type(data).__name__)
if not self.allow_empty and len(data) == 0:
self.fail('empty')
return [
self.child_relation.to_internal_value(item)
for item in data
]
def get_attribute(self, instance):
# Can't have any relationships if not created
if hasattr(instance, 'pk') and instance.pk is None:
return []
relationship = get_attribute(instance, self.source_attrs)
return relationship.all() if hasattr(relationship, 'all') else relationship
def to_representation(self, iterable):
return [
self.child_relation.to_representation(value)
for value in iterable
]
def get_choices(self, cutoff=None):
return self.child_relation.get_choices(cutoff)
@property
def choices(self):
return self.get_choices()
@property
def grouped_choices(self):
return self.choices
def iter_options(self):
return iter_options(
self.get_choices(cutoff=self.html_cutoff),
cutoff=self.html_cutoff,
cutoff_text=self.html_cutoff_text
)
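# --- Illustrative sketch (not part of the original module) -----------------
# Declaring any relational field with `many=True` is what produces a
# `ManyRelatedField`: the declared field becomes `child_relation`, and the
# wrapper iterates the incoming list as shown above.  The serializer field
# below is an assumption used only for illustration; internally it behaves
# like `ManyRelatedField(child_relation=PrimaryKeyRelatedField(...))`:
#
#     members = serializers.PrimaryKeyRelatedField(
#         many=True,
#         queryset=User.objects.all(),
#     )
# ----------------------------------------------------------------------------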
| gpl-3.0 |
dalf/searx | searx/engines/wordnik.py | 1 | 1991 | # SPDX-License-Identifier: AGPL-3.0-or-later
"""Wordnik (general)
"""
from lxml.html import fromstring
from searx import logger
from searx.utils import extract_text
from searx.network import raise_for_httperror
logger = logger.getChild('Wordnik engine')
# about
about = {
"website": 'https://www.wordnik.com',
"wikidata_id": 'Q8034401',
"official_api_documentation": None,
"use_official_api": False,
"require_api_key": False,
"results": 'HTML',
}
categories = ['general']
paging = False
URL = 'https://www.wordnik.com'
SEARCH_URL = URL + '/words/{query}'
def request(query, params):
params['url'] = SEARCH_URL.format(query=query)
logger.debug(f"query_url --> {params['url']}")
return params
def response(resp):
results = []
raise_for_httperror(resp)
dom = fromstring(resp.text)
word = extract_text(dom.xpath('//*[@id="headword"]/text()'))
definitions = []
for src in dom.xpath('//*[@id="define"]//h3[@class="source"]'):
src_text = extract_text(src).strip()
if src_text.startswith('from '):
src_text = src_text[5:]
src_defs = []
for def_item in src.xpath('following-sibling::ul[1]/li'):
def_abbr = extract_text(def_item.xpath('.//abbr')).strip()
def_text = extract_text(def_item).strip()
if def_abbr:
def_text = def_text[len(def_abbr):].strip()
src_defs.append((def_abbr, def_text))
definitions.append((src_text, src_defs))
if not definitions:
return results
infobox = ''
for src_text, src_defs in definitions:
infobox += f"<small>{src_text}</small>"
infobox += "<ul>"
for def_abbr, def_text in src_defs:
if def_abbr:
def_abbr += ": "
infobox += f"<li><i>{def_abbr}</i> {def_text}</li>"
infobox += "</ul>"
results.append({
'infobox': word,
'content': infobox,
})
return results
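def _demo_infobox_markup():
    """Illustrative sketch, not part of the upstream engine: shows how the
    (source, definitions) pairs built in response() are flattened into the
    HTML infobox. The sample data below is made up."""
    definitions = [
        ('The American Heritage Dictionary', [
            ('noun', 'An example definition.'),
            ('', 'A second sense without a part-of-speech abbreviation.'),
        ]),
    ]
    infobox = ''
    for src_text, src_defs in definitions:
        infobox += f"<small>{src_text}</small>"
        infobox += "<ul>"
        for def_abbr, def_text in src_defs:
            if def_abbr:
                def_abbr += ": "
            infobox += f"<li><i>{def_abbr}</i> {def_text}</li>"
        infobox += "</ul>"
    return infobox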
| agpl-3.0 |
asmltd/gosports | webapp/coach/views.py | 1 | 1584 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from rest_framework.authentication import SessionAuthentication, BasicAuthentication
from rest_framework.generics import ListAPIView, RetrieveAPIView, RetrieveUpdateAPIView
from rest_framework.pagination import LimitOffsetPagination
from rest_framework.permissions import IsAuthenticated, IsAdminUser
from .models import Coach_Details
from .serializers import CoachSerializer
from django.shortcuts import render
# Create your views here.
class ListCoachsAPIView(ListAPIView):
"""
    List all Coaches
api ==> hostname:port/api/coach/
"""
queryset = Coach_Details.objects.all()
serializer_class = CoachSerializer
pagination_class = LimitOffsetPagination
authentication_classes = (SessionAuthentication, BasicAuthentication)
permission_classes = (IsAuthenticated, IsAdminUser)
class CoachIndividualAPIView(RetrieveAPIView):
"""
    Retrieve details about a Coach
api ==> hostname:port/api/coach/coach_id/
"""
queryset = Coach_Details.objects.all()
serializer_class = CoachSerializer
authentication_classes = (SessionAuthentication, BasicAuthentication)
permission_classes = (IsAuthenticated,)
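# --- Illustrative sketch (not part of this module) --------------------------
# These generics are typically wired up in a urls.py roughly as below; the
# module layout and url patterns are assumptions, matching the api paths
# quoted in the docstrings:
#
#     from django.conf.urls import url
#     from .views import (ListCoachsAPIView, CoachIndividualAPIView,
#                         CoachEditAPIView)
#
#     urlpatterns = [
#         url(r'^api/coach/$', ListCoachsAPIView.as_view()),
#         url(r'^api/coach/(?P<pk>\d+)/$', CoachIndividualAPIView.as_view()),
#         url(r'^api/coach/(?P<pk>\d+)/edit$', CoachEditAPIView.as_view()),
#     ]
# -----------------------------------------------------------------------------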
class CoachEditAPIView(RetrieveUpdateAPIView):
"""
    Update particular Coach details
    api ==> hostname:port/api/coach/coach_id/edit
"""
queryset = Coach_Details.objects.all()
serializer_class = CoachSerializer
authentication_classes = (SessionAuthentication, BasicAuthentication)
permission_classes = (IsAuthenticated, IsAdminUser) | gpl-3.0 |
mlperf/training_results_v0.6 | Fujitsu/benchmarks/resnet/implementations/mxnet/python/mxnet/contrib/onnx/mx2onnx/_export_helper.py | 10 | 2340 | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""export helper functions"""
# coding: utf-8
import os
import logging
import mxnet as mx
def load_module(sym_filepath, params_filepath):
"""Loads the MXNet model file and
returns MXNet symbol and params (weights).
Parameters
----------
    sym_filepath : str
        Path to the symbol (``.json``) file
    params_filepath : str
        Path to the ``.params`` file
Returns
-------
sym : MXNet symbol
Model symbol object
params : params object
Model weights including both arg and aux params.
"""
if not (os.path.isfile(sym_filepath) and os.path.isfile(params_filepath)):
raise ValueError("Symbol and params files provided are invalid")
else:
try:
# reads symbol.json file from given path and
# retrieves model prefix and number of epochs
model_name = sym_filepath.rsplit('.', 1)[0].rsplit('-', 1)[0]
params_file_list = params_filepath.rsplit('.', 1)[0].rsplit('-', 1)
# Setting num_epochs to 0 if not present in filename
num_epochs = 0 if len(params_file_list) == 1 else int(params_file_list[1])
except IndexError:
logging.info("Model and params name should be in format: "
"prefix-symbol.json, prefix-epoch.params")
raise
sym, arg_params, aux_params = mx.model.load_checkpoint(model_name, num_epochs)
# Merging arg and aux parameters
params = {}
params.update(arg_params)
params.update(aux_params)
return sym, params
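# Illustrative usage sketch (not part of the original helper). The file names
# are assumptions; they simply follow the documented
# "prefix-symbol.json" / "prefix-epoch.params" convention:
#
#     sym, params = load_module('resnet-50-symbol.json', 'resnet-50-0000.params')
#     # `sym` is the MXNet symbol graph; `params` merges arg and aux weights.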
| apache-2.0 |
Workday/OpenFrame | net/data/name_constraints_unittest/generate_name_constraints.py | 8 | 18998 | #!/usr/bin/env python
# Copyright 2015 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import base64
import copy
import os
import random
import subprocess
import sys
import tempfile
sys.path += [os.path.join('..', 'verify_name_match_unittest', 'scripts')]
import generate_names
def generate(s, out_fn):
conf_tempfile = tempfile.NamedTemporaryFile()
conf_tempfile.write(str(s))
conf_tempfile.flush()
der_tmpfile = tempfile.NamedTemporaryFile()
description_tmpfile = tempfile.NamedTemporaryFile()
subprocess.check_call(['openssl', 'asn1parse', '-genconf', conf_tempfile.name,
'-i', '-out', der_tmpfile.name],
stdout=description_tmpfile)
conf_tempfile.close()
output_file = open(out_fn, 'w')
description_tmpfile.seek(0)
output_file.write(description_tmpfile.read())
output_file.write('-----BEGIN %s-----\n' % s.token())
output_file.write(base64.encodestring(der_tmpfile.read()))
output_file.write('-----END %s-----\n' % s.token())
output_file.close()
class SubjectAltNameGenerator:
def __init__(self):
self.names = []
def token(self):
return "SUBJECT ALTERNATIVE NAME"
def add_name(self, general_name):
self.names.append(general_name)
def __str__(self):
s = "asn1 = OCTWRAP,SEQUENCE:subjectAltNameSequence\n"
s += "[subjectAltNameSequence]\n"
s_suffix = ""
for n, name in enumerate(self.names):
n1, n2 = (str(name) + '\n').split('\n', 1)
if n2:
s_suffix += n2 + '\n'
s += '%s%s\n' % (n, n1)
return s + s_suffix
class NameConstraintsGenerator:
def __init__(self,
force_permitted_sequence=False,
force_excluded_sequence=False):
self.permitted = []
self.excluded = []
self.force_permitted_sequence = force_permitted_sequence
self.force_excluded_sequence = force_excluded_sequence
def token(self):
return "NAME CONSTRAINTS"
def union_from(self, c):
self.permitted.extend(c.permitted)
self.excluded.extend(c.excluded)
def add_permitted(self, general_name):
self.permitted.append(general_name)
def add_excluded(self, general_name):
self.excluded.append(general_name)
def __str__(self):
s = "asn1 = SEQUENCE:nameConstraintsSequence\n[nameConstraintsSequence]\n"
if self.permitted or self.force_permitted_sequence:
s += "permittedSubtrees = IMPLICIT:0,SEQUENCE:permittedSubtreesSequence\n"
if self.excluded or self.force_excluded_sequence:
s += "excludedSubtrees = IMPLICIT:1,SEQUENCE:excludedSubtreesSequence\n"
if self.permitted or self.force_permitted_sequence:
s += "[permittedSubtreesSequence]\n"
for n, subtree in enumerate(self.permitted):
s += 'subtree%i = SEQUENCE:permittedSubtree%i\n' % (n, n)
if self.excluded or self.force_excluded_sequence:
s += "[excludedSubtreesSequence]\n"
for n, subtree in enumerate(self.excluded):
s += 'subtree%i = SEQUENCE:excludedSubtree%i\n' % (n, n)
for n, subtree in enumerate(self.permitted):
s += '[permittedSubtree%i]\n%s\n' % (n, subtree)
for n, subtree in enumerate(self.excluded):
s += '[excludedSubtree%i]\n%s\n' % (n, subtree)
return s
def other_name():
i = random.randint(0, sys.maxint)
s = 'otherName = IMPLICIT:0,SEQUENCE:otherNameSequence%i\n' % i
s += '[otherNameSequence%i]\n' % i
s += 'type_id = OID:1.2.3.4.5\n'
s += 'value = FORMAT:HEX,OCTETSTRING:DEADBEEF\n'
return s
def rfc822_name(name):
return 'rfc822Name = IMPLICIT:1,IA5STRING:' + name
def dns_name(name):
return 'dNSName = IMPLICIT:2,IA5STRING:' + name
def x400_address():
i = random.randint(0, sys.maxint)
s = 'x400Address = IMPLICIT:3,SEQUENCE:x400AddressSequence%i\n' % i
s += '[x400AddressSequence%i]\n' % i
s += 'builtinstandardattributes = SEQUENCE:BuiltInStandardAttributes%i\n' % i
s += '[BuiltInStandardAttributes%i]\n' % i
s += 'countryname = EXPLICIT:1A,PRINTABLESTRING:US\n'
return s
def directory_name(name):
return str(name).replace(
'asn1 = SEQUENCE', 'directoryName = IMPLICIT:4,SEQUENCE')
def edi_party_name():
i = random.randint(0, sys.maxint)
s = 'ediPartyName = IMPLICIT:5,SEQUENCE:ediPartyNameSequence%i\n' % i
s += '[ediPartyNameSequence%i]\n' % i
s += 'partyName = IMPLICIT:1,UTF8:foo\n'
return s
def uniform_resource_identifier(name):
return 'uniformResourceIdentifier = IMPLICIT:6,IA5STRING:' + name
def ip_address(addr, enforce_length=True):
if enforce_length:
assert len(addr) in (4,16)
addr_str = ""
for addr_byte in addr:
addr_str += '%02X'%(addr_byte)
return 'iPAddress = IMPLICIT:7,FORMAT:HEX,OCTETSTRING:' + addr_str
def ip_address_range(addr, netmask, enforce_length=True):
if enforce_length:
assert len(addr) == len(netmask)
assert len(addr) in (4,16)
addr_str = ""
netmask_str = ""
for addr_byte, mask_byte in map(None, addr, netmask):
assert (addr_byte & ~mask_byte) == 0
addr_str += '%02X'%(addr_byte)
netmask_str += '%02X'%(mask_byte)
return ('iPAddress = IMPLICIT:7,FORMAT:HEX,OCTETSTRING:' + addr_str +
netmask_str)
def registered_id(oid):
return 'registeredID = IMPLICIT:8,OID:' + oid
def with_min_max(val, minimum=None, maximum=None):
s = val
s += '\n'
assert '\n[' not in s
if minimum is not None:
s += 'minimum = IMPLICIT:0,INTEGER:%i\n' % minimum
if maximum is not None:
s += 'maximum = IMPLICIT:1,INTEGER:%i\n' % maximum
return s
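def _demo_name_constraints_conf():
  """Illustrative sketch, not part of the original generator: returns the
  openssl 'asn1parse -genconf' section text produced for a single permitted
  dNSName, without invoking openssl or writing any files."""
  c = NameConstraintsGenerator()
  c.add_permitted(dns_name("permitted.example.com"))
  return str(c)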
def main():
dnsname_constraints = NameConstraintsGenerator()
dnsname_constraints.add_permitted(dns_name("permitted.example.com"))
dnsname_constraints.add_permitted(dns_name("permitted.example2.com"))
dnsname_constraints.add_permitted(dns_name("permitted.example3.com."))
dnsname_constraints.add_permitted(dns_name("alsopermitted.example.com"))
dnsname_constraints.add_excluded(dns_name("excluded.permitted.example.com"))
dnsname_constraints.add_permitted(
dns_name("stillnotpermitted.excluded.permitted.example.com"))
dnsname_constraints.add_excluded(dns_name("extraneousexclusion.example.com"))
generate(dnsname_constraints, "dnsname.pem")
dnsname_constraints2 = NameConstraintsGenerator()
dnsname_constraints2.add_permitted(dns_name("com"))
dnsname_constraints2.add_excluded(dns_name("foo.bar.com"))
generate(dnsname_constraints2, "dnsname2.pem")
dnsname_constraints3 = NameConstraintsGenerator()
dnsname_constraints3.add_permitted(dns_name(".bar.com"))
generate(dnsname_constraints3, "dnsname-permitted_with_leading_dot.pem")
c = NameConstraintsGenerator()
c.add_excluded(dns_name("excluded.permitted.example.com"))
generate(c, "dnsname-excluded.pem")
c = NameConstraintsGenerator()
c.add_permitted(dns_name("permitted.example.com"))
c.add_excluded(dns_name(""))
generate(c, "dnsname-excludeall.pem")
c = NameConstraintsGenerator()
c.add_permitted(dns_name("permitted.example.com"))
c.add_excluded(dns_name("."))
generate(c, "dnsname-exclude_dot.pem")
ipaddress_constraints = NameConstraintsGenerator()
ipaddress_constraints.add_permitted(
ip_address_range((192,168,0,0),(255,255,0,0)))
ipaddress_constraints.add_excluded(
ip_address_range((192,168,5,0),(255,255,255,0)))
ipaddress_constraints.add_permitted(
ip_address_range((192,168,5,32),(255,255,255,224)))
ipaddress_constraints.add_permitted(
ip_address_range((192,167,5,32),(255,255,255,224)))
ipaddress_constraints.add_excluded(
ip_address_range((192,166,5,32),(255,255,255,224)))
ipaddress_constraints.add_permitted(ip_address_range(
(1,2,3,4,5,6,7,8,9,10,11,12,0,0,0,0),
(255,255,255,255,255,255,255,255,255,255,255,255,0,0,0,0)))
ipaddress_constraints.add_excluded(ip_address_range(
(1,2,3,4,5,6,7,8,9,10,11,12,5,0,0,0),
(255,255,255,255,255,255,255,255,255,255,255,255,255,0,0,0)))
ipaddress_constraints.add_permitted(ip_address_range(
(1,2,3,4,5,6,7,8,9,10,11,12,5,32,0,0),
(255,255,255,255,255,255,255,255,255,255,255,255,255,224,0,0)))
ipaddress_constraints.add_permitted(ip_address_range(
(1,2,3,4,5,6,7,8,9,10,11,11,5,32,0,0),
(255,255,255,255,255,255,255,255,255,255,255,255,255,224,0,0)))
ipaddress_constraints.add_excluded(ip_address_range(
(1,2,3,4,5,6,7,8,9,10,11,10,5,32,0,0),
(255,255,255,255,255,255,255,255,255,255,255,255,255,224,0,0)))
generate(ipaddress_constraints, "ipaddress.pem")
c = NameConstraintsGenerator()
c.add_permitted(ip_address_range((192,168,1,3),(255,255,255,255)))
generate(c, "ipaddress-permit_singlehost.pem")
c = NameConstraintsGenerator()
c.add_permitted(ip_address_range((0,0,0,0),(0,0,0,0)))
generate(c, "ipaddress-permit_all.pem")
c = NameConstraintsGenerator()
c.add_permitted(ip_address_range((0x80,0,0,0),(0x80,0,0,0)))
generate(c, "ipaddress-permit_prefix1.pem")
c = NameConstraintsGenerator()
c.add_permitted(ip_address_range((192,168,1,2),(255,255,255,254)))
generate(c, "ipaddress-permit_prefix31.pem")
c = NameConstraintsGenerator()
c.add_permitted(ip_address_range((192,168,1,0),(255,255,255,253)))
generate(c, "ipaddress-invalid_mask_not_contiguous_1.pem")
c = NameConstraintsGenerator()
c.add_permitted(ip_address_range((192,168,0,0),(255,253,0,0)))
generate(c, "ipaddress-invalid_mask_not_contiguous_2.pem")
c = NameConstraintsGenerator()
c.add_permitted(ip_address_range((0,0,0,0),(0x40,0,0,0)))
generate(c, "ipaddress-invalid_mask_not_contiguous_3.pem")
c = NameConstraintsGenerator()
c.add_permitted(ip_address_range((192,0,0,0),(0xFF,0,0xFF,0)))
generate(c, "ipaddress-invalid_mask_not_contiguous_4.pem")
c = NameConstraintsGenerator()
c.add_excluded(ip_address_range((192,168,5,0),(255,255,255,0)))
generate(c, "ipaddress-excluded.pem")
c = NameConstraintsGenerator()
c.add_permitted(ip_address_range((192,168,0,0),(255,255,0,0)))
c.add_permitted(ip_address_range((1,2,3,4,5,6,7,8,9,10,11,12,0,0,0,0),
(255,255,255,255,255,255,255,255,
255,255,255,255,0,0,0,0)))
c.add_excluded(ip_address_range((0,0,0,0),(0,0,0,0)))
c.add_excluded(ip_address_range((0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),
(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)))
generate(c, "ipaddress-excludeall.pem")
c = NameConstraintsGenerator()
c.add_permitted(ip_address_range((192,168,0,0),(255,255,255,0)))
c.add_permitted(ip_address_range((192,168,5,0,0),(255,255,255,0,0),
enforce_length=False))
generate(c, "ipaddress-invalid_addr.pem")
n_us = generate_names.NameGenerator()
n_us.add_rdn().add_attr('countryName', 'PRINTABLESTRING', 'US')
generate(n_us, "name-us.pem")
n_us_az = copy.deepcopy(n_us)
n_us_az.add_rdn().add_attr('stateOrProvinceName', 'UTF8', 'Arizona')
generate(n_us_az, "name-us-arizona.pem")
n_us_ca = copy.deepcopy(n_us)
n_us_ca.add_rdn().add_attr('stateOrProvinceName', 'UTF8', 'California')
generate(n_us_ca, "name-us-california.pem")
n_us_ca_mountain_view = copy.deepcopy(n_us_ca)
n_us_ca_mountain_view.add_rdn().add_attr(
'localityName', 'UTF8', 'Mountain View')
generate(n_us_ca_mountain_view, "name-us-california-mountain_view.pem")
n_jp = generate_names.NameGenerator()
n_jp.add_rdn().add_attr('countryName', 'PRINTABLESTRING', 'JP')
generate(n_jp, "name-jp.pem")
n_jp_tokyo = copy.deepcopy(n_jp)
n_jp_tokyo.add_rdn().add_attr(
'stateOrProvinceName', 'UTF8', '\xe6\x9d\xb1\xe4\xba\xac', 'FORMAT:UTF8')
generate(n_jp_tokyo, "name-jp-tokyo.pem")
n_us_az_foodotcom = copy.deepcopy(n_us_az)
n_us_az_foodotcom.add_rdn().add_attr('commonName', 'UTF8', 'foo.com')
generate(n_us_az_foodotcom, "name-us-arizona-foo.com.pem")
n_us_az_permittedexamplecom = copy.deepcopy(n_us_az)
n_us_az_permittedexamplecom.add_rdn().add_attr('commonName', 'UTF8',
'permitted.example.com')
generate(n_us_az_permittedexamplecom,
"name-us-arizona-permitted.example.com.pem")
n_us_ca_permittedexamplecom = copy.deepcopy(n_us_ca)
n_us_ca_permittedexamplecom.add_rdn().add_attr('commonName', 'UTF8',
'permitted.example.com')
generate(n_us_ca_permittedexamplecom,
"name-us-california-permitted.example.com.pem")
n_us_az_ip1111 = copy.deepcopy(n_us_az)
n_us_az_ip1111.add_rdn().add_attr('commonName', 'UTF8', '1.1.1.1')
generate(n_us_az_ip1111, "name-us-arizona-1.1.1.1.pem")
n_us_az_192_168_1_1 = copy.deepcopy(n_us_az)
n_us_az_192_168_1_1.add_rdn().add_attr('commonName', 'UTF8', '192.168.1.1')
generate(n_us_az_192_168_1_1, "name-us-arizona-192.168.1.1.pem")
n_us_az_ipv6 = copy.deepcopy(n_us_az)
n_us_az_ipv6.add_rdn().add_attr('commonName', 'UTF8',
'102:304:506:708:90a:b0c::1')
generate(n_us_az_ipv6, "name-us-arizona-ipv6.pem")
n_us_ca_192_168_1_1 = copy.deepcopy(n_us_ca)
n_us_ca_192_168_1_1.add_rdn().add_attr('commonName', 'UTF8', '192.168.1.1')
generate(n_us_ca_192_168_1_1, "name-us-california-192.168.1.1.pem")
n_us_az_email = copy.deepcopy(n_us_az)
n_us_az_email.add_rdn().add_attr('emailAddress', 'IA5STRING',
'[email protected]')
generate(n_us_az_email, "name-us-arizona-email.pem")
n_ca = generate_names.NameGenerator()
n_ca.add_rdn().add_attr('countryName', 'PRINTABLESTRING', 'CA')
generate(n_ca, "name-ca.pem")
n_de = generate_names.NameGenerator()
n_de.add_rdn().add_attr('countryName', 'PRINTABLESTRING', 'DE')
generate(n_de, "name-de.pem")
n_empty = generate_names.NameGenerator()
generate(n_empty, "name-empty.pem")
directoryname_constraints = NameConstraintsGenerator()
directoryname_constraints.add_permitted(directory_name(n_us))
directoryname_constraints.add_excluded(directory_name(n_us_ca))
directoryname_constraints.add_permitted(directory_name(n_us_ca_mountain_view))
directoryname_constraints.add_excluded(directory_name(n_de))
directoryname_constraints.add_permitted(directory_name(n_jp_tokyo))
generate(directoryname_constraints, "directoryname.pem")
c = NameConstraintsGenerator()
c.union_from(directoryname_constraints)
c.union_from(dnsname_constraints)
generate(c, "directoryname_and_dnsname.pem")
c = NameConstraintsGenerator()
c.union_from(directoryname_constraints)
c.union_from(dnsname_constraints)
c.union_from(ipaddress_constraints)
generate(c, "directoryname_and_dnsname_and_ipaddress.pem")
c = NameConstraintsGenerator()
c.add_excluded(directory_name(n_us_ca))
generate(c, "directoryname-excluded.pem")
c = NameConstraintsGenerator()
c.add_permitted(directory_name(n_us))
c.add_excluded(directory_name(n_empty))
generate(c, "directoryname-excludeall.pem")
san = SubjectAltNameGenerator()
san.add_name(dns_name("permitted.example.com"))
san.add_name(ip_address((192,168,1,2)))
san.add_name(directory_name(n_us_az))
generate(san, "san-permitted.pem")
san2 = copy.deepcopy(san)
san2.add_name(
dns_name("foo.stillnotpermitted.excluded.permitted.example.com"))
generate(san2, "san-excluded-dnsname.pem")
san2 = copy.deepcopy(san)
san2.add_name(ip_address((192,168,5,5)))
generate(san2, "san-excluded-ipaddress.pem")
san2 = copy.deepcopy(san)
san2.add_name(directory_name(n_us_ca_mountain_view))
generate(san2, "san-excluded-directoryname.pem")
san = SubjectAltNameGenerator()
san.add_name(other_name())
generate(san, "san-othername.pem")
san = SubjectAltNameGenerator()
san.add_name(rfc822_name("[email protected]"))
generate(san, "san-rfc822name.pem")
san = SubjectAltNameGenerator()
san.add_name(x400_address())
generate(san, "san-x400address.pem")
san = SubjectAltNameGenerator()
san.add_name(edi_party_name())
generate(san, "san-edipartyname.pem")
san = SubjectAltNameGenerator()
san.add_name(uniform_resource_identifier('http://example.com'))
generate(san, "san-uri.pem")
san = SubjectAltNameGenerator()
san.add_name(registered_id("1.2.3.4"))
generate(san, "san-registeredid.pem")
san = SubjectAltNameGenerator()
generate(san, "san-invalid-empty.pem")
san = SubjectAltNameGenerator()
san.add_name(ip_address((192,168,0,5,0), enforce_length=False))
generate(san, "san-invalid-ipaddress.pem")
c = NameConstraintsGenerator()
c.add_permitted(other_name())
generate(c, "othername-permitted.pem")
c = NameConstraintsGenerator()
c.add_excluded(other_name())
generate(c, "othername-excluded.pem")
c = NameConstraintsGenerator()
c.add_permitted(rfc822_name("[email protected]"))
generate(c, "rfc822name-permitted.pem")
c = NameConstraintsGenerator()
c.add_excluded(rfc822_name("[email protected]"))
generate(c, "rfc822name-excluded.pem")
c = NameConstraintsGenerator()
c.add_permitted(x400_address())
generate(c, "x400address-permitted.pem")
c = NameConstraintsGenerator()
c.add_excluded(x400_address())
generate(c, "x400address-excluded.pem")
c = NameConstraintsGenerator()
c.add_permitted(edi_party_name())
generate(c, "edipartyname-permitted.pem")
c = NameConstraintsGenerator()
c.add_excluded(edi_party_name())
generate(c, "edipartyname-excluded.pem")
c = NameConstraintsGenerator()
c.add_permitted(uniform_resource_identifier("http://example.com"))
generate(c, "uri-permitted.pem")
c = NameConstraintsGenerator()
c.add_excluded(uniform_resource_identifier("http://example.com"))
generate(c, "uri-excluded.pem")
c = NameConstraintsGenerator()
c.add_permitted(registered_id("1.2.3.4"))
generate(c, "registeredid-permitted.pem")
c = NameConstraintsGenerator()
c.add_excluded(registered_id("1.2.3.4"))
generate(c, "registeredid-excluded.pem")
c = NameConstraintsGenerator()
generate(c, "invalid-no_subtrees.pem")
c = NameConstraintsGenerator(force_permitted_sequence=True)
generate(c, "invalid-empty_permitted_subtree.pem")
c = NameConstraintsGenerator(force_excluded_sequence=True)
generate(c, "invalid-empty_excluded_subtree.pem")
c = NameConstraintsGenerator()
c.add_permitted(with_min_max(dns_name("permitted.example.com"), minimum=0))
generate(c, "dnsname-with_min_0.pem")
c = NameConstraintsGenerator()
c.add_permitted(with_min_max(dns_name("permitted.example.com"), minimum=1))
generate(c, "dnsname-with_min_1.pem")
c = NameConstraintsGenerator()
c.add_permitted(with_min_max(
dns_name("permitted.example.com"), minimum=0, maximum=2))
generate(c, "dnsname-with_min_0_and_max.pem")
c = NameConstraintsGenerator()
c.add_permitted(with_min_max(
dns_name("permitted.example.com"), minimum=1, maximum=2))
generate(c, "dnsname-with_min_1_and_max.pem")
c = NameConstraintsGenerator()
c.add_permitted(with_min_max(dns_name("permitted.example.com"), maximum=2))
generate(c, "dnsname-with_max.pem")
if __name__ == '__main__':
main()
| bsd-3-clause |
btrent/knave | .buildozer/android/app/chesstools/piece.py | 2 | 10622 | #Forked from chesstools 0.1.8 by Mario Balibrera
from chesstools import COLORS
from chesstools.move import Move, to_algebraic, move_from_array
MOVES = { "Knight": [[-2,-1],[-1,-2],[2,-1],[1,-2],[-2,1],[-1,2],[2,1],[1,2]],
"Bishop": [[-1,-1],[-1,1],[1,-1],[1,1]],
"Rook": [[0,-1],[0,1],[-1,0],[1,0]] }
MOVES["Queen"] = MOVES["Bishop"] + MOVES["Rook"]
class Piece(object):
def __init__(self, board, color, pos):
self.board = board
self.color = color
self.pos = pos
self.name = self.__class__.__name__
self.init()
def __str__(self):
return PIECE_TO_LETTER[self.__class__][self.color]
def __repr__(self):
return '<%s %s@%s>'%(self.color, self.name, to_algebraic(self.pos))
def init(self):
pass
def copy(self):
return self.__class__(None, self.color, self.pos)
def row(self):
return self.pos[0]
def column(self):
return self.pos[1]
def _all_moves(self):
m = []
for x, y in MOVES[self.name]:
a, b = self.row(), self.column()
while True:
a += x
b += y
if a < 0 or b < 0 or a > 7 or b > 7:
break
p = self.board.get_square([a,b])
if p:
if p.color != self.color:
m.append([a,b])
break
m.append([a,b])
return m
def all_legal_moves(self):
return [move_from_array(self.pos, move) for move in self._all_moves() if self.board.safe_king(self.pos, move)]
def legal_move(self, dest):
"""
print "legal move criteria:"
print "1"
print self._good_target(dest)
print "2"
print self._can_move_to(dest)
print "3"
print self._clear_path(self._next_in_path(list(dest)))
print "4"
print self.board.safe_king(self.pos, dest)
"""
return self._good_target(dest) and self._can_move_to(dest) and self._clear_path(self._next_in_path(list(dest))) and self.board.safe_king(self.pos, dest)
def can_move(self):
for dest in self._all_moves():
if self.board.safe_king(self.pos, dest):
return True
return False
def _can_move_to(self, dest):
return self.can_capture(dest)
def can_take(self, dest, layout=None):
return self._enemy_target(dest, layout) and self.can_target(dest, layout)
def can_target(self, dest, layout=None):
return self.can_capture(dest, layout) and self._clear_path(self._next_in_path(list(dest)), layout)
def _enemy_target(self, d, layout=None):
dest = self.board.get_square(d, layout)
return dest and dest.color != self.color
def _good_target(self, d):
dest = self.board.get_square(d)
return not dest or dest.color != self.color
def _clear_path(self, dest, layout=None):
if tuple(dest) == self.pos:
return True
if not self._clear_condition(dest, layout):
return False
return self._clear_path(self._next_in_path(dest), layout)
def _clear_condition(self, dest, layout):
return self.board.is_empty(dest, layout)
def _next_in_path(self, dest):
if dest[0] > self.row(): dest[0] -= 1
elif dest[0] < self.row(): dest[0] += 1
if dest[1] > self.column(): dest[1] -= 1
elif dest[1] < self.column(): dest[1] += 1
return dest
def move(self, pos):
self.pos = pos
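# Illustrative note (not part of the original module): _next_in_path() walks a
# destination square one step back toward the piece along the rank, file or
# diagonal, and _clear_path() recurses until it reaches the piece's own square.
# For a rook on (0, 0) testing a move to (0, 5), the intermediate squares
# checked for emptiness are (0, 4), (0, 3), (0, 2) and (0, 1).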
class Pawn(Piece):
def init(self):
self.direction = self.color == 'white' and 1 or -1
self.home_row = self.color == 'white' and 1 or 6
self.promotion_row = self.color == 'white' and 7 or 0
def _pawn_scan(self, color, cols):
c = self.column()
s = 0
for row in range(1,7):
for n in cols:
col = c+n
if col > 0 and col < 7:
p = self.board.get_square((row, col))
if p and p.color == color and isinstance(p, Pawn):
s += 1
return s
def advancement(self):
if self.color == 'white':
return self.row()
else:
return 7 - self.row()
def supporting_pawns(self):
return self._pawn_scan(self.color, [-1,1])
def opposing_pawns(self):
return self._pawn_scan(COLORS[self.color], [-1,0,1])
def all_legal_moves(self):
m = [move_from_array(self.pos, move) for move in self._all_moves() if self.legal_move(move)]
if m and self.row() + self.direction == self.promotion_row:
m = reduce(list.__add__, [[Move(move.start, move.end, letter) for letter in ['q','r','b','n']] for move in m])
return m
def _all_moves(self):
r = self.row()+self.direction
c = self.column()
m = [[r, c]]
if self.row() == self.home_row:
m.append([r+self.direction, c])
if c < 7:
m.append([r, c+1])
if c > 0:
m.append([r, c-1])
return m
def can_capture(self, dest, layout=None):
if dest[0] == self.row() + self.direction and abs(dest[1] - self.column()) == 1: # capture
target = self.board.get_square(dest, layout)
if target and target.color != self.color: # normal capture
return True
if dest == self.board.en_passant: # en passant
return True
return False
def _can_move_to(self, dest):
if dest[1] == self.column() and self.board.is_empty(dest): # jump
if dest[0] == self.row() + self.direction: # single
return True
elif self.row() == self.home_row and dest[0] == self.row() + 2*self.direction: # double
return True
elif self.can_capture(dest):
return True
return False
class Knight(Piece):
def _all_moves(self):
m = []
r, c = self.row(), self.column()
for x, y in MOVES["Knight"]:
a, b = r+x, c+y
if a < 0 or b < 0 or a > 7 or b > 7 or not self._good_target([a,b]):
continue
m.append([a,b])
return m
def can_capture(self, dest, layout=None):
r = abs(self.row() - dest[0])
c = abs(self.column() - dest[1])
if r and c and r + c == 3:
return True
return False
def _clear_path(self, dest, layout=None):
return True
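# Illustrative note (not part of the original module): Knight.can_capture()
# relies purely on geometry -- the move is legal iff the row and column
# deltas are both non-zero and sum to 3, i.e. a (1, 2) or (2, 1) jump.
# For a knight on (4, 4):
#     (6, 5) -> r=2, c=1, r+c=3  : capturable
#     (4, 6) -> r=0, c=2         : rejected (no row change)
#     (7, 7) -> r=3, c=3, r+c=6  : rejected (not an L-shape)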
class Bishop(Piece):
def can_capture(self, dest, layout=None):
return abs(self.row() - dest[0]) == abs(self.column() - dest[1])
class Rook(Piece):
def init(self):
self.side = None
def can_capture(self, dest, layout=None):
return dest[0] == self.row() or dest[1] == self.column()
def move(self, pos):
self.pos = pos
if self.side:
self.board.kings[self.color].castle[self.side] = None
class Queen(Piece):
def can_capture(self, dest, layout=None):
return dest[0] == self.row() or dest[1] == self.column() or abs(self.row() - dest[0]) == abs(self.column() - dest[1])
class King(Piece):
def init(self):
self.castle = {'king': None,'queen': None}
self.home_row = self.color == 'black' and 7 or 0
def set_castle(self, qr, kr):
qr.castle_king_column = 2
kr.castle_king_column = 6
qr.side = 'queen'
kr.side = 'king'
self.castle = {'queen': qr, 'king': kr}
def copy(self):
k = King(None, self.color, self.pos)
k.castle = self.castle.copy()
return k
def _all_moves(self):
m = []
row, col = self.row(), self.column()
for x in range(-1,2):
for y in range(-1,2):
if x or y:
a, b = row+x, col+y
if a < 0 or b < 0 or a > 7 or b > 7:
continue
if self._good_target([a,b]):
m.append([a,b])
if self.board.safe_square(self.pos):
for rook in self.castle.values():
if rook:
target_square = [row, rook.castle_king_column]
# target is empty
safemove = self.board.is_empty(target_square)
# king and rook are unimpeded
if safemove:
low = min(col, rook.castle_king_column, rook.column()) + 1
high = max(col, rook.castle_king_column, rook.column())
for mid_square in [[row, i] for i in range(low, high)]:
if not self.board.is_empty(mid_square):
safemove = False
break
# king's path is safe
if safemove:
low = min(col, rook.castle_king_column) + 1
high = max(col, rook.castle_king_column)
for mid_square in [[row, i] for i in range(low, high)]:
if not self.board.safe_square(mid_square):
safemove = False
break
if safemove:
m.append(target_square)
return m
def can_capture(self, dest, layout=None):
return abs(self.row() - dest[0]) < 2 and abs(self.column() - dest[1]) < 2
def _can_move_to(self, dest):
if self.can_capture(dest): # normal move
return True
if self.board.safe_square(self.pos) and dest[0] == self.home_row: # castle
for c in self.castle.values():
if c and dest[1] == c.castle_king_column:
return True
return False
def _clear_condition(self, dest, layout):
return self.board.is_empty(dest) and self.board.safe_king(self.pos, dest)
def move(self, pos):
self.pos = pos
self.castle['king'] = None
self.castle['queen'] = None
# this is mostly here so we can get at these
# strings without having initialized a piece.
PIECE_TO_LETTER = {
King: {"white": "K", "black": "k"},
Queen: {"white": "Q", "black": "q"},
Rook: {"white": "R", "black": "r"},
Bishop: {"white": "B", "black": "b"},
Knight: {"white": "N", "black": "n"},
Pawn: {"white": "P", "black": "p"}
}
# to help us go the other way
LETTER_TO_PIECE = {
"K": King,
"Q": Queen,
"R": Rook,
"B": Bishop,
"N": Knight,
"P": Pawn
}
| gpl-3.0 |
markslwong/tensorflow | tensorflow/python/ops/accumulate_n_benchmark.py | 114 | 5470 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Benchmark for accumulate_n() in math_ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import random
import time
from six.moves import xrange # pylint: disable=redefined-builtin
from tensorflow.python.client import session
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import data_flow_ops
from tensorflow.python.ops import gen_control_flow_ops
from tensorflow.python.ops import gen_state_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
from tensorflow.python.ops import state_ops
from tensorflow.python.platform import test
class AccumulateNBenchmark(test.Benchmark):
def _AccumulateNTemplate(self, inputs, init, shape, validate_shape):
var = gen_state_ops._temporary_variable(
shape=shape, dtype=inputs[0].dtype.base_dtype)
ref = state_ops.assign(var, init, validate_shape=validate_shape)
update_ops = [
state_ops.assign_add(
ref, tensor, use_locking=True).op for tensor in inputs
]
with ops.control_dependencies(update_ops):
return gen_state_ops._destroy_temporary_variable(
ref, var_name=var.op.name)
def _AccumulateNInitializedWithFirst(self, inputs):
return self._AccumulateNTemplate(
inputs,
init=array_ops.zeros_like(inputs[0]),
shape=inputs[0].get_shape(),
validate_shape=True)
def _AccumulateNInitializedWithMerge(self, inputs):
return self._AccumulateNTemplate(
inputs,
init=array_ops.zeros_like(gen_control_flow_ops._merge(inputs)[0]),
shape=tensor_shape.vector(0),
validate_shape=False)
def _AccumulateNInitializedWithShape(self, inputs):
return self._AccumulateNTemplate(
inputs,
init=array_ops.zeros(
shape=inputs[0].get_shape(), dtype=inputs[0].dtype.base_dtype),
shape=inputs[0].get_shape(),
validate_shape=True)
def _GenerateUnorderedInputs(self, size, n):
inputs = [random_ops.random_uniform(shape=[size]) for _ in xrange(n)]
random.shuffle(inputs)
return inputs
def _GenerateReplicatedInputs(self, size, n):
return n * self._GenerateUnorderedInputs(size, 1)
def _GenerateOrderedInputs(self, size, n):
inputs = self._GenerateUnorderedInputs(size, 1)
queue = data_flow_ops.FIFOQueue(
capacity=1, dtypes=[inputs[0].dtype], shapes=[inputs[0].get_shape()])
for _ in xrange(n - 1):
op = queue.enqueue(inputs[-1])
with ops.control_dependencies([op]):
inputs.append(math_ops.tanh(1.0 + queue.dequeue()))
return inputs
def _GenerateReversedInputs(self, size, n):
inputs = self._GenerateOrderedInputs(size, n)
inputs.reverse()
return inputs
def _SetupAndRunBenchmark(self, graph, inputs, repeats, format_args):
with graph.as_default():
add_n = math_ops.add_n(inputs)
acc_n_first = self._AccumulateNInitializedWithFirst(inputs)
acc_n_merge = self._AccumulateNInitializedWithMerge(inputs)
acc_n_shape = self._AccumulateNInitializedWithShape(inputs)
test_ops = (("AddN", add_n.op),
("AccNFirst", acc_n_first.op),
("AccNMerge", acc_n_merge.op),
("AccNShape", acc_n_shape.op))
with session.Session(graph=graph):
for tag, op in test_ops:
for _ in xrange(100):
op.run() # Run for warm up.
start = time.time()
for _ in xrange(repeats):
op.run()
duration = time.time() - start
args = format_args + (tag, duration)
print(self._template.format(*args))
def _RunBenchmark(self, tag, input_fn, sizes, ninputs, repeats):
for size in sizes:
for ninput in ninputs:
graph = ops.Graph()
with graph.as_default():
inputs = input_fn(size, ninput)
format_args = (tag, size, ninput, repeats)
self._SetupAndRunBenchmark(graph, inputs, repeats, format_args)
def benchmarkAccumulateN(self):
self._template = "{:<15}" * 6
args = {
"sizes": (128, 128**2),
"ninputs": (1, 10, 100, 300),
"repeats": 100
}
benchmarks = (("Replicated", self._GenerateReplicatedInputs),
("Unordered", self._GenerateUnorderedInputs),
("Ordered", self._GenerateOrderedInputs),
("Reversed", self._GenerateReversedInputs))
print(self._template.format("", "Size", "#Inputs", "#Repeat", "Method",
"Duration"))
print("-" * 90)
for benchmark in benchmarks:
self._RunBenchmark(*benchmark, **args)
if __name__ == "__main__":
test.main()
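# Illustrative note (not from the original file): benchmark classes derived
# from tf.test.Benchmark are usually executed by running the module directly
# and selecting benchmarks with the --benchmarks flag, e.g.
#
#     python accumulate_n_benchmark.py --benchmarks=AccumulateNBenchmark
#
# The exact invocation may differ between TensorFlow versions.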
| apache-2.0 |
hachreak/invenio-oaiharvester | invenio_oaiharvester/api.py | 3 | 4392 | # -*- coding: utf-8 -*-
#
# This file is part of Invenio.
# Copyright (C) 2015 CERN.
#
# Invenio is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of the
# License, or (at your option) any later version.
#
# Invenio is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Invenio; if not, write to the Free Software Foundation, Inc.,
# 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
from __future__ import absolute_import, print_function, unicode_literals
from sickle import Sickle
from .errors import NameOrUrlMissing, WrongDateCombination
from .utils import get_oaiharvest_object
def list_records(metadata_prefix=None, from_date=None, until_date=None,
url=None, name=None, setSpec=None):
"""Harvest records from an OAI repo, based on datestamp and/or set parameters.
:param metadata_prefix: The prefix for the metadata return (defaults to 'oai_dc').
:param from_date: The lower bound date for the harvesting (optional).
:param until_date: The upper bound date for the harvesting (optional).
    :param url: The URL to be used to create the endpoint.
:param name: The name of the OaiHARVEST object that we want to use to create the endpoint.
:param setSpec: The 'set' criteria for the harvesting (optional).
:return: An iterator of harvested records.
"""
    if url:
        request = Sickle(url)
        lastrun = None  # no last run is recorded when a bare URL is given
elif name:
request, _metadata_prefix, lastrun = get_from_oai_name(name)
# In case we provide a prefix, we don't want it to be
# overwritten by the one we get from the name variable.
if metadata_prefix is None:
metadata_prefix = _metadata_prefix
else:
raise NameOrUrlMissing("Retry using the parameters -n <name> or -u <url>.")
# By convention, when we have a url we have no lastrun, and when we use
# the name we can either have from_date (if provided) or lastrun.
dates = {
'from': lastrun if from_date is None else from_date,
'until': until_date
}
# Sanity check
if (dates['until'] is not None) and (dates['from'] > dates['until']):
raise WrongDateCombination("'Until' date larger than 'from' date.")
if metadata_prefix is None:
metadata_prefix = "oai_dc"
return request.ListRecords(metadataPrefix=metadata_prefix,
set=setSpec,
**dates)
def get_records(identifiers, metadata_prefix=None, url=None, name=None):
"""Harvest specific records from an OAI repo, based on their unique identifiers.
:param metadata_prefix: The prefix for the metadata return (defaults to 'oai_dc').
:param identifiers: A list of unique identifiers for records to be harvested.
    :param url: The URL to be used to create the endpoint.
:param name: The name of the OaiHARVEST object that we want to use to create the endpoint.
:return: An iterator of harvested records.
"""
if url:
request = Sickle(url)
elif name:
request, _metadata_prefix, _ = get_from_oai_name(name)
# In case we provide a prefix, we don't want it to be
# overwritten by the one we get from the name variable.
if metadata_prefix is None:
metadata_prefix = _metadata_prefix
else:
raise NameOrUrlMissing("Retry using the parameters -n <name> or -u <url>.")
if metadata_prefix is None:
metadata_prefix = "oai_dc"
for identifier in identifiers:
arguments = {
'identifier': identifier,
'metadataPrefix': metadata_prefix
}
yield request.GetRecord(**arguments)
def get_from_oai_name(name):
"""Get basic OAI request data from the OaiHARVEST model.
:param name: name of the source (OaiHARVEST.name)
:return: (Sickle obj, metadataprefix, lastrun)
"""
obj = get_oaiharvest_object(name)
req = Sickle(obj.baseurl)
metadata_prefix = obj.metadataprefix
lastrun = obj.lastrun
return req, metadata_prefix, lastrun
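# --- Illustrative usage sketch (not part of the original module) ------------
# The endpoint URL and identifier below are assumptions used only to show the
# calling convention:
#
#     for record in list_records(url='http://export.arxiv.org/oai2',
#                                metadata_prefix='oai_dc',
#                                from_date='2015-01-01'):
#         print(record.raw)
#
#     for record in get_records(['oai:arXiv.org:1507.03011'],
#                               url='http://export.arxiv.org/oai2'):
#         print(record.header.identifier)
# -----------------------------------------------------------------------------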
| gpl-2.0 |
jolevq/odoopub | addons/website_forum/controllers/main.py | 12 | 31467 | # -*- coding: utf-8 -*-
from datetime import datetime
import werkzeug.urls
import werkzeug.wrappers
import simplejson
from openerp import tools
from openerp import SUPERUSER_ID
from openerp.addons.web import http
from openerp.addons.web.controllers.main import login_redirect
from openerp.addons.web.http import request
from openerp.addons.website.controllers.main import Website as controllers
from openerp.addons.website.models.website import slug
controllers = controllers()
class WebsiteForum(http.Controller):
_post_per_page = 10
_user_per_page = 30
def _get_notifications(self):
cr, uid, context = request.cr, request.uid, request.context
Message = request.registry['mail.message']
badge_st_id = request.registry['ir.model.data'].xmlid_to_res_id(cr, uid, 'gamification.mt_badge_granted')
if badge_st_id:
msg_ids = Message.search(cr, uid, [('subtype_id', '=', badge_st_id), ('to_read', '=', True)], context=context)
msg = Message.browse(cr, uid, msg_ids, context=context)
else:
msg = list()
return msg
def _prepare_forum_values(self, forum=None, **kwargs):
user = request.registry['res.users'].browse(request.cr, request.uid, request.uid, context=request.context)
values = {'user': user,
'is_public_user': user.id == request.website.user_id.id,
'notifications': self._get_notifications(),
'header': kwargs.get('header', dict()),
'searches': kwargs.get('searches', dict()),
}
if forum:
values['forum'] = forum
elif kwargs.get('forum_id'):
values['forum'] = request.registry['forum.forum'].browse(request.cr, request.uid, kwargs.pop('forum_id'), context=request.context)
values.update(kwargs)
return values
# Forum
# --------------------------------------------------
@http.route(['/forum'], type='http', auth="public", website=True)
def forum(self, **kwargs):
cr, uid, context = request.cr, request.uid, request.context
Forum = request.registry['forum.forum']
obj_ids = Forum.search(cr, uid, [], context=context)
forums = Forum.browse(cr, uid, obj_ids, context=context)
return request.website.render("website_forum.forum_all", {'forums': forums})
@http.route('/forum/new', type='http', auth="user", methods=['POST'], website=True)
def forum_create(self, forum_name="New Forum", **kwargs):
forum_id = request.registry['forum.forum'].create(request.cr, request.uid, {
'name': forum_name,
}, context=request.context)
return request.redirect("/forum/%s" % forum_id)
@http.route('/forum/notification_read', type='json', auth="user", methods=['POST'], website=True)
def notification_read(self, **kwargs):
request.registry['mail.message'].set_message_read(request.cr, request.uid, [int(kwargs.get('notification_id'))], read=True, context=request.context)
return True
@http.route(['/forum/<model("forum.forum"):forum>',
'/forum/<model("forum.forum"):forum>/page/<int:page>',
'''/forum/<model("forum.forum"):forum>/tag/<model("forum.tag", "[('forum_id','=',forum[0])]"):tag>/questions''',
'''/forum/<model("forum.forum"):forum>/tag/<model("forum.tag", "[('forum_id','=',forum[0])]"):tag>/questions/page/<int:page>''',
], type='http', auth="public", website=True)
def questions(self, forum, tag=None, page=1, filters='all', sorting='date', search='', **post):
cr, uid, context = request.cr, request.uid, request.context
Post = request.registry['forum.post']
user = request.registry['res.users'].browse(cr, uid, uid, context=context)
domain = [('forum_id', '=', forum.id), ('parent_id', '=', False), ('state', '=', 'active')]
if search:
domain += ['|', ('name', 'ilike', search), ('content', 'ilike', search)]
if tag:
domain += [('tag_ids', 'in', tag.id)]
if filters == 'unanswered':
domain += [('child_ids', '=', False)]
elif filters == 'followed':
domain += [('message_follower_ids', '=', user.partner_id.id)]
else:
filters = 'all'
if sorting == 'answered':
order = 'child_count desc'
elif sorting == 'vote':
order = 'vote_count desc'
elif sorting == 'date':
order = 'write_date desc'
else:
sorting = 'creation'
order = 'create_date desc'
question_count = Post.search(cr, uid, domain, count=True, context=context)
if tag:
url = "/forum/%s/tag/%s/questions" % (slug(forum), slug(tag))
else:
url = "/forum/%s" % slug(forum)
url_args = {}
if search:
url_args['search'] = search
if filters:
url_args['filters'] = filters
if sorting:
url_args['sorting'] = sorting
pager = request.website.pager(url=url, total=question_count, page=page,
step=self._post_per_page, scope=self._post_per_page,
url_args=url_args)
obj_ids = Post.search(cr, uid, domain, limit=self._post_per_page, offset=pager['offset'], order=order, context=context)
question_ids = Post.browse(cr, uid, obj_ids, context=context)
values = self._prepare_forum_values(forum=forum, searches=post)
values.update({
'main_object': tag or forum,
'question_ids': question_ids,
'question_count': question_count,
'pager': pager,
'tag': tag,
'filters': filters,
'sorting': sorting,
'search': search,
})
return request.website.render("website_forum.forum_index", values)
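    # Illustrative note (not part of the original controller): the pager keeps
    # the active search criteria in the query string, so page 2 of a filtered,
    # sorted listing looks roughly like
    #
    #     /forum/help-1?filters=unanswered&sorting=vote          (page 1)
    #     /forum/help-1/page/2?filters=unanswered&sorting=vote   (page 2)
    #
    # The forum slug "help-1" is an assumption used only for illustration.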
@http.route(['/forum/<model("forum.forum"):forum>/faq'], type='http', auth="public", website=True)
def forum_faq(self, forum, **post):
values = self._prepare_forum_values(forum=forum, searches=dict(), header={'is_guidelines': True}, **post)
return request.website.render("website_forum.faq", values)
@http.route('/forum/get_tags', type='http', auth="public", methods=['GET'], website=True)
def tag_read(self, **post):
tags = request.registry['forum.tag'].search_read(request.cr, request.uid, [], ['name'], context=request.context)
data = [tag['name'] for tag in tags]
return simplejson.dumps(data)
@http.route(['/forum/<model("forum.forum"):forum>/tag'], type='http', auth="public", website=True)
def tags(self, forum, page=1, **post):
cr, uid, context = request.cr, request.uid, request.context
Tag = request.registry['forum.tag']
obj_ids = Tag.search(cr, uid, [('forum_id', '=', forum.id), ('posts_count', '>', 0)], limit=None, order='posts_count DESC', context=context)
tags = Tag.browse(cr, uid, obj_ids, context=context)
values = self._prepare_forum_values(forum=forum, searches={'tags': True}, **post)
values.update({
'tags': tags,
'main_object': forum,
})
return request.website.render("website_forum.tag", values)
# Questions
# --------------------------------------------------
@http.route(['/forum/<model("forum.forum"):forum>/ask'], type='http', auth="public", website=True)
def question_ask(self, forum, **post):
if not request.session.uid:
return login_redirect()
values = self._prepare_forum_values(forum=forum, searches={}, header={'ask_hide': True})
return request.website.render("website_forum.ask_question", values)
@http.route('/forum/<model("forum.forum"):forum>/question/new', type='http', auth="user", methods=['POST'], website=True)
def question_create(self, forum, **post):
cr, uid, context = request.cr, request.uid, request.context
Tag = request.registry['forum.tag']
question_tag_ids = []
if post.get('question_tags').strip('[]'):
tags = post.get('question_tags').strip('[]').replace('"', '').split(",")
for tag in tags:
tag_ids = Tag.search(cr, uid, [('name', '=', tag)], context=context)
if tag_ids:
question_tag_ids.append((4, tag_ids[0]))
else:
question_tag_ids.append((0, 0, {'name': tag, 'forum_id': forum.id}))
new_question_id = request.registry['forum.post'].create(
request.cr, request.uid, {
'forum_id': forum.id,
'name': post.get('question_name'),
'content': post.get('content'),
'tag_ids': question_tag_ids,
}, context=context)
return werkzeug.utils.redirect("/forum/%s/question/%s" % (slug(forum), new_question_id))
@http.route(['''/forum/<model("forum.forum"):forum>/question/<model("forum.post", "[('forum_id','=',forum[0]),('parent_id','=',False)]"):question>'''], type='http', auth="public", website=True)
def question(self, forum, question, **post):
cr, uid, context = request.cr, request.uid, request.context
# increment view counter
request.registry['forum.post'].set_viewed(cr, SUPERUSER_ID, [question.id], context=context)
if question.parent_id:
redirect_url = "/forum/%s/question/%s" % (slug(forum), slug(question.parent_id))
return werkzeug.utils.redirect(redirect_url, 301)
filters = 'question'
values = self._prepare_forum_values(forum=forum, searches=post)
values.update({
'main_object': question,
'question': question,
'header': {'question_data': True},
'filters': filters,
'reversed': reversed,
})
return request.website.render("website_forum.post_description_full", values)
@http.route('/forum/<model("forum.forum"):forum>/question/<model("forum.post"):question>/toggle_favourite', type='json', auth="user", methods=['POST'], website=True)
def question_toggle_favorite(self, forum, question, **post):
if not request.session.uid:
return {'error': 'anonymous_user'}
# TDE: add check for not public
favourite = False if question.user_favourite else True
if favourite:
favourite_ids = [(4, request.uid)]
else:
favourite_ids = [(3, request.uid)]
request.registry['forum.post'].write(request.cr, request.uid, [question.id], {'favourite_ids': favourite_ids}, context=request.context)
return favourite
@http.route('/forum/<model("forum.forum"):forum>/question/<model("forum.post"):question>/ask_for_close', type='http', auth="user", methods=['POST'], website=True)
def question_ask_for_close(self, forum, question, **post):
cr, uid, context = request.cr, request.uid, request.context
Reason = request.registry['forum.post.reason']
reason_ids = Reason.search(cr, uid, [], context=context)
reasons = Reason.browse(cr, uid, reason_ids, context)
values = self._prepare_forum_values(**post)
values.update({
            'question': question,
'forum': forum,
'reasons': reasons,
})
return request.website.render("website_forum.close_question", values)
@http.route('/forum/<model("forum.forum"):forum>/question/<model("forum.post"):question>/edit_answer', type='http', auth="user", website=True)
def question_edit_answer(self, forum, question, **kwargs):
for record in question.child_ids:
if record.create_uid.id == request.uid:
answer = record
break
return werkzeug.utils.redirect("/forum/%s/post/%s/edit" % (slug(forum), slug(answer)))
@http.route('/forum/<model("forum.forum"):forum>/question/<model("forum.post"):question>/close', type='http', auth="user", methods=['POST'], website=True)
def question_close(self, forum, question, **post):
request.registry['forum.post'].close(request.cr, request.uid, [question.id], reason_id=int(post.get('reason_id', False)), context=request.context)
return werkzeug.utils.redirect("/forum/%s/question/%s" % (slug(forum), slug(question)))
@http.route('/forum/<model("forum.forum"):forum>/question/<model("forum.post"):question>/reopen', type='http', auth="user", methods=['POST'], website=True)
def question_reopen(self, forum, question, **kwarg):
request.registry['forum.post'].write(request.cr, request.uid, [question.id], {'state': 'active'}, context=request.context)
return werkzeug.utils.redirect("/forum/%s/question/%s" % (slug(forum), slug(question)))
@http.route('/forum/<model("forum.forum"):forum>/question/<model("forum.post"):question>/delete', type='http', auth="user", methods=['POST'], website=True)
def question_delete(self, forum, question, **kwarg):
request.registry['forum.post'].write(request.cr, request.uid, [question.id], {'active': False}, context=request.context)
return werkzeug.utils.redirect("/forum/%s/question/%s" % (slug(forum), slug(question)))
@http.route('/forum/<model("forum.forum"):forum>/question/<model("forum.post"):question>/undelete', type='http', auth="user", methods=['POST'], website=True)
def question_undelete(self, forum, question, **kwarg):
request.registry['forum.post'].write(request.cr, request.uid, [question.id], {'active': True}, context=request.context)
return werkzeug.utils.redirect("/forum/%s/question/%s" % (slug(forum), slug(question)))
# Post
# --------------------------------------------------
@http.route('/forum/<model("forum.forum"):forum>/post/<model("forum.post"):post>/new', type='http', auth="public", methods=['POST'], website=True)
def post_new(self, forum, post, **kwargs):
if not request.session.uid:
return login_redirect()
request.registry['forum.post'].create(
request.cr, request.uid, {
'forum_id': forum.id,
'parent_id': post.id,
'content': kwargs.get('content'),
}, context=request.context)
return werkzeug.utils.redirect("/forum/%s/question/%s" % (slug(forum), slug(post)))
@http.route('/forum/<model("forum.forum"):forum>/post/<model("forum.post"):post>/comment', type='http', auth="public", methods=['POST'], website=True)
def post_comment(self, forum, post, **kwargs):
if not request.session.uid:
return login_redirect()
question = post.parent_id if post.parent_id else post
cr, uid, context = request.cr, request.uid, request.context
if kwargs.get('comment') and post.forum_id.id == forum.id:
# TDE FIXME: check that post_id is the question or one of its answers
request.registry['forum.post'].message_post(
cr, uid, post.id,
body=kwargs.get('comment'),
type='comment',
subtype='mt_comment',
context=dict(context, mail_create_nosubcribe=True))
return werkzeug.utils.redirect("/forum/%s/question/%s" % (slug(forum), slug(question)))
@http.route('/forum/<model("forum.forum"):forum>/post/<model("forum.post"):post>/toggle_correct', type='json', auth="public", website=True)
def post_toggle_correct(self, forum, post, **kwargs):
cr, uid, context = request.cr, request.uid, request.context
if post.parent_id is False:
return request.redirect('/')
if not request.session.uid:
return {'error': 'anonymous_user'}
# set all answers to False, only one can be accepted
request.registry['forum.post'].write(cr, uid, [c.id for c in post.parent_id.child_ids], {'is_correct': False}, context=context)
request.registry['forum.post'].write(cr, uid, [post.id], {'is_correct': not post.is_correct}, context=context)
return not post.is_correct
@http.route('/forum/<model("forum.forum"):forum>/post/<model("forum.post"):post>/delete', type='http', auth="user", methods=['POST'], website=True)
def post_delete(self, forum, post, **kwargs):
question = post.parent_id
request.registry['forum.post'].unlink(request.cr, request.uid, [post.id], context=request.context)
if question:
werkzeug.utils.redirect("/forum/%s/question/%s" % (slug(forum), slug(question)))
return werkzeug.utils.redirect("/forum/%s" % slug(forum))
@http.route('/forum/<model("forum.forum"):forum>/post/<model("forum.post"):post>/edit', type='http', auth="user", website=True)
def post_edit(self, forum, post, **kwargs):
tags = ""
for tag_name in post.tag_ids:
tags += tag_name.name + ","
values = self._prepare_forum_values(forum=forum)
values.update({
'tags': tags,
'post': post,
'is_answer': bool(post.parent_id),
'searches': kwargs
})
return request.website.render("website_forum.edit_post", values)
@http.route('/forum/<model("forum.forum"):forum>/post/<model("forum.post"):post>/save', type='http', auth="user", methods=['POST'], website=True)
def post_save(self, forum, post, **kwargs):
cr, uid, context = request.cr, request.uid, request.context
question_tags = []
if kwargs.get('question_tag') and kwargs.get('question_tag').strip('[]'):
Tag = request.registry['forum.tag']
tags = kwargs.get('question_tag').strip('[]').replace('"', '').split(",")
for tag in tags:
tag_ids = Tag.search(cr, uid, [('name', '=', tag)], context=context)
if tag_ids:
question_tags += tag_ids
else:
new_tag = Tag.create(cr, uid, {'name': tag, 'forum_id': forum.id}, context=context)
question_tags.append(new_tag)
vals = {
'tag_ids': [(6, 0, question_tags)],
'name': kwargs.get('question_name'),
'content': kwargs.get('content'),
}
request.registry['forum.post'].write(cr, uid, [post.id], vals, context=context)
question = post.parent_id if post.parent_id else post
return werkzeug.utils.redirect("/forum/%s/question/%s" % (slug(forum), slug(question)))
@http.route('/forum/<model("forum.forum"):forum>/post/<model("forum.post"):post>/upvote', type='json', auth="public", website=True)
def post_upvote(self, forum, post, **kwargs):
if not request.session.uid:
return {'error': 'anonymous_user'}
if request.uid == post.create_uid.id:
return {'error': 'own_post'}
        upvote = post.user_vote <= 0
return request.registry['forum.post'].vote(request.cr, request.uid, [post.id], upvote=upvote, context=request.context)
@http.route('/forum/<model("forum.forum"):forum>/post/<model("forum.post"):post>/downvote', type='json', auth="public", website=True)
def post_downvote(self, forum, post, **kwargs):
if not request.session.uid:
return {'error': 'anonymous_user'}
if request.uid == post.create_uid.id:
return {'error': 'own_post'}
        upvote = post.user_vote < 0
return request.registry['forum.post'].vote(request.cr, request.uid, [post.id], upvote=upvote, context=request.context)
# User
# --------------------------------------------------
@http.route(['/forum/<model("forum.forum"):forum>/users',
'/forum/<model("forum.forum"):forum>/users/page/<int:page>'],
type='http', auth="public", website=True)
def users(self, forum, page=1, **searches):
cr, uid, context = request.cr, request.uid, request.context
User = request.registry['res.users']
step = 30
        user_count = User.search(cr, SUPERUSER_ID, [('karma', '>', 1), ('website_published', '=', True)], count=True, context=context)
        pager = request.website.pager(url="/forum/%s/users" % slug(forum), total=user_count, page=page, step=step, scope=30)
obj_ids = User.search(cr, SUPERUSER_ID, [('karma', '>', 1), ('website_published', '=', True)], limit=step, offset=pager['offset'], order='karma DESC', context=context)
        # put the users in blocks of 3 to display them as a table
users = [[] for i in range(len(obj_ids)/3+1)]
for index, user in enumerate(User.browse(cr, SUPERUSER_ID, obj_ids, context=context)):
users[index/3].append(user)
searches['users'] = 'True'
values = self._prepare_forum_values(forum=forum, searches=searches)
        values.update({
'users': users,
'main_object': forum,
'notifications': self._get_notifications(),
'pager': pager,
})
return request.website.render("website_forum.users", values)
@http.route(['/forum/<model("forum.forum"):forum>/partner/<int:partner_id>'], type='http', auth="public", website=True)
def open_partner(self, forum, partner_id=0, **post):
cr, uid, context = request.cr, request.uid, request.context
pids = request.registry['res.partner'].search(cr, SUPERUSER_ID, [('id', '=', partner_id)], context=context)
if pids:
partner = request.registry['res.partner'].browse(cr, SUPERUSER_ID, pids[0], context=context)
if partner.user_ids:
return werkzeug.utils.redirect("/forum/%s/user/%d" % (slug(forum), partner.user_ids[0].id))
return werkzeug.utils.redirect("/forum/%s" % slug(forum))
@http.route(['/forum/user/<int:user_id>/avatar'], type='http', auth="public", website=True)
def user_avatar(self, user_id=0, **post):
cr, uid, context = request.cr, request.uid, request.context
response = werkzeug.wrappers.Response()
User = request.registry['res.users']
Website = request.registry['website']
user = User.browse(cr, SUPERUSER_ID, user_id, context=context)
if not user.exists() or (user_id != request.session.uid and user.karma < 1):
return Website._image_placeholder(response)
return Website._image(cr, SUPERUSER_ID, 'res.users', user.id, 'image', response)
@http.route(['/forum/<model("forum.forum"):forum>/user/<int:user_id>'], type='http', auth="public", website=True)
def open_user(self, forum, user_id=0, **post):
cr, uid, context = request.cr, request.uid, request.context
User = request.registry['res.users']
Post = request.registry['forum.post']
Vote = request.registry['forum.post.vote']
Activity = request.registry['mail.message']
Followers = request.registry['mail.followers']
Data = request.registry["ir.model.data"]
user = User.browse(cr, SUPERUSER_ID, user_id, context=context)
values = self._prepare_forum_values(forum=forum, **post)
if not user.exists() or (user_id != request.session.uid and (not user.website_published or user.karma < 1)):
return request.website.render("website_forum.private_profile", values)
# questions and answers by user
user_questions, user_answers = [], []
user_post_ids = Post.search(
cr, uid, [
('forum_id', '=', forum.id), ('create_uid', '=', user.id),
'|', ('active', '=', False), ('active', '=', True)], context=context)
user_posts = Post.browse(cr, uid, user_post_ids, context=context)
for record in user_posts:
if record.parent_id:
user_answers.append(record)
else:
user_questions.append(record)
        # questions the user is following
obj_ids = Followers.search(cr, SUPERUSER_ID, [('res_model', '=', 'forum.post'), ('partner_id', '=', user.partner_id.id)], context=context)
post_ids = [follower.res_id for follower in Followers.browse(cr, SUPERUSER_ID, obj_ids, context=context)]
que_ids = Post.search(cr, uid, [('id', 'in', post_ids), ('forum_id', '=', forum.id), ('parent_id', '=', False)], context=context)
followed = Post.browse(cr, uid, que_ids, context=context)
        # favourite questions of the user
fav_que_ids = Post.search(cr, uid, [('favourite_ids', '=', user.id), ('forum_id', '=', forum.id), ('parent_id', '=', False)], context=context)
favourite = Post.browse(cr, uid, fav_que_ids, context=context)
        # votes received on the user's questions and answers
data = Vote.read_group(cr, uid, [('post_id.forum_id', '=', forum.id), ('post_id.create_uid', '=', user.id)], ["vote"], groupby=["vote"], context=context)
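        # read_group returns one aggregate dict per distinct 'vote' value
        # ('1' or '-1'), each carrying a 'vote_count' total used below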
up_votes, down_votes = 0, 0
for rec in data:
if rec['vote'] == '1':
up_votes = rec['vote_count']
elif rec['vote'] == '-1':
down_votes = rec['vote_count']
total_votes = up_votes + down_votes
        # votes cast by the user on other people's questions and answers
post_votes = Vote.search(cr, uid, [('user_id', '=', user.id)], context=context)
vote_ids = Vote.browse(cr, uid, post_votes, context=context)
        # recent activity by the user (comment messages excluded)
model, comment = Data.get_object_reference(cr, uid, 'mail', 'mt_comment')
activity_ids = Activity.search(cr, uid, [('res_id', 'in', user_post_ids), ('model', '=', 'forum.post'), ('subtype_id', '!=', comment)], order='date DESC', limit=100, context=context)
activities = Activity.browse(cr, uid, activity_ids, context=context)
posts = {}
for act in activities:
posts[act.res_id] = True
posts_ids = Post.browse(cr, uid, posts.keys(), context=context)
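        # map each post id to a (question, answer) pair; the answer slot is
        # False when the activity happened on a question itself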
posts = dict(map(lambda x: (x.id, (x.parent_id or x, x.parent_id and x or False)), posts_ids))
post['users'] = 'True'
values.update({
'uid': uid,
'user': user,
'main_object': user,
'searches': post,
'questions': user_questions,
'answers': user_answers,
'followed': followed,
'favourite': favourite,
'total_votes': total_votes,
'up_votes': up_votes,
'down_votes': down_votes,
'activities': activities,
'posts': posts,
'vote_post': vote_ids,
})
return request.website.render("website_forum.user_detail_full", values)
@http.route('/forum/<model("forum.forum"):forum>/user/<model("res.users"):user>/edit', type='http', auth="user", website=True)
def edit_profile(self, forum, user, **kwargs):
country = request.registry['res.country']
country_ids = country.search(request.cr, SUPERUSER_ID, [], context=request.context)
countries = country.browse(request.cr, SUPERUSER_ID, country_ids, context=request.context)
values = self._prepare_forum_values(forum=forum, searches=kwargs)
values.update({
'countries': countries,
'notifications': self._get_notifications(),
})
return request.website.render("website_forum.edit_profile", values)
@http.route('/forum/<model("forum.forum"):forum>/user/<model("res.users"):user>/save', type='http', auth="user", methods=['POST'], website=True)
def save_edited_profile(self, forum, user, **kwargs):
request.registry['res.users'].write(request.cr, request.uid, [user.id], {
'name': kwargs.get('name'),
'website': kwargs.get('website'),
'email': kwargs.get('email'),
'city': kwargs.get('city'),
'country_id': int(kwargs.get('country')) if kwargs.get('country') else False,
'website_description': kwargs.get('description'),
}, context=request.context)
return werkzeug.utils.redirect("/forum/%s/user/%d" % (slug(forum), user.id))
# Badges
# --------------------------------------------------
@http.route('/forum/<model("forum.forum"):forum>/badge', type='http', auth="public", website=True)
def badges(self, forum, **searches):
cr, uid, context = request.cr, request.uid, request.context
Badge = request.registry['gamification.badge']
badge_ids = Badge.search(cr, SUPERUSER_ID, [('challenge_ids.category', '=', 'forum')], context=context)
badges = Badge.browse(cr, uid, badge_ids, context=context)
badges = sorted(badges, key=lambda b: b.stat_count_distinct, reverse=True)
values = self._prepare_forum_values(forum=forum, searches={'badges': True})
values.update({
'badges': badges,
})
return request.website.render("website_forum.badge", values)
@http.route(['''/forum/<model("forum.forum"):forum>/badge/<model("gamification.badge"):badge>'''], type='http', auth="public", website=True)
def badge_users(self, forum, badge, **kwargs):
user_ids = [badge_user.user_id.id for badge_user in badge.owner_ids]
users = request.registry['res.users'].browse(request.cr, SUPERUSER_ID, user_ids, context=request.context)
values = self._prepare_forum_values(forum=forum, searches={'badges': True})
values.update({
'badge': badge,
'users': users,
})
return request.website.render("website_forum.badge_user", values)
# Messaging
# --------------------------------------------------
@http.route('/forum/<model("forum.forum"):forum>/post/<model("forum.post"):post>/comment/<model("mail.message"):comment>/convert_to_answer', type='http', auth="user", methods=['POST'], website=True)
def convert_comment_to_answer(self, forum, post, comment, **kwarg):
new_post_id = request.registry['forum.post'].convert_comment_to_answer(request.cr, request.uid, comment.id, context=request.context)
if not new_post_id:
return werkzeug.utils.redirect("/forum/%s" % slug(forum))
post = request.registry['forum.post'].browse(request.cr, request.uid, new_post_id, context=request.context)
question = post.parent_id if post.parent_id else post
return werkzeug.utils.redirect("/forum/%s/question/%s" % (slug(forum), slug(question)))
@http.route('/forum/<model("forum.forum"):forum>/post/<model("forum.post"):post>/convert_to_comment', type='http', auth="user", methods=['POST'], website=True)
def convert_answer_to_comment(self, forum, post, **kwarg):
question = post.parent_id
new_msg_id = request.registry['forum.post'].convert_answer_to_comment(request.cr, request.uid, post.id, context=request.context)
if not new_msg_id:
return werkzeug.utils.redirect("/forum/%s" % slug(forum))
return werkzeug.utils.redirect("/forum/%s/question/%s" % (slug(forum), slug(question)))
@http.route('/forum/<model("forum.forum"):forum>/post/<model("forum.post"):post>/comment/<model("mail.message"):comment>/delete', type='json', auth="user", website=True)
def delete_comment(self, forum, post, comment, **kwarg):
if not request.session.uid:
return {'error': 'anonymous_user'}
        return request.registry['forum.post'].unlink_comment(request.cr, request.uid, post.id, comment.id, context=request.context)
| agpl-3.0 |
sinhrks/scikit-learn | sklearn/learning_curve.py | 30 | 14601 | """Utilities to evaluate models with respect to a variable
"""
# Author: Alexander Fabisch <[email protected]>
#
# License: BSD 3 clause
import warnings
import numpy as np
from .base import is_classifier, clone
from .cross_validation import check_cv
from .externals.joblib import Parallel, delayed
from .cross_validation import _safe_split, _score, _fit_and_score
from .metrics.scorer import check_scoring
from .utils import indexable
from .utils.fixes import astype
warnings.warn("This module has been deprecated in favor of the "
"model_selection module into which all the functions are moved."
" This module will be removed in 0.20",
DeprecationWarning)
__all__ = ['learning_curve', 'validation_curve']
def learning_curve(estimator, X, y, train_sizes=np.linspace(0.1, 1.0, 5),
cv=None, scoring=None, exploit_incremental_learning=False,
n_jobs=1, pre_dispatch="all", verbose=0):
"""Learning curve.
Determines cross-validated training and test scores for different training
set sizes.
A cross-validation generator splits the whole dataset k times in training
and test data. Subsets of the training set with varying sizes will be used
to train the estimator and a score for each training subset size and the
test set will be computed. Afterwards, the scores will be averaged over
all k runs for each training subset size.
Read more in the :ref:`User Guide <learning_curves>`.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
train_sizes : array-like, shape (n_ticks,), dtype float or int
Relative or absolute numbers of training examples that will be used to
generate the learning curve. If the dtype is float, it is regarded as a
fraction of the maximum size of the training set (that is determined
by the selected validation method), i.e. it has to be within (0, 1].
Otherwise it is interpreted as absolute sizes of the training sets.
        Note that for classification the number of samples usually has to
be big enough to contain at least one sample from each class.
(default: np.linspace(0.1, 1.0, 5))
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
For integer/None inputs, if the estimator is a classifier and ``y`` is
        either binary or multiclass, :class:`StratifiedKFold` is used. In all
other cases, :class:`KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validation strategies that can be used here.
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
exploit_incremental_learning : boolean, optional, default: False
If the estimator supports incremental learning, this will be
used to speed up fitting for different training set sizes.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
pre_dispatch : integer or string, optional
Number of predispatched jobs for parallel execution (default is
all). The option can reduce the allocated memory. The string can
be an expression like '2*n_jobs'.
verbose : integer, optional
Controls the verbosity: the higher, the more messages.
Returns
-------
train_sizes_abs : array, shape = (n_unique_ticks,), dtype int
        Numbers of training examples that have been used to generate the
learning curve. Note that the number of ticks might be less
than n_ticks because duplicate entries will be removed.
train_scores : array, shape (n_ticks, n_cv_folds)
Scores on training sets.
test_scores : array, shape (n_ticks, n_cv_folds)
Scores on test set.
Notes
-----
See :ref:`examples/model_selection/plot_learning_curve.py
<example_model_selection_plot_learning_curve.py>`
"""
if exploit_incremental_learning and not hasattr(estimator, "partial_fit"):
raise ValueError("An estimator must support the partial_fit interface "
"to exploit incremental learning")
X, y = indexable(X, y)
# Make a list since we will be iterating multiple times over the folds
cv = list(check_cv(cv, X, y, classifier=is_classifier(estimator)))
scorer = check_scoring(estimator, scoring=scoring)
# HACK as long as boolean indices are allowed in cv generators
if cv[0][0].dtype == bool:
new_cv = []
for i in range(len(cv)):
new_cv.append((np.nonzero(cv[i][0])[0], np.nonzero(cv[i][1])[0]))
cv = new_cv
n_max_training_samples = len(cv[0][0])
# Because the lengths of folds can be significantly different, it is
# not guaranteed that we use all of the available training data when we
# use the first 'n_max_training_samples' samples.
train_sizes_abs = _translate_train_sizes(train_sizes,
n_max_training_samples)
n_unique_ticks = train_sizes_abs.shape[0]
if verbose > 0:
print("[learning_curve] Training set sizes: " + str(train_sizes_abs))
parallel = Parallel(n_jobs=n_jobs, pre_dispatch=pre_dispatch,
verbose=verbose)
if exploit_incremental_learning:
classes = np.unique(y) if is_classifier(estimator) else None
out = parallel(delayed(_incremental_fit_estimator)(
clone(estimator), X, y, classes, train, test, train_sizes_abs,
scorer, verbose) for train, test in cv)
else:
out = parallel(delayed(_fit_and_score)(
clone(estimator), X, y, scorer, train[:n_train_samples], test,
verbose, parameters=None, fit_params=None, return_train_score=True)
for train, test in cv for n_train_samples in train_sizes_abs)
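    # each job returns (train_score, test_score, ...); keep the two scores,
    # regroup them per CV fold and per training-set size, and transpose so
    # that out[0] holds train scores and out[1] test scores, each shaped
    # (n_ticks, n_cv_folds)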
out = np.array(out)[:, :2]
n_cv_folds = out.shape[0] // n_unique_ticks
out = out.reshape(n_cv_folds, n_unique_ticks, 2)
out = np.asarray(out).transpose((2, 1, 0))
return train_sizes_abs, out[0], out[1]
def _translate_train_sizes(train_sizes, n_max_training_samples):
"""Determine absolute sizes of training subsets and validate 'train_sizes'.
Examples:
_translate_train_sizes([0.5, 1.0], 10) -> [5, 10]
_translate_train_sizes([5, 10], 10) -> [5, 10]
Parameters
----------
train_sizes : array-like, shape (n_ticks,), dtype float or int
Numbers of training examples that will be used to generate the
learning curve. If the dtype is float, it is regarded as a
fraction of 'n_max_training_samples', i.e. it has to be within (0, 1].
n_max_training_samples : int
Maximum number of training samples (upper bound of 'train_sizes').
Returns
-------
train_sizes_abs : array, shape (n_unique_ticks,), dtype int
Numbers of training examples that will be used to generate the
learning curve. Note that the number of ticks might be less
than n_ticks because duplicate entries will be removed.
"""
train_sizes_abs = np.asarray(train_sizes)
n_ticks = train_sizes_abs.shape[0]
n_min_required_samples = np.min(train_sizes_abs)
n_max_required_samples = np.max(train_sizes_abs)
if np.issubdtype(train_sizes_abs.dtype, np.float):
if n_min_required_samples <= 0.0 or n_max_required_samples > 1.0:
raise ValueError("train_sizes has been interpreted as fractions "
"of the maximum number of training samples and "
"must be within (0, 1], but is within [%f, %f]."
% (n_min_required_samples,
n_max_required_samples))
train_sizes_abs = astype(train_sizes_abs * n_max_training_samples,
dtype=np.int, copy=False)
train_sizes_abs = np.clip(train_sizes_abs, 1,
n_max_training_samples)
else:
if (n_min_required_samples <= 0 or
n_max_required_samples > n_max_training_samples):
raise ValueError("train_sizes has been interpreted as absolute "
"numbers of training samples and must be within "
"(0, %d], but is within [%d, %d]."
% (n_max_training_samples,
n_min_required_samples,
n_max_required_samples))
train_sizes_abs = np.unique(train_sizes_abs)
if n_ticks > train_sizes_abs.shape[0]:
warnings.warn("Removed duplicate entries from 'train_sizes'. Number "
"of ticks will be less than than the size of "
"'train_sizes' %d instead of %d)."
% (train_sizes_abs.shape[0], n_ticks), RuntimeWarning)
return train_sizes_abs
def _incremental_fit_estimator(estimator, X, y, classes, train, test,
train_sizes, scorer, verbose):
"""Train estimator on training subsets incrementally and compute scores."""
train_scores, test_scores = [], []
partitions = zip(train_sizes, np.split(train, train_sizes)[:-1])
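    # np.split cuts 'train' at every tick boundary; dropping the trailing
    # chunk leaves one increment per tick, so each iteration below feeds only
    # the newly added samples to partial_fit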
for n_train_samples, partial_train in partitions:
train_subset = train[:n_train_samples]
X_train, y_train = _safe_split(estimator, X, y, train_subset)
X_partial_train, y_partial_train = _safe_split(estimator, X, y,
partial_train)
X_test, y_test = _safe_split(estimator, X, y, test, train_subset)
if y_partial_train is None:
estimator.partial_fit(X_partial_train, classes=classes)
else:
estimator.partial_fit(X_partial_train, y_partial_train,
classes=classes)
train_scores.append(_score(estimator, X_train, y_train, scorer))
test_scores.append(_score(estimator, X_test, y_test, scorer))
return np.array((train_scores, test_scores)).T
def validation_curve(estimator, X, y, param_name, param_range, cv=None,
scoring=None, n_jobs=1, pre_dispatch="all", verbose=0):
"""Validation curve.
Determine training and test scores for varying parameter values.
Compute scores for an estimator with different values of a specified
parameter. This is similar to grid search with one parameter. However, this
will also compute training scores and is merely a utility for plotting the
results.
Read more in the :ref:`User Guide <validation_curve>`.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
param_name : string
Name of the parameter that will be varied.
param_range : array-like, shape (n_values,)
The values of the parameter that will be evaluated.
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
For integer/None inputs, if the estimator is a classifier and ``y`` is
        either binary or multiclass, :class:`StratifiedKFold` is used. In all
other cases, :class:`KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validation strategies that can be used here.
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
pre_dispatch : integer or string, optional
Number of predispatched jobs for parallel execution (default is
all). The option can reduce the allocated memory. The string can
be an expression like '2*n_jobs'.
verbose : integer, optional
Controls the verbosity: the higher, the more messages.
Returns
-------
train_scores : array, shape (n_ticks, n_cv_folds)
Scores on training sets.
test_scores : array, shape (n_ticks, n_cv_folds)
Scores on test set.
Notes
-----
See
:ref:`examples/model_selection/plot_validation_curve.py
<example_model_selection_plot_validation_curve.py>`
"""
X, y = indexable(X, y)
cv = check_cv(cv, X, y, classifier=is_classifier(estimator))
scorer = check_scoring(estimator, scoring=scoring)
parallel = Parallel(n_jobs=n_jobs, pre_dispatch=pre_dispatch,
verbose=verbose)
out = parallel(delayed(_fit_and_score)(
estimator, X, y, scorer, train, test, verbose,
parameters={param_name: v}, fit_params=None, return_train_score=True)
for train, test in cv for v in param_range)
out = np.asarray(out)[:, :2]
n_params = len(param_range)
n_cv_folds = out.shape[0] // n_params
out = out.reshape(n_cv_folds, n_params, 2).transpose((2, 1, 0))
return out[0], out[1]
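# ---------------------------------------------------------------------------
# Illustrative usage (added sketch, not part of the original module). The
# estimator, synthetic dataset and parameter values below are arbitrary
# choices made only for demonstration; any scikit-learn estimator and CV
# setup would do.
if __name__ == "__main__":
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC
    X_demo, y_demo = make_classification(n_samples=200, n_features=20,
                                         random_state=0)
    # learning curve: score as a function of training-set size
    sizes, train_scores, test_scores = learning_curve(
        SVC(kernel="linear"), X_demo, y_demo,
        train_sizes=np.linspace(0.1, 1.0, 5), cv=3)
    print("train sizes: %s" % sizes)
    print("mean test score per size: %s" % test_scores.mean(axis=1))
    # validation curve: score as a function of a single hyper-parameter
    train_scores, test_scores = validation_curve(
        SVC(), X_demo, y_demo, param_name="gamma",
        param_range=np.logspace(-4, -1, 4), cv=3)
    print("mean test score per gamma: %s" % test_scores.mean(axis=1))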
| bsd-3-clause |
tkinz27/ansible | lib/ansible/plugins/callback/syslog_json.py | 136 | 2579 | import os
import json
import logging
import logging.handlers
import socket
from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
"""
logs ansible-playbook and ansible runs to a syslog server in json format
make sure you have in ansible.cfg:
callback_plugins = <path_to_callback_plugins_folder>
and put the plugin in <path_to_callback_plugins_folder>
This plugin makes use of the following environment variables:
SYSLOG_SERVER (optional): defaults to localhost
SYSLOG_PORT (optional): defaults to 514
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'aggregate'
CALLBACK_NAME = 'syslog_json'
def __init__(self, display):
super(CallbackModule, self).__init__(display)
self.logger = logging.getLogger('ansible logger')
self.logger.setLevel(logging.DEBUG)
self.handler = logging.handlers.SysLogHandler(
            address=(os.getenv('SYSLOG_SERVER', 'localhost'),
                     int(os.getenv('SYSLOG_PORT', 514))),
facility=logging.handlers.SysLogHandler.LOG_USER
)
self.logger.addHandler(self.handler)
self.hostname = socket.gethostname()
def runner_on_failed(self, host, res, ignore_errors=False):
self.logger.error('%s ansible-command: task execution FAILED; host: %s; message: %s' % (self.hostname,host,self._dump_results(res)))
def runner_on_ok(self, host, res):
self.logger.info('%s ansible-command: task execution OK; host: %s; message: %s' % (self.hostname,host,self._dump_results(res)))
def runner_on_skipped(self, host, item=None):
self.logger.info('%s ansible-command: task execution SKIPPED; host: %s; message: %s' % (self.hostname,host, 'skipped'))
def runner_on_unreachable(self, host, res):
self.logger.error('%s ansible-command: task execution UNREACHABLE; host: %s; message: %s' % (self.hostname,host,self._dump_results(res)))
def runner_on_async_failed(self, host, res):
self.logger.error('%s ansible-command: task execution FAILED; host: %s; message: %s' % (self.hostname,host,self._dump_results(res)))
def playbook_on_import_for_host(self, host, imported_file):
        self.logger.info('%s ansible-command: playbook IMPORTED; host: %s; message: imported file %s' % (self.hostname, host, imported_file))
def playbook_on_not_import_for_host(self, host, missing_file):
        self.logger.info('%s ansible-command: playbook NOT IMPORTED; host: %s; message: missing file %s' % (self.hostname, host, missing_file))
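# Example configuration (illustrative only; the server host name and playbook
# below are made up for the example):
#
#   # ansible.cfg -- point Ansible at the directory holding this plugin
#   [defaults]
#   callback_plugins = ./callback_plugins
#
#   # shell -- direct the plugin at a remote syslog server before running
#   export SYSLOG_SERVER=syslog.example.com
#   export SYSLOG_PORT=514
#   ansible-playbook site.yml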
| gpl-3.0 |
openembedded/openembedded | contrib/mtn2git/mtn/utility.py | 45 | 3593 |
import popen2
import select
import fcntl
import os
def set_nonblocking(fd):
fl = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, fl | os.O_NDELAY)
def run_command(command, timeout=None, to_child=None):
"returns a tuple of (was_timeout, exit_code, data_read)"
p = popen2.Popen3(command, capturestderr=True)
set_nonblocking(p.fromchild)
set_nonblocking(p.childerr)
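    # non-blocking reads let the select() loop below drain stdout and stderr
    # as data arrives, without the parent blocking on either pipe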
fromchild_read = ""
childerr_read = ""
was_timeout = False
if to_child != None:
p.tochild.write(to_child)
p.tochild.close()
while 1:
ro, rw, re = select.select([p.fromchild], [], [p.childerr], timeout)
if not ro and not rw and not re:
was_timeout = True
break
if p.fromchild in ro:
recv = p.fromchild.read()
if recv == "": break
fromchild_read += recv
if p.childerr in re:
recv = p.childerr.read()
if recv == "": break
childerr_read += recv
if not was_timeout:
# check for any data we might have missed (due to a premature break)
        # (if there isn't anything we just get an IOError, which we don't mind)
try: fromchild_read += p.fromchild.read()
except IOError: pass
try: childerr_read += p.childerr.read()
except IOError: pass
p.fromchild.close()
# if there wasn't a timeout, the program should have exited; in which case we should wait() for it
# otherwise, it might be hung, so the parent should wait for it.
# (wrap in a try: except: just in case some other thread happens to wait() and grab ours; god wrapping
# python around UNIX is horrible sometimes)
exitcode = None
try:
if not was_timeout: exitcode = p.wait() >> 8
except: pass
return { 'run_command' : command,
'timeout' : was_timeout,
'exitcode' : exitcode,
'fromchild' : fromchild_read,
'childerr' : childerr_read }
def iter_command(command, timeout=None):
p = popen2.Popen3(command, capturestderr=True)
set_nonblocking(p.fromchild)
set_nonblocking(p.childerr)
fromchild_read = ""
childerr_read = ""
was_timeout = False
while 1:
ro, rw, re = select.select([p.fromchild], [], [p.childerr], timeout)
if not ro and not rw and not re:
was_timeout = True
break
if p.fromchild in ro:
recv = p.fromchild.read()
if recv == "": break
fromchild_read += recv
while 1:
nl = fromchild_read.find('\n')
if nl == -1: break
yield fromchild_read[:nl]
fromchild_read = fromchild_read[nl+1:]
if p.childerr in re:
recv = p.childerr.read()
if recv == "": break
childerr_read += recv
if not was_timeout:
# check for any data we might have missed (due to a premature break)
        # (if there isn't anything we just get an IOError, which we don't mind)
try: fromchild_read += p.fromchild.read()
except IOError: pass
try: childerr_read += p.childerr.read()
except IOError: pass
p.fromchild.close()
p.tochild.close()
# yield anything left over
to_yield = fromchild_read.split('\n')
while len(to_yield): yield to_yield.pop()
# call wait()
try:
if not was_timeout: p.wait()
except: pass
if len(childerr_read): raise Exception("data on stderr (command is %s)" % command, childerr_read)
if was_timeout: raise Exception("command timeout")
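# Illustrative usage (added sketch, not part of the original module; the
# monotone commands and timeout value are made-up examples):
#
#   result = run_command(["mtn", "automate", "interface_version"], timeout=30)
#   if not result['timeout'] and result['exitcode'] == 0:
#       print result['fromchild']
#
#   for line in iter_command(["mtn", "automate", "tags"], timeout=30):
#       print line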
| mit |
adit-chandra/tensorflow | tensorflow/lite/testing/op_tests/depth_to_space.py | 4 | 2016 | # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Test configs for depth_to_space."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow.lite.testing.zip_test_utils import create_tensor_data
from tensorflow.lite.testing.zip_test_utils import make_zip_of_tests
from tensorflow.lite.testing.zip_test_utils import register_make_test_function
@register_make_test_function()
def make_depth_to_space_tests(options):
"""Make a set of tests to do depth_to_space."""
test_parameters = [{
"dtype": [tf.float32, tf.int32, tf.uint8, tf.int64],
"input_shape": [[2, 3, 4, 16]],
"block_size": [2, 4],
}]
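  # Note (added comment): depth_to_space requires the input depth to be
  # divisible by block_size**2, so a depth of 16 supports both block sizes
  # above; e.g. shape [2, 3, 4, 16] with block_size=2 becomes [2, 6, 8, 4].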
def build_graph(parameters):
input_tensor = tf.compat.v1.placeholder(
dtype=parameters["dtype"],
name="input",
shape=parameters["input_shape"])
out = tf.compat.v1.depth_to_space(
input_tensor, block_size=parameters["block_size"])
return [input_tensor], [out]
def build_inputs(parameters, sess, inputs, outputs):
input_values = create_tensor_data(parameters["dtype"],
parameters["input_shape"])
return [input_values], sess.run(
outputs, feed_dict=dict(zip(inputs, [input_values])))
make_zip_of_tests(options, test_parameters, build_graph, build_inputs)
| apache-2.0 |