repo_name (stringlengths 5-100) | path (stringlengths 4-375) | copies (stringclasses, 991 values) | size (stringlengths 4-7) | content (stringlengths 666-1M) | license (stringclasses, 15 values)
---|---|---|---|---|---|
qistoph/thug | samples/steps/exploits.py | 2 | 1116 |
import sys
import os
from behave import *
from behave.log_capture import capture
THUG = os.path.dirname(os.path.abspath(__file__)).split("samples")[0]
EXPLOITS = os.path.join(THUG, 'samples', 'exploits')
sys.path.append(os.path.join(THUG, 'src'))
from ThugAPI import ThugAPI
class Exploits(object):
def __init__(self, context):
self.exploits = list()
for row in context.table:
self.exploits.append(row)
def _run(self, context, exploit):
sample = os.path.join(EXPLOITS, exploit[0])
instance = ThugAPI(None, None)
instance.set_events('click')
instance.set_timeout(1)
instance.log_init(sample)
instance.run_local(sample)
for assertion in exploit[1].split(","):
assert assertion in context.log_capture.getvalue()
def run(self, context):
for exploit in self.exploits:
self._run(context, exploit)
@given('set of exploits')
def step_impl(context):
global exploits
exploits = Exploits(context)
@capture
@then('run exploit')
def step_impl(context):
exploits.run(context)
| gpl-2.0 |
PeterDaveHello/eden | static/scripts/tools/ADAT_CSV_generator.py | 41 | 18554 |
#!/usr/bin/python
"""
Sahana-Eden ADAT helper script
==============================
Script to generate the import files for a new survey template.
The input is a "xls" spreadsheet with four sheets, namely:
* Template
* Sections
* Questions
* Layout
The output is two csv files: one that can be used to import the
questions into the system, the other that may be used to import
the layout details into the system. The names of the files will be the
same as the input file with either a .Layout.csv or .Question.csv
suffix.
Details of the input sheets
===========================
Template
--------
This includes the basic details of the template as follows:
A1: Template Name
A2: Template Description
A3: Complete Question
A4: Date Question
A5: Time Question
A6: Location Detail
A7: Priority Question
The questions in cells A3:A7 are the unique question codes which are given
later in the Questions sheet.
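For illustration only (these values are invented, not taken from a real
template), a Template sheet might contain:
A1: Rapid Assessment
A2: Post-disaster rapid assessment form
A3: RA-Complete
A4: RA-Date
A5: RA-Time
A6: RA-Location
A7: RA-Priority
where RA-Complete, RA-Date, etc. are question codes defined in the
Questions sheet.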
Sections
--------
This lists each section within the template. Each section name is given in
column A, and they should be provided in their default order. Their display
order can later be changed by the layout, but this will be the default
order of the sections.
Questions
---------
This holds details of questions in each column as follows:
A: Unique Question Code
B: The Section - the section to which the question belongs
C: Question Name - the actual question which is what will be displayed
and should be in the base template language (English)
D: Question type - This must be one of the known question widget types
E: Question notes - any help information that should be associated with
the question to help the enumerators complete
the questionnaire.
F onwards: Metadata
Any option type: The list of options, thus:
OptionOther: Early Continue
The Grid type: This is the most complex type when it comes to the metadata
and it relies on keywords to make it as simple as possible.
The columns alternate between keywords and their value.
The valid keywords are:
Subtitle - the subtitle of the grid [Optional]
Column count - the number of columns in the grid
Row count - the number of rows in the grid
Column Headings - the headings for each column with one
heading per cell
Row Questions - the headings for each row with one
heading per cell
Question Type - this is the type of each question, again
this must be one of the known question
widget types. If just one value is given
then all questions take this type. If the
same number of types as columns are given
then this type reflects the types used
in each column. Otherwise, there should
be a direct mapping between question and
type.
GridChild - Some of the question types need to have
metadata associated with them. This
keyword is followed by a number to
indicate which question type the metadata
is for, referring to the order in the
Question Type list. [Optional]
NOTE: The metadata must be provided in this order.
NOTE on the grid question type:
The question code for this should be unique and end with a hyphen.
The questions within the grid will then be properly numbered.
So a grid question code of PMI-WASH-A-, will then hold the questions
PMI-WASH-A-1, PMI-WASH-A-2, PMI-WASH-A-3 etc.
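For illustration, using the PMI-WASH-A- code above (all other values are
invented), the metadata columns for a 2x2 grid of Option questions might
read (one value per cell, shown here separated by '|'):
Subtitle | Water sources | Column count | 2 | Row count | 2 |
Column Headings | Dry season | Wet season |
Row Questions | Primary source | Secondary source |
Question Type | Option | GridChild | 1 | Piped | Well | River
Here a single Question Type is given, so every cell is an Option question,
and the GridChild block supplies the option list for question type 1.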
Layout
------
This is used to describe in a semi-visual way how the questions should be
laid out. This layout can be used for any representation of the
questionnaire such as web form, spreadsheet, PDF etc.
The rules to complete this section are as follows:
* Add the section name
* On subsequent lines add the question codes for the questions to appear
in this section
* For questions that are to appear on the same line, add them in adjacent
columns of the same row, thus:
PMI-Ass-1 PMI-Ass-2 PMI-Ass-3
* For questions that are to appear in adjacent columns use the keyword
column in the first column and then add the questions in subsequent
columns, thus:
columns PMI-Health-1 PMI-Health-4 PMI-Health-A-
PMI-Health-2
So this describes three columns with two questions in the first column
and one question each in columns 2 and 3.
* To add a subheading (normally at the start of a column) just add the
text in the cell and the question codes in the columns below. Any text
that does not match a question code or keyword is assumed to be a
subheading.
NOTE: The script might be able to manage blank lines between the end of
one section and the next but *please* try and avoid using blank lines
since this is not fully tested and future enhancements of this script
may break that.
NOTE: Only include question codes from within the section. Including
questions from different sections is untested and, whilst the script
may work as expected, Sahana-Eden *might* not.
"""
import sys
import xlrd
import csv
optionTypes = ["Option", "OptionOther", "MultiOption"]
widgetTypes = ["String", "Text", "Numeric", "Date", "Option", "YesNo", "YesNoDontKnow", "OptionOther", "MultiOption", "Location", "Link", "Grid", "GridChild"]
layoutQuestions = []
def splitGridChildMetadata(metadataList):
gridChildList = []
dataList = []
for x in range(len(metadataList)):
if metadataList[x] == "GridChild":
if dataList != []:
gridChildList.append(dataList)
dataList = []
else:
dataList.append(metadataList[x])
if dataList != []:
gridChildList.append(dataList)
return gridChildList
def processGridChildMetadata(metadataList, childType):
metadata = dict()
gridChildList = splitGridChildMetadata(metadataList)
for x in range(len(gridChildList)):
dataList = gridChildList[x]
qstnNo = int(dataList[0])
qstn_type = childType[qstnNo-1]
(metadataList, dummy) = processMetadata(dataList[1:], qstn_type, None,0,None)
metadata[qstnNo] = metadataList
return metadata
def processGridChildMetadataAll(metadataList, colCnt, rowCnt, qstn_code, qstn_posn, firstQstnInSection, childType):
metadata = dict()
qstnMetadataList = processGridChildMetadata(metadataList, childType)
offset = qstn_posn - firstQstnInSection + 1
for x in range(colCnt * rowCnt):
qCode = "%s%d" %(qstn_code, x+offset)
for qstnMetadata in qstnMetadataList.values():
metadata[str(qCode)] = qstnMetadata
return metadata
def processGridChildMetadataColumn(metadataList, colCnt, rowCnt, qstn_code, qstn_posn, firstQstnInSection, childType):
metadata = dict()
qstnMetadataList = processGridChildMetadata(metadataList, childType)
offset = qstn_posn - firstQstnInSection
for (posn, qstnMetadata) in qstnMetadataList.items():
for x in range(rowCnt):
qCode = "%s%d" %(qstn_code, x*colCnt+posn+offset)
metadata[str(qCode)] = qstnMetadata
return metadata
def processGridChildMetadataElement(metadataList, qstn_code, qstn_posn, firstQstnInSection, childType):
metadata = dict()
qstnMetadataList = processGridChildMetadata(metadataList, childType)
offset = qstn_posn - firstQstnInSection
for (posn, qstnMetadata) in qstnMetadataList.items():
qCode = "%s%d" %(qstn_code, posn+offset)
metadata[str(qCode)] = qstnMetadata
return metadata
def processMetadata(metadataList, qstn_type, qstn_code, qstn_posn, firstQstnInSection):
metadata = dict()
next_qstn_posn = qstn_posn + 1
if qstn_type in optionTypes:
posn = 0
for value in metadataList:
posn += 1
if value == "metadata":
metadata += processMetadata(metadataList[posn:],None,None,0,None)
break
metadata[str(posn)] = str(value)
metadata["Length"] = str(posn)
elif qstn_type == "Grid":
colCnt = 0
rowCnt = 0
metadata["QuestionNo"] = qstn_posn - firstQstnInSection + 1
end = len(metadataList)
for x in range(end):
value = metadataList[x]
if value == "Subtitle":
x += 1
metadata["Subtitle"] = str(metadataList[x])
elif value == "Column count":
x += 1
colCnt = int(metadataList[x])
metadata["col-cnt"] = str(colCnt)
elif value == "Row count":
x += 1
rowCnt = int(metadataList[x])
metadata["row-cnt"] = str(rowCnt)
elif value == "Column Headings":
colList = []
for y in range(colCnt):
colList.append(str(metadataList[x+y+1]))
metadata["columns"] = colList
x += colCnt
elif value == "Row Questions":
rowList = []
for y in range(rowCnt):
rowList.append(str(metadataList[x+y+1]))
metadata["rows"] = rowList
x += rowCnt
elif value == "Question Type":
rowList = []
childType = []
for y in xrange(x+1, end):
value = metadataList[y]
if value == "GridChild":
break
else:
childType.append(str(value))
if len(childType) == 1:
colList = childType*colCnt
rowList = [colList] * rowCnt
metadata["data"] = rowList
elif len(childType) == colCnt:
for r in range(rowCnt):
rowList.append(childType)
metadata["data"] = rowList
else:
for r in range(rowCnt):
colList = []
for c in range(colCnt):
colList.append(childType[r*colCnt + c])
rowList.append(colList)
metadata["data"] = rowList
if value == "GridChild":
if len(childType) == 1:
metadata.update(processGridChildMetadataAll(metadataList[y:], colCnt, rowCnt, qstn_code, qstn_posn, firstQstnInSection, childType))
elif len(childType) == colCnt:
metadata.update(processGridChildMetadataColumn(metadataList[y:], colCnt, rowCnt, qstn_code, qstn_posn, firstQstnInSection, childType))
else:
metadata.update(processGridChildMetadataElement(metadataList[y:], qstn_code, qstn_posn, firstQstnInSection, childType))
break
next_qstn_posn = qstn_posn + colCnt * rowCnt
else:
pass
return (metadata, next_qstn_posn)
def getQstnMetadata(sheetQ, row, qstn_type, qstn_code, qstn_posn, firstQstnInSection):
metadataList = []
for col in xrange(5,sheetQ.ncols):
value = sheetQ.cell_value(row, col)
if value == "":
break
metadataList.append(value)
(metadata, qstn_posn) = processMetadata(metadataList, qstn_type, qstn_code, qstn_posn, firstQstnInSection)
return (metadata, qstn_posn)
def formatQuestionnaire(sheetQ, templateDetails, sections):
questionnaire = []
questions = []
theSection = ""
sectionPosn = 0
firstQstnInSection = 0
next_qstn_posn = 1
line = []
for row in range(sheetQ.nrows):
qstn_posn = next_qstn_posn
line = templateDetails[:]
qstn_code = sheetQ.cell_value(row, 0)
section = sheetQ.cell_value(row, 1)
if section != theSection:
theSection = section
sectionPosn += 1
firstQstnInSection = qstn_posn
question = sheetQ.cell_value(row, 2)
qstn_type = sheetQ.cell_value(row, 3)
qstn_notes = sheetQ.cell_value(row, 4)
(metadata, next_qstn_posn) = getQstnMetadata(sheetQ, row, qstn_type, qstn_code, qstn_posn, firstQstnInSection)
questions.append(qstn_code)
line.append(section)
line.append(sectionPosn)
line.append(question)
line.append(qstn_type)
line.append(qstn_notes)
line.append(qstn_posn)
line.append(qstn_code)
if metadata != {}:
line.append(metadata)
questionnaire.append(line)
return (questions, questionnaire)
def processColumns(sheetL, questions, rowStart, rowEnd):
columns = []
for col in xrange(1,sheetL.ncols):
colList = []
for row in xrange(rowStart, rowEnd):
value = sheetL.cell_value(row, col)
if value == "":
break
if value in questions:
colList.append(str(value))
layoutQuestions.append(value)
else:
colList.append(processLabel(value))
if colList == []:
break
columns.append(colList)
return [{'columns':columns}]
def processRow(sheetL, questions, row):
rowList = []
for col in range(sheetL.ncols):
value = sheetL.cell_value(row, col)
if value in questions:
rowList.append(str(value))
layoutQuestions.append(value)
return rowList
def processLabel(value):
return {'heading':str(value)}
def getLayoutRules(sheetL, questions, rowStart, rowEnd):
colStart = None
rules = []
for row in xrange(rowStart, rowEnd):
value = sheetL.cell_value(row, 0)
if value == "columns":
if colStart != None:
rules.append(processColumns(sheetL, questions, colStart, row))
colStart = row
elif value == "":
pass
elif value in questions:
if colStart != None:
rules.append(processColumns(sheetL, questions, colStart, row))
colStart = None
rules.append(processRow(sheetL, questions, row))
else:
rules.append(processLabel(value))
if colStart != None:
rules.append(processColumns(sheetL, questions, colStart, rowEnd))
return rules
def formatLayout(sheetL, template, sections, questions):
layoutMethod = 1
layout = []
sectionLength = len(sections)
rowStart = rowEnd = 0
rowLimit = sheetL.nrows
for i in range(sectionLength):
section = sections[i]
while rowStart < rowLimit:
if sheetL.cell_value(rowStart, 0) == section:
break
else:
rowStart += 1
if i+1 == sectionLength:
rowEnd = rowLimit
else:
nextSection = sections[i+1]
while rowEnd < rowLimit:
if sheetL.cell_value(rowEnd, 0) == nextSection:
break
else:
rowEnd += 1
rule = repr(getLayoutRules(sheetL, questions, rowStart+1, rowEnd))
layout.append([template,section,i+1,layoutMethod,rule])
return layout
def loadSpreadsheet(name):
workbook = xlrd.open_workbook(filename=name)
sheetT = workbook.sheet_by_name("Template")
sheetS = workbook.sheet_by_name("Sections")
sheetQ = workbook.sheet_by_name("Questions")
sheetL = workbook.sheet_by_name("Layout")
templateDetails = []
for row in xrange(0, sheetT.nrows):
value = str(sheetT.cell_value(row, 0))
if sheetT.ncols > 1 and sheetT.cell_value(row, 1):
value = [value]
for col in xrange(1, sheetT.ncols):
if sheetT.cell_value(row, col):
value.append(str(sheetT.cell_value(row, col)))
templateDetails.append(value)
sections = []
for row in xrange(0, sheetS.nrows):
sections.append(sheetS.cell_value(row, 0))
(questions, questionnaire) = formatQuestionnaire(sheetQ, templateDetails, sections)
layout = formatLayout(sheetL, templateDetails[0], sections, questions)
# Report back the questions that are not in the layout
missing = []
for qstn in questions:
if qstn not in layoutQuestions:
missing.append(qstn)
if missing != []:
print "The following questions are missing from the layout: %s" % missing
return (questionnaire, layout)
def generateQuestionnaireCSV(name, questionnaire):
csvName = "%s.Question.csv" % name
headings = ["Template","Template Description","Complete Question","Date Question","Time Question","Location Detail","Priority Question","Section","Section Position","Question","Question Type","Question Notes","Question Position","Question Code","Meta Data"]
writer = csv.writer(open(csvName, "w"))
writer.writerows([headings])
writer.writerows(questionnaire)
def generateLayoutCSV(name, layout):
csvName = "%s.Layout.csv" % name
headings = ["Template","Section","Posn","Method","Rules"]
writer = csv.writer(open(csvName, "w"))
writer.writerows([headings])
writer.writerows(layout)
def _main():
"""
Parse arguments and run checks generate the csv files
"""
if len(sys.argv) == 1:
print "Please add a spreadsheet to process"
return
spreadsheetName = sys.argv[1]
(questionnaire, layout) = loadSpreadsheet(spreadsheetName)
generateQuestionnaireCSV(spreadsheetName, questionnaire)
generateLayoutCSV(spreadsheetName, layout)
if __name__ == '__main__':
_main()
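# Example invocation (file name invented for illustration):
#     python ADAT_CSV_generator.py MyTemplate.xls
# which, per generateQuestionnaireCSV/generateLayoutCSV above, writes
# MyTemplate.xls.Question.csv and MyTemplate.xls.Layout.csv.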
| mit |
gexueyuan/WAT89EC-10-odroid | board/pxa255_idp/pxa_reg_calcs.py | 267 | 11108 |
#!/usr/bin/python
# (C) Copyright 2004
# BEC Systems <http://bec-systems.com>
# Cliff Brake <[email protected]>
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of
# the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston,
# MA 02111-1307 USA
# calculations for PXA255 registers
class gpio:
dir = '0'
set = '0'
clr = '0'
alt = '0'
desc = ''
def __init__(self, dir=0, set=0, clr=0, alt=0, desc=''):
self.dir = dir
self.set = set
self.clr = clr
self.alt = alt
self.desc = desc
# the following is a dictionary of all GPIOs in the system
# the key is the GPIO number
pxa255_alt_func = {
0: ['gpio', 'none', 'none', 'none'],
1: ['gpio', 'gpio reset', 'none', 'none'],
2: ['gpio', 'none', 'none', 'none'],
3: ['gpio', 'none', 'none', 'none'],
4: ['gpio', 'none', 'none', 'none'],
5: ['gpio', 'none', 'none', 'none'],
6: ['gpio', 'MMC clk', 'none', 'none'],
7: ['gpio', '48MHz clock', 'none', 'none'],
8: ['gpio', 'MMC CS0', 'none', 'none'],
9: ['gpio', 'MMC CS1', 'none', 'none'],
10: ['gpio', 'RTC Clock', 'none', 'none'],
11: ['gpio', '3.6MHz', 'none', 'none'],
12: ['gpio', '32KHz', 'none', 'none'],
13: ['gpio', 'none', 'MBGNT', 'none'],
14: ['gpio', 'MBREQ', 'none', 'none'],
15: ['gpio', 'none', 'nCS_1', 'none'],
16: ['gpio', 'none', 'PWM0', 'none'],
17: ['gpio', 'none', 'PWM1', 'none'],
18: ['gpio', 'RDY', 'none', 'none'],
19: ['gpio', 'DREQ[1]', 'none', 'none'],
20: ['gpio', 'DREQ[0]', 'none', 'none'],
21: ['gpio', 'none', 'none', 'none'],
22: ['gpio', 'none', 'none', 'none'],
23: ['gpio', 'none', 'SSP SCLK', 'none'],
24: ['gpio', 'none', 'SSP SFRM', 'none'],
25: ['gpio', 'none', 'SSP TXD', 'none'],
26: ['gpio', 'SSP RXD', 'none', 'none'],
27: ['gpio', 'SSP EXTCLK', 'none', 'none'],
28: ['gpio', 'AC97 bitclk in, I2S bitclock out', 'I2S bitclock in', 'none'],
29: ['gpio', 'AC97 SDATA_IN0', 'I2S SDATA_IN', 'none'],
30: ['gpio', 'I2S SDATA_OUT', 'AC97 SDATA_OUT', 'none'],
31: ['gpio', 'I2S SYNC', 'AC97 SYNC', 'none'],
32: ['gpio', 'AC97 SDATA_IN1', 'I2S SYSCLK', 'none'],
33: ['gpio', 'none', 'nCS_5', 'none'],
34: ['gpio', 'FF RXD', 'MMC CS0', 'none'],
35: ['gpio', 'FF CTS', 'none', 'none'],
36: ['gpio', 'FF DCD', 'none', 'none'],
37: ['gpio', 'FF DSR', 'none', 'none'],
38: ['gpio', 'FF RI', 'none', 'none'],
39: ['gpio', 'MMC CS1', 'FF TXD', 'none'],
40: ['gpio', 'none', 'FF DTR', 'none'],
41: ['gpio', 'none', 'FF RTS', 'none'],
42: ['gpio', 'BT RXD', 'none', 'HW RXD'],
43: ['gpio', 'none', 'BT TXD', 'HW TXD'],
44: ['gpio', 'BT CTS', 'none', 'HW CTS'],
45: ['gpio', 'none', 'BT RTS', 'HW RTS'],
46: ['gpio', 'ICP_RXD', 'STD RXD', 'none'],
47: ['gpio', 'STD TXD', 'ICP_TXD', 'none'],
48: ['gpio', 'HW TXD', 'nPOE', 'none'],
49: ['gpio', 'HW RXD', 'nPWE', 'none'],
50: ['gpio', 'HW CTS', 'nPIOR', 'none'],
51: ['gpio', 'nPIOW', 'HW RTS', 'none'],
52: ['gpio', 'none', 'nPCE[1]', 'none'],
53: ['gpio', 'MMC CLK', 'nPCE[2]', 'none'],
54: ['gpio', 'MMC CLK', 'nPSKSEL', 'none'],
55: ['gpio', 'none', 'nPREG', 'none'],
56: ['gpio', 'nPWAIT', 'none', 'none'],
57: ['gpio', 'nIOIS16', 'none', 'none'],
58: ['gpio', 'none', 'LDD[0]', 'none'],
59: ['gpio', 'none', 'LDD[1]', 'none'],
60: ['gpio', 'none', 'LDD[2]', 'none'],
61: ['gpio', 'none', 'LDD[3]', 'none'],
62: ['gpio', 'none', 'LDD[4]', 'none'],
63: ['gpio', 'none', 'LDD[5]', 'none'],
64: ['gpio', 'none', 'LDD[6]', 'none'],
65: ['gpio', 'none', 'LDD[7]', 'none'],
66: ['gpio', 'MBREQ', 'LDD[8]', 'none'],
67: ['gpio', 'MMC CS0', 'LDD[9]', 'none'],
68: ['gpio', 'MMC CS1', 'LDD[10]', 'none'],
69: ['gpio', 'MMC CLK', 'LDD[11]', 'none'],
70: ['gpio', 'RTC CLK', 'LDD[12]', 'none'],
71: ['gpio', '3.6 MHz', 'LDD[13]', 'none'],
72: ['gpio', '32 KHz', 'LDD[14]', 'none'],
73: ['gpio', 'MBGNT', 'LDD[15]', 'none'],
74: ['gpio', 'none', 'LCD_FCLK', 'none'],
75: ['gpio', 'none', 'LCD_LCLK', 'none'],
76: ['gpio', 'none', 'LCD_PCLK', 'none'],
77: ['gpio', 'none', 'LCD_ACBIAS', 'none'],
78: ['gpio', 'none', 'nCS_2', 'none'],
79: ['gpio', 'none', 'nCS_3', 'none'],
80: ['gpio', 'none', 'nCS_4', 'none'],
81: ['gpio', 'NSSPSCLK', 'none', 'none'],
82: ['gpio', 'NSSPSFRM', 'none', 'none'],
83: ['gpio', 'NSSPTXD', 'NSSPRXD', 'none'],
84: ['gpio', 'NSSPTXD', 'NSSPRXD', 'none'],
}
#def __init__(self, dir=0, set=0, clr=0, alt=0, desc=''):
gpio_list = []
for i in range(0,85):
gpio_list.append(gpio())
#chip select GPIOs
gpio_list[18] = gpio(0, 0, 0, 1, 'RDY')
gpio_list[33] = gpio(1, 1, 0, 2, 'CS5#')
gpio_list[80] = gpio(1, 1, 0, 2, 'CS4#')
gpio_list[79] = gpio(1, 1, 0, 2, 'CS3#')
gpio_list[78] = gpio(1, 1, 0, 2, 'CS2#')
gpio_list[15] = gpio(1, 1, 0, 2, 'CS1#')
gpio_list[22] = gpio(0, 0, 0, 0, 'Consumer IR, PCC_S1_IRQ_O#')
gpio_list[21] = gpio(0, 0, 0, 0, 'IRQ_IDE, PFI')
gpio_list[19] = gpio(0, 0, 0, 0, 'XB_DREQ1, PCC_SO_IRQ_O#')
gpio_list[20] = gpio(0, 0, 0, 0, 'XB_DREQ0')
gpio_list[17] = gpio(0, 0, 0, 0, 'IRQ_AXB')
gpio_list[16] = gpio(1, 0, 0, 2, 'PWM0')
# PCMCIA stuff
gpio_list[57] = gpio(0, 0, 0, 1, 'PCC_IOIS16#')
gpio_list[56] = gpio(0, 0, 0, 1, 'PCC_WAIT#')
gpio_list[55] = gpio(1, 0, 0, 2, 'PCC_REG#')
gpio_list[54] = gpio(1, 0, 0, 2, 'PCC_SCKSEL')
gpio_list[53] = gpio(1, 1, 0, 2, 'PCC_CE2#')
gpio_list[52] = gpio(1, 1, 0, 2, 'PCC_CE1#')
gpio_list[51] = gpio(1, 1, 0, 1, 'PCC_IOW#')
gpio_list[50] = gpio(1, 1, 0, 2, 'PCC_IOR#')
gpio_list[49] = gpio(1, 1, 0, 2, 'PCC_WE#')
gpio_list[48] = gpio(1, 1, 0, 2, 'PCC_OE#')
# SSP port
gpio_list[26] = gpio(0, 0, 0, 1, 'SSP_RXD')
gpio_list[25] = gpio(0, 0, 0, 0, 'SSP_TXD')
gpio_list[24] = gpio(1, 0, 1, 2, 'SSP_SFRM')
gpio_list[23] = gpio(1, 0, 1, 2, 'SSP_SCLK')
gpio_list[27] = gpio(0, 0, 0, 0, 'SSP_EXTCLK')
# audio codec
gpio_list[32] = gpio(0, 0, 0, 0, 'AUD_SDIN1')
gpio_list[31] = gpio(1, 0, 0, 2, 'AC_SYNC')
gpio_list[30] = gpio(1, 0, 0, 2, 'AC_SDOUT')
gpio_list[29] = gpio(0, 0, 0, 1, 'AUD_SDIN0')
gpio_list[28] = gpio(0, 0, 0, 1, 'AC_BITCLK')
# serial ports
gpio_list[39] = gpio(1, 0, 0, 2, 'FF_TXD')
gpio_list[34] = gpio(0, 0, 0, 1, 'FF_RXD')
gpio_list[41] = gpio(1, 0, 0, 2, 'FF_RTS')
gpio_list[35] = gpio(0, 0, 0, 1, 'FF_CTS')
gpio_list[40] = gpio(1, 0, 0, 2, 'FF_DTR')
gpio_list[37] = gpio(0, 0, 0, 1, 'FF_DSR')
gpio_list[38] = gpio(0, 0, 0, 1, 'FF_RI')
gpio_list[36] = gpio(0, 0, 0, 1, 'FF_DCD')
gpio_list[43] = gpio(1, 0, 0, 2, 'BT_TXD')
gpio_list[42] = gpio(0, 0, 0, 1, 'BT_RXD')
gpio_list[45] = gpio(1, 0, 0, 2, 'BT_RTS')
gpio_list[44] = gpio(0, 0, 0, 1, 'BT_CTS')
gpio_list[47] = gpio(1, 0, 0, 1, 'IR_TXD')
gpio_list[46] = gpio(0, 0, 0, 2, 'IR_RXD')
# misc GPIO signals
gpio_list[14] = gpio(0, 0, 0, 0, 'MBREQ')
gpio_list[13] = gpio(0, 0, 0, 0, 'MBGNT')
gpio_list[12] = gpio(0, 0, 0, 0, 'GPIO_12/32K_CLK')
gpio_list[11] = gpio(0, 0, 0, 0, '3M6_CLK')
gpio_list[10] = gpio(1, 0, 1, 0, 'GPIO_10/RTC_CLK/debug LED')
gpio_list[9] = gpio(0, 0, 0, 0, 'MMC_CD#')
gpio_list[8] = gpio(0, 0, 0, 0, 'PCC_S1_CD#')
gpio_list[7] = gpio(0, 0, 0, 0, 'PCC_S0_CD#')
gpio_list[6] = gpio(1, 0, 0, 1, 'MMC_CLK')
gpio_list[5] = gpio(0, 0, 0, 0, 'IRQ_TOUCH#')
gpio_list[4] = gpio(0, 0, 0, 0, 'IRQ_ETH')
gpio_list[3] = gpio(0, 0, 0, 0, 'MQ_IRQ#')
gpio_list[2] = gpio(0, 0, 0, 0, 'BAT_DATA')
gpio_list[1] = gpio(0, 0, 0, 1, 'USER_RESET#')
gpio_list[0] = gpio(0, 0, 0, 1, 'USER_RESET#')
# LCD GPIOs
gpio_list[58] = gpio(1, 0, 0, 2, 'LDD0')
gpio_list[59] = gpio(1, 0, 0, 2, 'LDD1')
gpio_list[60] = gpio(1, 0, 0, 2, 'LDD2')
gpio_list[61] = gpio(1, 0, 0, 2, 'LDD3')
gpio_list[62] = gpio(1, 0, 0, 2, 'LDD4')
gpio_list[63] = gpio(1, 0, 0, 2, 'LDD5')
gpio_list[64] = gpio(1, 0, 0, 2, 'LDD6')
gpio_list[65] = gpio(1, 0, 0, 2, 'LDD7')
gpio_list[66] = gpio(1, 0, 0, 2, 'LDD8')
gpio_list[67] = gpio(1, 0, 0, 2, 'LDD9')
gpio_list[68] = gpio(1, 0, 0, 2, 'LDD10')
gpio_list[69] = gpio(1, 0, 0, 2, 'LDD11')
gpio_list[70] = gpio(1, 0, 0, 2, 'LDD12')
gpio_list[71] = gpio(1, 0, 0, 2, 'LDD13')
gpio_list[72] = gpio(1, 0, 0, 2, 'LDD14')
gpio_list[73] = gpio(1, 0, 0, 2, 'LDD15')
gpio_list[74] = gpio(1, 0, 0, 2, 'FCLK')
gpio_list[75] = gpio(1, 0, 0, 2, 'LCLK')
gpio_list[76] = gpio(1, 0, 0, 2, 'PCLK')
gpio_list[77] = gpio(1, 0, 0, 2, 'ACBIAS')
# calculate registers
pxa_regs = {
'gpdr0':0, 'gpdr1':0, 'gpdr2':0,
'gpsr0':0, 'gpsr1':0, 'gpsr2':0,
'gpcr0':0, 'gpcr1':0, 'gpcr2':0,
'gafr0_l':0, 'gafr0_u':0,
'gafr1_l':0, 'gafr1_u':0,
'gafr2_l':0, 'gafr2_u':0,
}
# U-boot define names
uboot_reg_names = {
'gpdr0':'CONFIG_SYS_GPDR0_VAL', 'gpdr1':'CONFIG_SYS_GPDR1_VAL', 'gpdr2':'CONFIG_SYS_GPDR2_VAL',
'gpsr0':'CONFIG_SYS_GPSR0_VAL', 'gpsr1':'CONFIG_SYS_GPSR1_VAL', 'gpsr2':'CONFIG_SYS_GPSR2_VAL',
'gpcr0':'CONFIG_SYS_GPCR0_VAL', 'gpcr1':'CONFIG_SYS_GPCR1_VAL', 'gpcr2':'CONFIG_SYS_GPCR2_VAL',
'gafr0_l':'CONFIG_SYS_GAFR0_L_VAL', 'gafr0_u':'CONFIG_SYS_GAFR0_U_VAL',
'gafr1_l':'CONFIG_SYS_GAFR1_L_VAL', 'gafr1_u':'CONFIG_SYS_GAFR1_U_VAL',
'gafr2_l':'CONFIG_SYS_GAFR2_L_VAL', 'gafr2_u':'CONFIG_SYS_GAFR2_U_VAL',
}
# bit mappings
bit_mappings = [
{ 'gpio':(0,32), 'shift':1, 'regs':{'dir':'gpdr0', 'set':'gpsr0', 'clr':'gpcr0'} },
{ 'gpio':(32,64), 'shift':1, 'regs':{'dir':'gpdr1', 'set':'gpsr1', 'clr':'gpcr1'} },
{ 'gpio':(64,85), 'shift':1, 'regs':{'dir':'gpdr2', 'set':'gpsr2', 'clr':'gpcr2'} },
{ 'gpio':(0,16), 'shift':2, 'regs':{'alt':'gafr0_l'} },
{ 'gpio':(16,32), 'shift':2, 'regs':{'alt':'gafr0_u'} },
{ 'gpio':(32,48), 'shift':2, 'regs':{'alt':'gafr1_l'} },
{ 'gpio':(48,64), 'shift':2, 'regs':{'alt':'gafr1_u'} },
{ 'gpio':(64,80), 'shift':2, 'regs':{'alt':'gafr2_l'} },
{ 'gpio':(80,85), 'shift':2, 'regs':{'alt':'gafr2_u'} },
]
def stuff_bits(bit_mapping, gpio_list):
gpios = range( bit_mapping['gpio'][0], bit_mapping['gpio'][1])
for gpio in gpios:
for reg in bit_mapping['regs'].keys():
value = eval( 'gpio_list[gpio].%s' % (reg) )
if ( value ):
# we have a high bit
bit_shift = (gpio - bit_mapping['gpio'][0]) * bit_mapping['shift']
bit = value << (bit_shift)
pxa_regs[bit_mapping['regs'][reg]] |= bit
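# Worked example using the tables above: GPIO 33 is defined with dir=1,
# set=1, alt=2. It falls in the (32, 64) mapping with shift=1, so it
# contributes 1 << (33 - 32) = 0x2 to gpdr1 and gpsr1, and in the (32, 48)
# mapping with shift=2 it contributes 2 << ((33 - 32) * 2) = 0x8 to gafr1_l.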
for i in bit_mappings:
stuff_bits(i, gpio_list)
# now print out all regs
registers = pxa_regs.keys()
registers.sort()
for reg in registers:
print '%s: 0x%x' % (reg, pxa_regs[reg])
# print define to past right into U-Boot source code
print
print
for reg in registers:
print '#define %s 0x%x' % (uboot_reg_names[reg], pxa_regs[reg])
# print all GPIOS
print
print
for i in range(len(gpio_list)):
gpio_i = gpio_list[i]
alt_func_desc = pxa255_alt_func[i][gpio_i.alt]
print 'GPIO: %i, dir=%i, set=%i, clr=%i, alt=%s, desc=%s' % (i, gpio_i.dir, gpio_i.set, gpio_i.clr, alt_func_desc, gpio_i.desc)
| gpl-2.0 |
jgronefe/AliPhysics | PWGJE/EMCALJetTasks/Tracks/analysis/base/MergeException.py | 41 | 1396 |
#**************************************************************************
#* Copyright(c) 1998-2014, ALICE Experiment at CERN, All rights reserved. *
#* *
#* Author: The ALICE Off-line Project. *
#* Contributors are mentioned in the code where appropriate. *
#* *
#* Permission to use, copy, modify and distribute this software and its *
#* documentation strictly for non-commercial purposes is hereby granted *
#* without fee, provided that the above copyright notice appears in all *
#* copies and that both the copyright notice and this permission notice *
#* appear in the supporting documentation. The authors make no claims *
#* about the suitability of this software for any purpose. It is *
#* provided "as is" without express or implied warranty. *
#**************************************************************************
class MergeException(Exception):
"""
Error handling for the merge processes
"""
def __init__(self, message):
"""
Constructor
"""
self.__message = message
def __str__(self):
"""
Make exception a string object
"""
return self.__message
| bsd-3-clause |
mxamin/youtube-dl | youtube_dl/extractor/digiteka.py | 29 | 3507 |
# coding: utf-8
from __future__ import unicode_literals
import re
from .common import InfoExtractor
from ..utils import int_or_none
class DigitekaIE(InfoExtractor):
_VALID_URL = r'''(?x)
https?://(?:www\.)?(?:digiteka\.net|ultimedia\.com)/
(?:
deliver/
(?P<embed_type>
generic|
musique
)
(?:/[^/]+)*/
(?:
src|
article
)|
default/index/video
(?P<site_type>
generic|
music
)
/id
)/(?P<id>[\d+a-z]+)'''
_TESTS = [{
# news
'url': 'https://www.ultimedia.com/default/index/videogeneric/id/s8uk0r',
'md5': '276a0e49de58c7e85d32b057837952a2',
'info_dict': {
'id': 's8uk0r',
'ext': 'mp4',
'title': 'Loi sur la fin de vie: le texte prévoit un renforcement des directives anticipées',
'thumbnail': 're:^https?://.*\.jpg',
'duration': 74,
'upload_date': '20150317',
'timestamp': 1426604939,
'uploader_id': '3fszv',
},
}, {
# music
'url': 'https://www.ultimedia.com/default/index/videomusic/id/xvpfp8',
'md5': '2ea3513813cf230605c7e2ffe7eca61c',
'info_dict': {
'id': 'xvpfp8',
'ext': 'mp4',
'title': 'Two - C\'est La Vie (clip)',
'thumbnail': 're:^https?://.*\.jpg',
'duration': 233,
'upload_date': '20150224',
'timestamp': 1424760500,
'uploader_id': '3rfzk',
},
}, {
'url': 'https://www.digiteka.net/deliver/generic/iframe/mdtk/01637594/src/lqm3kl/zone/1/showtitle/1/autoplay/yes',
'only_matching': True,
}]
@staticmethod
def _extract_url(webpage):
mobj = re.search(
r'<(?:iframe|script)[^>]+src=["\'](?P<url>(?:https?:)?//(?:www\.)?ultimedia\.com/deliver/(?:generic|musique)(?:/[^/]+)*/(?:src|article)/[\d+a-z]+)',
webpage)
if mobj:
return mobj.group('url')
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
video_id = mobj.group('id')
video_type = mobj.group('embed_type') or mobj.group('site_type')
if video_type == 'music':
video_type = 'musique'
deliver_info = self._download_json(
'http://www.ultimedia.com/deliver/video?video=%s&topic=%s' % (video_id, video_type),
video_id)
yt_id = deliver_info.get('yt_id')
if yt_id:
return self.url_result(yt_id, 'Youtube')
jwconf = deliver_info['jwconf']
formats = []
for source in jwconf['playlist'][0]['sources']:
formats.append({
'url': source['file'],
'format_id': source.get('label'),
})
self._sort_formats(formats)
title = deliver_info['title']
thumbnail = jwconf.get('image')
duration = int_or_none(deliver_info.get('duration'))
timestamp = int_or_none(deliver_info.get('release_time'))
uploader_id = deliver_info.get('owner_id')
return {
'id': video_id,
'title': title,
'thumbnail': thumbnail,
'duration': duration,
'timestamp': timestamp,
'uploader_id': uploader_id,
'formats': formats,
}
| unlicense |
a-parhom/edx-platform | openedx/features/journals/tests/test_marketing_views.py | 4 | 5336 |
""" Tests for journals marketing views. """
import uuid
import mock
from django.conf import settings
from django.core.urlresolvers import reverse
from openedx.core.djangolib.testing.utils import CacheIsolationTestCase
from openedx.core.djangoapps.site_configuration.tests.mixins import SiteMixin
from openedx.features.journals.tests.utils import (get_mocked_journals,
get_mocked_journal_bundles,
get_mocked_pricing_data,
override_switch)
from openedx.features.journals.api import JOURNAL_INTEGRATION
from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase
@mock.patch.dict(settings.FEATURES, {"JOURNALS_ENABLED": True})
class JournalBundleViewTest(CacheIsolationTestCase, SiteMixin):
""" Tests for journals marketing views. """
@override_switch(JOURNAL_INTEGRATION, True)
@mock.patch('openedx.features.journals.api.DiscoveryApiClient.get_journal_bundles')
def test_journal_bundle_with_empty_data(self, mock_bundles):
"""
Test the marketing page without journal bundle data.
"""
mock_bundles.return_value = []
response = self.client.get(
path=reverse(
"openedx.journals.bundle_about",
kwargs={'bundle_uuid': str(uuid.uuid4())}
)
)
self.assertEqual(response.status_code, 404)
@override_switch(JOURNAL_INTEGRATION, True)
@mock.patch('openedx.features.journals.views.marketing.get_pricing_data')
@mock.patch('openedx.features.journals.api.DiscoveryApiClient.get_journal_bundles')
def test_journal_bundle_with_valid_data(self, mock_bundles, mock_pricing_data):
"""
Test the marketing page with journal bundle data.
"""
journal_bundles = get_mocked_journal_bundles()
journal_bundle = journal_bundles[0]
mock_pricing_data.return_value = get_mocked_pricing_data()
mock_bundles.return_value = journal_bundles
response = self.client.get(
path=reverse(
"openedx.journals.bundle_about",
kwargs={'bundle_uuid': str(uuid.uuid4())}
)
)
self.assertEqual(response.status_code, 200)
self.assertContains(response, "Purchase the Bundle")
self.assertContains(response, journal_bundle["title"])
self.assertContains(response, journal_bundle["courses"][0]["short_description"])
self.assertContains(response, journal_bundle["courses"][0]["course_runs"][0]["title"])
@mock.patch.dict(settings.FEATURES, {"JOURNALS_ENABLED": True})
class JournalIndexViewTest(SiteMixin, ModuleStoreTestCase):
"""
Tests for Journals Listing in Marketing Pages.
"""
shard = 1
def setUp(self):
super(JournalIndexViewTest, self).setUp()
self.journal_bundles = get_mocked_journal_bundles()
self.journal_bundle = self.journal_bundles[0]
self.journals = get_mocked_journals()
def assert_journal_data(self, response):
"""
Checks the journal data in given response
"""
self.assertEqual(response.status_code, 200)
self.assertContains(response, "Bundle")
self.assertContains(response, self.journal_bundle["uuid"])
self.assertContains(response, self.journal_bundle["title"])
self.assertContains(response, self.journal_bundle["organization"])
for journal in self.journals:
self.assertContains(response, "Journal")
self.assertContains(response, journal["title"])
self.assertContains(response, journal["organization"])
@override_switch(JOURNAL_INTEGRATION, True)
@mock.patch('student.views.management.get_journals_context')
def test_journals_index_page(self, mock_journals_context):
"""
Test the journal data on index page.
"""
mock_journals_context.return_value = {'journal_bundles': self.journal_bundles, 'journals': self.journals}
response = self.client.get(reverse('root'))
self.assert_journal_data(response)
@override_switch(JOURNAL_INTEGRATION, False)
def test_journals_index_page_disabled(self):
"""
Test the index page can load with journals disabled
"""
response = self.client.get(reverse('root'))
self.assertEqual(response.status_code, 200)
@override_switch(JOURNAL_INTEGRATION, True)
@mock.patch('openedx.features.journals.api.DiscoveryApiClient.get_journals')
@mock.patch('openedx.features.journals.api.DiscoveryApiClient.get_journal_bundles')
def test_journals_courses_page(self, mock_journal_bundles, mock_journals):
"""
Test the journal data on courses page.
"""
mock_journal_bundles.return_value = self.journal_bundles
mock_journals.return_value = self.journals
response = self.client.get(reverse('courses'))
self.assert_journal_data(response)
@override_switch(JOURNAL_INTEGRATION, False)
def test_journals_courses_page_disabled(self):
"""
Test the courses pages can load with journals disabled
"""
response = self.client.get(reverse('courses'))
self.assertEqual(response.status_code, 200)
| agpl-3.0 |
aprovy/test-ng-pp | scripts/testngppgen/DataProviderParser.py | 2 | 4022 |
#!/usr/bin/python
from Message import *
from Phase1Result import *
from DataProvider import DataProvider
import re
##########################################################
class DataProviderParser:
#######################################################
def __init__(self, provider, file, line_number):
self.name = provider[0]
self.file = file
self.line = line_number
self.end = None
self.done = None
self.numberOfUnclosedParens = 1
self.chars = ""
self.params = []
self.data_provider = None
self.number_of_groups = 0
self.parse_line(Unknown(line_number, provider[2]))
#######################################################
def should_parse_sub_scopes(self):
return False
#######################################################
def verify_scope(self, scope):
return False
#######################################################
def get_container(self):
return None
#######################################################
def get_elem_parser(self, scope, file, line):
return None
#######################################################
def __handle_space(self, line, c):
if c.isspace():
return True
return None
#######################################################
def __parse_param(self, param):
matched = re.match(r'\s*\(\s*(?P<param>.+)\s*\)\s*$', param)
if not matched:
return
self.params.append(matched.group("param"))
#######################################################
def __parse_data_groups(self):
matched = re.match(r'(?P<groups>.+)\)\s*;\s*$', self.chars)
if not matched:
raw_fatal(self.file, self.line, "grammar error in data provider definition 1")
groups = matched.group("groups")
self.number_of_groups = len(re.findall(r',\s*DATA_GROUP\s*\(', groups))
raw_params = re.split(r'\s*,\s*DATA_GROUP\s*', groups)
for param in raw_params:
self.__parse_param(param)
if len(self.params) != self.number_of_groups:
raw_fatal(self.file, self.line, "grammar error in data provider definition 2")
#######################################################
def __handle_end(self, line, c):
if not self.end:
return None
if c == ';':
self.done = True
self.__parse_data_groups()
return True
fatal(self.file, line, "unexpected char '" + c + "' in data provider definition")
#######################################################
def __handle_done(self, line, c):
if not self.done:
return None
fatal(self.file, line, "unexpected char '" + c + "' in data provider definition")
#######################################################
def __handle_others(self, line, c):
if c == '(':
self.numberOfUnclosedParens += 1
elif c == ')':
self.numberOfUnclosedParens -= 1
if self.numberOfUnclosedParens == 0:
self.end = True
#######################################################
def handle_char(self, line, c):
self.chars += c
self.__handle_space(line, c) or \
self.__handle_end(line, c) or \
self.__handle_done(line, c) or \
self.__handle_others(line, c)
#######################################################
def is_done(self):
if self.done: return DataProvider(self.name, self.params)
return None
#######################################################
def parse_line(self, line):
if self.done:
fatal(self.file, line, "internal error while parsing a data provider")
for c in line.get_content():
self.handle_char(line, c)
return self.is_done()
#######################################################
def handle_tag(self, tag):
warning(self.file, tag, "annotation is not allowed in data provider definition")
##########################################################
| lgpl-3.0 |
matthewshoup/airflow | airflow/operators/sensors.py | 19 | 15625 |
from __future__ import print_function
from future import standard_library
standard_library.install_aliases()
from builtins import str
from datetime import datetime
import logging
from urllib.parse import urlparse
from time import sleep
from airflow import hooks, settings
from airflow.models import BaseOperator
from airflow.models import Connection as DB
from airflow.models import State
from airflow.models import TaskInstance
from airflow.utils import (
apply_defaults, AirflowException, AirflowSensorTimeout)
class BaseSensorOperator(BaseOperator):
'''
Sensor operators are derived from this class and inherit these attributes.
Sensor operators keep executing at a time interval and succeed when
a criteria is met and fail if and when they time out.
:param poke_interval: Time in seconds that the job should wait
between each try
:type poke_interval: int
:param timeout: Time, in seconds, before the task times out and fails.
:type timeout: int
'''
ui_color = '#e6f1f2'
@apply_defaults
def __init__(
self,
poke_interval=60,
timeout=60*60*24*7,
*args, **kwargs):
super(BaseSensorOperator, self).__init__(*args, **kwargs)
self.poke_interval = poke_interval
self.timeout = timeout
def poke(self, context):
'''
Function that the sensors defined while deriving this class should
override.
'''
raise AirflowException('Override me.')
def execute(self, context):
started_at = datetime.now()
while not self.poke(context):
sleep(self.poke_interval)
if (datetime.now() - started_at).seconds > self.timeout:
raise AirflowSensorTimeout('Snap. Time is OUT.')
logging.info("Success criteria met. Exiting.")
class SqlSensor(BaseSensorOperator):
"""
Runs a sql statement repeatedly until a criteria is met. It will keep
trying while the sql returns no rows, or while the first cell returned is
in (0, '0', '').
:param conn_id: The connection to run the sensor against
:type conn_id: string
:param sql: The sql to run. To pass, it needs to return at least one cell
that contains a non-zero / non-empty string value.
"""
template_fields = ('sql',)
template_ext = ('.hql', '.sql',)
@apply_defaults
def __init__(self, conn_id, sql, *args, **kwargs):
super(SqlSensor, self).__init__(*args, **kwargs)
self.sql = sql
self.conn_id = conn_id
session = settings.Session()
db = session.query(DB).filter(DB.conn_id == conn_id).first()
if not db:
raise AirflowException("conn_id doesn't exist in the repository")
self.hook = db.get_hook()
session.commit()
session.close()
def poke(self, context):
logging.info('Poking: ' + self.sql)
records = self.hook.get_records(self.sql)
if not records:
return False
else:
if str(records[0][0]) in ('0', '',):
return False
else:
return True
print(records[0][0])
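# Minimal usage sketch (task id, connection id and sql are invented):
#   SqlSensor(task_id='wait_for_rows', conn_id='my_db',
#             sql="SELECT COUNT(1) FROM my_table", dag=dag)
# keeps poking until the query returns a first cell that is not 0, '0' or ''.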
class ExternalTaskSensor(BaseSensorOperator):
"""
Waits for a task to complete in a different DAG
:param external_dag_id: The dag_id that contains the task you want to
wait for
:type external_dag_id: string
:param external_task_id: The task_id that contains the task you want to
wait for
:type external_task_id: string
:param allowed_states: list of allowed states, default is ``['success']``
:type allowed_states: list
:param execution_delta: time difference with the previous execution to
look at, the default is the same execution_date as the current task.
For yesterday, use [positive!] datetime.timedelta(days=1)
:type execution_delta: datetime.timedelta
"""
@apply_defaults
def __init__(
self,
external_dag_id,
external_task_id,
allowed_states=None,
execution_delta=None,
*args, **kwargs):
super(ExternalTaskSensor, self).__init__(*args, **kwargs)
self.allowed_states = allowed_states or [State.SUCCESS]
self.execution_delta = execution_delta
self.external_dag_id = external_dag_id
self.external_task_id = external_task_id
def poke(self, context):
logging.info(
'Poking for '
'{self.external_dag_id}.'
'{self.external_task_id} on '
'{context[execution_date]} ... '.format(**locals()))
TI = TaskInstance
if self.execution_delta:
dttm = context['execution_date'] - self.execution_delta
else:
dttm = context['execution_date']
session = settings.Session()
count = session.query(TI).filter(
TI.dag_id == self.external_dag_id,
TI.task_id == self.external_task_id,
TI.state.in_(self.allowed_states),
TI.execution_date == dttm,
).count()
session.commit()
session.close()
return count
class HivePartitionSensor(BaseSensorOperator):
"""
Waits for a partition to show up in Hive
:param table: The name of the table to wait for, supports the dot
notation (my_database.my_table)
:type table: string
:param partition: The partition clause to wait for. This is passed as
is to the Metastore Thrift client "get_partitions_by_filter" method,
and apparently supports SQL like notation as in `ds='2015-01-01'
AND type='value'` and > < signs as in "ds>=2015-01-01"
:type partition: string
"""
template_fields = ('schema', 'table', 'partition',)
@apply_defaults
def __init__(
self,
table, partition="ds='{{ ds }}'",
metastore_conn_id='metastore_default',
schema='default',
poke_interval=60*3,
*args, **kwargs):
super(HivePartitionSensor, self).__init__(
poke_interval=poke_interval, *args, **kwargs)
if not partition:
partition = "ds='{{ ds }}'"
self.metastore_conn_id = metastore_conn_id
self.table = table
self.partition = partition
self.schema = schema
def poke(self, context):
if '.' in self.table:
self.schema, self.table = self.table.split('.')
logging.info(
'Poking for table {self.schema}.{self.table}, '
'partition {self.partition}'.format(**locals()))
if not hasattr(self, 'hook'):
self.hook = hooks.HiveMetastoreHook(
metastore_conn_id=self.metastore_conn_id)
return self.hook.check_for_partition(
self.schema, self.table, self.partition)
class HdfsSensor(BaseSensorOperator):
"""
Waits for a file or folder to land in HDFS
"""
template_fields = ('filepath',)
@apply_defaults
def __init__(
self,
filepath,
hdfs_conn_id='hdfs_default',
*args, **kwargs):
super(HdfsSensor, self).__init__(*args, **kwargs)
self.filepath = filepath
self.hdfs_conn_id = hdfs_conn_id
def poke(self, context):
sb = hooks.HDFSHook(self.hdfs_conn_id).get_conn()
logging.getLogger("snakebite").setLevel(logging.WARNING)
logging.info(
'Poking for file {self.filepath} '.format(**locals()))
try:
files = [f for f in sb.ls([self.filepath])]
except:
return False
return True
class S3KeySensor(BaseSensorOperator):
"""
Waits for a key (a file-like instance on S3) to be present in an S3 bucket.
S3 being a key/value store, it does not support folders. The path is just
a key to a resource.
:param bucket_key: The key being waited on. Supports full s3:// style url
or relative path from root level.
:type bucket_key: str
:param bucket_name: Name of the S3 bucket
:type bucket_name: str
:param wildcard_match: whether the bucket_key should be interpreted as a
Unix wildcard pattern
:type wildcard_match: bool
"""
template_fields = ('bucket_key', 'bucket_name')
@apply_defaults
def __init__(
self, bucket_key,
bucket_name=None,
wildcard_match=False,
s3_conn_id='s3_default',
*args, **kwargs):
super(S3KeySensor, self).__init__(*args, **kwargs)
session = settings.Session()
db = session.query(DB).filter(DB.conn_id == s3_conn_id).first()
if not db:
raise AirflowException("conn_id doesn't exist in the repository")
# Parse
if bucket_name is None:
parsed_url = urlparse(bucket_key)
if parsed_url.netloc == '':
raise AirflowException('Please provide a bucket_name')
else:
bucket_name = parsed_url.netloc
if parsed_url.path[0] == '/':
bucket_key = parsed_url.path[1:]
else:
bucket_key = parsed_url.path
self.bucket_name = bucket_name
self.bucket_key = bucket_key
self.wildcard_match = wildcard_match
self.s3_conn_id = s3_conn_id
session.commit()
session.close()
def poke(self, context):
hook = hooks.S3Hook(s3_conn_id=self.s3_conn_id)
full_url = "s3://" + self.bucket_name + "/" + self.bucket_key
logging.info('Poking for key : {full_url}'.format(**locals()))
if self.wildcard_match:
return hook.check_for_wildcard_key(self.bucket_key,
self.bucket_name)
else:
return hook.check_for_key(self.bucket_key, self.bucket_name)
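# For example (bucket and key invented): constructing the sensor with only
#   bucket_key='s3://my-bucket/data/file.csv'
# makes __init__ above split it via urlparse into bucket_name='my-bucket'
# and bucket_key='data/file.csv' before poking for the key.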
class S3PrefixSensor(BaseSensorOperator):
"""
Waits for a prefix to exist. A prefix is the first part of a key,
thus enabling checking of constructs similar to glob airfl* or
SQL LIKE 'airfl%'. A delimiter can be specified to
indicate the hierarchy of keys, meaning that the match will stop at that
delimiter. Current code accepts sane delimiters, i.e. characters that
are NOT special characters in the Python regex engine.
:param bucket_name: Name of the S3 bucket
:type bucket_name: str
:param prefix: The prefix being waited on. Relative path from bucket root level.
:type prefix: str
:param delimiter: The delimiter intended to show hierarchy.
Defaults to '/'.
:type delimiter: str
"""
template_fields = ('prefix', 'bucket_name')
@apply_defaults
def __init__(
self, bucket_name,
prefix, delimiter='/',
s3_conn_id='s3_default',
*args, **kwargs):
super(S3PrefixSensor, self).__init__(*args, **kwargs)
session = settings.Session()
db = session.query(DB).filter(DB.conn_id == s3_conn_id).first()
if not db:
raise AirflowException("conn_id doesn't exist in the repository")
# Parse
self.bucket_name = bucket_name
self.prefix = prefix
self.delimiter = delimiter
self.full_url = "s3://" + bucket_name + '/' + prefix
self.s3_conn_id = s3_conn_id
session.commit()
session.close()
def poke(self, context):
logging.info('Poking for prefix : {self.prefix}\n'
'in bucket s3://{self.bucket_name}'.format(**locals()))
hook = hooks.S3Hook(s3_conn_id=self.s3_conn_id)
return hook.check_for_prefix(
prefix=self.prefix,
delimiter=self.delimiter,
bucket_name=self.bucket_name)
class TimeSensor(BaseSensorOperator):
"""
Waits until the specified time of the day.
:param target_time: time after which the job succeeds
:type target_time: datetime.time
"""
template_fields = tuple()
@apply_defaults
def __init__(self, target_time, *args, **kwargs):
super(TimeSensor, self).__init__(*args, **kwargs)
self.target_time = target_time
def poke(self, context):
logging.info(
'Checking if the time ({0}) has come'.format(self.target_time))
return datetime.now().time() > self.target_time
class TimeDeltaSensor(BaseSensorOperator):
"""
Waits for a timedelta after the task's execution_date + schedule_interval.
In Airflow, the daily task stamped with ``execution_date``
2016-01-01 can only start running on 2016-01-02. The timedelta here
represents the time after the execution period has closed.
:param delta: time length to wait after execution_date before succeeding
:type delta: datetime.timedelta
"""
template_fields = tuple()
@apply_defaults
def __init__(self, delta, *args, **kwargs):
super(TimeDeltaSensor, self).__init__(*args, **kwargs)
self.delta = delta
def poke(self, context):
target_dttm = (
context['execution_date'] +
context['dag'].schedule_interval +
self.delta)
logging.info('Checking if the time ({0}) has come'.format(target_dttm))
return datetime.now() > target_dttm
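# Worked example (delta value invented): for a daily DAG run with
# execution_date 2016-01-01, schedule_interval of one day and
# delta=timedelta(hours=2), target_dttm is 2016-01-02 02:00, so the sensor
# only succeeds once the current time passes that point.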
class HttpSensor(BaseSensorOperator):
"""
Executes an HTTP GET request and returns False on failure:
404 Not Found, or the response_check function returned False.
:param http_conn_id: The connection to run the sensor against
:type http_conn_id: string
:param endpoint: The relative part of the full url
:type endpoint: string
:param params: The parameters to be added to the GET url
:type params: a dictionary of string key/value pairs
:param headers: The HTTP headers to be added to the GET request
:type headers: a dictionary of string key/value pairs
:param response_check: A check against the 'requests' response object.
Returns True for 'pass' and False otherwise.
:type response_check: A lambda or defined function.
:param extra_options: Extra options for the 'requests' library, see the
'requests' documentation (options to modify timeout, ssl, etc.)
:type extra_options: A dictionary of options, where key is string and value
depends on the option that's being modified.
"""
template_fields = ('endpoint',)
@apply_defaults
def __init__(self,
endpoint,
http_conn_id='http_default',
params=None,
headers=None,
response_check=None,
extra_options=None, *args, **kwargs):
super(HttpSensor, self).__init__(*args, **kwargs)
self.endpoint = endpoint
self.http_conn_id = http_conn_id
self.params = params or {}
self.headers = headers or {}
self.extra_options = extra_options or {}
self.response_check = response_check
self.hook = hooks.HttpHook(method='GET', http_conn_id=http_conn_id)
def poke(self, context):
logging.info('Poking: ' + self.endpoint)
try:
response = self.hook.run(self.endpoint,
data=self.params,
headers=self.headers,
extra_options=self.extra_options)
if self.response_check:
# run content check on response
return self.response_check(response)
except AirflowException as ae:
if ae.message.startswith("404"):
return False
return True
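# Minimal usage sketch (task id, connection and endpoint invented):
#   HttpSensor(task_id='wait_for_api', http_conn_id='http_default',
#              endpoint='health',
#              response_check=lambda response: 'ok' in response.text,
#              dag=dag)
# keeps poking while the endpoint returns 404 or response_check is falsy.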
| apache-2.0 |
golismero/golismero-devel | thirdparty_libs/django/db/backends/oracle/compiler.py | 112 | 2995 |
from django.db.models.sql import compiler
# The izip_longest was renamed to zip_longest in py3
try:
from itertools import zip_longest
except ImportError:
from itertools import izip_longest as zip_longest
class SQLCompiler(compiler.SQLCompiler):
def resolve_columns(self, row, fields=()):
# If this query has limit/offset information, then we expect the
# first column to be an extra "_RN" column that we need to throw
# away.
if self.query.high_mark is not None or self.query.low_mark:
rn_offset = 1
else:
rn_offset = 0
index_start = rn_offset + len(self.query.extra_select)
values = [self.query.convert_values(v, None, connection=self.connection)
for v in row[rn_offset:index_start]]
for value, field in zip_longest(row[index_start:], fields):
values.append(self.query.convert_values(value, field, connection=self.connection))
return tuple(values)
def as_sql(self, with_limits=True, with_col_aliases=False):
"""
Creates the SQL for this query. Returns the SQL string and list
of parameters. This is overridden from the original Query class
to handle the additional SQL Oracle requires to emulate LIMIT
and OFFSET.
If 'with_limits' is False, any limit/offset information is not
included in the query.
"""
if with_limits and self.query.low_mark == self.query.high_mark:
return '', ()
# The `do_offset` flag indicates whether we need to construct
# the SQL needed to use limit/offset with Oracle.
do_offset = with_limits and (self.query.high_mark is not None
or self.query.low_mark)
if not do_offset:
sql, params = super(SQLCompiler, self).as_sql(with_limits=False,
with_col_aliases=with_col_aliases)
else:
sql, params = super(SQLCompiler, self).as_sql(with_limits=False,
with_col_aliases=True)
# Wrap the base query in an outer SELECT * with boundaries on
# the "_RN" column. This is the canonical way to emulate LIMIT
# and OFFSET on Oracle.
high_where = ''
if self.query.high_mark is not None:
high_where = 'WHERE ROWNUM <= %d' % (self.query.high_mark,)
sql = 'SELECT * FROM (SELECT ROWNUM AS "_RN", "_SUB".* FROM (%s) "_SUB" %s) WHERE "_RN" > %d' % (sql, high_where, self.query.low_mark)
return sql, params
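# For illustration, with low_mark=20 and high_mark=30 (i.e. OFFSET 20
# LIMIT 10), the base query gets wrapped roughly as:
#   SELECT * FROM (
#       SELECT ROWNUM AS "_RN", "_SUB".* FROM (<base query>) "_SUB"
#       WHERE ROWNUM <= 30
#   ) WHERE "_RN" > 20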
class SQLInsertCompiler(compiler.SQLInsertCompiler, SQLCompiler):
pass
class SQLDeleteCompiler(compiler.SQLDeleteCompiler, SQLCompiler):
pass
class SQLUpdateCompiler(compiler.SQLUpdateCompiler, SQLCompiler):
pass
class SQLAggregateCompiler(compiler.SQLAggregateCompiler, SQLCompiler):
pass
class SQLDateCompiler(compiler.SQLDateCompiler, SQLCompiler):
pass
| gpl-2.0 |
denisff/python-for-android | python3-alpha/python3-src/Lib/test/test_memoryview.py | 56 | 13054 |
"""Unit tests for the memoryview
XXX We need more tests! Some tests are in test_bytes
"""
import unittest
import test.support
import sys
import gc
import weakref
import array
import io
class AbstractMemoryTests:
source_bytes = b"abcdef"
@property
def _source(self):
return self.source_bytes
@property
def _types(self):
return filter(None, [self.ro_type, self.rw_type])
def check_getitem_with_type(self, tp):
item = self.getitem_type
b = tp(self._source)
oldrefcount = sys.getrefcount(b)
m = self._view(b)
self.assertEqual(m[0], item(b"a"))
self.assertIsInstance(m[0], bytes)
self.assertEqual(m[5], item(b"f"))
self.assertEqual(m[-1], item(b"f"))
self.assertEqual(m[-6], item(b"a"))
# Bounds checking
self.assertRaises(IndexError, lambda: m[6])
self.assertRaises(IndexError, lambda: m[-7])
self.assertRaises(IndexError, lambda: m[sys.maxsize])
self.assertRaises(IndexError, lambda: m[-sys.maxsize])
# Type checking
self.assertRaises(TypeError, lambda: m[None])
self.assertRaises(TypeError, lambda: m[0.0])
self.assertRaises(TypeError, lambda: m["a"])
m = None
self.assertEqual(sys.getrefcount(b), oldrefcount)
def test_getitem(self):
for tp in self._types:
self.check_getitem_with_type(tp)
def test_iter(self):
for tp in self._types:
b = tp(self._source)
m = self._view(b)
self.assertEqual(list(m), [m[i] for i in range(len(m))])
def test_setitem_readonly(self):
if not self.ro_type:
return
b = self.ro_type(self._source)
oldrefcount = sys.getrefcount(b)
m = self._view(b)
def setitem(value):
m[0] = value
self.assertRaises(TypeError, setitem, b"a")
self.assertRaises(TypeError, setitem, 65)
self.assertRaises(TypeError, setitem, memoryview(b"a"))
m = None
self.assertEqual(sys.getrefcount(b), oldrefcount)
def test_setitem_writable(self):
if not self.rw_type:
return
tp = self.rw_type
b = self.rw_type(self._source)
oldrefcount = sys.getrefcount(b)
m = self._view(b)
m[0] = tp(b"0")
self._check_contents(tp, b, b"0bcdef")
m[1:3] = tp(b"12")
self._check_contents(tp, b, b"012def")
m[1:1] = tp(b"")
self._check_contents(tp, b, b"012def")
m[:] = tp(b"abcdef")
self._check_contents(tp, b, b"abcdef")
# Overlapping copies of a view into itself
m[0:3] = m[2:5]
self._check_contents(tp, b, b"cdedef")
m[:] = tp(b"abcdef")
m[2:5] = m[0:3]
self._check_contents(tp, b, b"ababcf")
def setitem(key, value):
m[key] = tp(value)
# Bounds checking
self.assertRaises(IndexError, setitem, 6, b"a")
self.assertRaises(IndexError, setitem, -7, b"a")
self.assertRaises(IndexError, setitem, sys.maxsize, b"a")
self.assertRaises(IndexError, setitem, -sys.maxsize, b"a")
# Wrong index/slice types
self.assertRaises(TypeError, setitem, 0.0, b"a")
self.assertRaises(TypeError, setitem, (0,), b"a")
self.assertRaises(TypeError, setitem, "a", b"a")
# Trying to resize the memory object
self.assertRaises(ValueError, setitem, 0, b"")
self.assertRaises(ValueError, setitem, 0, b"ab")
self.assertRaises(ValueError, setitem, slice(1,1), b"a")
self.assertRaises(ValueError, setitem, slice(0,2), b"a")
m = None
self.assertEqual(sys.getrefcount(b), oldrefcount)
def test_delitem(self):
for tp in self._types:
b = tp(self._source)
m = self._view(b)
with self.assertRaises(TypeError):
del m[1]
with self.assertRaises(TypeError):
del m[1:4]
def test_tobytes(self):
for tp in self._types:
m = self._view(tp(self._source))
b = m.tobytes()
# This calls self.getitem_type() on each separate byte of b"abcdef"
expected = b"".join(
self.getitem_type(bytes([c])) for c in b"abcdef")
self.assertEqual(b, expected)
self.assertIsInstance(b, bytes)
def test_tolist(self):
for tp in self._types:
m = self._view(tp(self._source))
l = m.tolist()
self.assertEqual(l, list(b"abcdef"))
def test_compare(self):
# memoryviews can compare for equality with other objects
# having the buffer interface.
for tp in self._types:
m = self._view(tp(self._source))
for tp_comp in self._types:
self.assertTrue(m == tp_comp(b"abcdef"))
self.assertFalse(m != tp_comp(b"abcdef"))
self.assertFalse(m == tp_comp(b"abcde"))
self.assertTrue(m != tp_comp(b"abcde"))
self.assertFalse(m == tp_comp(b"abcde1"))
self.assertTrue(m != tp_comp(b"abcde1"))
self.assertTrue(m == m)
self.assertTrue(m == m[:])
self.assertTrue(m[0:6] == m[:])
self.assertFalse(m[0:5] == m)
# Comparison with objects which don't support the buffer API
self.assertFalse(m == "abcdef")
self.assertTrue(m != "abcdef")
self.assertFalse("abcdef" == m)
self.assertTrue("abcdef" != m)
# Unordered comparisons
for c in (m, b"abcdef"):
self.assertRaises(TypeError, lambda: m < c)
self.assertRaises(TypeError, lambda: c <= m)
self.assertRaises(TypeError, lambda: m >= c)
self.assertRaises(TypeError, lambda: c > m)
def check_attributes_with_type(self, tp):
m = self._view(tp(self._source))
self.assertEqual(m.format, self.format)
self.assertEqual(m.itemsize, self.itemsize)
self.assertEqual(m.ndim, 1)
self.assertEqual(m.shape, (6,))
self.assertEqual(len(m), 6)
self.assertEqual(m.strides, (self.itemsize,))
self.assertEqual(m.suboffsets, None)
return m
def test_attributes_readonly(self):
if not self.ro_type:
return
m = self.check_attributes_with_type(self.ro_type)
self.assertEqual(m.readonly, True)
def test_attributes_writable(self):
if not self.rw_type:
return
m = self.check_attributes_with_type(self.rw_type)
self.assertEqual(m.readonly, False)
def test_getbuffer(self):
# Test PyObject_GetBuffer() on a memoryview object.
for tp in self._types:
b = tp(self._source)
oldrefcount = sys.getrefcount(b)
m = self._view(b)
oldviewrefcount = sys.getrefcount(m)
s = str(m, "utf-8")
self._check_contents(tp, b, s.encode("utf-8"))
self.assertEqual(sys.getrefcount(m), oldviewrefcount)
m = None
self.assertEqual(sys.getrefcount(b), oldrefcount)
def test_gc(self):
for tp in self._types:
if not isinstance(tp, type):
# If tp is a factory rather than a plain type, skip
continue
class MySource(tp):
pass
class MyObject:
pass
# Create a reference cycle through a memoryview object
b = MySource(tp(b'abc'))
m = self._view(b)
o = MyObject()
b.m = m
b.o = o
wr = weakref.ref(o)
b = m = o = None
# The cycle must be broken
gc.collect()
self.assertTrue(wr() is None, wr())
def _check_released(self, m, tp):
check = self.assertRaisesRegex(ValueError, "released")
with check: bytes(m)
with check: m.tobytes()
with check: m.tolist()
with check: m[0]
with check: m[0] = b'x'
with check: len(m)
with check: m.format
with check: m.itemsize
with check: m.ndim
with check: m.readonly
with check: m.shape
with check: m.strides
with check:
with m:
pass
# str() and repr() still function
self.assertIn("released memory", str(m))
self.assertIn("released memory", repr(m))
self.assertEqual(m, m)
self.assertNotEqual(m, memoryview(tp(self._source)))
self.assertNotEqual(m, tp(self._source))
def test_contextmanager(self):
for tp in self._types:
b = tp(self._source)
m = self._view(b)
with m as cm:
self.assertIs(cm, m)
self._check_released(m, tp)
m = self._view(b)
# Can release explicitly inside the context manager
with m:
m.release()
def test_release(self):
for tp in self._types:
b = tp(self._source)
m = self._view(b)
m.release()
self._check_released(m, tp)
# Can be called a second time (it's a no-op)
m.release()
self._check_released(m, tp)
def test_writable_readonly(self):
# Issue #10451: memoryview incorrectly exposes a readonly
# buffer as writable causing a segfault if using mmap
tp = self.ro_type
if tp is None:
return
b = tp(self._source)
m = self._view(b)
i = io.BytesIO(b'ZZZZ')
self.assertRaises(TypeError, i.readinto, m)
# Variations on source objects for the buffer: bytes-like objects, then arrays
# with itemsize > 1.
# NOTE: support for multi-dimensional objects is unimplemented.
class BaseBytesMemoryTests(AbstractMemoryTests):
ro_type = bytes
rw_type = bytearray
getitem_type = bytes
itemsize = 1
format = 'B'
class BaseArrayMemoryTests(AbstractMemoryTests):
ro_type = None
rw_type = lambda self, b: array.array('i', list(b))
getitem_type = lambda self, b: array.array('i', list(b)).tobytes()
itemsize = array.array('i').itemsize
format = 'i'
def test_getbuffer(self):
# XXX Test should be adapted for non-byte buffers
pass
def test_tolist(self):
# XXX NotImplementedError: tolist() only supports byte views
pass
# Variations on indirection levels: memoryview, slice of memoryview,
# slice of slice of memoryview.
# This is important to test allocation subtleties.
class BaseMemoryviewTests:
def _view(self, obj):
return memoryview(obj)
def _check_contents(self, tp, obj, contents):
self.assertEqual(obj, tp(contents))
class BaseMemorySliceTests:
source_bytes = b"XabcdefY"
def _view(self, obj):
m = memoryview(obj)
return m[1:7]
def _check_contents(self, tp, obj, contents):
self.assertEqual(obj[1:7], tp(contents))
def test_refs(self):
for tp in self._types:
m = memoryview(tp(self._source))
oldrefcount = sys.getrefcount(m)
m[1:2]
self.assertEqual(sys.getrefcount(m), oldrefcount)
class BaseMemorySliceSliceTests:
source_bytes = b"XabcdefY"
def _view(self, obj):
m = memoryview(obj)
return m[:7][1:]
def _check_contents(self, tp, obj, contents):
self.assertEqual(obj[1:7], tp(contents))
# Concrete test classes
class BytesMemoryviewTest(unittest.TestCase,
BaseMemoryviewTests, BaseBytesMemoryTests):
def test_constructor(self):
for tp in self._types:
ob = tp(self._source)
self.assertTrue(memoryview(ob))
self.assertTrue(memoryview(object=ob))
self.assertRaises(TypeError, memoryview)
self.assertRaises(TypeError, memoryview, ob, ob)
self.assertRaises(TypeError, memoryview, argument=ob)
self.assertRaises(TypeError, memoryview, ob, argument=True)
class ArrayMemoryviewTest(unittest.TestCase,
BaseMemoryviewTests, BaseArrayMemoryTests):
def test_array_assign(self):
# Issue #4569: segfault when mutating a memoryview with itemsize != 1
a = array.array('i', range(10))
m = memoryview(a)
new_a = array.array('i', range(9, -1, -1))
m[:] = new_a
self.assertEqual(a, new_a)
class BytesMemorySliceTest(unittest.TestCase,
BaseMemorySliceTests, BaseBytesMemoryTests):
pass
class ArrayMemorySliceTest(unittest.TestCase,
BaseMemorySliceTests, BaseArrayMemoryTests):
pass
class BytesMemorySliceSliceTest(unittest.TestCase,
BaseMemorySliceSliceTests, BaseBytesMemoryTests):
pass
class ArrayMemorySliceSliceTest(unittest.TestCase,
BaseMemorySliceSliceTests, BaseArrayMemoryTests):
pass
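# Editor's note: each concrete class above pairs one indirection mixin
# (memoryview, slice, or slice-of-slice) with one source mixin (bytes or
# array), so every AbstractMemoryTests case runs once per combination.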
def test_main():
test.support.run_unittest(__name__)
if __name__ == "__main__":
test_main()
|
apache-2.0
|
guijomatos/SickRage
|
lib/ndg/httpsclient/ssl_context_util.py
|
63
|
3357
|
"""ndg_httpsclient SSL Context utilities module containing convenience routines
for setting SSL context configuration.
"""
__author__ = "P J Kershaw (STFC)"
__date__ = "09/12/11"
__copyright__ = "(C) 2012 Science and Technology Facilities Council"
__license__ = "BSD - see LICENSE file in top-level directory"
__contact__ = "[email protected]"
__revision__ = '$Id$'
import urlparse
from OpenSSL import SSL
from ndg.httpsclient.ssl_peer_verification import ServerSSLCertVerification
class SSlContextConfig(object):
"""
Holds configuration options for creating an SSL context. This is used as a
template to create the contexts with specific verification callbacks.
"""
def __init__(self, key_file=None, cert_file=None, pem_file=None, ca_dir=None,
verify_peer=False):
self.key_file = key_file
self.cert_file = cert_file
self.pem_file = pem_file
self.ca_dir = ca_dir
self.verify_peer = verify_peer
def make_ssl_context_from_config(ssl_config=False, url=None):
return make_ssl_context(ssl_config.key_file, ssl_config.cert_file,
ssl_config.pem_file, ssl_config.ca_dir,
ssl_config.verify_peer, url)
def make_ssl_context(key_file=None, cert_file=None, pem_file=None, ca_dir=None,
verify_peer=False, url=None, method=SSL.TLSv1_METHOD,
key_file_passphrase=None):
"""
Creates SSL context containing certificate and key file locations.
"""
ssl_context = SSL.Context(method)
# Key file defaults to certificate file if present.
if cert_file:
ssl_context.use_certificate_file(cert_file)
if key_file_passphrase:
passwd_cb = lambda max_passphrase_len, set_prompt, userdata: \
key_file_passphrase
ssl_context.set_passwd_cb(passwd_cb)
if key_file:
ssl_context.use_privatekey_file(key_file)
elif cert_file:
ssl_context.use_privatekey_file(cert_file)
if pem_file or ca_dir:
ssl_context.load_verify_locations(pem_file, ca_dir)
def _callback(conn, x509, errnum, errdepth, preverify_ok):
"""Default certification verification callback.
Performs no checks and returns the status passed in.
"""
return preverify_ok
verify_callback = _callback
if verify_peer:
ssl_context.set_verify_depth(9)
if url:
set_peer_verification_for_url_hostname(ssl_context, url)
else:
ssl_context.set_verify(SSL.VERIFY_PEER, verify_callback)
else:
ssl_context.set_verify(SSL.VERIFY_NONE, verify_callback)
return ssl_context
def set_peer_verification_for_url_hostname(ssl_context, url,
if_verify_enabled=False):
'''Convenience routine to set peer verification callback based on
ServerSSLCertVerification class'''
if not if_verify_enabled or (ssl_context.get_verify_mode() & SSL.VERIFY_PEER):
urlObj = urlparse.urlparse(url)
hostname = urlObj.hostname
server_ssl_cert_verif = ServerSSLCertVerification(hostname=hostname)
verify_callback_ = server_ssl_cert_verif.get_verify_server_cert_func()
ssl_context.set_verify(SSL.VERIFY_PEER, verify_callback_)
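# --- Editor's illustrative sketch (not part of the original module) ---
# A minimal peer-verifying context; the certificate/CA paths below are
# hypothetical placeholders, and hostname checking is derived from the URL
# via set_peer_verification_for_url_hostname() above:
#
#     ctx = make_ssl_context(cert_file='/path/to/cert.pem',
#                            key_file='/path/to/key.pem',
#                            ca_dir='/etc/ssl/certs',
#                            verify_peer=True,
#                            url='https://example.org/resource')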
|
gpl-3.0
|
jaredkoontz/leetcode
|
Python/pascals-triangle-ii.py
|
3
|
1608
|
# Time: O(n^2)
# Space: O(1)
# Given an index k, return the kth row of the Pascal's triangle.
#
# For example, given k = 3,
# Return [1,3,3,1].
#
# Note:
# Could you optimize your algorithm to use only O(k) extra space?
#
class Solution:
# @return a list of integers
def getRow(self, rowIndex):
result = [0] * (rowIndex + 1)
for i in xrange(rowIndex + 1):
old = result[0] = 1
for j in xrange(1, i + 1):
old, result[j] = result[j], old + result[j]
return result
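# Editor's worked example: for rowIndex = 3 the buffer evolves in place,
# one triangle row per outer iteration --
#     [1,0,0,0] -> [1,1,0,0] -> [1,2,1,0] -> [1,3,3,1]
# -- which is why only O(1) space beyond the output list is needed.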
def getRow2(self, rowIndex):
"""
:type rowIndex: int
:rtype: List[int]
"""
row = [1]
for _ in range(rowIndex):
row = [x + y for x, y in zip([0] + row, row + [0])]
return row
def getRow3(self, rowIndex):
"""
:type rowIndex: int
:rtype: List[int]
"""
if rowIndex == 0: return [1]
res = [1, 1]
def add(nums):
res = nums[:1]
for i, j in enumerate(nums):
if i < len(nums) - 1:
res += [nums[i] + nums[i + 1]]
res += nums[:1]
return res
while res[1] < rowIndex:
res = add(res)
return res
# Time: O(n^2)
# Space: O(n)
class Solution2:
# @return a list of integers
def getRow(self, rowIndex):
result = [1]
for i in range(1, rowIndex + 1):
result = [1] + [result[j - 1] + result[j] for j in xrange(1, i)] + [1]
return result
if __name__ == "__main__":
print Solution().getRow(3)
|
mit
|
yjhjstz/gyp
|
test/cxxflags/gyptest-cxxflags.py
|
128
|
1034
|
#!/usr/bin/env python
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Verifies the use of the environment during regeneration when the gyp file
changes, specifically via build of an executable with C++ flags specified by
CXXFLAGS.
In this test, gyp runs within a local environment, but the build happens outside of it.
"""
import TestGyp
FORMATS = ('ninja',)
test = TestGyp.TestGyp(formats=FORMATS)
# We reset the environ after calling gyp. When the auto-regeneration happens,
# the same define should be reused anyway.
with TestGyp.LocalEnv({'CXXFLAGS': ''}):
test.run_gyp('cxxflags.gyp')
test.build('cxxflags.gyp')
expect = """\
No define
"""
test.run_built_executable('cxxflags', stdout=expect)
test.sleep()
with TestGyp.LocalEnv({'CXXFLAGS': '-DABC'}):
test.run_gyp('cxxflags.gyp')
test.build('cxxflags.gyp')
expect = """\
With define
"""
test.run_built_executable('cxxflags', stdout=expect)
test.pass_test()
|
bsd-3-clause
|
procangroup/edx-platform
|
lms/djangoapps/certificates/apis/v0/tests/test_views.py
|
3
|
7233
|
"""
Tests for the Certificate REST APIs.
"""
from datetime import datetime, timedelta
from django.core.urlresolvers import reverse
from django.utils import timezone
from freezegun import freeze_time
from oauth2_provider import models as dot_models
from rest_framework import status
from rest_framework.test import APITestCase
from lms.djangoapps.certificates.models import CertificateStatuses
from lms.djangoapps.certificates.tests.factories import GeneratedCertificateFactory
from course_modes.models import CourseMode
from student.tests.factories import UserFactory
from xmodule.modulestore.tests.django_utils import SharedModuleStoreTestCase
from xmodule.modulestore.tests.factories import CourseFactory
USER_PASSWORD = 'test'
class CertificatesRestApiTest(SharedModuleStoreTestCase, APITestCase):
"""
Test for the Certificates REST APIs
"""
now = timezone.now()
@classmethod
def setUpClass(cls):
super(CertificatesRestApiTest, cls).setUpClass()
cls.course = CourseFactory.create(
org='edx',
number='verified',
display_name='Verified Course'
)
def setUp(self):
freezer = freeze_time(self.now)
freezer.start()
self.addCleanup(freezer.stop)
super(CertificatesRestApiTest, self).setUp()
self.student = UserFactory.create(password=USER_PASSWORD)
self.student_no_cert = UserFactory.create(password=USER_PASSWORD)
self.staff_user = UserFactory.create(password=USER_PASSWORD, is_staff=True)
GeneratedCertificateFactory.create(
user=self.student,
course_id=self.course.id,
status=CertificateStatuses.downloadable,
mode='verified',
download_url='www.google.com',
grade="0.88"
)
self.namespaced_url = 'certificates_api:v0:certificates:detail'
# create a configuration for django-oauth-toolkit (DOT)
dot_app_user = UserFactory.create(password=USER_PASSWORD)
dot_app = dot_models.Application.objects.create(
name='test app',
user=dot_app_user,
client_type='confidential',
authorization_grant_type='authorization-code',
redirect_uris='http://localhost:8079/complete/edxorg/'
)
self.dot_access_token = dot_models.AccessToken.objects.create(
user=self.student,
application=dot_app,
expires=datetime.utcnow() + timedelta(weeks=1),
scope='read write',
token='16MGyP3OaQYHmpT1lK7Q6MMNAZsjwF'
)
def get_url(self, username):
"""
Helper function to create the url for certificates
"""
return reverse(
self.namespaced_url,
kwargs={
'course_id': self.course.id,
'username': username
}
)
def assert_oauth_status(self, access_token, expected_status):
"""
Helper method for requests with OAUTH token
"""
self.client.logout()
auth_header = "Bearer {0}".format(access_token)
response = self.client.get(self.get_url(self.student.username), HTTP_AUTHORIZATION=auth_header)
self.assertEqual(response.status_code, expected_status)
def test_permissions(self):
"""
Test that only the owner of the certificate can access the url
"""
# anonymous user
resp = self.client.get(self.get_url(self.student.username))
self.assertEqual(resp.status_code, status.HTTP_401_UNAUTHORIZED)
# another student
self.client.login(username=self.student_no_cert.username, password=USER_PASSWORD)
resp = self.client.get(self.get_url(self.student.username))
# gets 404 instead of 403 for security reasons
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
self.assertEqual(resp.data, {u'detail': u'Not found.'})
self.client.logout()
# same student of the certificate
self.client.login(username=self.student.username, password=USER_PASSWORD)
resp = self.client.get(self.get_url(self.student.username))
self.assertEqual(resp.status_code, status.HTTP_200_OK)
self.client.logout()
# staff user
self.client.login(username=self.staff_user.username, password=USER_PASSWORD)
resp = self.client.get(self.get_url(self.student.username))
self.assertEqual(resp.status_code, status.HTTP_200_OK)
def test_inactive_user_access(self):
"""
Verify inactive users - those who have not verified their email addresses -
are allowed to access the endpoint.
"""
self.client.login(username=self.student.username, password=USER_PASSWORD)
self.student.is_active = False
self.student.save()
resp = self.client.get(self.get_url(self.student.username))
self.assertEqual(resp.status_code, status.HTTP_200_OK)
def test_dot_valid_accesstoken(self):
"""
Verify access with a valid Django Oauth Toolkit access token.
"""
self.assert_oauth_status(self.dot_access_token, status.HTTP_200_OK)
def test_dot_invalid_accesstoken(self):
"""
Verify the endpoint is inaccessible for authorization
attempts made with an invalid OAuth access token.
"""
self.assert_oauth_status("fooooooooooToken", status.HTTP_401_UNAUTHORIZED)
def test_dot_expired_accesstoken(self):
"""
Verify the endpoint is inaccessible for authorization
attempts made with an expired OAuth access token.
"""
# set the expiration date in the past
self.dot_access_token.expires = datetime.utcnow() - timedelta(weeks=1)
self.dot_access_token.save()
self.assert_oauth_status(self.dot_access_token, status.HTTP_401_UNAUTHORIZED)
def test_no_certificate_for_user(self):
"""
Test for case with no certificate available
"""
self.client.login(username=self.student_no_cert.username, password=USER_PASSWORD)
resp = self.client.get(self.get_url(self.student_no_cert.username))
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
self.assertIn('error_code', resp.data)
self.assertEqual(
resp.data['error_code'],
'no_certificate_for_user'
)
def test_certificate_for_user(self):
"""
Tests the case where a user pulls her own certificate
"""
self.client.login(username=self.student.username, password=USER_PASSWORD)
resp = self.client.get(self.get_url(self.student.username))
self.assertEqual(resp.status_code, status.HTTP_200_OK)
self.assertEqual(
resp.data,
{
'username': self.student.username,
'status': CertificateStatuses.downloadable,
'is_passing': True,
'grade': '0.88',
'download_url': 'www.google.com',
'certificate_type': CourseMode.VERIFIED,
'course_id': unicode(self.course.id),
'created_date': self.now,
}
)
|
agpl-3.0
|
bgxavier/nova
|
nova/api/metadata/handler.py
|
24
|
8773
|
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Metadata request handler."""
import hashlib
import hmac
import os
from oslo_config import cfg
from oslo_log import log as logging
import six
import webob.dec
import webob.exc
from nova.api.metadata import base
from nova import exception
from nova.i18n import _
from nova.i18n import _LE
from nova.i18n import _LW
from nova.openstack.common import memorycache
from nova import utils
from nova import wsgi
CONF = cfg.CONF
CONF.import_opt('use_forwarded_for', 'nova.api.auth')
metadata_proxy_opts = [
cfg.BoolOpt(
'service_metadata_proxy',
default=False,
help='Set flag to indicate Neutron will proxy metadata requests and '
'resolve instance ids.'),
cfg.StrOpt(
'metadata_proxy_shared_secret',
default='', secret=True,
help='Shared secret to validate proxies Neutron metadata requests'),
]
metadata_opts = [
cfg.IntOpt('metadata_cache_expiration',
default=15,
help='Time in seconds to cache metadata; 0 to disable '
'metadata caching entirely (not recommended). Increasing '
'this should improve response times of the metadata API '
'when under heavy load. Higher values may increase memory '
'usage and result in longer times for host metadata '
'changes to take effect.')
]
CONF.register_opts(metadata_proxy_opts, 'neutron')
CONF.register_opts(metadata_opts)
LOG = logging.getLogger(__name__)
class MetadataRequestHandler(wsgi.Application):
"""Serve metadata."""
def __init__(self):
self._cache = memorycache.get_client()
def get_metadata_by_remote_address(self, address):
if not address:
raise exception.FixedIpNotFoundForAddress(address=address)
cache_key = 'metadata-%s' % address
data = self._cache.get(cache_key)
if data:
LOG.debug("Using cached metadata for %s", address)
return data
try:
data = base.get_metadata_by_address(address)
except exception.NotFound:
return None
if CONF.metadata_cache_expiration > 0:
self._cache.set(cache_key, data, CONF.metadata_cache_expiration)
return data
def get_metadata_by_instance_id(self, instance_id, address):
cache_key = 'metadata-%s' % instance_id
data = self._cache.get(cache_key)
if data:
LOG.debug("Using cached metadata for instance %s", instance_id)
return data
try:
data = base.get_metadata_by_instance_id(instance_id, address)
except exception.NotFound:
return None
if CONF.metadata_cache_expiration > 0:
self._cache.set(cache_key, data, CONF.metadata_cache_expiration)
return data
@webob.dec.wsgify(RequestClass=wsgi.Request)
def __call__(self, req):
if os.path.normpath(req.path_info) == "/":
resp = base.ec2_md_print(base.VERSIONS + ["latest"])
req.response.body = resp
req.response.content_type = base.MIME_TYPE_TEXT_PLAIN
return req.response
if CONF.neutron.service_metadata_proxy:
meta_data = self._handle_instance_id_request(req)
else:
if req.headers.get('X-Instance-ID'):
LOG.warning(
_LW("X-Instance-ID present in request headers. The "
"'service_metadata_proxy' option must be "
"enabled to process this header."))
meta_data = self._handle_remote_ip_request(req)
if meta_data is None:
raise webob.exc.HTTPNotFound()
try:
data = meta_data.lookup(req.path_info)
except base.InvalidMetadataPath:
raise webob.exc.HTTPNotFound()
if callable(data):
return data(req, meta_data)
resp = base.ec2_md_print(data)
if isinstance(resp, six.text_type):
req.response.text = resp
else:
req.response.body = resp
req.response.content_type = meta_data.get_mimetype()
return req.response
def _handle_remote_ip_request(self, req):
remote_address = req.remote_addr
if CONF.use_forwarded_for:
remote_address = req.headers.get('X-Forwarded-For', remote_address)
try:
meta_data = self.get_metadata_by_remote_address(remote_address)
except Exception:
LOG.exception(_LE('Failed to get metadata for ip: %s'),
remote_address)
msg = _('An unknown error has occurred. '
'Please try your request again.')
raise webob.exc.HTTPInternalServerError(
explanation=six.text_type(msg))
if meta_data is None:
LOG.error(_LE('Failed to get metadata for ip: %s'),
remote_address)
return meta_data
def _handle_instance_id_request(self, req):
instance_id = req.headers.get('X-Instance-ID')
tenant_id = req.headers.get('X-Tenant-ID')
signature = req.headers.get('X-Instance-ID-Signature')
remote_address = req.headers.get('X-Forwarded-For')
# Ensure that only one header was passed
if instance_id is None:
msg = _('X-Instance-ID header is missing from request.')
elif signature is None:
msg = _('X-Instance-ID-Signature header is missing from request.')
elif tenant_id is None:
msg = _('X-Tenant-ID header is missing from request.')
elif not isinstance(instance_id, six.string_types):
msg = _('Multiple X-Instance-ID headers found within request.')
elif not isinstance(tenant_id, six.string_types):
msg = _('Multiple X-Tenant-ID headers found within request.')
else:
msg = None
if msg:
raise webob.exc.HTTPBadRequest(explanation=msg)
expected_signature = hmac.new(
CONF.neutron.metadata_proxy_shared_secret,
instance_id,
hashlib.sha256).hexdigest()
if not utils.constant_time_compare(expected_signature, signature):
if instance_id:
LOG.warning(_LW('X-Instance-ID-Signature: %(signature)s does '
'not match the expected value: '
'%(expected_signature)s for id: '
'%(instance_id)s. Request From: '
'%(remote_address)s'),
{'signature': signature,
'expected_signature': expected_signature,
'instance_id': instance_id,
'remote_address': remote_address})
msg = _('Invalid proxy request signature.')
raise webob.exc.HTTPForbidden(explanation=msg)
try:
meta_data = self.get_metadata_by_instance_id(instance_id,
remote_address)
except Exception:
LOG.exception(_LE('Failed to get metadata for instance id: %s'),
instance_id)
msg = _('An unknown error has occurred. '
'Please try your request again.')
raise webob.exc.HTTPInternalServerError(
explanation=six.text_type(msg))
if meta_data is None:
LOG.error(_LE('Failed to get metadata for instance id: %s'),
instance_id)
elif meta_data.instance.project_id != tenant_id:
LOG.warning(_LW("Tenant_id %(tenant_id)s does not match tenant_id "
"of instance %(instance_id)s."),
{'tenant_id': tenant_id, 'instance_id': instance_id})
# causes a 404 to be raised
meta_data = None
return meta_data
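# Editor's illustrative note (not part of the original module): the signature
# verified above is just HMAC-SHA256 of the instance id keyed with the shared
# secret, so a proxy produces the header value along these lines:
#
#     import hashlib, hmac
#     sig = hmac.new('the-shared-secret', 'instance-uuid',
#                    hashlib.sha256).hexdigest()   # X-Instance-ID-Signature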
|
apache-2.0
|
zhjunlang/kbengine
|
kbe/res/scripts/common/Lib/site-packages/pip/_vendor/html5lib/treeadapters/sax.py
|
1835
|
1661
|
from __future__ import absolute_import, division, unicode_literals
from xml.sax.xmlreader import AttributesNSImpl
from ..constants import adjustForeignAttributes, unadjustForeignAttributes
prefix_mapping = {}
for prefix, localName, namespace in adjustForeignAttributes.values():
if prefix is not None:
prefix_mapping[prefix] = namespace
def to_sax(walker, handler):
"""Call SAX-like content handler based on treewalker walker"""
handler.startDocument()
for prefix, namespace in prefix_mapping.items():
handler.startPrefixMapping(prefix, namespace)
for token in walker:
type = token["type"]
if type == "Doctype":
continue
elif type in ("StartTag", "EmptyTag"):
attrs = AttributesNSImpl(token["data"],
unadjustForeignAttributes)
handler.startElementNS((token["namespace"], token["name"]),
token["name"],
attrs)
if type == "EmptyTag":
handler.endElementNS((token["namespace"], token["name"]),
token["name"])
elif type == "EndTag":
handler.endElementNS((token["namespace"], token["name"]),
token["name"])
elif type in ("Characters", "SpaceCharacters"):
handler.characters(token["data"])
elif type == "Comment":
pass
else:
assert False, "Unknown token type"
for prefix, namespace in prefix_mapping.items():
handler.endPrefixMapping(prefix)
handler.endDocument()
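# Editor's illustrative usage (a sketch; the html5lib entry points used here
# are assumed to be available from the surrounding package):
#
#     import html5lib
#     from xml.sax.handler import ContentHandler
#     dom_tree = html5lib.parse("<p>hello</p>", treebuilder="dom")
#     walker = html5lib.getTreeWalker("dom")
#     to_sax(walker(dom_tree), ContentHandler())  # base handler ignores events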
|
lgpl-3.0
|
jwenerd/SKTimeline
|
sktimeline/views/dashboard/twitter.py
|
2
|
1613
|
from sktimeline import *
from sktimeline.views import login_required, page_not_found
def hashtag_validator(form, field):
if field.data[0] != '#':
raise validators.ValidationError('Must start with #')
if len(field.data.split()) > 1:
raise validators.ValidationError('Enter only one hashtag')
TwitterForm = model_form(TwitterFeedSetting, Form, exclude=['user','status','last_updated','feed_items'], field_args = {
'hashtag' : {
'validators' : [validators.Required(), hashtag_validator]
}
})
@app.route('/dashboard/twitter/new',methods=['GET','POST'])
@login_required
def dashboard_twitter_new(id=None):
model = TwitterFeedSetting()
#todo: if id present check user is allowed to edit this item
form = TwitterForm(request.form, model)
model.user = User.query.get(session['user_id'])
if request.method == 'POST' and form.validate():
form.populate_obj(model)
model.status = 'new'
db.session.add(model)
db.session.commit()
db.session.close()
flash("Twitter entry added.")
return redirect(url_for("dashboard"))
return render_template("dashboard/twitter.html", form = form)
@app.route('/dashboard/twitter/delete/<id>',methods=['POST'])
@login_required
def dashboard_twitter_delete(id):
model = TwitterFeedSetting.query.get(id)
if not(model) or model.user_id != session['user_id']:
return page_not_found()
db.session.delete(model)
db.session.commit()
db.session.close()
flash('Twitter hashtag deleted!')
return redirect( url_for("dashboard") + '#twitter')
|
mit
|
garyp/sifter
|
sifter/comparator.py
|
1
|
1168
|
import sifter.handler
__all__ = ('register', 'get_match_fn',)
def register(comparator_name, comparator_cls):
sifter.handler.register('comparator', comparator_name, comparator_cls)
def get_match_fn(comparator, match_type):
# section 2.7.3: default comparator is 'i;ascii-casemap'
if comparator is None: comparator = 'i;ascii-casemap'
# RFC 4790, section 3.1: the special identifier 'default' refers to the
# implementation-defined default comparator
elif comparator == 'default': comparator = 'i;ascii-casemap'
# section 2.7.1: default match type is ":is"
if match_type is None: match_type = 'IS'
# TODO: support wildcard matching in comparator names (RFC 4790)
cmp_handler = sifter.handler.get('comparator', comparator)
if not cmp_handler:
raise RuntimeError("Comparator not supported: %s" % comparator)
try:
cmp_fn = getattr(cmp_handler, 'cmp_%s' % match_type.lower())
except AttributeError:
raise RuntimeError(
"':%s' matching not supported by comparator '%s'"
% (match_type, comparator)
)
return (cmp_fn, comparator, match_type)
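# Editor's illustrative usage (a sketch; assumes an 'i;ascii-casemap'
# comparator has been registered elsewhere in the package):
#
#     cmp_fn, comparator, match_type = get_match_fn(None, None)
#     # comparator == 'i;ascii-casemap', match_type == 'IS',
#     # and cmp_fn is that comparator's cmp_is handler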
|
bsd-2-clause
|
tcstewar/parser
|
fig1.py
|
1
|
1991
|
import pylab
pylab.figure(figsize=(6,4))
pylab.axes((0.1, 0.15, 0.85, 0.8))
xdata = [0, 0.05, 0.1, 0.15, 0.2, 0.25]
prob = [0.8460282942701628, 0.82180483919511071, 0.76707987722978233, 0.68378508230323654, 0.55306743181409768, 0.38986098204964698]
prob_ci = [[-0.014085009764384537, -0.018748421744134269, -0.020100806791683468, -0.019680248650571164, -0.019543693497638293, -0.015030066427057398],[-0.01285147459159508, -0.011726667849281225, -0.017260737163283912, -0.017229313094358378, -0.015767339932743041, -0.018808090677573441]]
prob_high = [prob[i]-prob_ci[0][i] for i in range(len(xdata))]
prob_low = [prob[i]+prob_ci[1][i] for i in range(len(xdata))]
#pylab.errorbar(xdata,[0.8460282942701628, 0.82180483919511071, 0.76707987722978233, 0.68378508230323654, 0.55306743181409768, 0.38986098204964698],yerr=prob_ci)
retry = [0.60392156862745094, 0.66470588235294115, 0.62352941176470589, 0.70588235294117652, 0.92941176470588238, 0.98235294117647054]
retry_ci = [[-0.086274509803921595, -0.090196078431372562, -0.098039215686274495, -0.11176470588235288, -0.14117647058823524, -0.12745098039215697],[-0.094117647058823528, -0.084313725490196001, -0.074509803921568585, -0.074509803921568696, -0.11372549019607847, -0.13529411764705879]]
retry_high = [retry[i]-retry_ci[0][i] for i in range(len(xdata))]
retry_low = [retry[i]+retry_ci[1][i] for i in range(len(xdata))]
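# Editor's note: the *_ci lists above hold signed (negative) offsets, so the
# band edges are high = mean - ci[0] and low = mean + ci[1].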
pylab.plot(xdata, retry, color='b', label='retry')
#pylab.errorbar(xdata,retry,yerr=retry_ci)
pylab.fill_between(xdata, retry_high, retry_low, facecolor='b', alpha=0.3)
pylab.plot(xdata,prob, color='k', linewidth=1)
pylab.fill_between(xdata, prob_high, prob_low, facecolor='k', alpha=0.5)
pylab.ylim(0.3, 1.2)
retry_rect = pylab.Rectangle((0, 0), 1, 1, fc="b", alpha=0.3)
prob_rect = pylab.Rectangle((0, 0), 1, 1, fc="k", alpha=0.5)
pylab.legend([prob_rect, retry_rect], ['parse probability', "# of retries"], loc='upper left')
pylab.xlabel('noise')
pylab.savefig('figure1.png', dpi=900)
pylab.show()
|
gpl-2.0
|
jendrikseipp/rednotebook
|
rednotebook/gui/customwidgets.py
|
1
|
11381
|
# -----------------------------------------------------------------------
# Copyright (c) 2009 Jendrik Seipp
#
# RedNotebook is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# RedNotebook is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with RedNotebook; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
# -----------------------------------------------------------------------
import datetime
import logging
import os
import webbrowser
from gi.repository import GObject, Gtk
class ActionButton(Gtk.Button):
def __init__(self, text, action):
Gtk.Button.__init__(self, text)
self.connect("clicked", action)
class UrlButton(ActionButton):
def __init__(self, text, url):
ActionButton.__init__(self, text, lambda _: webbrowser.open(url))
class CustomComboBoxEntry:
def __init__(self, combo_box):
self.combo_box = combo_box
self.liststore = Gtk.ListStore(GObject.TYPE_STRING)
self.entries = set()
self.combo_box.set_model(self.liststore)
self.combo_box.set_entry_text_column(0)
self.entry = self.combo_box.get_child()
# Autocompletion
entry_completion = Gtk.EntryCompletion()
entry_completion.set_model(self.liststore)
entry_completion.set_minimum_key_length(1)
entry_completion.set_text_column(0)
self.entry.set_completion(entry_completion)
def add_entry(self, entry):
if entry not in self.entries:
self.liststore.append([entry])
self.entries.add(entry)
def set_entries(self, value_list):
self.clear()
for entry in value_list:
self.add_entry(entry)
self.combo_box.set_model(self.liststore)
def get_active_text(self):
return self.entry.get_text()
def set_active_text(self, text):
return self.entry.set_text(text)
def clear(self):
self.combo_box.set_model(None)
self.liststore.clear()
self.entries.clear()
self.set_active_text("")
self.combo_box.set_model(self.liststore)
class CustomListView(Gtk.TreeView):
def __init__(self, columns):
"""
*columns* must be a list of (header, type) pairs e.g. [('title', str)].
"""
Gtk.TreeView.__init__(self)
headers, types = list(zip(*columns))
# create a TreeStore with columns to use as the model
self.set_model(Gtk.ListStore(*types))
columns = [Gtk.TreeViewColumn(header) for header in headers]
# add tvcolumns to tree_view
for index, column in enumerate(columns):
self.append_column(column)
# create a CellRendererText to render the data
cell_renderer = Gtk.CellRendererText()
# add the cell to the tvcolumn and allow it to expand
column.pack_start(cell_renderer, True)
# Get markup for column, not text
column.set_attributes(cell_renderer, markup=index)
# Allow sorting on the column
column.set_sort_column_id(index)
# make it searchable
self.set_search_column(1)
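# Editor's illustrative usage (a sketch, not part of the original widget):
#
#     view = CustomListView([("Date", str), ("Title", str)])
#     view.get_model().append(["2009-01-01", "<b>First entry</b>"])
#     # cells render Pango markup because set_attributes() maps markup=index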
class Calendar(Gtk.Calendar):
def __init__(self, week_numbers=False):
Gtk.Calendar.__init__(self)
self.set_property("show-week-numbers", week_numbers)
def set_date(self, date):
# Set the day temporarily to a day that is present in all months.
self.select_day(1)
# Gtk.Calendar shows months in range [0,11].
self.select_month(date.month - 1, date.year)
# Select the day after the month and year have been set
self.select_day(date.day)
def get_date(self):
year, month, day = Gtk.Calendar.get_date(self)
return datetime.date(year, month + 1, day)
class Info(Gtk.InfoBar):
icons = {Gtk.MessageType.ERROR: Gtk.STOCK_DIALOG_ERROR}
def __init__(self):
Gtk.InfoBar.__init__(self)
self.title_label = Gtk.Label()
self.msg_label = Gtk.Label()
self.title_label.set_alignment(0.0, 0.5)
self.msg_label.set_alignment(0.0, 0.5)
vbox = Gtk.VBox(spacing=5)
vbox.pack_start(self.title_label, False, False, 0)
vbox.pack_start(self.msg_label, False, False, 0)
self.image = Gtk.Image()
content = self.get_content_area()
content.pack_start(self.image, False, False, 0)
content.pack_start(vbox, False, False, 0)
self.add_button(Gtk.STOCK_CLOSE, Gtk.ResponseType.CLOSE)
self.connect("close", lambda x: self.hide())
self.connect("response", self.on_response)
def on_response(self, infobar, response_id):
if response_id == Gtk.ResponseType.CLOSE:
self.hide()
def show_message(self, title, msg, msg_type):
if not title:
title = msg
msg = ""
self.title_label.set_markup("<b>%s</b>" % title)
self.msg_label.set_markup(msg)
self.set_message_type(msg_type)
self.image.set_from_stock(
self.icons.get(msg_type, Gtk.STOCK_DIALOG_INFO), Gtk.IconSize.DIALOG
)
self.show_all()
# ------------------------- Assistant Pages ------------------------------------
class AssistantPage(Gtk.VBox):
def __init__(self, *args, **kwargs):
GObject.GObject.__init__(self, *args, **kwargs)
self.set_spacing(5)
self.set_border_width(10)
self.header = None
self.show_all()
def _add_header(self):
self.header = Gtk.Label()
self.header.set_markup("Unset")
self.header.set_alignment(0.0, 0.5)
self.pack_start(self.header, False, False, 0)
self.separator = Gtk.HSeparator()
self.pack_start(self.separator, False, False, 0)
self.reorder_child(self.header, 0)
self.reorder_child(self.separator, 1)
self.show_all()
def set_header(self, text):
if not self.header:
self._add_header()
self.header.set_markup(text)
class RadioButtonPage(AssistantPage):
def __init__(self, *args, **kwargs):
AssistantPage.__init__(self, *args, **kwargs)
self.buttons = []
def add_radio_option(self, object, label, tooltip=""):
sensitive = object.is_available()
group = self.buttons[0] if self.buttons else None
button = Gtk.RadioButton(group=group)
button.set_tooltip_markup(tooltip)
button.set_label(label)
button.object = object
button.set_sensitive(sensitive)
self.pack_start(button, False, False, 0)
self.buttons.append(button)
if tooltip:
description = Gtk.Label()
description.set_alignment(0.0, 0.5)
description.set_markup(" " * 10 + tooltip)
description.set_sensitive(sensitive)
self.pack_start(description, False, False, 0)
def get_selected_object(self):
for button in self.buttons:
if button.get_active():
return button.object
class PathChooserPage(AssistantPage):
def __init__(self, assistant, *args, **kwargs):
AssistantPage.__init__(self, *args, **kwargs)
self.assistant = assistant
self.last_path = None
self.chooser = Gtk.FileChooserWidget()
self.chooser.connect("selection-changed", self.on_path_changed)
self.pack_start(self.chooser, True, True, 0)
def _remove_filters(self):
for filter in self.chooser.list_filters():
self.chooser.remove_filter(filter)
def prepare(self, porter):
self._remove_filters()
self.path_type = porter.PATHTYPE.upper()
path = porter.DEFAULTPATH
extension = porter.EXTENSION
helptext = porter.PATHTEXT
if helptext:
self.set_header(helptext)
if self.path_type == "DIR":
self.chooser.set_action(Gtk.FileChooserAction.SELECT_FOLDER)
elif self.path_type == "FILE":
self.chooser.set_action(Gtk.FileChooserAction.OPEN)
elif self.path_type == "NEWFILE":
self.chooser.set_action(Gtk.FileChooserAction.SAVE)
else:
logging.error('Wrong path_type "%s"' % self.path_type)
if self.path_type in ["FILE", "NEWFILE"] and extension:
filter = Gtk.FileFilter()
filter.set_name(extension)
filter.add_pattern("*." + extension)
self.chooser.add_filter(filter)
if self.last_path and os.path.exists(self.last_path):
path = self.last_path
if os.path.isdir(path):
self.chooser.set_current_folder(path)
else:
dirname, basename = os.path.split(path)
filename, _ = os.path.splitext(basename)
self.chooser.set_current_folder(dirname)
self.chooser.set_current_name(filename + "." + extension)
def get_selected_path(self):
self.last_path = self.chooser.get_filename()
return self.last_path
def on_path_changed(self, widget):
return
class Assistant(Gtk.Assistant):
def __init__(self, journal, *args, **kwargs):
GObject.GObject.__init__(self, *args, **kwargs)
self.journal = journal
self.set_size_request(1000, 500)
self.connect("cancel", self._on_cancel)
self.connect("close", self._on_close)
self.connect("prepare", self._on_prepare)
def run(self):
"""
Show assistant
"""
def _on_cancel(self, assistant):
"""
Cancelled -> Hide assistant
"""
self.hide()
def _on_close(self, assistant):
"""
Do the action
"""
def _on_prepare(self, assistant, page):
"""
Called when a new page should be prepared, before it is shown
"""
class TemplateBar(Gtk.HBox):
def __init__(self):
GObject.GObject.__init__(self)
self.set_spacing(2)
label = Gtk.Label(label="<b>%s</b>:" % _("Template"))
label.set_use_markup(True)
self.pack_start(label, False, False, 0)
self.save_insert_button = Gtk.Button(_("Save and insert"))
self.pack_start(self.save_insert_button, False, False, 0)
self.save_button = Gtk.Button(stock=Gtk.STOCK_SAVE)
self.pack_start(self.save_button, False, False, 0)
self.close_button = Gtk.Button(stock=Gtk.STOCK_CLOSE)
self.pack_start(self.close_button, False, False, 0)
self.show_all()
class ToolbarMenuButton(Gtk.ToolButton):
def __init__(self, stock_id, menu):
Gtk.ToolButton.__init__(self)
self.set_stock_id(stock_id)
self._menu = menu
self.connect("clicked", self._on_clicked)
self.show_all()
def _on_clicked(self, button):
self._menu.popup(None, None, None, None, 0, Gtk.get_current_event_time())
def set_menu(self, menu):
self._menu = menu
|
gpl-2.0
|
frohoff/Empire
|
lib/modules/powershell/privesc/getsystem.py
|
2
|
4655
|
from lib.common import helpers
class Module:
def __init__(self, mainMenu, params=[]):
self.info = {
'Name': 'Get-System',
'Author': ['@harmj0y', '@mattifestation'],
'Description': ("Gets SYSTEM privileges with one of two methods."),
'Background' : False,
'OutputExtension' : None,
'NeedsAdmin' : True,
'OpsecSafe' : False,
'Language' : 'powershell',
'MinLanguageVersion' : '2',
'Comments': [
'https://github.com/rapid7/meterpreter/blob/2a891a79001fc43cb25475cc43bced9449e7dc37/source/extensions/priv/server/elevate/namedpipe.c',
'https://github.com/obscuresec/shmoocon/blob/master/Invoke-TwitterBot',
'http://blog.cobaltstrike.com/2014/04/02/what-happens-when-i-type-getsystem/',
'http://clymb3r.wordpress.com/2013/11/03/powershell-and-token-impersonation/'
]
}
# any options needed by the module, settable during runtime
self.options = {
# format:
# value_name : {description, required, default_value}
'Agent' : {
'Description' : 'Agent to run module on.',
'Required' : True,
'Value' : ''
},
'Technique' : {
'Description' : "Technique to use, 'NamedPipe' for service named pipe impersonation or 'Token' for adjust token privs.",
'Required' : False,
'Value' : 'NamedPipe'
},
'ServiceName' : {
'Description' : "Optional service name to used for 'NamedPipe' impersonation.",
'Required' : False,
'Value' : ''
},
'PipeName' : {
'Description' : "Optional pipe name to used for 'NamedPipe' impersonation.",
'Required' : False,
'Value' : ''
},
'RevToSelf' : {
'Description' : "Switch. Reverts the current thread privileges.",
'Required' : False,
'Value' : ''
},
'WhoAmI' : {
'Description' : "Switch. Display the credentials for the current PowerShell thread.",
'Required' : False,
'Value' : ''
}
}
# save off a copy of the mainMenu object to access external functionality
# like listeners/agent handlers/etc.
self.mainMenu = mainMenu
for param in params:
# parameter format is [Name, Value]
option, value = param
if option in self.options:
self.options[option]['Value'] = value
def generate(self, obfuscate=False, obfuscationCommand=""):
# read in the common module source code
moduleSource = self.mainMenu.installPath + "/data/module_source/privesc/Get-System.ps1"
if obfuscate:
helpers.obfuscate_module(moduleSource=moduleSource, obfuscationCommand=obfuscationCommand)
moduleSource = moduleSource.replace("module_source", "obfuscated_module_source")
try:
f = open(moduleSource, 'r')
except:
print helpers.color("[!] Could not read module source path at: " + str(moduleSource))
return ""
moduleCode = f.read()
f.close()
script = moduleCode
scriptEnd = "Get-System "
if self.options['RevToSelf']['Value'].lower() == "true":
scriptEnd += " -RevToSelf"
elif self.options['WhoAmI']['Value'].lower() == "true":
scriptEnd += " -WhoAmI"
else:
for option,values in self.options.iteritems():
if option.lower() != "agent":
if values['Value'] and values['Value'] != '':
if values['Value'].lower() == "true":
# if we're just adding a switch
scriptEnd += " -" + str(option)
else:
scriptEnd += " -" + str(option) + " " + str(values['Value'])
scriptEnd += "| Out-String | %{$_ + \"`n\"};"
scriptEnd += "'Get-System completed'"
if obfuscate:
scriptEnd = helpers.obfuscate(psScript=scriptEnd, obfuscationCommand=obfuscationCommand)
script += scriptEnd
return script
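# Editor's note (illustrative): with the default options above, the generated
# PowerShell launcher invokes roughly
#     Get-System -Technique NamedPipe ...
# and setting RevToSelf or WhoAmI to True swaps that for the -RevToSelf or
# -WhoAmI switch instead.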
|
bsd-3-clause
|
pombredanne/pyjs
|
tests/test-coverage.py
|
6
|
3672
|
#!/usr/bin/env python
import sys
class Coverage:
def __init__(self, testset_name):
self.testset_name = testset_name
self.lines = {}
def tracer(self, frame, event, arg):
lineno = frame.f_lineno
filename = frame.f_globals["__file__"]
if filename[-4:] in [".pyc", ".pyo"]:
filename = filename[:-1]
self.lines[filename][lineno] = self.lines.setdefault(filename, {}).get(lineno, 0) + 1
return self.tracer
def start(self):
sys.settrace(self.tracer)
def stop(self):
sys.settrace(None)
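# Editor's illustrative usage (a sketch; 'example_module' is a hypothetical
# module under test):
#
#     cov = Coverage("demo")
#     cov.start()
#     import example_module
#     cov.stop()
#     cov.output("example_module.py")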
def output(self, *files):
print """
<html>
<head>
<title>Coverage for %s</title>
<style>
body {
color: #000;
background-color: #FFF;
}
h1, h2 {
font-family: sans-serif;
font-weight: normal;
}
td {
white-space: pre;
padding: 1px 5px;
font-family: monospace;
font-size: 10pt;
}
td.hit {
}
td.hit-line {
}
td.miss {
background-color: #C33;
}
td.miss-line {
background-color: #FCC;
}
td.ignore {
color: #999;
}
td.ignore-line {
color: #999;
}
td.lineno {
color: #999;
background-color: #EEE;
}
</style>
</head>
<body>
""" % self.testset_name
print """
<h1>Coverage for %s</h1>
""" % self.testset_name
for filename in files:
print """
<h2>%s</h2>
<table>
""" % filename
code = open(filename).readlines()
for lineno, line in enumerate(code):
count = self.lines[filename].get(lineno + 1, 0)
if count == 0:
if line.strip() in ["", "else:"] or line.strip().startswith("#"):
klass = "ignore"
else:
klass = "miss"
else:
klass = "hit"
klass2 = klass + "-line"
print """<tr><td class="lineno">%s</td><td class="%s">%s</td><td class="%s">%s</td></tr>""" % (lineno + 1, klass, count, klass2, line.strip("\n"))
print """
</table>
"""
print """
</body>
</html>
"""
# Tester
import sys
sys.path.append("..")
import pyjs
def pyjs_tester(filename, module):
output = pyjs.translate(filename + ".py", module)
# Test Plan
pyjs_test = [
("test001", "ui"),
("test002", "ui"),
("test003", "ui"),
("test004", "ui"),
("test005", "ui"),
("test006", "ui"),
("test007", "ui"),
("test008", "ui"),
("test009", "ui"),
("test010", None),
("test011", None),
("test012", None),
("test013", "ui"),
("test014", None),
("test015", None),
("test016", None),
("test017", None),
("test018", None),
("test019", None),
("test020", None),
("test021", None),
("test022", None),
("test023", None),
("test024", None),
("test025", None),
("test026", None),
("test027", None),
("test028", None),
("test029", None),
("test030", None),
("test031", None),
("test032", None),
("test033", None),
("test034", None),
("test035", None),
("test036", None),
("test037", None),
("test038", None),
("test039", None),
("test040", None),
("test041", None),
("test042", None),
("test043", None),
("test044", None),
("test045", None),
("test046", None),
]
c = Coverage("pyjs unit tests")
c.start()
for filename, module in pyjs_test:
pyjs_tester(filename, module)
c.stop()
c.output("../pyjs.py")
|
apache-2.0
|
mzizzi/ansible
|
test/units/modules/system/test_known_hosts.py
|
107
|
3840
|
import os
import tempfile
from ansible.compat.tests import unittest
from ansible.module_utils._text import to_bytes
from ansible.modules.system.known_hosts import compute_diff
class KnownHostsDiffTestCase(unittest.TestCase):
def _create_file(self, content):
tmp_file = tempfile.NamedTemporaryFile(prefix='ansible-test-', suffix='-known_hosts', delete=False)
tmp_file.write(to_bytes(content))
tmp_file.close()
self.addCleanup(os.unlink, tmp_file.name)
return tmp_file.name
def test_no_existing_file(self):
path = tempfile.mktemp(prefix='ansible-test-', suffix='-known_hosts')
key = 'example.com ssh-rsa AAAAetc\n'
diff = compute_diff(path, found_line=None, replace_or_add=False, state='present', key=key)
self.assertEqual(diff, {
'before_header': '/dev/null',
'after_header': path,
'before': '',
'after': 'example.com ssh-rsa AAAAetc\n',
})
def test_key_addition(self):
path = self._create_file(
'two.example.com ssh-rsa BBBBetc\n'
)
key = 'one.example.com ssh-rsa AAAAetc\n'
diff = compute_diff(path, found_line=None, replace_or_add=False, state='present', key=key)
self.assertEqual(diff, {
'before_header': path,
'after_header': path,
'before': 'two.example.com ssh-rsa BBBBetc\n',
'after': 'two.example.com ssh-rsa BBBBetc\none.example.com ssh-rsa AAAAetc\n',
})
def test_no_change(self):
path = self._create_file(
'one.example.com ssh-rsa AAAAetc\n'
'two.example.com ssh-rsa BBBBetc\n'
)
key = 'one.example.com ssh-rsa AAAAetc\n'
diff = compute_diff(path, found_line=1, replace_or_add=False, state='present', key=key)
self.assertEqual(diff, {
'before_header': path,
'after_header': path,
'before': 'one.example.com ssh-rsa AAAAetc\ntwo.example.com ssh-rsa BBBBetc\n',
'after': 'one.example.com ssh-rsa AAAAetc\ntwo.example.com ssh-rsa BBBBetc\n',
})
def test_key_change(self):
path = self._create_file(
'one.example.com ssh-rsa AAAaetc\n'
'two.example.com ssh-rsa BBBBetc\n'
)
key = 'one.example.com ssh-rsa AAAAetc\n'
diff = compute_diff(path, found_line=1, replace_or_add=True, state='present', key=key)
self.assertEqual(diff, {
'before_header': path,
'after_header': path,
'before': 'one.example.com ssh-rsa AAAaetc\ntwo.example.com ssh-rsa BBBBetc\n',
'after': 'two.example.com ssh-rsa BBBBetc\none.example.com ssh-rsa AAAAetc\n',
})
def test_key_removal(self):
path = self._create_file(
'one.example.com ssh-rsa AAAAetc\n'
'two.example.com ssh-rsa BBBBetc\n'
)
key = 'one.example.com ssh-rsa AAAAetc\n'
diff = compute_diff(path, found_line=1, replace_or_add=False, state='absent', key=key)
self.assertEqual(diff, {
'before_header': path,
'after_header': path,
'before': 'one.example.com ssh-rsa AAAAetc\ntwo.example.com ssh-rsa BBBBetc\n',
'after': 'two.example.com ssh-rsa BBBBetc\n',
})
def test_key_removal_no_change(self):
path = self._create_file(
'two.example.com ssh-rsa BBBBetc\n'
)
key = 'one.example.com ssh-rsa AAAAetc\n'
diff = compute_diff(path, found_line=None, replace_or_add=False, state='absent', key=key)
self.assertEqual(diff, {
'before_header': path,
'after_header': path,
'before': 'two.example.com ssh-rsa BBBBetc\n',
'after': 'two.example.com ssh-rsa BBBBetc\n',
})
|
gpl-3.0
|
tparks5/tor-stem
|
test/unit/descriptor/microdescriptor.py
|
1
|
7493
|
"""
Unit tests for stem.descriptor.microdescriptor.
"""
import unittest
import stem.exit_policy
import stem.descriptor
from stem.util import str_type
from stem.descriptor.microdescriptor import Microdescriptor
from test.unit.descriptor import get_resource
FIRST_ONION_KEY = """\
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBAMhPQtZPaxP3ukybV5LfofKQr20/ljpRk0e9IlGWWMSTkfVvBcHsa6IM
H2KE6s4uuPHp7FqhakXAzJbODobnPHY8l1E4efyrqMQZXEQk2IMhgSNtG6YqUrVF
CxdSKSSy0mmcBe2TOyQsahlGZ9Pudxfnrey7KcfqnArEOqNH09RpAgMBAAE=
-----END RSA PUBLIC KEY-----\
"""
SECOND_ONION_KEY = """\
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBALCOxZdpMI2WO496njSQ2M7b4IgAGATqpJmH3So7lXOa25sK6o7JipgP
qQE83K/t/xsMIpxQ/hHkft3G78HkeXXFc9lVUzH0HmHwYEu0M+PMVULSkG36MfEl
7WeSZzaG+Tlnh9OySAzVyTsv1ZJsTQFHH9V8wuM0GOMo9X8DFC+NAgMBAAE=
-----END RSA PUBLIC KEY-----\
"""
THIRD_ONION_KEY = """\
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBAOWFQHxO+5kGuhwPUX5jB7wJCrTbSU0fZwolNV1t9UaDdjGDvIjIhdit
y2sMbyd9K8lbQO7x9rQjNst5ZicuaSOs854XQddSjm++vMdjYbOcVMqnKGSztvpd
w/1LVWFfhcBnsGi4JMGbmP+KUZG9A8kI9deSyJhfi35jA7UepiHHAgMBAAE=
-----END RSA PUBLIC KEY-----\
"""
class TestMicrodescriptor(unittest.TestCase):
def test_local_microdescriptors(self):
"""
Checks a small microdescriptor file with known contents.
"""
descriptor_path = get_resource('cached-microdescs')
with open(descriptor_path, 'rb') as descriptor_file:
descriptors = stem.descriptor.parse_file(descriptor_file, 'microdescriptor 1.0')
router = next(descriptors)
self.assertEqual(FIRST_ONION_KEY, router.onion_key)
self.assertEqual(None, router.ntor_onion_key)
self.assertEqual([], router.or_addresses)
self.assertEqual([], router.family)
self.assertEqual(stem.exit_policy.MicroExitPolicy('reject 1-65535'), router.exit_policy)
self.assertEqual({b'@last-listed': b'2013-02-24 00:18:36'}, router.get_annotations())
self.assertEqual([b'@last-listed 2013-02-24 00:18:36'], router.get_annotation_lines())
router = next(descriptors)
self.assertEqual(SECOND_ONION_KEY, router.onion_key)
self.assertEqual(str_type('r5572HzD+PMPBbXlZwBhsm6YEbxnYgis8vhZ1jmdI2k='), router.ntor_onion_key)
self.assertEqual([], router.or_addresses)
self.assertEqual(['$6141629FA0D15A6AEAEF3A1BEB76E64C767B3174'], router.family)
self.assertEqual(stem.exit_policy.MicroExitPolicy('reject 1-65535'), router.exit_policy)
self.assertEqual({b'@last-listed': b'2013-02-24 00:18:37'}, router.get_annotations())
self.assertEqual([b'@last-listed 2013-02-24 00:18:37'], router.get_annotation_lines())
router = next(descriptors)
self.assertEqual(THIRD_ONION_KEY, router.onion_key)
self.assertEqual(None, router.ntor_onion_key)
self.assertEqual([(str_type('2001:6b0:7:125::242'), 9001, True)], router.or_addresses)
self.assertEqual([], router.family)
self.assertEqual(stem.exit_policy.MicroExitPolicy('accept 80,443'), router.exit_policy)
self.assertEqual({b'@last-listed': b'2013-02-24 00:18:36'}, router.get_annotations())
self.assertEqual([b'@last-listed 2013-02-24 00:18:36'], router.get_annotation_lines())
def test_minimal_microdescriptor(self):
"""
Basic sanity check that we can parse a microdescriptor with minimal
attributes.
"""
desc = Microdescriptor.create()
self.assertTrue(stem.descriptor.CRYPTO_BLOB in desc.onion_key)
self.assertEqual(None, desc.ntor_onion_key)
self.assertEqual([], desc.or_addresses)
self.assertEqual([], desc.family)
self.assertEqual(stem.exit_policy.MicroExitPolicy('reject 1-65535'), desc.exit_policy)
self.assertEqual(None, desc.exit_policy_v6)
self.assertEqual({}, desc.identifiers)
self.assertEqual(None, desc.identifier_type)
self.assertEqual(None, desc.identifier)
self.assertEqual({}, desc.protocols)
self.assertEqual([], desc.get_unrecognized_lines())
def test_unrecognized_line(self):
"""
Includes unrecognized content in the descriptor.
"""
desc = Microdescriptor.create({'pepperjack': 'is oh so tasty!'})
self.assertEqual(['pepperjack is oh so tasty!'], desc.get_unrecognized_lines())
def test_proceeding_line(self):
"""
Includes a line prior to the 'onion-key' entry.
"""
desc_text = b'family Amunet1\n' + Microdescriptor.content()
self.assertRaises(ValueError, Microdescriptor, desc_text, True)
desc = Microdescriptor(desc_text, validate = False)
self.assertEqual(['Amunet1'], desc.family)
def test_a_line(self):
"""
Sanity test with both an IPv4 and IPv6 address.
"""
desc_text = Microdescriptor.content()
desc_text += b'\na 10.45.227.253:9001'
desc_text += b'\na [fd9f:2e19:3bcf::02:9970]:9001'
expected = [
('10.45.227.253', 9001, False),
('fd9f:2e19:3bcf::02:9970', 9001, True),
]
desc = Microdescriptor(desc_text)
self.assertEqual(expected, desc.or_addresses)
def test_family(self):
"""
Check the family line.
"""
desc = Microdescriptor.create({'family': 'Amunet1 Amunet2 Amunet3'})
self.assertEqual(['Amunet1', 'Amunet2', 'Amunet3'], desc.family)
# try multiple family lines
desc_text = Microdescriptor.content()
desc_text += b'\nfamily Amunet1'
desc_text += b'\nfamily Amunet2'
self.assertRaises(ValueError, Microdescriptor, desc_text, True)
# family entries will overwrite each other
desc = Microdescriptor(desc_text, validate = False)
self.assertEqual(1, len(desc.family))
def test_exit_policy(self):
"""
Basic check for 'p' lines. The router status entries contain an identical
field so we're not investing much effort here.
"""
desc = Microdescriptor.create({'p': 'accept 80,110,143,443'})
self.assertEqual(stem.exit_policy.MicroExitPolicy('accept 80,110,143,443'), desc.exit_policy)
def test_protocols(self):
"""
Basic check for 'pr' lines.
"""
desc = Microdescriptor.create({'pr': 'Cons=1 Desc=1 DirCache=1 HSDir=1 HSIntro=3 HSRend=1 Link=1-4 LinkAuth=1 Microdesc=1 Relay=1-2'})
self.assertEqual(10, len(desc.protocols))
def test_identifier(self):
"""
Basic check for 'id' lines.
"""
desc = Microdescriptor.create({'id': 'rsa1024 Cd47okjCHD83YGzThGBDptXs9Z4'})
self.assertEqual({'rsa1024': 'Cd47okjCHD83YGzThGBDptXs9Z4'}, desc.identifiers)
self.assertEqual('rsa1024', desc.identifier_type)
self.assertEqual('Cd47okjCHD83YGzThGBDptXs9Z4', desc.identifier)
# check when there's multiple key types
desc_text = b'\n'.join((
Microdescriptor.content(),
b'id rsa1024 Cd47okjCHD83YGzThGBDptXs9Z4',
b'id ed25519 50f6ddbecdc848dcc6b818b14d1',
))
desc = Microdescriptor(desc_text, validate = True)
self.assertEqual({'rsa1024': 'Cd47okjCHD83YGzThGBDptXs9Z4', 'ed25519': '50f6ddbecdc848dcc6b818b14d1'}, desc.identifiers)
self.assertEqual('ed25519', desc.identifier_type)
self.assertEqual('50f6ddbecdc848dcc6b818b14d1', desc.identifier)
# check when there's conflicting keys
desc_text = b'\n'.join((
Microdescriptor.content(),
b'id rsa1024 Cd47okjCHD83YGzThGBDptXs9Z4',
b'id rsa1024 50f6ddbecdc848dcc6b818b14d1',
))
desc = Microdescriptor(desc_text)
self.assertEqual({}, desc.identifiers)
exc_msg = "There can only be one 'id' line per a key type, but 'rsa1024' appeared multiple times"
self.assertRaisesRegexp(ValueError, exc_msg, Microdescriptor, desc_text, validate = True)
|
lgpl-3.0
|
kaoscoach/crits
|
crits/indicators/urls.py
|
7
|
1184
|
from django.conf.urls import patterns
urlpatterns = patterns('crits.indicators.views',
(r'^details/(?P<indicator_id>\w+)/$', 'indicator'),
(r'^search/$', 'indicator_search'),
(r'^upload/$', 'upload_indicator'),
(r'^add_action/$', 'new_indicator_action'),
(r'^remove/(?P<_id>[\S ]+)$', 'remove_indicator'),
(r'^action/remove/(?P<indicator_id>\w+)/$', 'remove_action'),
(r'^activity/remove/(?P<indicator_id>\w+)/$', 'remove_activity'),
(r'^actions/(?P<method>\S+)/(?P<indicator_id>\w+)/$', 'add_update_action'),
(r'^activity/(?P<method>\S+)/(?P<indicator_id>\w+)/$', 'add_update_activity'),
(r'^ci/update/(?P<indicator_id>\w+)/(?P<ci_type>\S+)/$', 'update_ci'),
(r'^type/update/(?P<indicator_id>\w+)/$', 'update_indicator_type'),
(r'^threat_type/update/(?P<indicator_id>\w+)/$', 'update_indicator_threat_type'),
(r'^attack_type/update/(?P<indicator_id>\w+)/$', 'update_indicator_attack_type'),
(r'^and_ip/$', 'indicator_and_ip'),
(r'^from_obj/$', 'indicator_from_tlo'),
(r'^list/$', 'indicators_listing'),
(r'^list/(?P<option>\S+)/$', 'indicators_listing'),
(r'^get_dropdown/$', 'get_indicator_type_dropdown'),
)
|
mit
|
stshine/servo
|
components/script/dom/bindings/codegen/parser/tests/test_duplicate_qualifiers.py
|
241
|
1893
|
def WebIDLTest(parser, harness):
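    # Each interface below repeats a special-operation qualifier (getter/setter/creator/deleter); parsing must throw in every case.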
threw = False
try:
parser.parse("""
interface DuplicateQualifiers1 {
getter getter byte foo(unsigned long index);
};
""")
results = parser.finish()
except:
threw = True
harness.ok(threw, "Should have thrown.")
threw = False
try:
parser.parse("""
interface DuplicateQualifiers2 {
setter setter byte foo(unsigned long index, byte value);
};
""")
results = parser.finish()
except:
threw = True
harness.ok(threw, "Should have thrown.")
threw = False
try:
parser.parse("""
interface DuplicateQualifiers3 {
creator creator byte foo(unsigned long index, byte value);
};
""")
results = parser.finish()
except:
threw = True
harness.ok(threw, "Should have thrown.")
threw = False
try:
parser.parse("""
interface DuplicateQualifiers4 {
deleter deleter byte foo(unsigned long index);
};
""")
results = parser.finish()
except:
threw = True
harness.ok(threw, "Should have thrown.")
threw = False
try:
parser.parse("""
interface DuplicateQualifiers5 {
getter deleter getter byte foo(unsigned long index);
};
""")
results = parser.finish()
except:
threw = True
harness.ok(threw, "Should have thrown.")
threw = False
try:
results = parser.parse("""
interface DuplicateQualifiers6 {
creator setter creator byte foo(unsigned long index, byte value);
};
""")
results = parser.finish()
except:
threw = True
harness.ok(threw, "Should have thrown.")
|
mpl-2.0
|
amaas-fintech/amaas-core-sdk-python
|
tests/unit/transactions/transaction.py
|
3
|
8017
|
from __future__ import absolute_import, division, print_function, unicode_literals
from amaasutils.random_utils import random_string
import copy
import json
import unittest
from amaascore.exceptions import TransactionNeedsSaving
from amaascore.transactions.children import Link, Party
from amaascore.transactions.transaction import Transaction
from amaascore.tools.generate_transaction import generate_transaction, REFERENCE_TYPES
class TransactionTest(unittest.TestCase):
def setUp(self):
self.longMessage = True # Print complete error message on failure
self.transaction = generate_transaction(net_affecting_charges=True)
self.transaction_id = self.transaction.transaction_id
def tearDown(self):
pass
def test_Transaction(self):
self.assertEqual(type(self.transaction), Transaction)
def test_ChargesNetEffect(self):
"""
        Long-winded approach, since the shorter sum-based approach is already used in the Transaction class
:return:
"""
total = 0
for charge in self.transaction.charges.values():
if charge.net_affecting:
total += charge.charge_value
self.assertEqual(self.transaction.charges_net_effect(), total)
def test_TransactionNetSettlement(self):
"""
        Long-winded approach, since the shorter sum-based approach is already used in the Transaction class
:return:
"""
total = 0
for charge in self.transaction.charges.values():
if charge.net_affecting:
total += charge.charge_value
self.assertEqual(self.transaction.net_settlement, self.transaction.gross_settlement - total)
def test_TransactionToDict(self):
transaction_dict = self.transaction.__dict__
self.assertEqual(type(transaction_dict), dict)
self.assertEqual(transaction_dict.get('transaction_id'), self.transaction_id)
self.assertEqual(type(transaction_dict.get('charges')), dict)
def test_TransactionToJSON(self):
transaction_json = self.transaction.to_json()
self.assertEqual(transaction_json.get('transaction_id'), self.transaction_id)
# If transaction_json is valid JSON, this will run without serialisation errors
json_transaction_id = json.loads(json.dumps(transaction_json, ensure_ascii=False)).get('transaction_id')
self.assertEqual(json_transaction_id, self.transaction_id)
def test_TransactionPostings(self):
with self.assertRaises(TransactionNeedsSaving):
self.transaction.postings
# TODO - Save the transaction, and check that the postings are now present
def test_TransactionEquality(self):
transaction2 = copy.deepcopy(self.transaction)
transaction3 = copy.deepcopy(self.transaction)
transaction3.transaction_status = 'Cancelled'
self.assertEqual(self.transaction, transaction2)
self.assertEqual(len({self.transaction, transaction2}), 1)
self.assertEqual(len({self.transaction, transaction3}), 2)
self.assertNotEqual(self.transaction, transaction3)
def test_References(self):
self.assertEqual(len(self.transaction.references), len(REFERENCE_TYPES) + 1,
"AMaaS Reference + the ones added by the transaction generator")
self.assertEqual(self.transaction.references.get('AMaaS').reference_value, self.transaction.transaction_id)
def test_MultipleLink(self):
links = self.transaction.links.get('Multiple')
self.assertEqual(len(links), 3) # The test script inserts 3 links
def test_UpsertLinkList(self):
links = self.transaction.links.get('Multiple')
random_id = random_string(8)
links.add(Link(linked_transaction_id=random_id))
self.transaction.upsert_link_set('Multiple', links)
links = self.transaction.links.get('Multiple')
self.assertEqual(len(links), 4) # The test script inserts 3 links
random_id_link = [link for link in links if link.linked_transaction_id == random_id]
self.assertEqual(len(random_id_link), 1)
def test_UpsertLinkListEmptyValue(self):
self.transaction.upsert_link_set('Single', None)
self.assertEqual(self.transaction.links.get('Single', 'DUMMY'), 'DUMMY')
# Try to upsert a link_list which isn't present
self.transaction.upsert_link_set('TEST', None)
def test_AddLink(self):
# Add to a Single item
random_id = random_string(8)
self.transaction.add_link(link_type='Single', linked_transaction_id=random_id)
links = self.transaction.links.get('Single')
self.assertEqual(len(links), 2) # The test script inserts 1 link
random_id_link = [link for link in links if link.linked_transaction_id == random_id]
self.assertEqual(len(random_id_link), 1)
# Add to a multiple item
self.transaction.add_link(link_type='Multiple', linked_transaction_id=random_id)
links = self.transaction.links.get('Multiple')
self.assertEqual(len(links), 4) # The test script inserts 3 links
random_id_link = [link for link in links if link.linked_transaction_id == random_id]
self.assertEqual(len(random_id_link), 1)
# Add brand new item
self.transaction.add_link(link_type='TEST', linked_transaction_id=random_id)
link = self.transaction.links.get('TEST')
        self.assertEqual(type(link), Link)  # A brand new link type holds a single Link rather than a set
self.assertEqual(link.linked_transaction_id, random_id)
def test_RemoveLink(self):
# Remove a single link
single_id = self.transaction.links.get('Single').linked_transaction_id
self.transaction.remove_link(link_type='Single', linked_transaction_id=single_id)
self.assertEqual(self.transaction.links.get('Single', 'DUMMY'), 'DUMMY')
# Remove a multiple link
multiple_id = next(iter(self.transaction.links.get('Multiple'))).linked_transaction_id
self.transaction.remove_link(link_type='Multiple', linked_transaction_id=multiple_id)
multiple = self.transaction.links.get('Multiple')
self.assertEqual(len(multiple), 2) # Test originally added 3
multiple_id_link = [link for link in multiple if link.linked_transaction_id == multiple_id]
self.assertEqual(len(multiple_id_link), 0)
# Remove a link_type that doesn't exist
with self.assertRaisesRegexp(KeyError, 'Cannot remove link'):
self.transaction.remove_link('TEST', '1234')
# Remove a link that doesn't exist
with self.assertRaisesRegexp(KeyError, 'Cannot remove link'):
self.transaction.remove_link('Multiple', '1234')
def test_InvalidTransactionType(self):
with self.assertRaisesRegexp(ValueError, 'Invalid transaction type Invalid'):
transaction = generate_transaction(transaction_type='Invalid')
def test_InvalidTransactionAction(self):
with self.assertRaisesRegexp(ValueError, 'Invalid transaction action Invalid'):
transaction = generate_transaction(transaction_action='Invalid')
def test_InvalidTransactionStatus(self):
with self.assertRaisesRegexp(ValueError, 'Invalid transaction status Invalid'):
transaction = generate_transaction(transaction_status='Invalid')
def test_ImmutableDicts(self):
attr = self.transaction.to_dict()
attr.pop('parties') # Remove parties so that the default constructor is used
transaction = Transaction(**attr)
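        # Mutating one instance's parties must not leak into a second instance built from the same attributes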
transaction.parties.update({'TEST': Party(party_id=random_string(8))})
self.assertEqual(len(transaction.parties), 1)
transaction2 = Transaction(**attr)
self.assertEqual(len(transaction2.parties), 0)
def test_InvalidCurrency(self):
with self.assertRaisesRegexp(ValueError, 'Invalid currency Invalid'):
self.transaction.transaction_currency = 'Invalid'
if __name__ == '__main__':
unittest.main()
|
apache-2.0
|
jagguli/flexmock
|
tests/flexmock_modern_test.py
|
2
|
1451
|
import flexmock
import sys
import unittest
class ModernClass(object):
"""Contains features only available in 2.6 and above."""
def test_context_manager_on_instance(self):
class CM(object):
def __enter__(self): pass
def __exit__(self, *_): pass
cm = CM()
flexmock(cm).should_call('__enter__').once
flexmock(cm).should_call('__exit__').once
with cm: pass
self._tear_down()
def test_context_manager_on_class(self):
class CM(object):
def __enter__(self): pass
def __exit__(self, *_): pass
cm = CM()
flexmock(CM).should_receive('__enter__').once
flexmock(CM).should_receive('__exit__').once
with cm: pass
self._tear_down()
def test_flexmock_should_support_with(self):
foo = flexmock()
with foo as mock:
mock.should_receive('bar').and_return('baz')
assert foo.bar() == 'baz'
def test_builtin_open(self):
if sys.version_info < (3, 0):
mock = flexmock(sys.modules['__builtin__'])
else:
mock = flexmock(sys.modules['builtins'])
fake_fd = flexmock(read=lambda: 'some data')
mock.should_receive('open').once.with_args('file_name').and_return(fake_fd)
with open('file_name') as f:
data = f.read()
self.assertEqual('some data', data)
class TestFlexmockUnittestModern(ModernClass, unittest.TestCase):
def _tear_down(self):
return unittest.TestCase.tearDown(self)
if __name__ == '__main__':
unittest.main()
|
bsd-2-clause
|
britcey/ansible
|
lib/ansible/modules/cloud/amazon/ecs_ecr.py
|
51
|
11804
|
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ecs_ecr
version_added: "2.3"
short_description: Manage Elastic Container Registry repositories
description:
- Manage Elastic Container Registry repositories
options:
name:
description:
- the name of the repository
required: true
registry_id:
description:
- AWS account id associated with the registry.
- If not specified, the default registry is assumed.
required: false
policy:
description:
- JSON or dict that represents the new policy
required: false
force_set_policy:
description:
- if no, prevents setting a policy that would prevent you from
setting another policy in the future.
required: false
default: false
delete_policy:
description:
- if yes, remove the policy from the repository
required: false
default: false
state:
description:
- create or destroy the repository
required: false
choices: [present, absent]
default: 'present'
author:
- David M. Lee (@leedm777)
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# If the repository does not exist, it is created. If it does exist, it would
# not affect any policies already on it.
- name: ecr-repo
ecs_ecr: name=super/cool
- name: destroy-ecr-repo
ecs_ecr: name=old/busted state=absent
- name: Cross account ecr-repo
ecs_ecr: registry_id=999999999999 name=cross/account
- name: set-policy as object
ecs_ecr:
name: needs-policy-object
policy:
Version: '2008-10-17'
Statement:
- Sid: read-only
Effect: Allow
Principal:
AWS: '{{ read_only_arn }}'
Action:
- ecr:GetDownloadUrlForLayer
- ecr:BatchGetImage
- ecr:BatchCheckLayerAvailability
- name: set-policy as string
ecs_ecr:
name: needs-policy-string
policy: "{{ lookup('template', 'policy.json.j2') }}"
- name: delete-policy
ecs_ecr:
name: needs-no-policy
delete_policy: yes
'''
RETURN = '''
state:
type: string
description: The asserted state of the repository (present, absent)
returned: always
created:
type: boolean
description: If true, the repository was created
returned: always
name:
type: string
description: The name of the repository
returned: "when state == 'absent'"
repository:
type: dict
description: The created or updated repository
returned: "when state == 'present'"
sample:
createdAt: '2017-01-17T08:41:32-06:00'
registryId: '999999999999'
repositoryArn: arn:aws:ecr:us-east-1:999999999999:repository/ecr-test-1484664090
repositoryName: ecr-test-1484664090
repositoryUri: 999999999999.dkr.ecr.us-east-1.amazonaws.com/ecr-test-1484664090
'''
import json
import time
import inspect
import traceback
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
try:
import boto3
from botocore.exceptions import ClientError
HAS_BOTO3 = True
except ImportError:
HAS_BOTO3 = False
def boto_exception(err):
'''boto error message handler'''
if hasattr(err, 'error_message'):
error = err.error_message
elif hasattr(err, 'message'):
error = err.message
else:
error = '%s: %s' % (Exception, err)
return error
def build_kwargs(registry_id):
"""
Builds a kwargs dict which may contain the optional registryId.
:param registry_id: Optional string containing the registryId.
:return: kwargs dict with registryId, if given
"""
if not registry_id:
return dict()
else:
return dict(registryId=registry_id)
class EcsEcr:
def __init__(self, module):
region, ec2_url, aws_connect_kwargs = \
get_aws_connection_info(module, boto3=True)
self.ecr = boto3_conn(module, conn_type='client',
resource='ecr', region=region,
endpoint=ec2_url, **aws_connect_kwargs)
self.check_mode = module.check_mode
self.changed = False
self.skipped = False
def get_repository(self, registry_id, name):
try:
res = self.ecr.describe_repositories(
repositoryNames=[name], **build_kwargs(registry_id))
repos = res.get('repositories')
return repos and repos[0]
except ClientError as err:
code = err.response['Error'].get('Code', 'Unknown')
if code == 'RepositoryNotFoundException':
return None
raise
def get_repository_policy(self, registry_id, name):
try:
res = self.ecr.get_repository_policy(
repositoryName=name, **build_kwargs(registry_id))
text = res.get('policyText')
return text and json.loads(text)
except ClientError as err:
code = err.response['Error'].get('Code', 'Unknown')
if code == 'RepositoryPolicyNotFoundException':
return None
raise
def create_repository(self, registry_id, name):
if not self.check_mode:
repo = self.ecr.create_repository(
repositoryName=name, **build_kwargs(registry_id)).get(
'repository')
self.changed = True
return repo
else:
self.skipped = True
return dict(repositoryName=name)
def set_repository_policy(self, registry_id, name, policy_text, force):
if not self.check_mode:
policy = self.ecr.set_repository_policy(
repositoryName=name,
policyText=policy_text,
force=force,
**build_kwargs(registry_id))
self.changed = True
return policy
else:
self.skipped = True
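            # Even in check mode, fail if the target repository does not exist so the dry run surfaces the error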
if self.get_repository(registry_id, name) is None:
printable = name
if registry_id:
printable = '{}:{}'.format(registry_id, name)
raise Exception(
'could not find repository {}'.format(printable))
return
def delete_repository(self, registry_id, name):
if not self.check_mode:
repo = self.ecr.delete_repository(
repositoryName=name, **build_kwargs(registry_id))
self.changed = True
return repo
else:
repo = self.get_repository(registry_id, name)
if repo:
self.skipped = True
return repo
return None
def delete_repository_policy(self, registry_id, name):
if not self.check_mode:
policy = self.ecr.delete_repository_policy(
repositoryName=name, **build_kwargs(registry_id))
self.changed = True
return policy
else:
policy = self.get_repository_policy(registry_id, name)
if policy:
self.skipped = True
return policy
return None
def run(ecr, params, verbosity):
# type: (EcsEcr, dict, int) -> Tuple[bool, dict]
result = {}
try:
name = params['name']
state = params['state']
policy_text = params['policy']
delete_policy = params['delete_policy']
registry_id = params['registry_id']
force_set_policy = params['force_set_policy']
# If a policy was given, parse it
policy = policy_text and json.loads(policy_text)
result['state'] = state
result['created'] = False
repo = ecr.get_repository(registry_id, name)
if state == 'present':
result['created'] = False
if not repo:
repo = ecr.create_repository(registry_id, name)
result['changed'] = True
result['created'] = True
result['repository'] = repo
if delete_policy:
original_policy = ecr.get_repository_policy(registry_id, name)
if verbosity >= 2:
result['policy'] = None
if verbosity >= 3:
result['original_policy'] = original_policy
if original_policy:
ecr.delete_repository_policy(registry_id, name)
result['changed'] = True
elif policy_text is not None:
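                # Canonicalise both the desired and existing policies so key-ordering differences do not register as changes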
try:
policy = sort_json_policy_dict(policy)
if verbosity >= 2:
result['policy'] = policy
original_policy = ecr.get_repository_policy(
registry_id, name)
if original_policy:
original_policy = sort_json_policy_dict(original_policy)
if verbosity >= 3:
result['original_policy'] = original_policy
if original_policy != policy:
ecr.set_repository_policy(
registry_id, name, policy_text, force_set_policy)
result['changed'] = True
except:
# Some failure w/ the policy. It's helpful to know what the
# policy is.
result['policy'] = policy_text
raise
elif state == 'absent':
result['name'] = name
if repo:
ecr.delete_repository(registry_id, name)
result['changed'] = True
except Exception as err:
msg = str(err)
if isinstance(err, ClientError):
msg = boto_exception(err)
result['msg'] = msg
result['exception'] = traceback.format_exc()
return False, result
if ecr.skipped:
result['skipped'] = True
if ecr.changed:
result['changed'] = True
return True, result
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
name=dict(required=True),
registry_id=dict(required=False),
state=dict(required=False, choices=['present', 'absent'],
default='present'),
force_set_policy=dict(required=False, type='bool', default=False),
policy=dict(required=False, type='json'),
delete_policy=dict(required=False, type='bool')))
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True,
mutually_exclusive=[
['policy', 'delete_policy']])
if not HAS_BOTO3:
module.fail_json(msg='boto3 required for this module')
ecr = EcsEcr(module)
passed, result = run(ecr, module.params, module._verbosity)
if passed:
module.exit_json(**result)
else:
module.fail_json(**result)
if __name__ == '__main__':
main()
|
gpl-3.0
|
sgongar/Herschel-PACS-Toolbox-Red-Leak
|
massive/massive_aux.py
|
1
|
2557
|
from time import time
from datetime import datetime
from csv import reader
from string import uppercase
# coding = utf-8
#
# This file is part of Herschel Common Science System (HCSS).
# Copyright 2001-2013 Herschel Science Ground Segment Consortium
#
# HCSS is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# HCSS is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General
# Public License along with HCSS.
# If not, see <http://www.gnu.org/licenses/>.
#
def get_formatted_time():
""" Return formatted time function
@return time_hr: a time string formatted
"""
time_hr = datetime.fromtimestamp(time())
time_hr = str(time_hr)
return time_hr
def save_message(message, mode, file):
""" save a message to a file
"""
message_file = open(file, mode)
message_file.write(message)
message_file.close()
return True
def save_exception(exception):
""" Save expection to file
"""
print exception
def create_dictionary(obs_list):
""" Create dictionary from observations list
    This function creates a dictionary whose keys are the
    observations and whose values are strings of the form 'SEDXXX'
@param obs_list: a list which contains observation ids
@return obs_dict: a dictionary which contains the observations
"""
obs_dict = {}
i, j, k, w = (0, )*4
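    # j, k and w form a little-endian base-26 counter over the uppercase alphabet, producing suffixes AAA, BAA, ..., ZAA, ABA, ...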
for i in range(len(obs_list)):
if j == int(len(list(uppercase))):
j = 0
k += 1
if k == int(len(list(uppercase))):
k = 0
w += 1
obs_dict[obs_list[i]] = 'SED' + list(uppercase)[j] +\
list(uppercase)[k] + list(uppercase)[w]
j = j + 1
return obs_dict
def populate_obs(obs_file):
""" Populate list from csv file
@param obs_file: location of csv file to be read
@return obs_list: a list which contains file observation ids
"""
obs_list = []
with open(str(obs_file), 'rb') as f:
row_reader = reader(f, delimiter=',')
for row in row_reader:
obs_list.append(row[1])
return obs_list
|
lgpl-3.0
|
sbtlaarzc/vispy
|
examples/benchmark/scene_test_2.py
|
17
|
6029
|
# -*- coding: utf-8 -*-
# vispy: testskip
# -----------------------------------------------------------------------------
# Copyright (c) 2015, Vispy Development Team.
# Distributed under the (new) BSD License. See LICENSE.txt for more info.
# -----------------------------------------------------------------------------
"""
Compare an optimal plot grid implementation to the same functionality
provided by scenegraph.
Use --vispy-cprofile to see an overview of time spent in all functions.
Use util.profiler and --vispy-profile=ClassName.method_name for more directed
profiling measurements.
"""
from __future__ import division
import numpy as np
from vispy import gloo, app, scene, visuals
from vispy.util.profiler import Profiler
class GridCanvas(app.Canvas):
def __init__(self, cells, **kwargs):
m, n = (10, 10)
self.grid_size = (m, n)
self.cells = cells
super(GridCanvas, self).__init__(keys='interactive',
show=True, **kwargs)
def on_initialize(self, event):
self.context.set_state(clear_color='black', blend=True,
blend_func=('src_alpha', 'one_minus_src_alpha'))
def on_mouse_move(self, event):
if event.is_dragging and not event.modifiers:
dx = (event.pos - event.last_event.pos) * [1, -1]
i, j = event.press_event.pos / self.size
m, n = len(self.cells), len(self.cells[0])
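            # Window y grows downward while the grid viewports are laid out bottom-up, so flip the row index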
cell = self.cells[int(i*m)][n - 1 - int(j*n)]
if event.press_event.button == 1:
offset = (np.array(cell.offset) +
(dx / (np.array(self.size) / [m, n])) *
(2 / np.array(cell.scale)))
cell.set_transform(offset, cell.scale)
else:
cell.set_transform(cell.offset, cell.scale * 1.05 ** dx)
self.update()
def on_draw(self, event):
prof = Profiler() # noqa
self.context.clear()
M = len(self.cells)
N = len(self.cells[0])
w, h = self.size
for i in range(M):
for j in range(N):
self.context.set_viewport(w*i/M, h*j/N, w/M, h/N)
self.cells[i][j].draw()
vert = """
attribute vec2 pos;
uniform vec2 offset;
uniform vec2 scale;
void main() {
gl_Position = vec4((pos + offset) * scale, 0, 1);
}
"""
frag = """
void main() {
gl_FragColor = vec4(1, 1, 1, 0.5);
}
"""
class Line(object):
def __init__(self, data, offset, scale):
self.data = gloo.VertexBuffer(data)
self.program = gloo.Program(vert, frag)
self.program['pos'] = self.data
self.set_transform(offset, scale)
def set_transform(self, offset, scale):
self.offset = offset
self.scale = scale
self.program['offset'] = self.offset
self.program['scale'] = self.scale
def draw(self):
self.program.draw('line_strip')
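# Scale factors mapping the demo data (x in [0, 100] with a -50 offset, y roughly in [-5, 5]) into clip space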
scales = np.array((1.9 / 100., 2. / 10.))
class VisualCanvas(app.Canvas):
def __init__(self, vis, **kwargs):
super(VisualCanvas, self).__init__(keys='interactive',
show=True, **kwargs)
m, n = (10, 10)
self.grid_size = (m, n)
self.visuals = vis
def on_initialize(self, event):
self.context.set_state(clear_color='black', blend=True,
blend_func=('src_alpha', 'one_minus_src_alpha'))
def on_mouse_move(self, event):
if event.is_dragging and not event.modifiers:
dx = np.array(event.pos - event.last_event.pos)
x, y = event.press_event.pos / self.size
m, n = self.grid_size
i, j = int(x*m), n - 1 - int(y*n)
v = self.visuals[i][j]
tr = v.transform
if event.press_event.button == 1:
tr.translate = np.array(tr.translate)[:2] + \
dx * scales * (1, -1)
else:
tr.scale = tr.scale[:2] * 1.05 ** (dx * (1, -1))
self.update()
def on_draw(self, event):
prof = Profiler() # noqa
self.context.clear()
M, N = self.grid_size
w, h = self.size
for i in range(M):
for j in range(N):
self.context.set_viewport(w*i/M, h*j/N, w/M, h/N)
self.visuals[i][j].draw()
if __name__ == '__main__':
M, N = (10, 10)
data = np.empty((10000, 2), dtype=np.float32)
data[:, 0] = np.linspace(0, 100, data.shape[0])
data[:, 1] = np.random.normal(size=data.shape[0])
# Optimized version
cells = []
for i in range(M):
row = []
cells.append(row)
for j in range(N):
row.append(Line(data, offset=(-50, 0), scale=scales))
gcanvas = GridCanvas(cells, position=(400, 300), size=(800, 600),
title="GridCanvas")
# Visual version
vlines = []
for i in range(M):
row = []
vlines.append(row)
for j in range(N):
v = visuals.LineVisual(pos=data, color='w', method='gl')
v.transform = visuals.transforms.STTransform(
translate=(-1, 0), scale=scales)
row.append(v)
vcanvas = VisualCanvas(vlines, position=(400, 300), size=(800, 600),
title="VisualCanvas")
# Scenegraph version
scanvas = scene.SceneCanvas(show=True, keys='interactive',
title="SceneCanvas")
scanvas.size = 800, 600
grid = scanvas.central_widget.add_grid(margin=0)
lines = []
for i in range(10):
lines.append([])
for j in range(10):
vb = grid.add_view(camera='panzoom', row=i, col=j)
vb.camera.set_range([0, 100], [-5, 5], margin=0)
line = scene.visuals.Line(pos=data, color='w', method='gl')
vb.add(line)
scanvas.show()
import sys
if sys.flags.interactive != 1:
app.run()
|
bsd-3-clause
|
gsuresh92/mongo-connector
|
mongo_connector/command_helper.py
|
33
|
2692
|
# Copyright 2013-2014 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Preprocesses the oplog command entries.
"""
import logging
import mongo_connector.errors
LOG = logging.getLogger(__name__)
class CommandHelper(object):
def __init__(self, namespace_set=[], dest_mapping={}):
self.namespace_set = namespace_set
self.dest_mapping = dest_mapping
# Create a db to db mapping from the namespace mapping.
db_pairs = set((ns.split('.')[0],
self.map_namespace(ns).split('.')[0])
for ns in self.namespace_set)
targets = set()
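        # Reject mappings where two different source databases have collections mapped into the same target database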
for _, dest in db_pairs:
if dest in targets:
dbs = [src2 for src2, dest2 in db_pairs
if dest == dest2]
raise mongo_connector.errors.MongoConnectorError(
"Database mapping is not one-to-one."
" %s %s have collections mapped to %s"
% (", ".join(dbs),
"both" if len(dbs) == 2 else "all",
dest))
else:
targets.add(dest)
self.db_mapping = {}
for src, dest in db_pairs:
arr = self.db_mapping.get(src, [])
arr.append(dest)
self.db_mapping[src] = arr
# Applies the namespace mapping to a database.
# Individual collections in a database can be mapped to
# different target databases, so map_db can return multiple results.
def map_db(self, db):
if self.db_mapping:
return self.db_mapping.get(db, [])
else:
return [db]
# Applies the namespace mapping to a "db.collection" string
def map_namespace(self, ns):
if not self.namespace_set:
return ns
elif ns not in self.namespace_set:
return None
else:
return self.dest_mapping.get(ns, ns)
# Applies the namespace mapping to a db and collection
def map_collection(self, db, coll):
ns = self.map_namespace(db + '.' + coll)
if ns:
return tuple(ns.split('.', 1))
else:
return None, None
|
apache-2.0
|
jslvtr/FriendFinderBackend
|
src/db/database.py
|
1
|
1287
|
__author__ = 'jslvtr'
import pymongo
import pymongo.errors
class Database(object):
def __init__(self, uri):
        self.client = pymongo.MongoClient(uri)
        self.db = self.client.get_default_database()
self.collection = None
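        # Callers are expected to set `collection` before performing any operation; otherwise InvalidOperation is raised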
def insert(self, data):
if self.collection is not None:
self.collection.insert(data)
else:
raise pymongo.errors.InvalidOperation
def remove(self, data):
if self.collection is not None:
self.collection.remove(data)
else:
raise pymongo.errors.InvalidOperation
def update(self, query, data):
if self.collection is not None:
self.collection.update(query, data)
else:
raise pymongo.errors.InvalidOperation
def find(self, query=None):
if self.collection is not None:
if query is None:
return self.collection.find()
else:
return self.collection.find(query)
else:
raise pymongo.errors.InvalidOperation
def find_one(self, query):
if self.collection is not None:
return self.collection.find_one(query)
else:
raise pymongo.errors.InvalidOperation
def close(self):
        self.client.close()
|
mit
|
hnakamur/django
|
tests/test_client_regress/urls.py
|
352
|
2521
|
from django.conf.urls import include, url
from django.views.generic import RedirectView
from . import views
urlpatterns = [
url(r'', include('test_client.urls')),
url(r'^no_template_view/$', views.no_template_view),
url(r'^staff_only/$', views.staff_only_view),
url(r'^get_view/$', views.get_view),
url(r'^request_data/$', views.request_data),
url(r'^request_data_extended/$', views.request_data, {'template': 'extended.html', 'data': 'bacon'}),
url(r'^arg_view/(?P<name>.+)/$', views.view_with_argument, name='arg_view'),
url(r'^nested_view/$', views.nested_view, name='nested_view'),
url(r'^login_protected_redirect_view/$', views.login_protected_redirect_view),
url(r'^redirects/$', RedirectView.as_view(url='/redirects/further/')),
url(r'^redirects/further/$', RedirectView.as_view(url='/redirects/further/more/')),
url(r'^redirects/further/more/$', RedirectView.as_view(url='/no_template_view/')),
url(r'^redirect_to_non_existent_view/$', RedirectView.as_view(url='/non_existent_view/')),
url(r'^redirect_to_non_existent_view2/$', RedirectView.as_view(url='/redirect_to_non_existent_view/')),
url(r'^redirect_to_self/$', RedirectView.as_view(url='/redirect_to_self/')),
url(r'^redirect_to_self_with_changing_query_view/$', views.redirect_to_self_with_changing_query_view),
url(r'^circular_redirect_1/$', RedirectView.as_view(url='/circular_redirect_2/')),
url(r'^circular_redirect_2/$', RedirectView.as_view(url='/circular_redirect_3/')),
url(r'^circular_redirect_3/$', RedirectView.as_view(url='/circular_redirect_1/')),
url(r'^redirect_other_host/$', RedirectView.as_view(url='https://otherserver:8443/no_template_view/')),
url(r'^set_session/$', views.set_session_view),
url(r'^check_session/$', views.check_session_view),
url(r'^request_methods/$', views.request_methods_view),
url(r'^check_unicode/$', views.return_unicode),
url(r'^check_binary/$', views.return_undecodable_binary),
url(r'^json_response/$', views.return_json_response),
url(r'^parse_unicode_json/$', views.return_json_file),
url(r'^check_headers/$', views.check_headers),
url(r'^check_headers_redirect/$', RedirectView.as_view(url='/check_headers/')),
url(r'^body/$', views.body),
url(r'^read_all/$', views.read_all),
url(r'^read_buffer/$', views.read_buffer),
url(r'^request_context_view/$', views.request_context_view),
url(r'^render_template_multiple_times/$', views.render_template_multiple_times),
]
|
bsd-3-clause
|
balloob/home-assistant
|
tests/components/homekit/test_type_cameras.py
|
7
|
24940
|
"""Test different accessory types: Camera."""
from uuid import UUID
from pyhap.accessory_driver import AccessoryDriver
import pytest
from homeassistant.components import camera, ffmpeg
from homeassistant.components.homekit.accessories import HomeBridge
from homeassistant.components.homekit.const import (
AUDIO_CODEC_COPY,
CHAR_MOTION_DETECTED,
CHAR_PROGRAMMABLE_SWITCH_EVENT,
CONF_AUDIO_CODEC,
CONF_LINKED_DOORBELL_SENSOR,
CONF_LINKED_MOTION_SENSOR,
CONF_STREAM_SOURCE,
CONF_SUPPORT_AUDIO,
CONF_VIDEO_CODEC,
DEVICE_CLASS_MOTION,
DEVICE_CLASS_OCCUPANCY,
SERV_DOORBELL,
SERV_MOTION_SENSOR,
SERV_STATELESS_PROGRAMMABLE_SWITCH,
VIDEO_CODEC_COPY,
VIDEO_CODEC_H264_OMX,
)
from homeassistant.components.homekit.img_util import TurboJPEGSingleton
from homeassistant.components.homekit.type_cameras import Camera
from homeassistant.components.homekit.type_switches import Switch
from homeassistant.const import ATTR_DEVICE_CLASS, STATE_OFF, STATE_ON
from homeassistant.exceptions import HomeAssistantError
from homeassistant.setup import async_setup_component
from .common import mock_turbo_jpeg
from tests.async_mock import AsyncMock, MagicMock, PropertyMock, patch
MOCK_START_STREAM_TLV = "ARUCAQEBEDMD1QMXzEaatnKSQ2pxovYCNAEBAAIJAQECAgECAwEAAwsBAgAFAgLQAgMBHgQXAQFjAgQ768/RAwIrAQQEAAAAPwUCYgUDLAEBAwIMAQEBAgEAAwECBAEUAxYBAW4CBCzq28sDAhgABAQAAKBABgENBAEA"
MOCK_END_POINTS_TLV = "ARAzA9UDF8xGmrZykkNqcaL2AgEAAxoBAQACDTE5Mi4xNjguMjA4LjUDAi7IBAKkxwQlAQEAAhDN0+Y0tZ4jzoO0ske9UsjpAw6D76oVXnoi7DbawIG4CwUlAQEAAhCyGcROB8P7vFRDzNF2xrK1Aw6NdcLugju9yCfkWVSaVAYEDoAsAAcEpxV8AA=="
MOCK_START_STREAM_SESSION_UUID = UUID("3303d503-17cc-469a-b672-92436a71a2f6")
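# Largest 32-bit signed integer; a PID that will never belong to a running process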
PID_THAT_WILL_NEVER_BE_ALIVE = 2147483647
async def _async_start_streaming(hass, acc):
"""Start streaming a camera."""
acc.set_selected_stream_configuration(MOCK_START_STREAM_TLV)
await acc.run_handler()
await hass.async_block_till_done()
async def _async_setup_endpoints(hass, acc):
"""Set camera endpoints."""
acc.set_endpoints(MOCK_END_POINTS_TLV)
await acc.run_handler()
await hass.async_block_till_done()
async def _async_reconfigure_stream(hass, acc, session_info, stream_config):
"""Reconfigure the stream."""
await acc.reconfigure_stream(session_info, stream_config)
await acc.run_handler()
await hass.async_block_till_done()
async def _async_stop_all_streams(hass, acc):
"""Stop all camera streams."""
await acc.stop()
await acc.run_handler()
await hass.async_block_till_done()
async def _async_stop_stream(hass, acc, session_info):
"""Stop a camera stream."""
await acc.stop_stream(session_info)
await acc.run_handler()
await hass.async_block_till_done()
@pytest.fixture()
def run_driver(hass):
"""Return a custom AccessoryDriver instance for HomeKit accessory init."""
with patch("pyhap.accessory_driver.Zeroconf"), patch(
"pyhap.accessory_driver.AccessoryEncoder"
), patch("pyhap.accessory_driver.HAPServer"), patch(
"pyhap.accessory_driver.AccessoryDriver.publish"
), patch(
"pyhap.accessory_driver.AccessoryDriver.persist"
):
yield AccessoryDriver(
pincode=b"123-45-678", address="127.0.0.1", loop=hass.loop
)
def _get_exits_after_startup_mock_ffmpeg():
"""Return a ffmpeg that will have an invalid pid."""
ffmpeg = MagicMock()
type(ffmpeg.process).pid = PropertyMock(return_value=PID_THAT_WILL_NEVER_BE_ALIVE)
ffmpeg.open = AsyncMock(return_value=True)
ffmpeg.close = AsyncMock(return_value=True)
ffmpeg.kill = AsyncMock(return_value=True)
return ffmpeg
def _get_working_mock_ffmpeg():
"""Return a working ffmpeg."""
ffmpeg = MagicMock()
ffmpeg.open = AsyncMock(return_value=True)
ffmpeg.close = AsyncMock(return_value=True)
ffmpeg.kill = AsyncMock(return_value=True)
return ffmpeg
def _get_failing_mock_ffmpeg():
"""Return an ffmpeg that fails to shutdown."""
ffmpeg = MagicMock()
type(ffmpeg.process).pid = PropertyMock(return_value=PID_THAT_WILL_NEVER_BE_ALIVE)
ffmpeg.open = AsyncMock(return_value=False)
ffmpeg.close = AsyncMock(side_effect=OSError)
ffmpeg.kill = AsyncMock(side_effect=OSError)
return ffmpeg
async def test_camera_stream_source_configured(hass, run_driver, events):
"""Test a camera that can stream with a configured source."""
await async_setup_component(hass, ffmpeg.DOMAIN, {ffmpeg.DOMAIN: {}})
await async_setup_component(
hass, camera.DOMAIN, {camera.DOMAIN: {"platform": "demo"}}
)
await hass.async_block_till_done()
entity_id = "camera.demo_camera"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Camera(
hass,
run_driver,
"Camera",
entity_id,
2,
{CONF_STREAM_SOURCE: "/dev/null", CONF_SUPPORT_AUDIO: True},
)
not_camera_acc = Switch(
hass,
run_driver,
"Switch",
entity_id,
4,
{},
)
bridge = HomeBridge("hass", run_driver, "Test Bridge")
bridge.add_accessory(acc)
bridge.add_accessory(not_camera_acc)
await acc.run_handler()
assert acc.aid == 2
assert acc.category == 17 # Camera
await _async_setup_endpoints(hass, acc)
working_ffmpeg = _get_working_mock_ffmpeg()
session_info = acc.sessions[MOCK_START_STREAM_SESSION_UUID]
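    # The entity's stream_source is patched to None; the stream should use the configured CONF_STREAM_SOURCE (/dev/null)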
with patch(
"homeassistant.components.demo.camera.DemoCamera.stream_source",
return_value=None,
), patch(
"homeassistant.components.homekit.type_cameras.HAFFmpeg",
return_value=working_ffmpeg,
):
await _async_start_streaming(hass, acc)
await _async_stop_all_streams(hass, acc)
expected_output = (
"-map 0:v:0 -an -c:v libx264 -profile:v high -tune zerolatency -pix_fmt "
"yuv420p -r 30 -b:v 299k -bufsize 1196k -maxrate 299k -payload_type 99 -ssrc {v_ssrc} -f "
"rtp -srtp_out_suite AES_CM_128_HMAC_SHA1_80 -srtp_out_params "
"zdPmNLWeI86DtLJHvVLI6YPvqhVeeiLsNtrAgbgL "
"srtp://192.168.208.5:51246?rtcpport=51246&localrtcpport=51246&pkt_size=1316 -map 0:a:0 "
"-vn -c:a libopus -application lowdelay -ac 1 -ar 24k -b:a 24k -bufsize 96k -payload_type "
"110 -ssrc {a_ssrc} -f rtp -srtp_out_suite AES_CM_128_HMAC_SHA1_80 -srtp_out_params "
"shnETgfD+7xUQ8zRdsaytY11wu6CO73IJ+RZVJpU "
"srtp://192.168.208.5:51108?rtcpport=51108&localrtcpport=51108&pkt_size=188"
)
working_ffmpeg.open.assert_called_with(
cmd=[],
input_source="-i /dev/null",
output=expected_output.format(**session_info),
stdout_pipe=False,
)
await _async_setup_endpoints(hass, acc)
working_ffmpeg = _get_working_mock_ffmpeg()
session_info = acc.sessions[MOCK_START_STREAM_SESSION_UUID]
with patch(
"homeassistant.components.demo.camera.DemoCamera.stream_source",
return_value="rtsp://example.local",
), patch(
"homeassistant.components.homekit.type_cameras.HAFFmpeg",
return_value=working_ffmpeg,
):
await _async_start_streaming(hass, acc)
await _async_stop_all_streams(hass, acc)
# Calling a second time should not throw
await _async_stop_all_streams(hass, acc)
turbo_jpeg = mock_turbo_jpeg(
first_width=16, first_height=12, second_width=300, second_height=200
)
with patch("turbojpeg.TurboJPEG", return_value=turbo_jpeg):
TurboJPEGSingleton()
assert await hass.async_add_executor_job(
acc.get_snapshot, {"aid": 2, "image-width": 300, "image-height": 200}
)
# Verify the bridge only forwards get_snapshot for
# cameras and valid accessory ids
assert await hass.async_add_executor_job(
bridge.get_snapshot, {"aid": 2, "image-width": 300, "image-height": 200}
)
with pytest.raises(ValueError):
assert await hass.async_add_executor_job(
bridge.get_snapshot, {"aid": 3, "image-width": 300, "image-height": 200}
)
with pytest.raises(ValueError):
assert await hass.async_add_executor_job(
bridge.get_snapshot, {"aid": 4, "image-width": 300, "image-height": 200}
)
async def test_camera_stream_source_configured_with_failing_ffmpeg(
hass, run_driver, events
):
"""Test a camera that can stream with a configured source with ffmpeg failing."""
await async_setup_component(hass, ffmpeg.DOMAIN, {ffmpeg.DOMAIN: {}})
await async_setup_component(
hass, camera.DOMAIN, {camera.DOMAIN: {"platform": "demo"}}
)
await hass.async_block_till_done()
entity_id = "camera.demo_camera"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Camera(
hass,
run_driver,
"Camera",
entity_id,
2,
{CONF_STREAM_SOURCE: "/dev/null", CONF_SUPPORT_AUDIO: True},
)
not_camera_acc = Switch(
hass,
run_driver,
"Switch",
entity_id,
4,
{},
)
bridge = HomeBridge("hass", run_driver, "Test Bridge")
bridge.add_accessory(acc)
bridge.add_accessory(not_camera_acc)
await acc.run_handler()
assert acc.aid == 2
assert acc.category == 17 # Camera
await _async_setup_endpoints(hass, acc)
with patch(
"homeassistant.components.demo.camera.DemoCamera.stream_source",
return_value="rtsp://example.local",
), patch(
"homeassistant.components.homekit.type_cameras.HAFFmpeg",
return_value=_get_failing_mock_ffmpeg(),
):
await _async_start_streaming(hass, acc)
await _async_stop_all_streams(hass, acc)
# Calling a second time should not throw
await _async_stop_all_streams(hass, acc)
async def test_camera_stream_source_found(hass, run_driver, events):
"""Test a camera that can stream and we get the source from the entity."""
await async_setup_component(hass, ffmpeg.DOMAIN, {ffmpeg.DOMAIN: {}})
await async_setup_component(
hass, camera.DOMAIN, {camera.DOMAIN: {"platform": "demo"}}
)
await hass.async_block_till_done()
entity_id = "camera.demo_camera"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Camera(
hass,
run_driver,
"Camera",
entity_id,
2,
{},
)
await acc.run_handler()
assert acc.aid == 2
assert acc.category == 17 # Camera
await _async_setup_endpoints(hass, acc)
with patch(
"homeassistant.components.demo.camera.DemoCamera.stream_source",
return_value="rtsp://example.local",
), patch(
"homeassistant.components.homekit.type_cameras.HAFFmpeg",
return_value=_get_working_mock_ffmpeg(),
):
await _async_start_streaming(hass, acc)
await _async_stop_all_streams(hass, acc)
await _async_setup_endpoints(hass, acc)
with patch(
"homeassistant.components.demo.camera.DemoCamera.stream_source",
return_value="rtsp://example.local",
), patch(
"homeassistant.components.homekit.type_cameras.HAFFmpeg",
return_value=_get_working_mock_ffmpeg(),
):
await _async_start_streaming(hass, acc)
await _async_stop_all_streams(hass, acc)
async def test_camera_stream_source_fails(hass, run_driver, events):
"""Test a camera that can stream and we cannot get the source from the entity."""
await async_setup_component(hass, ffmpeg.DOMAIN, {ffmpeg.DOMAIN: {}})
await async_setup_component(
hass, camera.DOMAIN, {camera.DOMAIN: {"platform": "demo"}}
)
await hass.async_block_till_done()
entity_id = "camera.demo_camera"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Camera(
hass,
run_driver,
"Camera",
entity_id,
2,
{},
)
await acc.run_handler()
assert acc.aid == 2
assert acc.category == 17 # Camera
await _async_setup_endpoints(hass, acc)
with patch(
"homeassistant.components.demo.camera.DemoCamera.stream_source",
side_effect=OSError,
), patch(
"homeassistant.components.homekit.type_cameras.HAFFmpeg",
return_value=_get_working_mock_ffmpeg(),
):
await _async_start_streaming(hass, acc)
await _async_stop_all_streams(hass, acc)
async def test_camera_with_no_stream(hass, run_driver, events):
"""Test a camera that cannot stream."""
await async_setup_component(hass, ffmpeg.DOMAIN, {ffmpeg.DOMAIN: {}})
await async_setup_component(hass, camera.DOMAIN, {camera.DOMAIN: {}})
entity_id = "camera.demo_camera"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Camera(
hass,
run_driver,
"Camera",
entity_id,
2,
{},
)
await acc.run_handler()
assert acc.aid == 2
assert acc.category == 17 # Camera
await _async_setup_endpoints(hass, acc)
await _async_start_streaming(hass, acc)
await _async_stop_all_streams(hass, acc)
with pytest.raises(HomeAssistantError):
await hass.async_add_executor_job(
acc.get_snapshot, {"aid": 2, "image-width": 300, "image-height": 200}
)
async def test_camera_stream_source_configured_and_copy_codec(hass, run_driver, events):
"""Test a camera that can stream with a configured source."""
await async_setup_component(hass, ffmpeg.DOMAIN, {ffmpeg.DOMAIN: {}})
await async_setup_component(
hass, camera.DOMAIN, {camera.DOMAIN: {"platform": "demo"}}
)
await hass.async_block_till_done()
entity_id = "camera.demo_camera"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Camera(
hass,
run_driver,
"Camera",
entity_id,
2,
{
CONF_STREAM_SOURCE: "/dev/null",
CONF_SUPPORT_AUDIO: True,
CONF_VIDEO_CODEC: VIDEO_CODEC_COPY,
CONF_AUDIO_CODEC: AUDIO_CODEC_COPY,
},
)
bridge = HomeBridge("hass", run_driver, "Test Bridge")
bridge.add_accessory(acc)
await acc.run_handler()
assert acc.aid == 2
assert acc.category == 17 # Camera
await _async_setup_endpoints(hass, acc)
session_info = acc.sessions[MOCK_START_STREAM_SESSION_UUID]
working_ffmpeg = _get_working_mock_ffmpeg()
with patch(
"homeassistant.components.demo.camera.DemoCamera.stream_source",
return_value=None,
), patch(
"homeassistant.components.homekit.type_cameras.HAFFmpeg",
return_value=working_ffmpeg,
):
await _async_start_streaming(hass, acc)
await _async_reconfigure_stream(hass, acc, session_info, {})
await _async_stop_stream(hass, acc, session_info)
await _async_stop_all_streams(hass, acc)
expected_output = (
"-map 0:v:0 -an -c:v copy -tune zerolatency -pix_fmt yuv420p -r 30 -b:v 299k "
"-bufsize 1196k -maxrate 299k -payload_type 99 -ssrc {v_ssrc} -f rtp -srtp_out_suite "
"AES_CM_128_HMAC_SHA1_80 -srtp_out_params zdPmNLWeI86DtLJHvVLI6YPvqhVeeiLsNtrAgbgL "
"srtp://192.168.208.5:51246?rtcpport=51246&localrtcpport=51246&pkt_size=1316 -map 0:a:0 "
"-vn -c:a copy -ac 1 -ar 24k -b:a 24k -bufsize 96k -payload_type 110 -ssrc {a_ssrc} "
"-f rtp -srtp_out_suite AES_CM_128_HMAC_SHA1_80 -srtp_out_params "
"shnETgfD+7xUQ8zRdsaytY11wu6CO73IJ+RZVJpU "
"srtp://192.168.208.5:51108?rtcpport=51108&localrtcpport=51108&pkt_size=188"
)
working_ffmpeg.open.assert_called_with(
cmd=[],
input_source="-i /dev/null",
output=expected_output.format(**session_info),
stdout_pipe=False,
)
async def test_camera_streaming_fails_after_starting_ffmpeg(hass, run_driver, events):
"""Test a camera that can stream with a configured source."""
await async_setup_component(hass, ffmpeg.DOMAIN, {ffmpeg.DOMAIN: {}})
await async_setup_component(
hass, camera.DOMAIN, {camera.DOMAIN: {"platform": "demo"}}
)
await hass.async_block_till_done()
entity_id = "camera.demo_camera"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Camera(
hass,
run_driver,
"Camera",
entity_id,
2,
{
CONF_STREAM_SOURCE: "/dev/null",
CONF_SUPPORT_AUDIO: True,
CONF_VIDEO_CODEC: VIDEO_CODEC_H264_OMX,
CONF_AUDIO_CODEC: AUDIO_CODEC_COPY,
},
)
bridge = HomeBridge("hass", run_driver, "Test Bridge")
bridge.add_accessory(acc)
await acc.run_handler()
assert acc.aid == 2
assert acc.category == 17 # Camera
await _async_setup_endpoints(hass, acc)
session_info = acc.sessions[MOCK_START_STREAM_SESSION_UUID]
ffmpeg_with_invalid_pid = _get_exits_after_startup_mock_ffmpeg()
with patch(
"homeassistant.components.demo.camera.DemoCamera.stream_source",
return_value=None,
), patch(
"homeassistant.components.homekit.type_cameras.HAFFmpeg",
return_value=ffmpeg_with_invalid_pid,
):
await _async_start_streaming(hass, acc)
await _async_reconfigure_stream(hass, acc, session_info, {})
# Should not throw
await _async_stop_stream(hass, acc, {"id": "does_not_exist"})
await _async_stop_all_streams(hass, acc)
expected_output = (
"-map 0:v:0 -an -c:v h264_omx -profile:v high -tune zerolatency -pix_fmt yuv420p -r 30 -b:v 299k "
"-bufsize 1196k -maxrate 299k -payload_type 99 -ssrc {v_ssrc} -f rtp -srtp_out_suite "
"AES_CM_128_HMAC_SHA1_80 -srtp_out_params zdPmNLWeI86DtLJHvVLI6YPvqhVeeiLsNtrAgbgL "
"srtp://192.168.208.5:51246?rtcpport=51246&localrtcpport=51246&pkt_size=1316 -map 0:a:0 "
"-vn -c:a copy -ac 1 -ar 24k -b:a 24k -bufsize 96k -payload_type 110 -ssrc {a_ssrc} "
"-f rtp -srtp_out_suite AES_CM_128_HMAC_SHA1_80 -srtp_out_params "
"shnETgfD+7xUQ8zRdsaytY11wu6CO73IJ+RZVJpU "
"srtp://192.168.208.5:51108?rtcpport=51108&localrtcpport=51108&pkt_size=188"
)
ffmpeg_with_invalid_pid.open.assert_called_with(
cmd=[],
input_source="-i /dev/null",
output=expected_output.format(**session_info),
stdout_pipe=False,
)
async def test_camera_with_linked_motion_sensor(hass, run_driver, events):
"""Test a camera with a linked motion sensor can update."""
await async_setup_component(hass, ffmpeg.DOMAIN, {ffmpeg.DOMAIN: {}})
await async_setup_component(
hass, camera.DOMAIN, {camera.DOMAIN: {"platform": "demo"}}
)
await hass.async_block_till_done()
motion_entity_id = "binary_sensor.motion"
hass.states.async_set(
motion_entity_id, STATE_ON, {ATTR_DEVICE_CLASS: DEVICE_CLASS_MOTION}
)
await hass.async_block_till_done()
entity_id = "camera.demo_camera"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Camera(
hass,
run_driver,
"Camera",
entity_id,
2,
{
CONF_STREAM_SOURCE: "/dev/null",
CONF_SUPPORT_AUDIO: True,
CONF_VIDEO_CODEC: VIDEO_CODEC_H264_OMX,
CONF_AUDIO_CODEC: AUDIO_CODEC_COPY,
CONF_LINKED_MOTION_SENSOR: motion_entity_id,
},
)
bridge = HomeBridge("hass", run_driver, "Test Bridge")
bridge.add_accessory(acc)
await acc.run_handler()
assert acc.aid == 2
assert acc.category == 17 # Camera
service = acc.get_service(SERV_MOTION_SENSOR)
assert service
char = service.get_characteristic(CHAR_MOTION_DETECTED)
assert char
assert char.value is True
hass.states.async_set(
motion_entity_id, STATE_OFF, {ATTR_DEVICE_CLASS: DEVICE_CLASS_MOTION}
)
await hass.async_block_till_done()
assert char.value is False
char.set_value(True)
hass.states.async_set(
motion_entity_id, STATE_ON, {ATTR_DEVICE_CLASS: DEVICE_CLASS_MOTION}
)
await hass.async_block_till_done()
assert char.value is True
# Ensure we do not throw when the linked
# motion sensor is removed
hass.states.async_remove(motion_entity_id)
await hass.async_block_till_done()
await acc.run_handler()
await hass.async_block_till_done()
assert char.value is True
async def test_camera_with_a_missing_linked_motion_sensor(hass, run_driver, events):
"""Test a camera with a configured linked motion sensor that is missing."""
await async_setup_component(hass, ffmpeg.DOMAIN, {ffmpeg.DOMAIN: {}})
await async_setup_component(
hass, camera.DOMAIN, {camera.DOMAIN: {"platform": "demo"}}
)
await hass.async_block_till_done()
motion_entity_id = "binary_sensor.motion"
entity_id = "camera.demo_camera"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Camera(
hass,
run_driver,
"Camera",
entity_id,
2,
{CONF_LINKED_MOTION_SENSOR: motion_entity_id},
)
bridge = HomeBridge("hass", run_driver, "Test Bridge")
bridge.add_accessory(acc)
await acc.run_handler()
assert acc.aid == 2
assert acc.category == 17 # Camera
assert not acc.get_service(SERV_MOTION_SENSOR)
async def test_camera_with_linked_doorbell_sensor(hass, run_driver, events):
"""Test a camera with a linked doorbell sensor can update."""
await async_setup_component(hass, ffmpeg.DOMAIN, {ffmpeg.DOMAIN: {}})
await async_setup_component(
hass, camera.DOMAIN, {camera.DOMAIN: {"platform": "demo"}}
)
await hass.async_block_till_done()
doorbell_entity_id = "binary_sensor.doorbell"
hass.states.async_set(
doorbell_entity_id, STATE_ON, {ATTR_DEVICE_CLASS: DEVICE_CLASS_OCCUPANCY}
)
await hass.async_block_till_done()
entity_id = "camera.demo_camera"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Camera(
hass,
run_driver,
"Camera",
entity_id,
2,
{
CONF_STREAM_SOURCE: "/dev/null",
CONF_SUPPORT_AUDIO: True,
CONF_VIDEO_CODEC: VIDEO_CODEC_H264_OMX,
CONF_AUDIO_CODEC: AUDIO_CODEC_COPY,
CONF_LINKED_DOORBELL_SENSOR: doorbell_entity_id,
},
)
bridge = HomeBridge("hass", run_driver, "Test Bridge")
bridge.add_accessory(acc)
await acc.run_handler()
assert acc.aid == 2
assert acc.category == 17 # Camera
service = acc.get_service(SERV_DOORBELL)
assert service
char = service.get_characteristic(CHAR_PROGRAMMABLE_SWITCH_EVENT)
assert char
assert char.value == 0
service2 = acc.get_service(SERV_STATELESS_PROGRAMMABLE_SWITCH)
assert service2
    char2 = service2.get_characteristic(CHAR_PROGRAMMABLE_SWITCH_EVENT)
assert char2
assert char2.value == 0
hass.states.async_set(
doorbell_entity_id, STATE_OFF, {ATTR_DEVICE_CLASS: DEVICE_CLASS_OCCUPANCY}
)
await hass.async_block_till_done()
assert char.value == 0
assert char2.value == 0
char.set_value(True)
char2.set_value(True)
hass.states.async_set(
doorbell_entity_id, STATE_ON, {ATTR_DEVICE_CLASS: DEVICE_CLASS_OCCUPANCY}
)
await hass.async_block_till_done()
assert char.value == 0
assert char2.value == 0
# Ensure we do not throw when the linked
# doorbell sensor is removed
hass.states.async_remove(doorbell_entity_id)
await hass.async_block_till_done()
await acc.run_handler()
await hass.async_block_till_done()
assert char.value == 0
assert char2.value == 0
async def test_camera_with_a_missing_linked_doorbell_sensor(hass, run_driver, events):
"""Test a camera with a configured linked doorbell sensor that is missing."""
await async_setup_component(hass, ffmpeg.DOMAIN, {ffmpeg.DOMAIN: {}})
await async_setup_component(
hass, camera.DOMAIN, {camera.DOMAIN: {"platform": "demo"}}
)
await hass.async_block_till_done()
doorbell_entity_id = "binary_sensor.doorbell"
entity_id = "camera.demo_camera"
hass.states.async_set(entity_id, None)
await hass.async_block_till_done()
acc = Camera(
hass,
run_driver,
"Camera",
entity_id,
2,
{CONF_LINKED_DOORBELL_SENSOR: doorbell_entity_id},
)
bridge = HomeBridge("hass", run_driver, "Test Bridge")
bridge.add_accessory(acc)
await acc.run_handler()
assert acc.aid == 2
assert acc.category == 17 # Camera
assert not acc.get_service(SERV_DOORBELL)
assert not acc.get_service(SERV_STATELESS_PROGRAMMABLE_SWITCH)
|
apache-2.0
|
etsinko/oerplib
|
tests/test_session.py
|
2
|
3342
|
# -*- coding: UTF-8 -*-
try:
import unittest2 as unittest
except:
import unittest
import tempfile
import os
from args import ARGS
import oerplib
class TestSession(unittest.TestCase):
def setUp(self):
self.oerp = oerplib.OERP(
ARGS.server, protocol=ARGS.protocol, port=ARGS.port,
version=ARGS.version)
self.user = self.oerp.login(ARGS.user, ARGS.passwd, ARGS.database)
self.session_name = ARGS.database
self.file_path = tempfile.mkstemp(suffix='.cfg', prefix='oerplib_')[1]
def tearDown(self):
os.remove(self.file_path)
def test_session_oerp_list(self):
result = oerplib.OERP.list(rc_file=self.file_path)
self.assertIsInstance(result, list)
other_file_path = tempfile.mkstemp()[1]
result = oerplib.OERP.list(rc_file=other_file_path)
self.assertIsInstance(result, list)
def test_session_oerp_save_and_remove(self):
self.oerp.save(self.session_name, rc_file=self.file_path)
result = oerplib.OERP.list(rc_file=self.file_path)
self.assertIn(self.session_name, result)
oerplib.OERP.remove(self.session_name, rc_file=self.file_path)
def test_session_oerp_load(self):
self.oerp.save(self.session_name, rc_file=self.file_path)
oerp = oerplib.OERP.load(self.session_name, rc_file=self.file_path)
self.assertIsInstance(oerp, oerplib.OERP)
self.assertEqual(self.oerp.server, oerp.server)
self.assertEqual(self.oerp.port, oerp.port)
self.assertEqual(self.oerp.database, oerp.database)
self.assertEqual(self.oerp.protocol, oerp.protocol)
self.assertEqual(self.oerp.user, oerp.user)
oerplib.OERP.remove(self.session_name, rc_file=self.file_path)
def test_session_tools_get(self):
self.oerp.save(self.session_name, rc_file=self.file_path)
data = {
'type': self.oerp.__class__.__name__,
'server': self.oerp.server,
'protocol': self.oerp.protocol,
'port': int(self.oerp.port),
'timeout': self.oerp.config['timeout'],
'user': self.oerp.user.login,
'passwd': self.oerp._password,
'database': self.oerp.database,
}
result = oerplib.tools.session.get(
self.session_name, rc_file=self.file_path)
self.assertEqual(data, result)
oerplib.OERP.remove(self.session_name, rc_file=self.file_path)
def test_session_tools_get_all(self):
self.oerp.save(self.session_name, rc_file=self.file_path)
data = {
self.session_name: {
'type': self.oerp.__class__.__name__,
'server': self.oerp.server,
'protocol': self.oerp.protocol,
'port': int(self.oerp.port),
'timeout': self.oerp.config['timeout'],
'user': self.oerp.user.login,
'passwd': self.oerp._password,
'database': self.oerp.database,
}
}
result = oerplib.tools.session.get_all(rc_file=self.file_path)
self.assertIn(self.session_name, result)
self.assertEqual(data, result)
oerplib.OERP.remove(self.session_name, rc_file=self.file_path)
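# Illustrative sketch (not part of the original test module): how the session
# helpers exercised above might be used outside of unittest. The server address,
# credentials and database name below are hypothetical placeholders, and a
# reachable OpenERP server is assumed.
if __name__ == '__main__':
    oerp = oerplib.OERP('localhost', protocol='xmlrpc', port=8069)
    oerp.login('admin', 'admin', 'demo_db')
    oerp.save('demo_session')                     # stored in the default rc file
    restored = oerplib.OERP.load('demo_session')  # restores server/user/database
    print(restored.database)
    oerplib.OERP.remove('demo_session')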
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
|
lgpl-3.0
|
codeb2cc/tornado
|
tornado/test/asyncio_test.py
|
107
|
2520
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import, division, print_function, with_statement
import sys
import textwrap
from tornado import gen
from tornado.testing import AsyncTestCase, gen_test
from tornado.test.util import unittest
try:
from tornado.platform.asyncio import asyncio, AsyncIOLoop
except ImportError:
asyncio = None
skipIfNoSingleDispatch = unittest.skipIf(
gen.singledispatch is None, "singledispatch module not present")
@unittest.skipIf(asyncio is None, "asyncio module not present")
class AsyncIOLoopTest(AsyncTestCase):
def get_new_ioloop(self):
io_loop = AsyncIOLoop()
asyncio.set_event_loop(io_loop.asyncio_loop)
return io_loop
def test_asyncio_callback(self):
# Basic test that the asyncio loop is set up correctly.
asyncio.get_event_loop().call_soon(self.stop)
self.wait()
@skipIfNoSingleDispatch
@gen_test
def test_asyncio_future(self):
# Test that we can yield an asyncio future from a tornado coroutine.
# Without 'yield from', we must wrap coroutines in asyncio.async.
x = yield asyncio.async(
asyncio.get_event_loop().run_in_executor(None, lambda: 42))
self.assertEqual(x, 42)
@unittest.skipIf(sys.version_info < (3, 3),
'PEP 380 not available')
@skipIfNoSingleDispatch
@gen_test
def test_asyncio_yield_from(self):
# Test that we can use asyncio coroutines with 'yield from'
# instead of asyncio.async(). This requires python 3.3 syntax.
global_namespace = dict(globals(), **locals())
local_namespace = {}
exec(textwrap.dedent("""
@gen.coroutine
def f():
event_loop = asyncio.get_event_loop()
x = yield from event_loop.run_in_executor(None, lambda: 42)
return x
"""), global_namespace, local_namespace)
result = yield local_namespace['f']()
self.assertEqual(result, 42)
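# Illustrative sketch (not part of the original test module): minimal standalone
# use of AsyncIOLoop outside of AsyncTestCase. It assumes the asyncio module is
# importable (Python 3.3+ or the trollius backport).
def _example_run_asyncio_loop():
    io_loop = AsyncIOLoop()
    asyncio.set_event_loop(io_loop.asyncio_loop)
    io_loop.add_callback(io_loop.stop)  # schedule an immediate stop
    io_loop.start()                     # runs the underlying asyncio loop until stop()
    io_loop.close()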
|
apache-2.0
|
PyORBIT-Collaboration/py-orbit
|
py/orbit/py_linac/lattice/LinacRfGapNodes.py
|
2
|
33066
|
"""
This package is a collection of the RF gap node implementations.
The RF Cavities and gaps in them are different from the ring RF.
"""
import os
import math
import sys
#---- MPI module function and classes
import orbit_mpi
from orbit_mpi import mpi_comm
from orbit_mpi import mpi_datatype
from orbit_mpi import mpi_op
# import from orbit Python utilities
from orbit.utils import orbitFinalize
from orbit.utils import phaseNearTargetPhase, phaseNearTargetPhaseDeg
from orbit.utils import speed_of_light
# import from orbit c++ utilities
from orbit_utils import Polynomial
from orbit_utils import Function
# from LinacAccLattice import Sequence
from LinacAccLatticeLib import Sequence
from LinacAccNodes import Drift, BaseLinacNode
# from linac import the RF gap classes
from linac import BaseRfGap, MatrixRfGap, RfGapTTF, RfGapThreePointTTF
from linac import BaseRfGap_slow, RfGapTTF_slow, RfGapThreePointTTF_slow
# The abstract RF gap import
from LinacAccNodes import AbstractRF_Gap
# import teapot base functions from wrapper around C++ functions
from orbit.teapot_base import TPB
# Import the linac specific tracking from linac_tracking. This module has
# the following functions duplicated the original TEAPOT functions
# drift - linac drift tracking
# quad1 - linac quad linear part of tracking
# quad2 - linac quad non-linear part of tracking
import linac_tracking
from bunch import Bunch
class BaseRF_Gap(AbstractRF_Gap):
"""
	The simplest RF gap representation. Only the E0*T*L (or E0*L and the TTFs)
	parameters define all effects of the node. By default the Matrix RF Gap model is used.
	This model can be replaced later with a more complex RF gap model by using
	the setCppGapModel(...) method. Users should provide the necessary parameters
	for each type of RF gap model.
MatrixRfGap - E0TL, mode
BaseRfGap - E0TL, mode
RfGapTTF - E0L, T,S,Tp,Sp, beta_min, beta_max
The phase of the first gap in the cavity is defined by the parent cavity instance.
The relative amplitude is also defined by the parent cavity instance.
The 'mode' parameter is a shift of the phase between two gaps in PI units.
"""
def __init__(self, name = "baserfgap"):
"""
Constructor for the simplest RF gap.
E0L and E0TL parameters are in GeV. Phases are in radians.
"""
AbstractRF_Gap.__init__(self,name)
self.addParam("E0TL",0.)
self.addParam("mode",0.)
self.addParam("gap_phase",0.)
self.addParam("rfCavity", None)
#----- TTF model params ------
self.addParam("E0L",0.)
self.polyT = Polynomial(0)
self.polyT.coefficient(0,1.0)
self.polyS = Polynomial(0)
self.polyS.coefficient(0,0.0)
self.polyTp = Polynomial(0)
self.polyTp.coefficient(0,0.0)
self.polySp = Polynomial(0)
self.polySp.coefficient(0,0.0)
self.addParam("beta_min",0.)
self.addParam("beta_max",1.)
#-----------------------------
self.addParam("rfCavity",None)
self.addParam("EzFile","no_file")
self.setType("baserfgap")
self.__isFirstGap = False
#---- by default we use the TTF model
#---- which is a Transit-Time-Factor model from Parmila
#self.cppGapModel = MatrixRfGap()
#self.cppGapModel = BaseRfGap()
self.cppGapModel = RfGapTTF()
def setLinacTracker(self, switch = True):
"""
		This method will switch the RF gap model to a slower one where the transformation
		coefficients are calculated for each particle in the bunch.
"""
AbstractRF_Gap.setLinacTracker(self,switch)
if(switch):
if(isinstance(self.cppGapModel,BaseRfGap) or isinstance(self.cppGapModel,BaseRfGap_slow)):
self.cppGapModel = BaseRfGap_slow()
if(isinstance(self.cppGapModel,RfGapTTF) or isinstance(self.cppGapModel,RfGapTTF_slow)):
self.cppGapModel = RfGapTTF_slow()
else:
if(isinstance(self.cppGapModel,BaseRfGap) or isinstance(self.cppGapModel,BaseRfGap_slow)):
self.cppGapModel = BaseRfGap()
if(isinstance(self.cppGapModel,RfGapTTF) or isinstance(self.cppGapModel,RfGapTTF_slow)):
self.cppGapModel = RfGapTTF()
def setnParts(self, n = 1):
"""
Method. Sets the number of body parts of the node.
For the RF gap with zero length it will be only 1.
"""
BaseLinacNode.setnParts(self,1)
def setCppGapModel(self, cppGapModel = MatrixRfGap()):
"""
This method will set the fast c++ simple model for the RF Gap.
		By default it is the Matrix RF Gap model, which is a linear transport matrix.
"""
self.cppGapModel = cppGapModel
def initialize(self):
"""
The BaseRF_Gap class implementation
of the AccNode class initialize() method.
"""
nParts = self.getnParts()
if(nParts != 1):
msg = "The BaseRF_Gap RF gap should have 1 parts!"
msg = msg + os.linesep
msg = msg + "Method initialize():"
msg = msg + os.linesep
msg = msg + "Name of element=" + self.getName()
msg = msg + os.linesep
msg = msg + "Type of element=" + self.getType()
msg = msg + os.linesep
msg = msg + "nParts =" + str(nParts)
msg = msg + os.linesep
msg = msg + "lenght =" + str(self.getLength())
orbitFinalize(msg)
self.setLength(0.,0)
def isFirstRFGap(self):
"""
Returns True if it is the first gap in RF cavity.
"""
return self.__isFirstGap
def setAsFirstRFGap(self, isFirst):
"""
Sets if it is the first gap in RF cavity.
"""
self.__isFirstGap = isFirst
def setRF_Cavity(self, rf_cav):
"""
Sets the parent RF Cavity.
"""
self.addParam("rfCavity",rf_cav)
def getRF_Cavity(self):
"""
Returns the parent RF Cavity.
"""
return self.getParam("rfCavity")
def setGapPhase(self, gap_phase):
"""
Sets the rf gap phase.
"""
self.setParam("gap_phase",gap_phase)
def getGapPhase(self):
"""
Returns the rf gap phase.
"""
return self.getParam("gap_phase")
def getTTF_Polynimials(self):
"""
Returns the T,S,Tp,Sp, polynomials in the TTF model.
"""
return (self.polyT,self.polyS,self.polyTp,self.polySp)
def getBetaMinMax(self):
"""
Returns beta min and max for TTF model polynomials.
"""
return (self.getParam("beta_min"),self.getParam("beta_max") )
def setBetaMinMax(self,beta_min,beta_max):
"""
Sets beta min and max for TTF model polynomials.
"""
self.setParam("beta_min", beta_min)
self.setParam("beta_max", beta_max)
def track(self, paramsDict):
"""
The simplest RF gap class implementation of
the AccNode class track(probe) method.
"""
bunch = paramsDict["bunch"]
syncPart = bunch.getSyncParticle()
E0TL = self.getParam("E0TL")
E0L = self.getParam("E0L")
modePhase = self.getParam("mode")*math.pi
rfCavity = self.getRF_Cavity()
frequency = rfCavity.getFrequency()
phase = rfCavity.getPhase() + modePhase
rf_ampl = rfCavity.getAmp()
arrival_time = syncPart.time()
designArrivalTime = rfCavity.getDesignArrivalTime()
if(self.__isFirstGap):
if(rfCavity.isDesignSetUp()):
#print "debug RF =",self.getName()," phase=",(phase*180./math.pi - 180.)
phase = math.fmod(frequency*(arrival_time - designArrivalTime)*2.0*math.pi + phase,2.0*math.pi)
#print "debug RF =",self.getName()," phase=",(phase*180./math.pi - 180.)
else:
sequence = self.getSequence()
accLattice = sequence.getLinacAccLattice()
msg = "The BaseRF_Gap class. You have to run trackDesign on the LinacAccLattice first to initialize all RF Cavities' phases!"
msg = msg + os.linesep
if(accLattice != None):
msg = msg + "Lattice =" + accLattice.getName()
msg = msg + os.linesep
if(sequence != None):
msg = msg + "Sequence =" + sequence.getName()
msg = msg + os.linesep
msg = msg + "RF Cavity =" + rfCavity.getName()
msg = msg + os.linesep
msg = msg + "Name of element=" + self.getName()
msg = msg + os.linesep
msg = msg + "Type of element=" + self.getType()
msg = msg + os.linesep
orbitFinalize(msg)
else:
phase = math.fmod(frequency*(arrival_time - designArrivalTime)*2.0*math.pi+phase,2.0*math.pi)
#---- rf gap input phase -----
self.setGapPhase(phase)
#call rf gap model to track the bunch
if(rf_ampl == 0.): return
if(isinstance(self.cppGapModel,MatrixRfGap) or isinstance(self.cppGapModel,BaseRfGap) or isinstance(self.cppGapModel,BaseRfGap_slow)):
self.cppGapModel.trackBunch(bunch,frequency,E0TL*rf_ampl,phase)
else:
self.ttf_track_bunch__(bunch,frequency,E0L*rf_ampl,phase)
#print "debug delta_time in deg=",frequency*(arrival_time - designArrivalTime)*380.
#print "debug RF =",self.getName()," E0TL=",E0TL," phase=",(phase*180./math.pi - 180.)," eKin[MeV]=",bunch.getSyncParticle().kinEnergy()*1.0e+3
def trackDesign(self, paramsDict):
"""
		The RF first gap node sets up the design time of passage
of the bunch through this node.
"""
bunch = paramsDict["bunch"]
eKin_in = bunch.getSyncParticle().kinEnergy()
E0TL = self.getParam("E0TL")
E0L = self.getParam("E0L")
rfCavity = self.getRF_Cavity()
modePhase = self.getParam("mode")*math.pi
arrival_time = bunch.getSyncParticle().time()
frequency = rfCavity.getFrequency()
phase = rfCavity.getPhase() + modePhase
if(self.__isFirstGap):
rfCavity.setDesignArrivalTime(arrival_time)
rfCavity.setDesignSetUp(True)
rfCavity._setDesignPhase(rfCavity.getPhase())
rfCavity._setDesignAmp(rfCavity.getAmp())
else:
first_gap_arr_time = rfCavity.getDesignArrivalTime()
#print "debug name=",self.getName()," delta_phase=",frequency*(arrival_time - first_gap_arr_time)*360.0," phase=",phase*180/math.pi
phase = math.fmod(frequency*(arrival_time - first_gap_arr_time)*2.0*math.pi+phase,2.0*math.pi)
#print "debug design name=",self.getName()," arr_time=",arrival_time," phase=",phase*180./math.pi," E0TL=",E0TL*1.0e+3," freq=",frequency
#---- rf gap input phase -----
self.setGapPhase(phase)
#call rf gap model to track the bunch
rf_ampl = rfCavity.getDesignAmp()
if(rf_ampl == 0.): return
if(isinstance(self.cppGapModel,MatrixRfGap) or isinstance(self.cppGapModel,BaseRfGap) or isinstance(self.cppGapModel,BaseRfGap_slow)):
self.cppGapModel.trackBunch(bunch,frequency,E0TL*rf_ampl,phase)
else:
self.ttf_track_bunch__(bunch,frequency,E0L*rf_ampl,phase)
#eKin_out = bunch.getSyncParticle().kinEnergy()
#print "debug name=",self.getName()," phase=",(phase*180./math.pi-180.)," Ein=",eKin_in*1000.," Eout=",eKin_out*1000.," dE=",(eKin_out-eKin_in)*1000.
def ttf_track_bunch__(self,bunch,frequency,E0L,phase):
"""
		Tracks the bunch through the TTF thin gap model. This private method was
		introduced to check the beta TTF limits in the polynomial representation
		of the T, T', S, and S' functions of the relativistic beta.
"""
beta = bunch.getSyncParticle().beta()
beta_min = self.getParam("beta_min")
beta_max = self.getParam("beta_max")
if(beta < beta_min or beta > beta_max):
sequence = self.getSequence()
accLattice = sequence.getLinacAccLattice()
rfCavity = self.getRF_Cavity()
msg = "The Python BaseRF_Gap class. The beta for SyncPart is not in the range [min:max]!"
msg = msg + os.linesep
if(accLattice != None):
msg = msg + "Lattice =" + accLattice.getName()
msg = msg + os.linesep
if(sequence != None):
msg = msg + "Sequence =" + sequence.getName()
msg = msg + os.linesep
msg = msg + "RF Cavity =" + rfCavity.getName()
msg = msg + os.linesep
msg = msg + "Name of element=" + self.getName()
msg = msg + os.linesep
msg = msg + "Type of element=" + self.getType()
msg = msg + os.linesep
msg = msg + "beta=" + str(beta)
msg = msg + os.linesep
msg = msg + "beta min=" + str(beta_min)
msg = msg + os.linesep
msg = msg + "beta max=" + str(beta_max)
msg = msg + os.linesep
orbitFinalize(msg)
self.cppGapModel.trackBunch(bunch,frequency,E0L,phase,self.polyT,self.polyS,self.polyTp,self.polySp)
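#---- Illustrative sketch (not part of the original module): configuring a
#---- BaseRF_Gap by hand. The gap name, E0TL value and chosen gap model below
#---- are hypothetical; in practice they come from the lattice parser and the
#---- parent RF cavity setup.
def _example_configure_base_rf_gap():
	gap = BaseRF_Gap("example_gap")
	gap.setParam("E0TL", 0.0005)       # effective E0*T*L in GeV
	gap.setParam("mode", 0.)           # inter-gap phase shift in PI units
	gap.setCppGapModel(MatrixRfGap())  # replace the default TTF gap model
	return gap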
#-----------------------------------------------------------------------
#
# This part of the package is for classes related to the axis RF fields
#
#------------------------------------------------------------------------
class RF_AxisFieldsStore:
"""
	A dictionary of axis field Functions keyed by the input file names.
	This is a collection of static methods.
"""
#---- static_axis_field_dict[file_name] = Function
static_axis_field_dict = {}
def __init__(self):
pass
@classmethod
def addAxisFieldsForAccSeq(cls,accLattice,accSeqNamesList,dir_location = ""):
"""
		This method adds to the store the axis RF fields of all RF gap nodes
		(BaseRF_Gap class instances with an "EzFile" parameter) from the given set of acc sequences.
		The dir_location string will be prepended to rf_gap.getParam("EzFile") to get
		the file names.
"""
		for accNamesSeq in accSeqNamesList:
accSeq = accLattice.getSequence(accNamesSeq)
cavs = accSeq.getRF_Cavities()
for cav in cavs:
rf_gaps = cav.getRF_GapNodes()
for rf_gap in rf_gaps:
cls.addAxisField(rf_gap.getParam("EzFile"),dir_location)
@classmethod
def addAxisField(cls,fl_name,dir_location = ""):
"""
		This method adds to the store the axis RF field for one RF gap node.
		The dir_location string will be prepended to fl_name to get
		the file name.
Returns the axis RF field function.
"""
if(cls.static_axis_field_dict.has_key(fl_name)):
return cls.static_axis_field_dict[fl_name]
comm = orbit_mpi.mpi_comm.MPI_COMM_WORLD
data_type = mpi_datatype.MPI_DOUBLE
rank = orbit_mpi.MPI_Comm_rank(comm)
main_rank = 0
x_arr = []
y_arr = []
if(rank == 0):
fl_in = open(dir_location + fl_name,"r")
lns = fl_in.readlines()
fl_in.close()
for ln in lns:
res_arr = ln.split()
if(len(res_arr) == 2):
x = float(res_arr[0])
y = float(res_arr[1])
x_arr.append(x)
y_arr.append(y)
x_arr = orbit_mpi.MPI_Bcast(x_arr,data_type,main_rank,comm)
y_arr = orbit_mpi.MPI_Bcast(y_arr,data_type,main_rank,comm)
function = Function()
for ind in range(len(x_arr)):
function.add(x_arr[ind],y_arr[ind])
#---- setting the const step (if function will allow it)
#---- will speed up function calculation later
function.setConstStep(1)
cls.static_axis_field_dict[fl_name] = function
return function
@classmethod
def getAxisFieldFunction(cls,fl_name):
"""
		This method returns the Function with the RF axis field for a particular
		file name. If the store does not have this function it will return
		None.
"""
if(cls.static_axis_field_dict.has_key(fl_name)):
return cls.static_axis_field_dict[fl_name]
else:
return None
@classmethod
def getSize(cls):
"""
This method returns the number of Functions with the RF axis fields in this
store.
"""
return len(cls.static_axis_field_dict.keys())
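#---- Illustrative sketch (not part of the original module): loading a single
#---- axis field file into the static store. The directory and file name below
#---- are hypothetical placeholders.
def _example_load_axis_field():
	axis_field = RF_AxisFieldsStore.addAxisField("rf_gap_field.dat", "./fields/")
	print "axis field functions in store =", RF_AxisFieldsStore.getSize()
	return axis_field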
class AxisFieldRF_Gap(AbstractRF_Gap):
"""
	The RF gap representation that uses the RF axis field. Users have to provide the
	input file with this field. The field function should be normalized so that its integral is 1.
	The absolute value of the field will be calculated as cavAmp*E0L*Field(z).
	The three point tracker RfGapThreePointTTF will be used to track the Bunch instance.
	The longitudinal step z_step used during the tracking should be defined externally. The
	default value is 1 cm. The minimal and maximal longitudinal coordinates z_min
	and z_max can be taken directly from the axis field file or can be corrected
	externally to avoid overlap of electric fields from neighboring gaps.
	An instance of this class keeps a reference to the BaseRF_Gap instance and uses
	it as a source of information.
"""
#---- static test bunch for the design phase calculation
static_test_bunch = Bunch()
def __init__(self, baserf_gap):
"""
Constructor for the axis field RF gap.
E0L parameter is in GeV. Phases are in radians.
"""
AbstractRF_Gap.__init__(self,baserf_gap.getName())
self.setAsFirstRFGap(baserf_gap.isFirstRFGap())
self.baserf_gap = baserf_gap
self.setType("axis_field_rfgap")
self.addParam("E0TL",self.baserf_gap.getParam("E0TL"))
self.addParam("mode",self.baserf_gap.getParam("mode"))
self.addParam("gap_phase",self.baserf_gap.getParam("gap_phase"))
self.addParam("rfCavity",self.baserf_gap.getParam("rfCavity"))
self.addParam("E0L",self.baserf_gap.getParam("E0L"))
self.addParam("EzFile",self.baserf_gap.getParam("EzFile"))
self.setPosition(self.baserf_gap.getPosition())
#---- aperture parameters
if(baserf_gap.hasParam("aperture") and baserf_gap.hasParam("aprt_type")):
self.addParam("aperture",baserf_gap.getParam("aperture"))
self.addParam("aprt_type",baserf_gap.getParam("aprt_type"))
#---- axis field related parameters
self.axis_field_func = None
self.z_step = 0.01
self.z_min = 0.
self.z_max = 0.
self.z_tolerance = 0.000001 # in meters
self.phase_tolerance = 0.001 # in degrees
#---- gap_phase_vs_z_arr keeps [pos,phase] pairs after the tracking
self.gap_phase_vs_z_arr = []
#---- The position of the particle during the run.
#---- It is used for the path length accounting.
self.part_pos = 0.
#---- The RF gap model - three points model
self.cppGapModel = RfGapThreePointTTF()
def setLinacTracker(self, switch = True):
"""
		This method will switch the RF gap model to a slower one where the transformation
		coefficients are calculated for each particle in the bunch.
"""
AbstractRF_Gap.setLinacTracker(self,switch)
if(switch):
self.cppGapModel = RfGapThreePointTTF_slow()
else:
self.cppGapModel = RfGapThreePointTTF()
def readAxisFieldFile(self,dir_location = "", file_name = "", z_step = 0.01):
"""
		Method. Reads the axis field from the file. Users have to call this method;
		there is no other source of information about the axis field.
"""
if(file_name == ""):
self.axis_field_func = RF_AxisFieldsStore.addAxisField(self.baserf_gap.getParam("EzFile"),dir_location)
else:
self.axis_field_func = RF_AxisFieldsStore.addAxisField(file_name,dir_location)
z_min = self.axis_field_func.getMinX()
z_max = self.axis_field_func.getMaxX()
self.z_step = z_step
self.setZ_Min_Max(z_min,z_max)
def getAxisFieldFunction(self):
"""
It returns the axis field function.
"""
return self.axis_field_func
def setAxisFieldFunction(self,axis_field_func):
"""
It sets the axis field function.
"""
self.axis_field_func = axis_field_func
def getZ_Step(self):
"""
Returns the longitudinal step during the tracking.
"""
return self.z_step
def setZ_Step(self,z_step):
if(self.axis_field_func == None):
msg = "Class AxisFieldRF_Gap: You have to get the axis field from a file first!"
msg = msg + os.linesep
msg = "Call readAxisFieldFile(dir_location,file_name) method first!"
orbitFinalize(msg)
length = self.getLength()
nParts = int(length*1.0000001/z_step)
if(nParts < 1): nParts = 1
self.z_step = length/nParts
#---- this will set the even distribution of the lengths between parts
self.setnParts(nParts)
def getZ_Min_Max(self):
"""
Returns the tuple (z_min,z_max) with the limits of the axis field.
These parameters define the length of the node. The center of the node
is at 0.
"""
return (self.z_min,self.z_max)
def setZ_Min_Max(self,z_min,z_max):
"""
		Sets the actual longitudinal sizes of the node. It is used for small corrections
		of the length to avoid field overlap from neighbouring gaps.
"""
self.z_min = z_min
self.z_max = z_max
length = self.z_max - self.z_min
self.setLength(length)
self.setZ_Step(self.z_step)
def getEzFiled(self,z):
"""
Returns the Ez field on the axis of the RF gap in V/m.
"""
rfCavity = self.getRF_Cavity()
E0L = 1.0e+9*self.getParam("E0L")
rf_ampl = rfCavity.getAmp()
Ez = E0L*rf_ampl*self.axis_field_func.getY(z)
return Ez
def getRF_Cavity(self):
"""
Returns the parent RF Cavity.
"""
return self.getParam("rfCavity")
def track(self, paramsDict):
"""
The AxisFieldRF_Gap class implementation of
the AccNode class track(probe) method.
		Users have to track the design bunch first to set up all gaps' arrival times.
"""
rfCavity = self.getRF_Cavity()
if(not rfCavity.isDesignSetUp()):
sequence = self.getSequence()
accLattice = sequence.getLinacAccLattice()
msg = "The AxisFieldRF_Gap class. "
msg += "You have to run trackDesign on the LinacAccLattice"
msg += "first to initialize all RF Cavities' phases!"
msg += os.linesep
if(accLattice != None):
msg = msg + "Lattice =" + accLattice.getName()
msg = msg + os.linesep
if(sequence != None):
msg = msg + "Sequence =" + sequence.getName()
msg = msg + os.linesep
msg = msg + "RF Cavity =" + rfCavity.getName()
msg = msg + os.linesep
msg = msg + "Name of element=" + self.getName()
msg = msg + os.linesep
msg = msg + "Type of element=" + self.getType()
msg = msg + os.linesep
orbitFinalize(msg)
#-----------------------------------------
nParts = self.getnParts()
index = self.getActivePartIndex()
part_length = self.getLength(index)
bunch = paramsDict["bunch"]
syncPart = bunch.getSyncParticle()
eKin_in = syncPart.kinEnergy()
E0L = 1.0e+9*self.getParam("E0L")
modePhase = self.baserf_gap.getParam("mode")*math.pi
frequency = rfCavity.getFrequency()
rf_ampl = rfCavity.getAmp()
arrival_time = syncPart.time()
designArrivalTime = rfCavity.getDesignArrivalTime()
phase_shift = rfCavity.getPhase() - rfCavity.getDesignPhase()
phase = rfCavity.getFirstGapEtnrancePhase() + phase_shift
#----------------------------------------
phase = math.fmod(frequency*(arrival_time - designArrivalTime)*2.0*math.pi + phase,2.0*math.pi)
if(index == 0):
self.part_pos = self.z_min
self.gap_phase_vs_z_arr = [[self.part_pos,phase],]
zm = self.part_pos
z0 = zm + part_length/2
zp = z0 + part_length/2
Em = E0L*rf_ampl*self.axis_field_func.getY(zm)
E0 = E0L*rf_ampl*self.axis_field_func.getY(z0)
Ep = E0L*rf_ampl*self.axis_field_func.getY(zp)
#---- advance the particle position
self.tracking_module.drift(bunch,part_length/2)
self.part_pos += part_length/2
#call rf gap model to track the bunch
time_middle_gap = syncPart.time() - arrival_time
delta_phase = math.fmod(2*math.pi*time_middle_gap*frequency,2.0*math.pi)
self.gap_phase_vs_z_arr.append([self.part_pos,phase+delta_phase])
#---- this part is the debugging ---START---
#eKin_out = syncPart.kinEnergy()
#s = "debug pos[mm]= %7.2f "%(self.part_pos*1000.)
#s += " ekin= %9.6f"%(syncPart.kinEnergy()*1000.)
#s += " phase = %9.2f "%(phaseNearTargetPhaseDeg((phase+delta_phase)*180./math.pi,0.))
#s += " dE= %9.6f "%((eKin_out-eKin_in)*1000.)
#print s
#---- this part is the debugging ---STOP---
self.cppGapModel.trackBunch(bunch,part_length/2,Em,E0,Ep,frequency,phase+delta_phase+modePhase)
self.tracking_module.drift(bunch,part_length/2)
#---- advance the particle position
self.part_pos += part_length/2
time_middle_gap = syncPart.time() - arrival_time
delta_phase = math.fmod(2*math.pi*time_middle_gap*frequency,2.0*math.pi)
self.gap_phase_vs_z_arr.append([self.part_pos,phase+delta_phase])
#---- this part is the debugging ---START---
#eKin_out = syncPart.kinEnergy()
#s = "debug pos[mm]= %7.2f "%(self.part_pos*1000.)
#s += " ekin= %9.6f"%(syncPart.kinEnergy()*1000.)
#s += " phase = %9.2f "%(phaseNearTargetPhaseDeg((phase+delta_phase)*180./math.pi,0.))
#s += " dE= %9.6f "%((eKin_out-eKin_in)*1000.)
#print s
#---- this part is the debugging ---STOP---
#---- Calculate the phase at the center
if(index == (nParts - 1)):
pos_old = self.gap_phase_vs_z_arr[0][0]
phase_gap = self.gap_phase_vs_z_arr[0][1]
ind_min = -1
for ind in range(1,len(self.gap_phase_vs_z_arr)):
[pos,phase_gap] = self.gap_phase_vs_z_arr[ind]
if(math.fabs(pos) >= math.fabs(pos_old)):
ind_min = ind -1
phase_gap = self.gap_phase_vs_z_arr[ind_min][1]
phase_gap = phaseNearTargetPhase(phase_gap,0.)
self.gap_phase_vs_z_arr[ind_min][1] = phase_gap
break
pos_old = pos
self.setGapPhase(phase_gap)
#---- wrap all gap part's phases around the central one
if(ind_min > 0):
for ind in range(ind_min-1,-1,-1):
[pos,phase_gap] = self.gap_phase_vs_z_arr[ind]
[pos,phase_gap1] = self.gap_phase_vs_z_arr[ind+1]
self.gap_phase_vs_z_arr[ind][1] = phaseNearTargetPhase(phase_gap,phase_gap1)
for ind in range(ind_min+1,len(self.gap_phase_vs_z_arr)):
[pos,phase_gap] = self.gap_phase_vs_z_arr[ind]
[pos,phase_gap1] = self.gap_phase_vs_z_arr[ind-1]
self.gap_phase_vs_z_arr[ind][1] = phaseNearTargetPhase(phase_gap,phase_gap1)
def trackDesign(self, paramsDict):
"""
		This method tracks the design synchronous particle through the RF gap.
		If the gap is the first gap in the cavity we store the arrival time as
		a cavity parameter. The pair of the cavity design phase and this arrival time
		at the first gap is used during the real bunch tracking.
"""
nParts = self.getnParts()
index = self.getActivePartIndex()
part_length = self.getLength(index)
bunch = paramsDict["bunch"]
syncPart = bunch.getSyncParticle()
eKin_in = syncPart.kinEnergy()
#---- parameter E0L is in GeV, but cppGapModel = RfGapThreePointTTF() uses fields in V/m
E0L = 1.0e+9*self.getParam("E0L")
modePhase = self.baserf_gap.getParam("mode")*math.pi
rfCavity = self.getRF_Cavity()
rf_ampl = rfCavity.getDesignAmp()
arrival_time = syncPart.time()
frequency = rfCavity.getFrequency()
phase = rfCavity.getFirstGapEtnrancePhase()
		#---- calculate the entrance phase
if(self.isFirstRFGap() and index == 0):
rfCavity.setDesignArrivalTime(arrival_time)
phase = self.__calculate_first_part_phase(bunch)
rfCavity.setFirstGapEtnrancePhase(phase)
rfCavity.setFirstGapEtnranceDesignPhase(phase)
rfCavity.setDesignSetUp(True)
rfCavity._setDesignPhase(rfCavity.getPhase())
rfCavity._setDesignAmp(rfCavity.getAmp())
#print "debug firs gap first part phase=",phase*180./math.pi," arr time=",arrival_time
else:
first_gap_arr_time = rfCavity.getDesignArrivalTime()
#print "debug name=",self.getName()," delta_phase=",frequency*(arrival_time - first_gap_arr_time)*360.0," phase=",phase*180/math.pi
phase = math.fmod(frequency*(arrival_time - first_gap_arr_time)*2.0*math.pi+phase,2.0*math.pi)
if(index == 0):
self.part_pos = self.z_min
self.gap_phase_vs_z_arr = [[self.part_pos,phase],]
#print "debug design name=",self.getName()," index=",index," pos=",self.part_pos," arr_time=",arrival_time," phase=",phase*180./math.pi," freq=",frequency
zm = self.part_pos
z0 = zm + part_length/2
zp = z0 + part_length/2
Em = E0L*rf_ampl*self.axis_field_func.getY(zm)
E0 = E0L*rf_ampl*self.axis_field_func.getY(z0)
Ep = E0L*rf_ampl*self.axis_field_func.getY(zp)
#---- advance the particle position
self.tracking_module.drift(bunch,part_length/2)
self.part_pos += part_length/2
#call rf gap model to track the bunch
time_middle_gap = syncPart.time() - arrival_time
delta_phase = math.fmod(2*math.pi*time_middle_gap*frequency,2.0*math.pi)
self.gap_phase_vs_z_arr.append([self.part_pos,phase+delta_phase])
#---- this part is the debugging ---START---
#eKin_out = syncPart.kinEnergy()
#s = "debug pos[mm]= %7.2f "%(self.part_pos*1000.)
#s += " ekin= %9.6f"%(syncPart.kinEnergy()*1000.)
#s += " phase = %9.2f "%(phaseNearTargetPhaseDeg((phase+delta_phase)*180./math.pi,0.))
#s += " dE= %9.6f "%((eKin_out-eKin_in)*1000.)
#print s
#---- this part is the debugging ---STOP---
self.cppGapModel.trackBunch(bunch,part_length/2,Em,E0,Ep,frequency,phase+delta_phase+modePhase)
self.tracking_module.drift(bunch,part_length/2)
#---- advance the particle position
self.part_pos += part_length/2
time_middle_gap = syncPart.time() - arrival_time
delta_phase = math.fmod(2*math.pi*time_middle_gap*frequency,2.0*math.pi)
self.gap_phase_vs_z_arr.append([self.part_pos,phase+delta_phase])
#---- this part is the debugging ---START---
#eKin_out = syncPart.kinEnergy()
#s = "debug pos[mm]= %7.2f "%(self.part_pos*1000.)
#s += " ekin= %9.6f"%(syncPart.kinEnergy()*1000.)
#s += " phase = %9.2f "%(phaseNearTargetPhaseDeg((phase+delta_phase)*180./math.pi,0.))
#s += " dE= %9.6f "%((eKin_out-eKin_in)*1000.)
#print s
#---- this part is the debugging ---STOP---
#---- Calculate the phase at the center
if(index == (nParts - 1)):
pos_old = self.gap_phase_vs_z_arr[0][0]
phase_gap = self.gap_phase_vs_z_arr[0][1]
ind_min = -1
for ind in range(1,len(self.gap_phase_vs_z_arr)):
[pos,phase_gap] = self.gap_phase_vs_z_arr[ind]
if(math.fabs(pos) >= math.fabs(pos_old)):
ind_min = ind -1
phase_gap = self.gap_phase_vs_z_arr[ind_min][1]
phase_gap = phaseNearTargetPhase(phase_gap,0.)
self.gap_phase_vs_z_arr[ind_min][1] = phase_gap
break
pos_old = pos
self.setGapPhase(phase_gap)
#---- wrap all gap part's phases around the central one
if(ind_min > 0):
for ind in range(ind_min-1,-1,-1):
[pos,phase_gap] = self.gap_phase_vs_z_arr[ind]
[pos,phase_gap1] = self.gap_phase_vs_z_arr[ind+1]
self.gap_phase_vs_z_arr[ind][1] = phaseNearTargetPhase(phase_gap,phase_gap1)
for ind in range(ind_min+1,len(self.gap_phase_vs_z_arr)):
[pos,phase_gap] = self.gap_phase_vs_z_arr[ind]
[pos,phase_gap1] = self.gap_phase_vs_z_arr[ind-1]
self.gap_phase_vs_z_arr[ind][1] = phaseNearTargetPhase(phase_gap,phase_gap1)
def calculate_first_part_phase(self,bunch_in):
"""
		This private method is exposed for use by the AxisField_and_Quad_RF_Gap class.
"""
phase_start = self.__calculate_first_part_phase(bunch_in)
return phase_start
def __calculate_first_part_phase(self,bunch_in):
rfCavity = self.getRF_Cavity()
#---- the design phase at the center of the RF gap
#---- (this is from a thin gap approach)
frequency = rfCavity.getFrequency()
modePhase = self.baserf_gap.getParam("mode")*math.pi
phase_cavity = phaseNearTargetPhase(rfCavity.getPhase(),0.)
#---- parameter E0L is in GeV, but cppGapModel = RfGapThreePointTTF() uses fields in V/m
E0L_local = 1.0e+9*rfCavity.getAmp()*self.getParam("E0L")
#---- we have to find the phase_start
#---- which is the phase at the distance z_min before the gap center
		#---- z_min by definition is negative
bunch = AxisFieldRF_Gap.static_test_bunch
bunch_in.copyEmptyBunchTo(bunch)
syncPart = bunch.getSyncParticle()
syncPart.time(0.)
eKin_init = syncPart.kinEnergy()
#print "debug eKin[MeV]= %9.5f"%(syncPart.kinEnergy()*1000.)
beta = syncPart.beta()
phase_adv = 2.0*math.pi*frequency*math.fabs(self.z_min)/(beta*speed_of_light)
#print "debug phase diff at start=",phase_adv*180./math.pi
phase_start = phaseNearTargetPhase(phase_cavity - phase_adv,0.)
#print "debug phase at start=",phase_start*180./math.pi
phase_cavity_new = phase_cavity + 10*self.phase_tolerance
while(math.fabs(phase_cavity_new-phase_cavity) > self.phase_tolerance*math.pi/180.):
bunch_in.copyEmptyBunchTo(bunch)
syncPart.time(0.)
syncPart.kinEnergy(eKin_init)
z_old = self.z_min
z = self.z_min + self.z_step
while(z < 0.):
if((z+ self.z_step) > 0.):
z = 0.
if(math.fabs(z - z_old) < self.z_tolerance):
break
half_step = (z - z_old)/2
zm = z_old
z0 = zm + half_step
zp = z0 + half_step
self.tracking_module.drift(bunch,half_step)
time_gap = syncPart.time()
delta_phase = 2*math.pi*time_gap*frequency
Em = E0L_local*self.axis_field_func.getY(zm)
E0 = E0L_local*self.axis_field_func.getY(z0)
Ep = E0L_local*self.axis_field_func.getY(zp)
#s = "debug z[mm]= %7.2f "%(z0*1000.)
#s += " ekin= %9.5f"%(syncPart.kinEnergy()*1000.)
#s += " phase = %9.2f "%(phaseNearTargetPhaseDeg((phase_start+delta_phase+modePhase)*180./math.pi,0.))
#print s
self.cppGapModel.trackBunch(bunch,half_step,Em,E0,Ep,frequency,phase_start+delta_phase+modePhase)
self.tracking_module.drift(bunch,half_step)
#time_gap = syncPart.time()
#delta_phase = 2*math.pi*time_gap*frequency
#s = "debug z[mm]= %7.2f "%(zp*1000.)
#s += " ekin= %9.5f"%(syncPart.kinEnergy()*1000.)
#s += " phase = %9.2f "%(phaseNearTargetPhaseDeg((phase_start+delta_phase+modePhase)*180./math.pi,0.))
#print s
z_old = z
z = z_old + self.z_step
time_gap = syncPart.time()
delta_phase =2*math.pi*time_gap*frequency
phase_cavity_new = phaseNearTargetPhase(phase_start+delta_phase,0.)
#s = " phase_diff = %8.4f "%(delta_phase*180./math.pi)
#s += " phase_cavity = %8.4f "%(phase_cavity*180./math.pi)
#s += " new = %8.4f "%(phase_cavity_new *180./math.pi)
#s += " phase_start = %8.4f "%(phase_start*180./math.pi)
#s += " eKin[MeV]= %9.5f "%(syncPart.kinEnergy()*1000.)
#s += " dE[MeV]= %9.6f "%(syncPart.kinEnergy()*1000. - 2.5)
#print "debug "+s
phase_start -= 0.8*(phase_cavity_new - phase_cavity)
#---- undo the last change in the while loop
phase_start += 0.8*(phase_cavity_new - phase_cavity)
#print "debug phase_start=",phase_start*180./math.pi
return phase_start
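#---- Illustrative sketch (not part of the original module): wrapping an existing
#---- BaseRF_Gap in an AxisFieldRF_Gap and reading its axis field file, which must
#---- be done before any tracking (see readAxisFieldFile above). The directory and
#---- file names are hypothetical placeholders.
def _example_axis_field_gap(base_gap):
	axis_gap = AxisFieldRF_Gap(base_gap)
	axis_gap.readAxisFieldFile(dir_location = "./fields/", file_name = "rf_gap_field.dat", z_step = 0.005)
	(z_min, z_max) = axis_gap.getZ_Min_Max()
	return (axis_gap, z_min, z_max)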
|
mit
|
krvss/django-social-auth
|
social_auth/db/mongoengine_models.py
|
2
|
3328
|
"""
MongoEngine models for Social Auth
Requires MongoEngine 0.6.10
"""
try:
from django.contrib.auth.hashers import UNUSABLE_PASSWORD
_ = UNUSABLE_PASSWORD # to quiet flake
except (ImportError, AttributeError):
UNUSABLE_PASSWORD = '!'
from django.db.models import get_model
from django.utils.importlib import import_module
from mongoengine import DictField, Document, IntField, ReferenceField, \
StringField
from mongoengine.queryset import OperationError
from social_auth.utils import setting
from social_auth.db.base import UserSocialAuthMixin, AssociationMixin, \
NonceMixin
USER_MODEL_APP = setting('SOCIAL_AUTH_USER_MODEL') or \
setting('AUTH_USER_MODEL')
if USER_MODEL_APP:
USER_MODEL = get_model(*USER_MODEL_APP.rsplit('.', 1))
else:
USER_MODEL_MODULE, USER_MODEL_NAME = \
'mongoengine.django.auth.User'.rsplit('.', 1)
USER_MODEL = getattr(import_module(USER_MODEL_MODULE), USER_MODEL_NAME)
class UserSocialAuth(Document, UserSocialAuthMixin):
"""Social Auth association model"""
user = ReferenceField(USER_MODEL, dbref=True)
provider = StringField(max_length=32)
uid = StringField(max_length=255, unique_with='provider')
extra_data = DictField()
@classmethod
def get_social_auth_for_user(cls, user):
return cls.objects(user=user)
@classmethod
def create_social_auth(cls, user, uid, provider):
        if not isinstance(uid, basestring):
uid = str(uid)
return cls.objects.create(user=user, uid=uid, provider=provider)
@classmethod
def username_max_length(cls):
return UserSocialAuth.user_model().username.max_length
@classmethod
def email_max_length(cls):
return UserSocialAuth.user_model().email.max_length
@classmethod
def user_model(cls):
return USER_MODEL
@classmethod
def create_user(cls, *args, **kwargs):
# Empty string makes email regex validation fail
if kwargs.get('email') == '':
kwargs['email'] = None
kwargs.setdefault('password', UNUSABLE_PASSWORD)
return cls.user_model().create_user(*args, **kwargs)
@classmethod
def allowed_to_disconnect(cls, user, backend_name, association_id=None):
if association_id is not None:
qs = cls.objects.filter(id__ne=association_id)
else:
qs = cls.objects.filter(provider__ne=backend_name)
qs = qs.filter(user=user)
if hasattr(user, 'has_usable_password'):
valid_password = user.has_usable_password()
else:
valid_password = True
return valid_password or qs.count() > 0
class Nonce(Document, NonceMixin):
"""One use numbers"""
server_url = StringField(max_length=255)
timestamp = IntField()
salt = StringField(max_length=40)
class Association(Document, AssociationMixin):
"""OpenId account association"""
server_url = StringField(max_length=255)
handle = StringField(max_length=255)
secret = StringField(max_length=255) # Stored base64 encoded
issued = IntField()
lifetime = IntField()
assoc_type = StringField(max_length=64)
def is_integrity_error(exc):
return exc.__class__ is OperationError and 'E11000' in exc.message
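# Illustrative sketch (not part of the original module): creating an association
# for an existing user document. It assumes a MongoEngine connection has already
# been registered; the uid and provider values are hypothetical placeholders.
def _example_create_association(user):
    return UserSocialAuth.create_social_auth(user, uid='12345', provider='twitter')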
|
bsd-3-clause
|
glibersat/firmware
|
platform/spark/firmware/user/tests/app/rsakeygen/verify_keys.py
|
14
|
1408
|
#!/usr/bin/env python
import sys
import os
import subprocess
def usage():
print '%s [input_file or tty]' % (sys.argv[0])
if len(sys.argv) != 2:
usage()
sys.exit(1)
f = open(sys.argv[1], 'r')
i = -1
def hex2bin(hex):
return hex.decode('hex')
def run(cmd, stdin):
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
out, err = p.communicate(stdin)
return (out.strip(), err.strip())
for l in f:
l = l.strip()
if l.startswith('done'):
print 'Done'
sys.exit(0)
if l.startswith('keys:'):
l = l[5:]
else:
continue
priv, pub = l.split(":")
i += 1
privb = hex2bin(priv)
pubb = hex2bin(pub)
privcheck = run(['openssl', 'rsa', '-inform', 'DER', '-noout', '-check'], privb)
privmod = run(['openssl', 'rsa', '-inform', 'DER', '-noout', '-modulus'], privb)
pubmod = run(['openssl', 'rsa', '-pubin', '-inform', 'DER', '-noout', '-modulus'], pubb)
if not privcheck[1] and (privmod[0] == pubmod[0]) and not privmod[1] and not pubmod[1]:
print '%d:OK' % (i, )
else:
print '%d:FAIL' % (i, )
sys.stderr.write(privcheck[0])
sys.stderr.write(privcheck[1])
sys.stderr.write(privmod[0])
sys.stderr.write(privmod[1])
sys.stderr.write(pubmod[0])
sys.stderr.write(pubmod[1])
print 'Done'
sys.exit(0)
|
agpl-3.0
|
mancoast/CPythonPyc_test
|
cpython/276_test_univnewlines2k.py
|
136
|
3848
|
# Tests universal newline support for both reading and parsing files.
import unittest
import os
import sys
from test import test_support
if not hasattr(sys.stdin, 'newlines'):
raise unittest.SkipTest, \
"This Python does not have universal newline support"
FATX = 'x' * (2**14)
DATA_TEMPLATE = [
"line1=1",
"line2='this is a very long line designed to go past the magic " +
"hundred character limit that is inside fileobject.c and which " +
"is meant to speed up the common case, but we also want to test " +
"the uncommon case, naturally.'",
"def line3():pass",
"line4 = '%s'" % FATX,
]
DATA_LF = "\n".join(DATA_TEMPLATE) + "\n"
DATA_CR = "\r".join(DATA_TEMPLATE) + "\r"
DATA_CRLF = "\r\n".join(DATA_TEMPLATE) + "\r\n"
# Note that DATA_MIXED also tests the ability to recognize a lone \r
# before end-of-file.
DATA_MIXED = "\n".join(DATA_TEMPLATE) + "\r"
DATA_SPLIT = [x + "\n" for x in DATA_TEMPLATE]
del x
class TestGenericUnivNewlines(unittest.TestCase):
# use a class variable DATA to define the data to write to the file
# and a class variable NEWLINE to set the expected newlines value
READMODE = 'U'
WRITEMODE = 'wb'
def setUp(self):
with open(test_support.TESTFN, self.WRITEMODE) as fp:
fp.write(self.DATA)
def tearDown(self):
try:
os.unlink(test_support.TESTFN)
except:
pass
def test_read(self):
with open(test_support.TESTFN, self.READMODE) as fp:
data = fp.read()
self.assertEqual(data, DATA_LF)
self.assertEqual(repr(fp.newlines), repr(self.NEWLINE))
def test_readlines(self):
with open(test_support.TESTFN, self.READMODE) as fp:
data = fp.readlines()
self.assertEqual(data, DATA_SPLIT)
self.assertEqual(repr(fp.newlines), repr(self.NEWLINE))
def test_readline(self):
with open(test_support.TESTFN, self.READMODE) as fp:
data = []
d = fp.readline()
while d:
data.append(d)
d = fp.readline()
self.assertEqual(data, DATA_SPLIT)
self.assertEqual(repr(fp.newlines), repr(self.NEWLINE))
def test_seek(self):
with open(test_support.TESTFN, self.READMODE) as fp:
fp.readline()
pos = fp.tell()
data = fp.readlines()
self.assertEqual(data, DATA_SPLIT[1:])
fp.seek(pos)
data = fp.readlines()
self.assertEqual(data, DATA_SPLIT[1:])
def test_execfile(self):
namespace = {}
with test_support.check_py3k_warnings():
execfile(test_support.TESTFN, namespace)
func = namespace['line3']
self.assertEqual(func.func_code.co_firstlineno, 3)
self.assertEqual(namespace['line4'], FATX)
class TestNativeNewlines(TestGenericUnivNewlines):
NEWLINE = None
DATA = DATA_LF
READMODE = 'r'
WRITEMODE = 'w'
class TestCRNewlines(TestGenericUnivNewlines):
NEWLINE = '\r'
DATA = DATA_CR
class TestLFNewlines(TestGenericUnivNewlines):
NEWLINE = '\n'
DATA = DATA_LF
class TestCRLFNewlines(TestGenericUnivNewlines):
NEWLINE = '\r\n'
DATA = DATA_CRLF
def test_tell(self):
with open(test_support.TESTFN, self.READMODE) as fp:
self.assertEqual(repr(fp.newlines), repr(None))
data = fp.readline()
pos = fp.tell()
self.assertEqual(repr(fp.newlines), repr(self.NEWLINE))
class TestMixedNewlines(TestGenericUnivNewlines):
NEWLINE = ('\r', '\n')
DATA = DATA_MIXED
def test_main():
test_support.run_unittest(
TestNativeNewlines,
TestCRNewlines,
TestLFNewlines,
TestCRLFNewlines,
TestMixedNewlines
)
if __name__ == '__main__':
test_main()
|
gpl-3.0
|
Shaps/ansible
|
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/connection/netconf.py
|
47
|
13891
|
# (c) 2016 Red Hat Inc.
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """author: Ansible Networking Team
connection: netconf
short_description: Provides a persistent connection using the netconf protocol
description:
- This connection plugin provides a connection to remote devices over the SSH NETCONF
subsystem. This connection plugin is typically used by network devices for sending
and receiving RPC calls over NETCONF.
- Note this connection plugin requires ncclient to be installed on the local Ansible
controller.
requirements:
- ncclient
options:
host:
description:
- Specifies the remote device FQDN or IP address to establish the SSH connection
to.
default: inventory_hostname
vars:
- name: ansible_host
port:
type: int
description:
- Specifies the port on the remote device that listens for connections when establishing
the SSH connection.
default: 830
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_port
network_os:
description:
- Configures the device platform network operating system. This value is used
to load a device specific netconf plugin. If this option is not configured
(or set to C(auto)), then Ansible will attempt to guess the correct network_os
to use. If it can not guess a network_os correctly it will use C(default).
vars:
- name: ansible_network_os
remote_user:
description:
- The username used to authenticate to the remote device when the SSH connection
is first established. If the remote_user is not specified, the connection will
use the username of the logged in user.
- Can be configured from the CLI via the C(--user) or C(-u) options.
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
password:
description:
- Configures the user password used to authenticate to the remote device when
first establishing the SSH connection.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
- name: ansible_netconf_password
private_key_file:
description:
- The private SSH key or certificate file used to authenticate to the remote device
when first establishing the SSH connection.
ini:
- section: defaults
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
look_for_keys:
default: true
description:
- Enables looking for ssh keys in the usual locations for ssh keys (e.g. :file:`~/.ssh/id_*`).
env:
- name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS
ini:
- section: paramiko_connection
key: look_for_keys
type: boolean
host_key_checking:
description: Set this to "False" if you want to avoid host key checking by the
underlying tools Ansible uses to connect to the host
type: boolean
default: true
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
- name: ANSIBLE_NETCONF_HOST_KEY_CHECKING
ini:
- section: defaults
key: host_key_checking
- section: paramiko_connection
key: host_key_checking
vars:
- name: ansible_host_key_checking
- name: ansible_ssh_host_key_checking
- name: ansible_netconf_host_key_checking
persistent_connect_timeout:
type: int
description:
- Configures, in seconds, the amount of time to wait when trying to initially
establish a persistent connection. If this value expires before the connection
to the remote device is completed, the connection will fail.
default: 30
ini:
- section: persistent_connection
key: connect_timeout
env:
- name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT
vars:
- name: ansible_connect_timeout
persistent_command_timeout:
type: int
description:
- Configures, in seconds, the amount of time to wait for a command to return from
the remote device. If this timer is exceeded before the command returns, the
connection plugin will raise an exception and close.
default: 30
ini:
- section: persistent_connection
key: command_timeout
env:
- name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT
vars:
- name: ansible_command_timeout
netconf_ssh_config:
description:
- This variable is used to enable bastion/jump host with netconf connection. If
set to True the bastion/jump host ssh settings should be present in ~/.ssh/config
file, alternatively it can be set to custom ssh configuration file path to read
the bastion/jump host settings.
ini:
- section: netconf_connection
key: ssh_config
version_added: '2.7'
env:
- name: ANSIBLE_NETCONF_SSH_CONFIG
vars:
- name: ansible_netconf_ssh_config
version_added: '2.7'
persistent_log_messages:
type: boolean
description:
- This flag will enable logging the command executed and response received from
target device in the ansible log file. For this option to work 'log_path' ansible
configuration option is required to be set to a file path with write access.
- Be sure to fully understand the security implications of enabling this option
as it could create a security vulnerability by logging sensitive information
in log file.
default: false
ini:
- section: persistent_connection
key: log_messages
env:
- name: ANSIBLE_PERSISTENT_LOG_MESSAGES
vars:
- name: ansible_persistent_log_messages
"""
import os
import logging
import json
from ansible.errors import AnsibleConnectionFailure, AnsibleError
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.basic import missing_required_lib
from ansible.module_utils.parsing.convert_bool import (
BOOLEANS_TRUE,
BOOLEANS_FALSE,
)
from ansible.plugins.loader import netconf_loader
from ansible.plugins.connection import NetworkConnectionBase, ensure_connect
try:
from ncclient import manager
from ncclient.operations import RPCError
from ncclient.transport.errors import SSHUnknownHostError
from ncclient.xml_ import to_ele, to_xml
HAS_NCCLIENT = True
NCCLIENT_IMP_ERR = None
except (
ImportError,
AttributeError,
) as err: # paramiko and gssapi are incompatible and raise AttributeError not ImportError
HAS_NCCLIENT = False
NCCLIENT_IMP_ERR = err
logging.getLogger("ncclient").setLevel(logging.INFO)
class Connection(NetworkConnectionBase):
"""NetConf connections"""
transport = "ansible.netcommon.netconf"
has_pipelining = False
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(
play_context, new_stdin, *args, **kwargs
)
# If network_os is not specified then set the network os to auto
# This will be used to trigger the use of guess_network_os when connecting.
self._network_os = self._network_os or "auto"
self.netconf = netconf_loader.get(self._network_os, self)
if self.netconf:
self._sub_plugin = {
"type": "netconf",
"name": self.netconf._load_name,
"obj": self.netconf,
}
self.queue_message(
"vvvv",
"loaded netconf plugin %s from path %s for network_os %s"
% (
self.netconf._load_name,
self.netconf._original_path,
self._network_os,
),
)
else:
self.netconf = netconf_loader.get("default", self)
self._sub_plugin = {
"type": "netconf",
"name": "default",
"obj": self.netconf,
}
self.queue_message(
"display",
"unable to load netconf plugin for network_os %s, falling back to default plugin"
% self._network_os,
)
self.queue_message("log", "network_os is set to %s" % self._network_os)
self._manager = None
self.key_filename = None
self._ssh_config = None
def exec_command(self, cmd, in_data=None, sudoable=True):
"""Sends the request to the node and returns the reply
        The method accepts two forms of request. The first form is a byte
        string that represents the xml string to be sent over the netconf session.
The second form is a json-rpc (2.0) byte string.
"""
if self._manager:
# to_ele operates on native strings
request = to_ele(to_native(cmd, errors="surrogate_or_strict"))
if request is None:
return "unable to parse request"
try:
reply = self._manager.rpc(request)
except RPCError as exc:
error = self.internal_error(
data=to_text(to_xml(exc.xml), errors="surrogate_or_strict")
)
return json.dumps(error)
return reply.data_xml
else:
return super(Connection, self).exec_command(cmd, in_data, sudoable)
@property
@ensure_connect
def manager(self):
return self._manager
def _connect(self):
if not HAS_NCCLIENT:
raise AnsibleError(
"%s: %s"
% (
missing_required_lib("ncclient"),
to_native(NCCLIENT_IMP_ERR),
)
)
self.queue_message("log", "ssh connection done, starting ncclient")
allow_agent = True
if self._play_context.password is not None:
allow_agent = False
setattr(self._play_context, "allow_agent", allow_agent)
self.key_filename = (
self._play_context.private_key_file
or self.get_option("private_key_file")
)
if self.key_filename:
self.key_filename = str(os.path.expanduser(self.key_filename))
self._ssh_config = self.get_option("netconf_ssh_config")
if self._ssh_config in BOOLEANS_TRUE:
self._ssh_config = True
elif self._ssh_config in BOOLEANS_FALSE:
self._ssh_config = None
# Try to guess the network_os if the network_os is set to auto
if self._network_os == "auto":
for cls in netconf_loader.all(class_only=True):
network_os = cls.guess_network_os(self)
if network_os:
self.queue_message(
"vvv", "discovered network_os %s" % network_os
)
self._network_os = network_os
# If we have tried to detect the network_os but were unable to i.e. network_os is still 'auto'
# then use default as the network_os
if self._network_os == "auto":
# Network os not discovered. Set it to default
self.queue_message(
"vvv",
"Unable to discover network_os. Falling back to default.",
)
self._network_os = "default"
try:
ncclient_device_handler = self.netconf.get_option(
"ncclient_device_handler"
)
except KeyError:
ncclient_device_handler = "default"
self.queue_message(
"vvv",
"identified ncclient device handler: %s."
% ncclient_device_handler,
)
device_params = {"name": ncclient_device_handler}
try:
port = self._play_context.port or 830
self.queue_message(
"vvv",
"ESTABLISH NETCONF SSH CONNECTION FOR USER: %s on PORT %s TO %s WITH SSH_CONFIG = %s"
% (
self._play_context.remote_user,
port,
self._play_context.remote_addr,
self._ssh_config,
),
)
self._manager = manager.connect(
host=self._play_context.remote_addr,
port=port,
username=self._play_context.remote_user,
password=self._play_context.password,
key_filename=self.key_filename,
hostkey_verify=self.get_option("host_key_checking"),
look_for_keys=self.get_option("look_for_keys"),
device_params=device_params,
allow_agent=self._play_context.allow_agent,
timeout=self.get_option("persistent_connect_timeout"),
ssh_config=self._ssh_config,
)
self._manager._timeout = self.get_option(
"persistent_command_timeout"
)
except SSHUnknownHostError as exc:
raise AnsibleConnectionFailure(to_native(exc))
except ImportError:
raise AnsibleError(
"connection=netconf is not supported on {0}".format(
self._network_os
)
)
if not self._manager.connected:
return 1, b"", b"not connected"
self.queue_message(
"log", "ncclient manager object created successfully"
)
self._connected = True
super(Connection, self)._connect()
return (
0,
to_bytes(self._manager.session_id, errors="surrogate_or_strict"),
b"",
)
def close(self):
if self._manager:
self._manager.close_session()
super(Connection, self).close()
|
gpl-3.0
|
tqchen/tvm
|
tutorials/get_started/relay_quick_start.py
|
1
|
5928
|
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
.. _tutorial-relay-quick-start:
Quick Start Tutorial for Compiling Deep Learning Models
=======================================================
**Author**: `Yao Wang <https://github.com/kevinthesun>`_, `Truman Tian <https://github.com/SiNZeRo>`_
This example shows how to build a neural network with the Relay python frontend and
generate a runtime library for an Nvidia GPU with TVM.
Notice that you need to build TVM with cuda and llvm enabled.
"""
######################################################################
# Overview for Supported Hardware Backend of TVM
# ----------------------------------------------
# The image below shows the hardware backends currently supported by TVM:
#
# .. image:: https://github.com/dmlc/web-data/raw/master/tvm/tutorial/tvm_support_list.png
# :align: center
#
# In this tutorial, we'll choose cuda and llvm as target backends.
# To begin with, let's import Relay and TVM.
import numpy as np
from tvm import relay
from tvm.relay import testing
import tvm
from tvm import te
from tvm.contrib import graph_runtime
######################################################################
# Define Neural Network in Relay
# ------------------------------
# First, let's define a neural network with relay python frontend.
# For simplicity, we'll use pre-defined resnet-18 network in Relay.
# Parameters are initialized with Xavier initializer.
# Relay also supports other model formats such as MXNet, CoreML, ONNX and
# Tensorflow.
#
# In this tutorial, we assume we will do inference on our device
# and the batch size is set to be 1. Input images are RGB color
# images of size 224 * 224. We can call the :any:`tvm.relay.TupleWrapper.astext()`
# to show the network structure.
batch_size = 1
num_class = 1000
image_shape = (3, 224, 224)
data_shape = (batch_size,) + image_shape
out_shape = (batch_size, num_class)
mod, params = relay.testing.resnet.get_workload(
num_layers=18, batch_size=batch_size, image_shape=image_shape
)
# set show_meta_data=True if you want to show meta data
print(mod.astext(show_meta_data=False))
######################################################################
# Compilation
# -----------
# The next step is to compile the model using the Relay/TVM pipeline.
# Users can specify the optimization level of the compilation.
# Currently this value can be 0 to 3. The optimization passes include
# operator fusion, pre-computation, layout transformation and so on.
#
# :py:func:`relay.build` returns three components: the execution graph in
# json format, the TVM module library of compiled functions specifically
# for this graph on the target hardware, and the parameter blobs of
# the model. During the compilation, Relay does the graph-level
# optimization while TVM does the tensor-level optimization, resulting
# in an optimized runtime module for model serving.
#
# We'll first compile for an Nvidia GPU. Behind the scenes, :py:func:`relay.build`
# first does a number of graph-level optimizations, e.g. pruning, fusing, etc.,
# then registers the operators (i.e. the nodes of the optimized graphs) to
# TVM implementations to generate a `tvm.module`.
# To generate the module library, TVM will first transfer the high level IR
# into the lower intrinsic IR of the specified target backend, which is CUDA
# in this example. Then the machine code will be generated as the module library.
opt_level = 3
target = tvm.target.cuda()
with tvm.transform.PassContext(opt_level=opt_level):
lib = relay.build(mod, target, params=params)
#####################################################################
# Run the generated library
# -------------------------
# Now we can create graph runtime and run the module on Nvidia GPU.
# create random input
ctx = tvm.gpu()
data = np.random.uniform(-1, 1, size=data_shape).astype("float32")
# create module
module = graph_runtime.GraphModule(lib["default"](ctx))
# set input and parameters
module.set_input("data", data)
# run
module.run()
# get output
out = module.get_output(0, tvm.nd.empty(out_shape)).asnumpy()
# Print first 10 elements of output
print(out.flatten()[0:10])
######################################################################
# Save and Load Compiled Module
# -----------------------------
# We can also save the graph, lib and parameters into files and load them
# back in deploy environment.
####################################################
# save the graph, lib and params into separate files
from tvm.contrib import util
temp = util.tempdir()
path_lib = temp.relpath("deploy_lib.tar")
lib.export_library(path_lib)
print(temp.listdir())
####################################################
# load the module back.
loaded_lib = tvm.runtime.load_module(path_lib)
input_data = tvm.nd.array(np.random.uniform(size=data_shape).astype("float32"))
module = graph_runtime.GraphModule(loaded_lib["default"](ctx))
module.run(data=input_data)
out_deploy = module.get_output(0).asnumpy()
# Print first 10 elements of output
print(out_deploy.flatten()[0:10])
# check whether the output from deployed module is consistent with original one
tvm.testing.assert_allclose(out_deploy, out, atol=1e-3)
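######################################################################
# Run the same model on CPU (editor's sketch)
# --------------------------------------------------
# The overview above lists llvm as a second supported backend; the block below
# is an illustrative addition (not part of the original tutorial) that compiles
# and runs the same workload for CPU, assuming TVM was built with llvm enabled
# and reusing only APIs already shown in this tutorial.
target_cpu = "llvm"
with tvm.transform.PassContext(opt_level=opt_level):
    lib_cpu = relay.build(mod, target_cpu, params=params)
cpu_ctx = tvm.cpu()
module_cpu = graph_runtime.GraphModule(lib_cpu["default"](cpu_ctx))
module_cpu.set_input("data", data)
module_cpu.run()
out_cpu = module_cpu.get_output(0, tvm.nd.empty(out_shape)).asnumpy()
# The CPU result should closely match the CUDA result printed earlier.
print(out_cpu.flatten()[0:10])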
| apache-2.0 |
| CheckYourScreen/Arsenic.Kernel_onyx-oos | tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/SchedGui.py | 12980 | 5411 |
# SchedGui.py - Python extension for perf script, basic GUI code for
# traces drawing and overview.
#
# Copyright (C) 2010 by Frederic Weisbecker <[email protected]>
#
# This software is distributed under the terms of the GNU General
# Public License ("GPL") version 2 as published by the Free Software
# Foundation.
try:
import wx
except ImportError:
raise ImportError("You need to install the wxpython lib for this script")
class RootFrame(wx.Frame):
Y_OFFSET = 100
RECT_HEIGHT = 100
RECT_SPACE = 50
EVENT_MARKING_WIDTH = 5
def __init__(self, sched_tracer, title, parent = None, id = -1):
wx.Frame.__init__(self, parent, id, title)
(self.screen_width, self.screen_height) = wx.GetDisplaySize()
self.screen_width -= 10
self.screen_height -= 10
self.zoom = 0.5
self.scroll_scale = 20
self.sched_tracer = sched_tracer
self.sched_tracer.set_root_win(self)
(self.ts_start, self.ts_end) = sched_tracer.interval()
self.update_width_virtual()
self.nr_rects = sched_tracer.nr_rectangles() + 1
self.height_virtual = RootFrame.Y_OFFSET + (self.nr_rects * (RootFrame.RECT_HEIGHT + RootFrame.RECT_SPACE))
# whole window panel
self.panel = wx.Panel(self, size=(self.screen_width, self.screen_height))
# scrollable container
self.scroll = wx.ScrolledWindow(self.panel)
self.scroll.SetScrollbars(self.scroll_scale, self.scroll_scale, self.width_virtual / self.scroll_scale, self.height_virtual / self.scroll_scale)
self.scroll.EnableScrolling(True, True)
self.scroll.SetFocus()
# scrollable drawing area
self.scroll_panel = wx.Panel(self.scroll, size=(self.screen_width - 15, self.screen_height / 2))
self.scroll_panel.Bind(wx.EVT_PAINT, self.on_paint)
self.scroll_panel.Bind(wx.EVT_KEY_DOWN, self.on_key_press)
self.scroll_panel.Bind(wx.EVT_LEFT_DOWN, self.on_mouse_down)
self.scroll.Bind(wx.EVT_PAINT, self.on_paint)
self.scroll.Bind(wx.EVT_KEY_DOWN, self.on_key_press)
self.scroll.Bind(wx.EVT_LEFT_DOWN, self.on_mouse_down)
self.scroll.Fit()
self.Fit()
self.scroll_panel.SetDimensions(-1, -1, self.width_virtual, self.height_virtual, wx.SIZE_USE_EXISTING)
self.txt = None
self.Show(True)
def us_to_px(self, val):
return val / (10 ** 3) * self.zoom
def px_to_us(self, val):
return (val / self.zoom) * (10 ** 3)
def scroll_start(self):
(x, y) = self.scroll.GetViewStart()
return (x * self.scroll_scale, y * self.scroll_scale)
def scroll_start_us(self):
(x, y) = self.scroll_start()
return self.px_to_us(x)
def paint_rectangle_zone(self, nr, color, top_color, start, end):
offset_px = self.us_to_px(start - self.ts_start)
width_px = self.us_to_px(end - self.ts_start)
offset_py = RootFrame.Y_OFFSET + (nr * (RootFrame.RECT_HEIGHT + RootFrame.RECT_SPACE))
width_py = RootFrame.RECT_HEIGHT
dc = self.dc
if top_color is not None:
(r, g, b) = top_color
top_color = wx.Colour(r, g, b)
brush = wx.Brush(top_color, wx.SOLID)
dc.SetBrush(brush)
dc.DrawRectangle(offset_px, offset_py, width_px, RootFrame.EVENT_MARKING_WIDTH)
width_py -= RootFrame.EVENT_MARKING_WIDTH
offset_py += RootFrame.EVENT_MARKING_WIDTH
(r, g, b) = color
color = wx.Colour(r, g, b)
brush = wx.Brush(color, wx.SOLID)
dc.SetBrush(brush)
dc.DrawRectangle(offset_px, offset_py, width_px, width_py)
def update_rectangles(self, dc, start, end):
start += self.ts_start
end += self.ts_start
self.sched_tracer.fill_zone(start, end)
def on_paint(self, event):
dc = wx.PaintDC(self.scroll_panel)
self.dc = dc
width = min(self.width_virtual, self.screen_width)
(x, y) = self.scroll_start()
start = self.px_to_us(x)
end = self.px_to_us(x + width)
self.update_rectangles(dc, start, end)
def rect_from_ypixel(self, y):
y -= RootFrame.Y_OFFSET
rect = y / (RootFrame.RECT_HEIGHT + RootFrame.RECT_SPACE)
height = y % (RootFrame.RECT_HEIGHT + RootFrame.RECT_SPACE)
if rect < 0 or rect > self.nr_rects - 1 or height > RootFrame.RECT_HEIGHT:
return -1
return rect
def update_summary(self, txt):
if self.txt:
self.txt.Destroy()
self.txt = wx.StaticText(self.panel, -1, txt, (0, (self.screen_height / 2) + 50))
def on_mouse_down(self, event):
(x, y) = event.GetPositionTuple()
rect = self.rect_from_ypixel(y)
if rect == -1:
return
t = self.px_to_us(x) + self.ts_start
self.sched_tracer.mouse_down(rect, t)
def update_width_virtual(self):
self.width_virtual = self.us_to_px(self.ts_end - self.ts_start)
def __zoom(self, x):
self.update_width_virtual()
(xpos, ypos) = self.scroll.GetViewStart()
xpos = self.us_to_px(x) / self.scroll_scale
self.scroll.SetScrollbars(self.scroll_scale, self.scroll_scale, self.width_virtual / self.scroll_scale, self.height_virtual / self.scroll_scale, xpos, ypos)
self.Refresh()
def zoom_in(self):
x = self.scroll_start_us()
self.zoom *= 2
self.__zoom(x)
def zoom_out(self):
x = self.scroll_start_us()
self.zoom /= 2
self.__zoom(x)
def on_key_press(self, event):
key = event.GetRawKeyCode()
if key == ord("+"):
self.zoom_in()
return
if key == ord("-"):
self.zoom_out()
return
key = event.GetKeyCode()
(x, y) = self.scroll.GetViewStart()
if key == wx.WXK_RIGHT:
self.scroll.Scroll(x + 1, y)
elif key == wx.WXK_LEFT:
self.scroll.Scroll(x - 1, y)
elif key == wx.WXK_DOWN:
self.scroll.Scroll(x, y + 1)
elif key == wx.WXK_UP:
self.scroll.Scroll(x, y - 1)
| gpl-2.0 |
| Greennut/ostproject | django/middleware/gzip.py | 98 | 1747 |
import re
from django.utils.text import compress_string
from django.utils.cache import patch_vary_headers
re_accepts_gzip = re.compile(r'\bgzip\b')
class GZipMiddleware(object):
"""
This middleware compresses content if the browser allows gzip compression.
It sets the Vary header accordingly, so that caches will base their storage
on the Accept-Encoding header.
"""
def process_response(self, request, response):
# It's not worth attempting to compress really short responses.
if len(response.content) < 200:
return response
patch_vary_headers(response, ('Accept-Encoding',))
# Avoid gzipping if we've already got a content-encoding.
if response.has_header('Content-Encoding'):
return response
# MSIE has issues with gzipped responses of various content types.
if "msie" in request.META.get('HTTP_USER_AGENT', '').lower():
ctype = response.get('Content-Type', '').lower()
if not ctype.startswith("text/") or "javascript" in ctype:
return response
ae = request.META.get('HTTP_ACCEPT_ENCODING', '')
if not re_accepts_gzip.search(ae):
return response
# Return the compressed content only if it's actually shorter.
compressed_content = compress_string(response.content)
if len(compressed_content) >= len(response.content):
return response
if response.has_header('ETag'):
response['ETag'] = re.sub('"$', ';gzip"', response['ETag'])
response.content = compressed_content
response['Content-Encoding'] = 'gzip'
response['Content-Length'] = str(len(response.content))
return response
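# Editor's note: the helper below is an illustrative sketch, not part of Django.
# It restates the two checks used above -- the Accept-Encoding match and the
# "only keep the gzipped body if it is actually shorter" rule -- reusing the
# re_accepts_gzip pattern and compress_string helper already imported in this module.
def _demo_gzip_tradeoff(body, accept_encoding="gzip, deflate"):
    if not re_accepts_gzip.search(accept_encoding):
        return body  # the client did not advertise gzip support
    compressed = compress_string(body)
    if len(compressed) >= len(body):
        return body  # compression did not pay off (typical for very short bodies)
    return compressed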
| bsd-3-clause |
| Lawrence-Liu/scikit-learn | sklearn/metrics/base.py | 231 | 4378 |
"""
Common code for all metrics
"""
# Authors: Alexandre Gramfort <[email protected]>
# Mathieu Blondel <[email protected]>
# Olivier Grisel <[email protected]>
# Arnaud Joly <[email protected]>
# Jochen Wersdorfer <[email protected]>
# Lars Buitinck <[email protected]>
# Joel Nothman <[email protected]>
# Noel Dawe <[email protected]>
# License: BSD 3 clause
from __future__ import division
import numpy as np
from ..utils import check_array, check_consistent_length
from ..utils.multiclass import type_of_target
class UndefinedMetricWarning(UserWarning):
pass
def _average_binary_score(binary_metric, y_true, y_score, average,
sample_weight=None):
"""Average a binary metric for multilabel classification
Parameters
----------
y_true : array, shape = [n_samples] or [n_samples, n_classes]
True binary labels in binary label indicators.
y_score : array, shape = [n_samples] or [n_samples, n_classes]
Target scores, can either be probability estimates of the positive
class, confidence values, or binary decisions.
average : string, [None, 'micro', 'macro' (default), 'samples', 'weighted']
If ``None``, the scores for each class are returned. Otherwise,
this determines the type of averaging performed on the data:
``'micro'``:
Calculate metrics globally by considering each element of the label
indicator matrix as a label.
``'macro'``:
Calculate metrics for each label, and find their unweighted
mean. This does not take label imbalance into account.
``'weighted'``:
Calculate metrics for each label, and find their average, weighted
by support (the number of true instances for each label).
``'samples'``:
Calculate metrics for each instance, and find their average.
sample_weight : array-like of shape = [n_samples], optional
Sample weights.
binary_metric : callable, returns shape [n_classes]
The binary metric function to use.
Returns
-------
score : float or array of shape [n_classes]
If not ``None``, average the score, else return the score for each
class.
"""
average_options = (None, 'micro', 'macro', 'weighted', 'samples')
if average not in average_options:
raise ValueError('average has to be one of {0}'
''.format(average_options))
y_type = type_of_target(y_true)
if y_type not in ("binary", "multilabel-indicator"):
raise ValueError("{0} format is not supported".format(y_type))
if y_type == "binary":
return binary_metric(y_true, y_score, sample_weight=sample_weight)
check_consistent_length(y_true, y_score, sample_weight)
y_true = check_array(y_true)
y_score = check_array(y_score)
not_average_axis = 1
score_weight = sample_weight
average_weight = None
if average == "micro":
if score_weight is not None:
score_weight = np.repeat(score_weight, y_true.shape[1])
y_true = y_true.ravel()
y_score = y_score.ravel()
elif average == 'weighted':
if score_weight is not None:
average_weight = np.sum(np.multiply(
y_true, np.reshape(score_weight, (-1, 1))), axis=0)
else:
average_weight = np.sum(y_true, axis=0)
if average_weight.sum() == 0:
return 0
elif average == 'samples':
# swap average_weight <-> score_weight
average_weight = score_weight
score_weight = None
not_average_axis = 0
if y_true.ndim == 1:
y_true = y_true.reshape((-1, 1))
if y_score.ndim == 1:
y_score = y_score.reshape((-1, 1))
n_classes = y_score.shape[not_average_axis]
score = np.zeros((n_classes,))
for c in range(n_classes):
y_true_c = y_true.take([c], axis=not_average_axis).ravel()
y_score_c = y_score.take([c], axis=not_average_axis).ravel()
score[c] = binary_metric(y_true_c, y_score_c,
sample_weight=score_weight)
# Average the results
if average is not None:
return np.average(score, weights=average_weight)
else:
return score
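# Editor's note: the function below is an illustrative sketch, not part of
# scikit-learn. It shows how _average_binary_score applies a per-column metric to a
# multilabel indicator matrix under 'macro' and 'micro' averaging, as described in
# the docstring above; the toy threshold-accuracy metric is purely hypothetical.
def _demo_average_binary_score():
    def toy_metric(y_true, y_score, sample_weight=None):
        return np.mean((y_score >= 0.5) == y_true)
    y_true = np.array([[1, 0], [0, 1], [1, 1]])
    y_score = np.array([[0.9, 0.2], [0.4, 0.8], [0.7, 0.1]])
    macro = _average_binary_score(toy_metric, y_true, y_score, average='macro')
    micro = _average_binary_score(toy_metric, y_true, y_score, average='micro')
    return macro, micro  # per-class mean vs. mean over the flattened matrix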
| bsd-3-clause |
| oasiswork/odoo | addons/hr_payroll/hr_payroll.py | 144 | 49776 |
#-*- coding:utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>). All Rights Reserved
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import time
from datetime import date
from datetime import datetime
from datetime import timedelta
from dateutil import relativedelta
from openerp import api, tools
from openerp.osv import fields, osv
from openerp.tools.translate import _
import openerp.addons.decimal_precision as dp
from openerp.tools.safe_eval import safe_eval as eval
class hr_payroll_structure(osv.osv):
"""
Salary structure used to define:
- Basic
- Allowances
- Deductions
"""
_name = 'hr.payroll.structure'
_description = 'Salary Structure'
_columns = {
'name':fields.char('Name', required=True),
'code':fields.char('Reference', size=64, required=True),
'company_id':fields.many2one('res.company', 'Company', required=True, copy=False),
'note': fields.text('Description'),
'parent_id':fields.many2one('hr.payroll.structure', 'Parent'),
'children_ids':fields.one2many('hr.payroll.structure', 'parent_id', 'Children', copy=True),
'rule_ids':fields.many2many('hr.salary.rule', 'hr_structure_salary_rule_rel', 'struct_id', 'rule_id', 'Salary Rules'),
}
def _get_parent(self, cr, uid, context=None):
obj_model = self.pool.get('ir.model.data')
res = False
data_id = obj_model.search(cr, uid, [('model', '=', 'hr.payroll.structure'), ('name', '=', 'structure_base')])
if data_id:
res = obj_model.browse(cr, uid, data_id[0], context=context).res_id
return res
_defaults = {
'company_id': lambda self, cr, uid, context: \
self.pool.get('res.users').browse(cr, uid, uid,
context=context).company_id.id,
'parent_id': _get_parent,
}
_constraints = [
(osv.osv._check_recursion, 'Error ! You cannot create a recursive Salary Structure.', ['parent_id'])
]
def copy(self, cr, uid, id, default=None, context=None):
default = dict(default or {},
code=_("%s (copy)") % (self.browse(cr, uid, id, context=context).code))
return super(hr_payroll_structure, self).copy(cr, uid, id, default, context=context)
@api.cr_uid_ids_context
def get_all_rules(self, cr, uid, structure_ids, context=None):
"""
@param structure_ids: list of structure
@return: returns a list of tuples (id, sequence) for the rules that may have to be applied
"""
all_rules = []
for struct in self.browse(cr, uid, structure_ids, context=context):
all_rules += self.pool.get('hr.salary.rule')._recursive_search_of_rules(cr, uid, struct.rule_ids, context=context)
return all_rules
@api.cr_uid_ids_context
def _get_parent_structure(self, cr, uid, struct_ids, context=None):
if not struct_ids:
return []
parent = []
for struct in self.browse(cr, uid, struct_ids, context=context):
if struct.parent_id:
parent.append(struct.parent_id.id)
if parent:
parent = self._get_parent_structure(cr, uid, parent, context)
return parent + struct_ids
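# Editor's note: the function below is an illustrative, plain-python sketch, not part
# of Odoo. It mirrors how _get_parent_structure() above climbs the parent chain and
# prepends the ancestors to the requested structure ids, using a hypothetical
# {child_id: parent_id} mapping instead of ORM browse records.
def _demo_parent_structure(struct_ids, parent_by_id):
    parents = [parent_by_id[s] for s in struct_ids if parent_by_id.get(s)]
    if parents:
        parents = _demo_parent_structure(parents, parent_by_id)
    return parents + struct_ids
# Example: structure 3 inherits from 2, which inherits from 1:
# _demo_parent_structure([3], {3: 2, 2: 1, 1: None}) -> [1, 2, 3]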
class hr_contract(osv.osv):
"""
Employee contract based on the visa and work permits;
allows configuring a different salary structure
"""
_inherit = 'hr.contract'
_description = 'Employee Contract'
_columns = {
'struct_id': fields.many2one('hr.payroll.structure', 'Salary Structure'),
'schedule_pay': fields.selection([
('monthly', 'Monthly'),
('quarterly', 'Quarterly'),
('semi-annually', 'Semi-annually'),
('annually', 'Annually'),
('weekly', 'Weekly'),
('bi-weekly', 'Bi-weekly'),
('bi-monthly', 'Bi-monthly'),
], 'Scheduled Pay', select=True),
}
_defaults = {
'schedule_pay': 'monthly',
}
@api.cr_uid_ids_context
def get_all_structures(self, cr, uid, contract_ids, context=None):
"""
@param contract_ids: list of contracts
@return: the structures linked to the given contracts, ordered by hierarchy (parent=False first, then first level children and so on) and without duplicates
"""
structure_ids = [contract.struct_id.id for contract in self.browse(cr, uid, contract_ids, context=context) if contract.struct_id]
if not structure_ids:
return []
return list(set(self.pool.get('hr.payroll.structure')._get_parent_structure(cr, uid, structure_ids, context=context)))
class contrib_register(osv.osv):
'''
Contribution Register
'''
_name = 'hr.contribution.register'
_description = 'Contribution Register'
_columns = {
'company_id':fields.many2one('res.company', 'Company'),
'partner_id':fields.many2one('res.partner', 'Partner'),
'name':fields.char('Name', required=True, readonly=False),
'register_line_ids':fields.one2many('hr.payslip.line', 'register_id', 'Register Line', readonly=True),
'note': fields.text('Description'),
}
_defaults = {
'company_id': lambda self, cr, uid, context: \
self.pool.get('res.users').browse(cr, uid, uid,
context=context).company_id.id,
}
class hr_salary_rule_category(osv.osv):
"""
HR Salary Rule Category
"""
_name = 'hr.salary.rule.category'
_description = 'Salary Rule Category'
_columns = {
'name':fields.char('Name', required=True, readonly=False),
'code':fields.char('Code', size=64, required=True, readonly=False),
'parent_id':fields.many2one('hr.salary.rule.category', 'Parent', help="Linking a salary category to its parent is used only for the reporting purpose."),
'children_ids': fields.one2many('hr.salary.rule.category', 'parent_id', 'Children'),
'note': fields.text('Description'),
'company_id':fields.many2one('res.company', 'Company', required=False),
}
_defaults = {
'company_id': lambda self, cr, uid, context: \
self.pool.get('res.users').browse(cr, uid, uid,
context=context).company_id.id,
}
class one2many_mod2(fields.one2many):
def get(self, cr, obj, ids, name, user=None, offset=0, context=None, values=None):
if context is None:
context = {}
if not values:
values = {}
res = {}
for id in ids:
res[id] = []
ids2 = obj.pool[self._obj].search(cr, user, [(self._fields_id,'in',ids), ('appears_on_payslip', '=', True)], limit=self._limit)
for r in obj.pool[self._obj].read(cr, user, ids2, [self._fields_id], context=context, load='_classic_write'):
key = r[self._fields_id]
if isinstance(key, tuple):
# Read return a tuple in the case where the field is a many2one
# but we want to get the id of this field.
key = key[0]
res[key].append( r['id'] )
return res
class hr_payslip_run(osv.osv):
_name = 'hr.payslip.run'
_description = 'Payslip Batches'
_columns = {
'name': fields.char('Name', required=True, readonly=True, states={'draft': [('readonly', False)]}),
'slip_ids': fields.one2many('hr.payslip', 'payslip_run_id', 'Payslips', required=False, readonly=True, states={'draft': [('readonly', False)]}),
'state': fields.selection([
('draft', 'Draft'),
('close', 'Close'),
], 'Status', select=True, readonly=True, copy=False),
'date_start': fields.date('Date From', required=True, readonly=True, states={'draft': [('readonly', False)]}),
'date_end': fields.date('Date To', required=True, readonly=True, states={'draft': [('readonly', False)]}),
'credit_note': fields.boolean('Credit Note', readonly=True, states={'draft': [('readonly', False)]}, help="If its checked, indicates that all payslips generated from here are refund payslips."),
}
_defaults = {
'state': 'draft',
'date_start': lambda *a: time.strftime('%Y-%m-01'),
'date_end': lambda *a: str(datetime.now() + relativedelta.relativedelta(months=+1, day=1, days=-1))[:10],
}
def draft_payslip_run(self, cr, uid, ids, context=None):
return self.write(cr, uid, ids, {'state': 'draft'}, context=context)
def close_payslip_run(self, cr, uid, ids, context=None):
return self.write(cr, uid, ids, {'state': 'close'}, context=context)
class hr_payslip(osv.osv):
'''
Pay Slip
'''
_name = 'hr.payslip'
_description = 'Pay Slip'
def _get_lines_salary_rule_category(self, cr, uid, ids, field_names, arg=None, context=None):
result = {}
if not ids: return result
for id in ids:
result.setdefault(id, [])
cr.execute('''SELECT pl.slip_id, pl.id FROM hr_payslip_line AS pl \
LEFT JOIN hr_salary_rule_category AS sh on (pl.category_id = sh.id) \
WHERE pl.slip_id in %s \
GROUP BY pl.slip_id, pl.sequence, pl.id ORDER BY pl.sequence''',(tuple(ids),))
res = cr.fetchall()
for r in res:
result[r[0]].append(r[1])
return result
def _count_detail_payslip(self, cr, uid, ids, field_name, arg, context=None):
res = {}
for details in self.browse(cr, uid, ids, context=context):
res[details.id] = len(details.line_ids)
return res
_columns = {
'struct_id': fields.many2one('hr.payroll.structure', 'Structure', readonly=True, states={'draft': [('readonly', False)]}, help='Defines the rules that have to be applied to this payslip, accordingly to the contract chosen. If you let empty the field contract, this field isn\'t mandatory anymore and thus the rules applied will be all the rules set on the structure of all contracts of the employee valid for the chosen period'),
'name': fields.char('Payslip Name', required=False, readonly=True, states={'draft': [('readonly', False)]}),
'number': fields.char('Reference', required=False, readonly=True, states={'draft': [('readonly', False)]}, copy=False),
'employee_id': fields.many2one('hr.employee', 'Employee', required=True, readonly=True, states={'draft': [('readonly', False)]}),
'date_from': fields.date('Date From', readonly=True, states={'draft': [('readonly', False)]}, required=True),
'date_to': fields.date('Date To', readonly=True, states={'draft': [('readonly', False)]}, required=True),
'state': fields.selection([
('draft', 'Draft'),
('verify', 'Waiting'),
('done', 'Done'),
('cancel', 'Rejected'),
], 'Status', select=True, readonly=True, copy=False,
help='* When the payslip is created the status is \'Draft\'.\
\n* If the payslip is under verification, the status is \'Waiting\'. \
\n* If the payslip is confirmed then status is set to \'Done\'.\
\n* When user cancel payslip the status is \'Rejected\'.'),
'line_ids': one2many_mod2('hr.payslip.line', 'slip_id', 'Payslip Lines', readonly=True, states={'draft':[('readonly',False)]}),
'company_id': fields.many2one('res.company', 'Company', required=False, readonly=True, states={'draft': [('readonly', False)]}, copy=False),
'worked_days_line_ids': fields.one2many('hr.payslip.worked_days', 'payslip_id', 'Payslip Worked Days', copy=True, required=False, readonly=True, states={'draft': [('readonly', False)]}),
'input_line_ids': fields.one2many('hr.payslip.input', 'payslip_id', 'Payslip Inputs', required=False, readonly=True, states={'draft': [('readonly', False)]}),
'paid': fields.boolean('Made Payment Order ? ', required=False, readonly=True, states={'draft': [('readonly', False)]}, copy=False),
'note': fields.text('Internal Note', readonly=True, states={'draft':[('readonly',False)]}),
'contract_id': fields.many2one('hr.contract', 'Contract', required=False, readonly=True, states={'draft': [('readonly', False)]}),
'details_by_salary_rule_category': fields.function(_get_lines_salary_rule_category, method=True, type='one2many', relation='hr.payslip.line', string='Details by Salary Rule Category'),
'credit_note': fields.boolean('Credit Note', help="Indicates this payslip has a refund of another", readonly=True, states={'draft': [('readonly', False)]}),
'payslip_run_id': fields.many2one('hr.payslip.run', 'Payslip Batches', readonly=True, states={'draft': [('readonly', False)]}, copy=False),
'payslip_count': fields.function(_count_detail_payslip, type='integer', string="Payslip Computation Details"),
}
_defaults = {
'date_from': lambda *a: time.strftime('%Y-%m-01'),
'date_to': lambda *a: str(datetime.now() + relativedelta.relativedelta(months=+1, day=1, days=-1))[:10],
'state': 'draft',
'credit_note': False,
'company_id': lambda self, cr, uid, context: \
self.pool.get('res.users').browse(cr, uid, uid,
context=context).company_id.id,
}
def _check_dates(self, cr, uid, ids, context=None):
for payslip in self.browse(cr, uid, ids, context=context):
if payslip.date_from > payslip.date_to:
return False
return True
_constraints = [(_check_dates, "Payslip 'Date From' must be before 'Date To'.", ['date_from', 'date_to'])]
def cancel_sheet(self, cr, uid, ids, context=None):
return self.write(cr, uid, ids, {'state': 'cancel'}, context=context)
def process_sheet(self, cr, uid, ids, context=None):
return self.write(cr, uid, ids, {'paid': True, 'state': 'done'}, context=context)
def hr_verify_sheet(self, cr, uid, ids, context=None):
self.compute_sheet(cr, uid, ids, context)
return self.write(cr, uid, ids, {'state': 'verify'}, context=context)
def refund_sheet(self, cr, uid, ids, context=None):
mod_obj = self.pool.get('ir.model.data')
for payslip in self.browse(cr, uid, ids, context=context):
id_copy = self.copy(cr, uid, payslip.id, {'credit_note': True, 'name': _('Refund: ')+payslip.name}, context=context)
self.signal_workflow(cr, uid, [id_copy], 'hr_verify_sheet')
self.signal_workflow(cr, uid, [id_copy], 'process_sheet')
form_id = mod_obj.get_object_reference(cr, uid, 'hr_payroll', 'view_hr_payslip_form')
form_res = form_id and form_id[1] or False
tree_id = mod_obj.get_object_reference(cr, uid, 'hr_payroll', 'view_hr_payslip_tree')
tree_res = tree_id and tree_id[1] or False
return {
'name':_("Refund Payslip"),
'view_mode': 'tree, form',
'view_id': False,
'view_type': 'form',
'res_model': 'hr.payslip',
'type': 'ir.actions.act_window',
'nodestroy': True,
'target': 'current',
'domain': "[('id', 'in', %s)]" % [id_copy],
'views': [(tree_res, 'tree'), (form_res, 'form')],
'context': {}
}
def check_done(self, cr, uid, ids, context=None):
return True
def unlink(self, cr, uid, ids, context=None):
for payslip in self.browse(cr, uid, ids, context=context):
if payslip.state not in ['draft','cancel']:
raise osv.except_osv(_('Warning!'),_('You cannot delete a payslip which is not draft or cancelled!'))
return super(hr_payslip, self).unlink(cr, uid, ids, context)
#TODO move this function into hr_contract module, on hr.employee object
def get_contract(self, cr, uid, employee, date_from, date_to, context=None):
"""
@param employee: browse record of employee
@param date_from: date field
@param date_to: date field
@return: returns the ids of all the contracts for the given employee that need to be considered for the given dates
"""
contract_obj = self.pool.get('hr.contract')
clause = []
#a contract is valid if it ends between the given dates
clause_1 = ['&',('date_end', '<=', date_to),('date_end','>=', date_from)]
#OR if it starts between the given dates
clause_2 = ['&',('date_start', '<=', date_to),('date_start','>=', date_from)]
#OR if it starts before the date_from and finish after the date_end (or never finish)
clause_3 = ['&',('date_start','<=', date_from),'|',('date_end', '=', False),('date_end','>=', date_to)]
clause_final = [('employee_id', '=', employee.id),'|','|'] + clause_1 + clause_2 + clause_3
contract_ids = contract_obj.search(cr, uid, clause_final, context=context)
return contract_ids
def compute_sheet(self, cr, uid, ids, context=None):
slip_line_pool = self.pool.get('hr.payslip.line')
sequence_obj = self.pool.get('ir.sequence')
for payslip in self.browse(cr, uid, ids, context=context):
number = payslip.number or sequence_obj.get(cr, uid, 'salary.slip')
#delete old payslip lines
old_slipline_ids = slip_line_pool.search(cr, uid, [('slip_id', '=', payslip.id)], context=context)
# old_slipline_ids
if old_slipline_ids:
slip_line_pool.unlink(cr, uid, old_slipline_ids, context=context)
if payslip.contract_id:
#set the list of contract for which the rules have to be applied
contract_ids = [payslip.contract_id.id]
else:
#if we don't give the contract, then the rules to apply should be for all current contracts of the employee
contract_ids = self.get_contract(cr, uid, payslip.employee_id, payslip.date_from, payslip.date_to, context=context)
lines = [(0,0,line) for line in self.pool.get('hr.payslip').get_payslip_lines(cr, uid, contract_ids, payslip.id, context=context)]
self.write(cr, uid, [payslip.id], {'line_ids': lines, 'number': number,}, context=context)
return True
def get_worked_day_lines(self, cr, uid, contract_ids, date_from, date_to, context=None):
"""
@param contract_ids: list of contract id
@return: returns a list of dict containing the input that should be applied for the given contract between date_from and date_to
"""
def was_on_leave(employee_id, datetime_day, context=None):
res = False
day = datetime_day.strftime("%Y-%m-%d")
holiday_ids = self.pool.get('hr.holidays').search(cr, uid, [('state','=','validate'),('employee_id','=',employee_id),('type','=','remove'),('date_from','<=',day),('date_to','>=',day)])
if holiday_ids:
res = self.pool.get('hr.holidays').browse(cr, uid, holiday_ids, context=context)[0].holiday_status_id.name
return res
res = []
for contract in self.pool.get('hr.contract').browse(cr, uid, contract_ids, context=context):
if not contract.working_hours:
#fill only if the contract has a working schedule linked
continue
attendances = {
'name': _("Normal Working Days paid at 100%"),
'sequence': 1,
'code': 'WORK100',
'number_of_days': 0.0,
'number_of_hours': 0.0,
'contract_id': contract.id,
}
leaves = {}
day_from = datetime.strptime(date_from,"%Y-%m-%d")
day_to = datetime.strptime(date_to,"%Y-%m-%d")
nb_of_days = (day_to - day_from).days + 1
for day in range(0, nb_of_days):
working_hours_on_day = self.pool.get('resource.calendar').working_hours_on_day(cr, uid, contract.working_hours, day_from + timedelta(days=day), context)
if working_hours_on_day:
#the employee had to work
leave_type = was_on_leave(contract.employee_id.id, day_from + timedelta(days=day), context=context)
if leave_type:
#if he was on leave, fill the leaves dict
if leave_type in leaves:
leaves[leave_type]['number_of_days'] += 1.0
leaves[leave_type]['number_of_hours'] += working_hours_on_day
else:
leaves[leave_type] = {
'name': leave_type,
'sequence': 5,
'code': leave_type,
'number_of_days': 1.0,
'number_of_hours': working_hours_on_day,
'contract_id': contract.id,
}
else:
#add the input vals to tmp (increment if existing)
attendances['number_of_days'] += 1.0
attendances['number_of_hours'] += working_hours_on_day
leaves = [value for key,value in leaves.items()]
res += [attendances] + leaves
return res
def get_inputs(self, cr, uid, contract_ids, date_from, date_to, context=None):
res = []
contract_obj = self.pool.get('hr.contract')
rule_obj = self.pool.get('hr.salary.rule')
structure_ids = contract_obj.get_all_structures(cr, uid, contract_ids, context=context)
rule_ids = self.pool.get('hr.payroll.structure').get_all_rules(cr, uid, structure_ids, context=context)
sorted_rule_ids = [id for id, sequence in sorted(rule_ids, key=lambda x:x[1])]
for contract in contract_obj.browse(cr, uid, contract_ids, context=context):
for rule in rule_obj.browse(cr, uid, sorted_rule_ids, context=context):
if rule.input_ids:
for input in rule.input_ids:
inputs = {
'name': input.name,
'code': input.code,
'contract_id': contract.id,
}
res += [inputs]
return res
def get_payslip_lines(self, cr, uid, contract_ids, payslip_id, context):
def _sum_salary_rule_category(localdict, category, amount):
if category.parent_id:
localdict = _sum_salary_rule_category(localdict, category.parent_id, amount)
localdict['categories'].dict[category.code] = category.code in localdict['categories'].dict and localdict['categories'].dict[category.code] + amount or amount
return localdict
class BrowsableObject(object):
def __init__(self, pool, cr, uid, employee_id, dict):
self.pool = pool
self.cr = cr
self.uid = uid
self.employee_id = employee_id
self.dict = dict
def __getattr__(self, attr):
return attr in self.dict and self.dict.__getitem__(attr) or 0.0
class InputLine(BrowsableObject):
"""a class that will be used into the python code, mainly for usability purposes"""
def sum(self, code, from_date, to_date=None):
if to_date is None:
to_date = datetime.now().strftime('%Y-%m-%d')
result = 0.0
self.cr.execute("SELECT sum(amount) as sum\
FROM hr_payslip as hp, hr_payslip_input as pi \
WHERE hp.employee_id = %s AND hp.state = 'done' \
AND hp.date_from >= %s AND hp.date_to <= %s AND hp.id = pi.payslip_id AND pi.code = %s",
(self.employee_id, from_date, to_date, code))
res = self.cr.fetchone()[0]
return res or 0.0
class WorkedDays(BrowsableObject):
"""a class that will be used into the python code, mainly for usability purposes"""
def _sum(self, code, from_date, to_date=None):
if to_date is None:
to_date = datetime.now().strftime('%Y-%m-%d')
result = 0.0
self.cr.execute("SELECT sum(number_of_days) as number_of_days, sum(number_of_hours) as number_of_hours\
FROM hr_payslip as hp, hr_payslip_worked_days as pi \
WHERE hp.employee_id = %s AND hp.state = 'done'\
AND hp.date_from >= %s AND hp.date_to <= %s AND hp.id = pi.payslip_id AND pi.code = %s",
(self.employee_id, from_date, to_date, code))
return self.cr.fetchone()
def sum(self, code, from_date, to_date=None):
res = self._sum(code, from_date, to_date)
return res and res[0] or 0.0
def sum_hours(self, code, from_date, to_date=None):
res = self._sum(code, from_date, to_date)
return res and res[1] or 0.0
class Payslips(BrowsableObject):
"""a class that will be used into the python code, mainly for usability purposes"""
def sum(self, code, from_date, to_date=None):
if to_date is None:
to_date = datetime.now().strftime('%Y-%m-%d')
self.cr.execute("SELECT sum(case when hp.credit_note = False then (pl.total) else (-pl.total) end)\
FROM hr_payslip as hp, hr_payslip_line as pl \
WHERE hp.employee_id = %s AND hp.state = 'done' \
AND hp.date_from >= %s AND hp.date_to <= %s AND hp.id = pl.slip_id AND pl.code = %s",
(self.employee_id, from_date, to_date, code))
res = self.cr.fetchone()
return res and res[0] or 0.0
#we keep a dict with the result because a value can be overwritten by another rule with the same code
result_dict = {}
rules = {}
categories_dict = {}
blacklist = []
payslip_obj = self.pool.get('hr.payslip')
inputs_obj = self.pool.get('hr.payslip.worked_days')
obj_rule = self.pool.get('hr.salary.rule')
payslip = payslip_obj.browse(cr, uid, payslip_id, context=context)
worked_days = {}
for worked_days_line in payslip.worked_days_line_ids:
worked_days[worked_days_line.code] = worked_days_line
inputs = {}
for input_line in payslip.input_line_ids:
inputs[input_line.code] = input_line
categories_obj = BrowsableObject(self.pool, cr, uid, payslip.employee_id.id, categories_dict)
input_obj = InputLine(self.pool, cr, uid, payslip.employee_id.id, inputs)
worked_days_obj = WorkedDays(self.pool, cr, uid, payslip.employee_id.id, worked_days)
payslip_obj = Payslips(self.pool, cr, uid, payslip.employee_id.id, payslip)
rules_obj = BrowsableObject(self.pool, cr, uid, payslip.employee_id.id, rules)
baselocaldict = {'categories': categories_obj, 'rules': rules_obj, 'payslip': payslip_obj, 'worked_days': worked_days_obj, 'inputs': input_obj}
#get the ids of the structures on the contracts and their parent id as well
structure_ids = self.pool.get('hr.contract').get_all_structures(cr, uid, contract_ids, context=context)
#get the rules of the structure and their children
rule_ids = self.pool.get('hr.payroll.structure').get_all_rules(cr, uid, structure_ids, context=context)
#run the rules by sequence
sorted_rule_ids = [id for id, sequence in sorted(rule_ids, key=lambda x:x[1])]
for contract in self.pool.get('hr.contract').browse(cr, uid, contract_ids, context=context):
employee = contract.employee_id
localdict = dict(baselocaldict, employee=employee, contract=contract)
for rule in obj_rule.browse(cr, uid, sorted_rule_ids, context=context):
key = rule.code + '-' + str(contract.id)
localdict['result'] = None
localdict['result_qty'] = 1.0
localdict['result_rate'] = 100
#check if the rule can be applied
if obj_rule.satisfy_condition(cr, uid, rule.id, localdict, context=context) and rule.id not in blacklist:
#compute the amount of the rule
amount, qty, rate = obj_rule.compute_rule(cr, uid, rule.id, localdict, context=context)
#check if there is already a rule computed with that code
previous_amount = rule.code in localdict and localdict[rule.code] or 0.0
#set/overwrite the amount computed for this rule in the localdict
tot_rule = amount * qty * rate / 100.0
localdict[rule.code] = tot_rule
rules[rule.code] = rule
#sum the amount for its salary category
localdict = _sum_salary_rule_category(localdict, rule.category_id, tot_rule - previous_amount)
#create/overwrite the rule in the temporary results
result_dict[key] = {
'salary_rule_id': rule.id,
'contract_id': contract.id,
'name': rule.name,
'code': rule.code,
'category_id': rule.category_id.id,
'sequence': rule.sequence,
'appears_on_payslip': rule.appears_on_payslip,
'condition_select': rule.condition_select,
'condition_python': rule.condition_python,
'condition_range': rule.condition_range,
'condition_range_min': rule.condition_range_min,
'condition_range_max': rule.condition_range_max,
'amount_select': rule.amount_select,
'amount_fix': rule.amount_fix,
'amount_python_compute': rule.amount_python_compute,
'amount_percentage': rule.amount_percentage,
'amount_percentage_base': rule.amount_percentage_base,
'register_id': rule.register_id.id,
'amount': amount,
'employee_id': contract.employee_id.id,
'quantity': qty,
'rate': rate,
}
else:
#blacklist this rule and its children
blacklist += [id for id, seq in self.pool.get('hr.salary.rule')._recursive_search_of_rules(cr, uid, [rule], context=context)]
result = [value for code, value in result_dict.items()]
return result
def onchange_employee_id(self, cr, uid, ids, date_from, date_to, employee_id=False, contract_id=False, context=None):
employee_obj = self.pool.get('hr.employee')
contract_obj = self.pool.get('hr.contract')
worked_days_obj = self.pool.get('hr.payslip.worked_days')
input_obj = self.pool.get('hr.payslip.input')
if context is None:
context = {}
#delete old worked days lines
old_worked_days_ids = ids and worked_days_obj.search(cr, uid, [('payslip_id', '=', ids[0])], context=context) or False
if old_worked_days_ids:
worked_days_obj.unlink(cr, uid, old_worked_days_ids, context=context)
#delete old input lines
old_input_ids = ids and input_obj.search(cr, uid, [('payslip_id', '=', ids[0])], context=context) or False
if old_input_ids:
input_obj.unlink(cr, uid, old_input_ids, context=context)
#defaults
res = {'value':{
'line_ids':[],
'input_line_ids': [],
'worked_days_line_ids': [],
#'details_by_salary_head':[], TODO put me back
'name':'',
'contract_id': False,
'struct_id': False,
}
}
if (not employee_id) or (not date_from) or (not date_to):
return res
ttyme = datetime.fromtimestamp(time.mktime(time.strptime(date_from, "%Y-%m-%d")))
employee_id = employee_obj.browse(cr, uid, employee_id, context=context)
res['value'].update({
'name': _('Salary Slip of %s for %s') % (employee_id.name, tools.ustr(ttyme.strftime('%B-%Y'))),
'company_id': employee_id.company_id.id
})
if not context.get('contract', False):
#fill with the first contract of the employee
contract_ids = self.get_contract(cr, uid, employee_id, date_from, date_to, context=context)
else:
if contract_id:
#set the list of contract for which the input have to be filled
contract_ids = [contract_id]
else:
#if we don't give the contract, then the input to fill should be for all current contracts of the employee
contract_ids = self.get_contract(cr, uid, employee_id, date_from, date_to, context=context)
if not contract_ids:
return res
contract_record = contract_obj.browse(cr, uid, contract_ids[0], context=context)
res['value'].update({
'contract_id': contract_record and contract_record.id or False
})
struct_record = contract_record and contract_record.struct_id or False
if not struct_record:
return res
res['value'].update({
'struct_id': struct_record.id,
})
#computation of the salary input
worked_days_line_ids = self.get_worked_day_lines(cr, uid, contract_ids, date_from, date_to, context=context)
input_line_ids = self.get_inputs(cr, uid, contract_ids, date_from, date_to, context=context)
res['value'].update({
'worked_days_line_ids': worked_days_line_ids,
'input_line_ids': input_line_ids,
})
return res
def onchange_contract_id(self, cr, uid, ids, date_from, date_to, employee_id=False, contract_id=False, context=None):
#TODO it seems to be the mess in the onchanges, we should have onchange_employee => onchange_contract => doing all the things
res = {'value':{
'line_ids': [],
'name': '',
}
}
context = dict(context or {}, contract=True)
if not contract_id:
res['value'].update({'struct_id': False})
return self.onchange_employee_id(cr, uid, ids, date_from=date_from, date_to=date_to, employee_id=employee_id, contract_id=contract_id, context=context)
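# Editor's note: the predicate below is an illustrative, plain-python sketch, not part
# of Odoo. It expresses the three OR-ed clauses built by get_contract() above (the
# contract ends inside the period, starts inside it, or spans it entirely, possibly
# open-ended) for hypothetical '%Y-%m-%d' date strings; date_end may be None.
def _demo_contract_overlaps(date_start, date_end, period_from, period_to):
    ends_inside = date_end is not None and period_from <= date_end <= period_to
    starts_inside = period_from <= date_start <= period_to
    spans_period = date_start <= period_from and (date_end is None or date_end >= period_to)
    return ends_inside or starts_inside or spans_period
# Example: a contract running since 2013 with no end date covers any 2014 period:
# _demo_contract_overlaps('2013-01-01', None, '2014-03-01', '2014-03-31') -> True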
class hr_payslip_worked_days(osv.osv):
'''
Payslip Worked Days
'''
_name = 'hr.payslip.worked_days'
_description = 'Payslip Worked Days'
_columns = {
'name': fields.char('Description', required=True),
'payslip_id': fields.many2one('hr.payslip', 'Pay Slip', required=True, ondelete='cascade', select=True),
'sequence': fields.integer('Sequence', required=True, select=True),
'code': fields.char('Code', size=52, required=True, help="The code that can be used in the salary rules"),
'number_of_days': fields.float('Number of Days'),
'number_of_hours': fields.float('Number of Hours'),
'contract_id': fields.many2one('hr.contract', 'Contract', required=True, help="The contract to which this input applies"),
}
_order = 'payslip_id, sequence'
_defaults = {
'sequence': 10,
}
class hr_payslip_input(osv.osv):
'''
Payslip Input
'''
_name = 'hr.payslip.input'
_description = 'Payslip Input'
_columns = {
'name': fields.char('Description', required=True),
'payslip_id': fields.many2one('hr.payslip', 'Pay Slip', required=True, ondelete='cascade', select=True),
'sequence': fields.integer('Sequence', required=True, select=True),
'code': fields.char('Code', size=52, required=True, help="The code that can be used in the salary rules"),
'amount': fields.float('Amount', help="It is used in computation. For example, a rule for sales having 1% commission of basic salary per product can be defined in an expression like result = inputs.SALEURO.amount * contract.wage*0.01."),
'contract_id': fields.many2one('hr.contract', 'Contract', required=True, help="The contract to which this input applies"),
}
_order = 'payslip_id, sequence'
_defaults = {
'sequence': 10,
'amount': 0.0,
}
class hr_salary_rule(osv.osv):
_name = 'hr.salary.rule'
_columns = {
'name':fields.char('Name', required=True, readonly=False),
'code':fields.char('Code', size=64, required=True, help="The code of salary rules can be used as reference in computation of other rules. In that case, it is case sensitive."),
'sequence': fields.integer('Sequence', required=True, help='Use to arrange calculation sequence', select=True),
'quantity': fields.char('Quantity', help="It is used in computation for percentage and fixed amount. For example, a rule for Meal Voucher having a fixed amount of 1€ per worked day can have its quantity defined in an expression like worked_days.WORK100.number_of_days."),
'category_id':fields.many2one('hr.salary.rule.category', 'Category', required=True),
'active':fields.boolean('Active', help="If the active field is set to false, it will allow you to hide the salary rule without removing it."),
'appears_on_payslip': fields.boolean('Appears on Payslip', help="Used to display the salary rule on payslip."),
'parent_rule_id':fields.many2one('hr.salary.rule', 'Parent Salary Rule', select=True),
'company_id':fields.many2one('res.company', 'Company', required=False),
'condition_select': fields.selection([('none', 'Always True'),('range', 'Range'), ('python', 'Python Expression')], "Condition Based on", required=True),
'condition_range':fields.char('Range Based on', readonly=False, help='This will be used to compute the % fields values; in general it is on basic, but you can also use categories code fields in lowercase as a variable names (hra, ma, lta, etc.) and the variable basic.'),
'condition_python':fields.text('Python Condition', required=True, readonly=False, help='Applied this rule for calculation if condition is true. You can specify condition like basic > 1000.'),
'condition_range_min': fields.float('Minimum Range', required=False, help="The minimum amount, applied for this rule."),
'condition_range_max': fields.float('Maximum Range', required=False, help="The maximum amount, applied for this rule."),
'amount_select':fields.selection([
('percentage','Percentage (%)'),
('fix','Fixed Amount'),
('code','Python Code'),
],'Amount Type', select=True, required=True, help="The computation method for the rule amount."),
'amount_fix': fields.float('Fixed Amount', digits_compute=dp.get_precision('Payroll'),),
'amount_percentage': fields.float('Percentage (%)', digits_compute=dp.get_precision('Payroll Rate'), help='For example, enter 50.0 to apply a percentage of 50%'),
'amount_python_compute':fields.text('Python Code'),
'amount_percentage_base': fields.char('Percentage based on', required=False, readonly=False, help='result will be affected to a variable'),
'child_ids':fields.one2many('hr.salary.rule', 'parent_rule_id', 'Child Salary Rule', copy=True),
'register_id':fields.many2one('hr.contribution.register', 'Contribution Register', help="Third party (if any) involved in the salary payment of the employees."),
'input_ids': fields.one2many('hr.rule.input', 'input_id', 'Inputs', copy=True),
'note':fields.text('Description'),
}
_defaults = {
'amount_python_compute': '''
# Available variables:
#----------------------
# payslip: object containing the payslips
# employee: hr.employee object
# contract: hr.contract object
# rules: object containing the rules code (previously computed)
# categories: object containing the computed salary rule categories (sum of amount of all rules belonging to that category).
# worked_days: object containing the computed worked days.
# inputs: object containing the computed inputs.
# Note: returned value have to be set in the variable 'result'
result = contract.wage * 0.10''',
'condition_python':
'''
# Available variables:
#----------------------
# payslip: object containing the payslips
# employee: hr.employee object
# contract: hr.contract object
# rules: object containing the rules code (previously computed)
# categories: object containing the computed salary rule categories (sum of amount of all rules belonging to that category).
# worked_days: object containing the computed worked days
# inputs: object containing the computed inputs
# Note: returned value have to be set in the variable 'result'
result = rules.NET > categories.NET * 0.10''',
'condition_range': 'contract.wage',
'sequence': 5,
'appears_on_payslip': True,
'active': True,
'company_id': lambda self, cr, uid, context: \
self.pool.get('res.users').browse(cr, uid, uid,
context=context).company_id.id,
'condition_select': 'none',
'amount_select': 'fix',
'amount_fix': 0.0,
'amount_percentage': 0.0,
'quantity': '1.0',
}
@api.cr_uid_ids_context
def _recursive_search_of_rules(self, cr, uid, rule_ids, context=None):
"""
@param rule_ids: list of browse record
@return: returns a list of tuple (id, sequence) which are all the children of the passed rule_ids
"""
children_rules = []
for rule in rule_ids:
if rule.child_ids:
children_rules += self._recursive_search_of_rules(cr, uid, rule.child_ids, context=context)
return [(r.id, r.sequence) for r in rule_ids] + children_rules
#TODO should add some checks on the type of result (should be float)
def compute_rule(self, cr, uid, rule_id, localdict, context=None):
"""
:param rule_id: id of rule to compute
:param localdict: dictionary containing the environement in which to compute the rule
:return: returns a tuple built as (the base/amount computed, the quantity, the rate); a plain-python sketch of these modes follows this class
:rtype: (float, float, float)
"""
rule = self.browse(cr, uid, rule_id, context=context)
if rule.amount_select == 'fix':
try:
return rule.amount_fix, float(eval(rule.quantity, localdict)), 100.0
except:
raise osv.except_osv(_('Error!'), _('Wrong quantity defined for salary rule %s (%s).')% (rule.name, rule.code))
elif rule.amount_select == 'percentage':
try:
return (float(eval(rule.amount_percentage_base, localdict)),
float(eval(rule.quantity, localdict)),
rule.amount_percentage)
except:
raise osv.except_osv(_('Error!'), _('Wrong percentage base or quantity defined for salary rule %s (%s).')% (rule.name, rule.code))
else:
try:
eval(rule.amount_python_compute, localdict, mode='exec', nocopy=True)
return float(localdict['result']), 'result_qty' in localdict and localdict['result_qty'] or 1.0, 'result_rate' in localdict and localdict['result_rate'] or 100.0
except:
raise osv.except_osv(_('Error!'), _('Wrong python code defined for salary rule %s (%s).')% (rule.name, rule.code))
def satisfy_condition(self, cr, uid, rule_id, localdict, context=None):
"""
@param rule_id: id of hr.salary.rule to be tested
@param contract_id: id of hr.contract to be tested
@return: returns True if the given rule match the condition for the given contract. Return False otherwise.
"""
rule = self.browse(cr, uid, rule_id, context=context)
if rule.condition_select == 'none':
return True
elif rule.condition_select == 'range':
try:
result = eval(rule.condition_range, localdict)
return rule.condition_range_min <= result and result <= rule.condition_range_max or False
except:
raise osv.except_osv(_('Error!'), _('Wrong range condition defined for salary rule %s (%s).')% (rule.name, rule.code))
else: #python code
try:
eval(rule.condition_python, localdict, mode='exec', nocopy=True)
return 'result' in localdict and localdict['result'] or False
except:
raise osv.except_osv(_('Error!'), _('Wrong python condition defined for salary rule %s (%s).')% (rule.name, rule.code))
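# Editor's note: the function below is an illustrative, plain-python sketch, not part
# of Odoo. compute_rule() above returns an (amount, quantity, rate) triple whose
# product divided by 100 becomes the payslip line total; the sketch shows how the
# three amount_select modes map onto that triple, using hypothetical literal values
# in place of the evaluated rule expressions.
def _demo_rule_amount(amount_select, wage=2000.0, worked_days=20.0):
    if amount_select == 'fix':
        # a fixed amount per unit, e.g. a 5.0 meal voucher per worked day
        amount, qty, rate = 5.0, worked_days, 100.0
    elif amount_select == 'percentage':
        # a percentage of a base, e.g. 10% of the wage
        amount, qty, rate = wage, 1.0, 10.0
    else:
        # 'code': the python snippet assigns the value to 'result'
        amount, qty, rate = wage * 0.10, 1.0, 100.0
    return amount * qty * rate / 100.0
# _demo_rule_amount('fix') -> 100.0, while the two other modes both yield 200.0 here.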
class hr_rule_input(osv.osv):
'''
Salary Rule Input
'''
_name = 'hr.rule.input'
_description = 'Salary Rule Input'
_columns = {
'name': fields.char('Description', required=True),
'code': fields.char('Code', size=52, required=True, help="The code that can be used in the salary rules"),
'input_id': fields.many2one('hr.salary.rule', 'Salary Rule Input', required=True)
}
class hr_payslip_line(osv.osv):
'''
Payslip Line
'''
_name = 'hr.payslip.line'
_inherit = 'hr.salary.rule'
_description = 'Payslip Line'
_order = 'contract_id, sequence'
def _calculate_total(self, cr, uid, ids, name, args, context):
if not ids: return {}
res = {}
for line in self.browse(cr, uid, ids, context=context):
res[line.id] = float(line.quantity) * line.amount * line.rate / 100
return res
_columns = {
'slip_id':fields.many2one('hr.payslip', 'Pay Slip', required=True, ondelete='cascade'),
'salary_rule_id':fields.many2one('hr.salary.rule', 'Rule', required=True),
'employee_id':fields.many2one('hr.employee', 'Employee', required=True),
'contract_id':fields.many2one('hr.contract', 'Contract', required=True, select=True),
'rate': fields.float('Rate (%)', digits_compute=dp.get_precision('Payroll Rate')),
'amount': fields.float('Amount', digits_compute=dp.get_precision('Payroll')),
'quantity': fields.float('Quantity', digits_compute=dp.get_precision('Payroll')),
'total': fields.function(_calculate_total, method=True, type='float', string='Total', digits_compute=dp.get_precision('Payroll'),store=True ),
}
_defaults = {
'quantity': 1.0,
'rate': 100.0,
}
class hr_employee(osv.osv):
'''
Employee
'''
_inherit = 'hr.employee'
_description = 'Employee'
def _calculate_total_wage(self, cr, uid, ids, name, args, context):
if not ids: return {}
res = {}
current_date = datetime.now().strftime('%Y-%m-%d')
for employee in self.browse(cr, uid, ids, context=context):
if not employee.contract_ids:
res[employee.id] = {'basic': 0.0}
continue
cr.execute( 'SELECT SUM(wage) '\
'FROM hr_contract '\
'WHERE employee_id = %s '\
'AND date_start <= %s '\
'AND (date_end > %s OR date_end is NULL)',
(employee.id, current_date, current_date))
result = dict(cr.dictfetchone())
res[employee.id] = {'basic': result['sum']}
return res
def _payslip_count(self, cr, uid, ids, field_name, arg, context=None):
Payslip = self.pool['hr.payslip']
return {
employee_id: Payslip.search_count(cr,uid, [('employee_id', '=', employee_id)], context=context)
for employee_id in ids
}
_columns = {
'slip_ids':fields.one2many('hr.payslip', 'employee_id', 'Payslips', required=False, readonly=True),
'total_wage': fields.function(_calculate_total_wage, method=True, type='float', string='Total Basic Salary', digits_compute=dp.get_precision('Payroll'), help="Sum of all current contract's wage of employee."),
'payslip_count': fields.function(_payslip_count, type='integer', string='Payslips'),
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
| laurentgo/pants | src/python/pants/base/build_file_aliases.py | 8 | 3196 |
# coding=utf-8
# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import (absolute_import, division, generators, nested_scopes, print_function,
unicode_literals, with_statement)
import functools
from collections import namedtuple
class BuildFileAliases(namedtuple('BuildFileAliases',
['targets',
'objects',
'context_aware_object_factories',
'addressables'])):
"""A structure containing set of symbols to be exposed in BUILD files.
There are three types of symbols that can be exposed:
- targets: These are Target subclasses.
- objects: These are any python object, from constants to types.
- addressables: Exposed objects which optionally establish an alias via the AddressMapper
for themselves. Notably all Target aliases in BUILD files are actually exposed as proxy
objects via Target.get_addressable_type.
- context_aware_object_factories: These are object factories that are passed a ParseContext and
produce some object that uses data from the context to enable some feature or utility. Common
uses include objects that must be aware of the current BUILD file path or functions that need
to be able to create targets or objects from within the BUILD file parse.
"""
@classmethod
def create(cls,
targets=None,
objects=None,
context_aware_object_factories=None,
addressables=None):
"""A convenience constructor that can accept zero to all alias types."""
def copy(orig):
return orig.copy() if orig else {}
return cls(copy(targets),
copy(objects),
copy(context_aware_object_factories),
copy(addressables))
@classmethod
def curry_context(cls, wrappee):
"""Curry a function with a build file context.
Given a function foo(ctx, bar) that you want to expose in BUILD files
as foo(bar), use::
context_aware_object_factories={
'foo': BuildFileAliases.curry_context(foo),
}
"""
# You might wonder: why not just use lambda and functools.partial?
# That loses the __doc__, thus messing up the BUILD dictionary.
wrapper = lambda ctx: functools.partial(wrappee, ctx)
wrapper.__doc__ = wrappee.__doc__
wrapper.__name__ = str(".".join(["curry_context",
wrappee.__module__,
wrappee.__name__]))
return wrapper
def merge(self, other):
"""Merges a set of build file aliases and returns a new set of aliases containing both.
Any duplicate aliases from `other` will trump.
"""
if not isinstance(other, BuildFileAliases):
raise TypeError('Can only merge other BuildFileAliases, given {0}'.format(other))
all_aliases = self._asdict()
other_aliases = other._asdict()
for alias_type, alias_map in all_aliases.items():
alias_map.update(other_aliases[alias_type])
return BuildFileAliases(**all_aliases)
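# Editor's note: the block below is an illustrative sketch appended for this write-up,
# not part of the original module. It exercises curry_context (the wrapped function
# keeps its __doc__, which a bare functools.partial would lose) and merge (duplicate
# aliases from `other` trump); the string passed as the context is a stand-in for a
# real ParseContext.
if __name__ == '__main__':
  def greet(ctx, name):
    """Greets a name using data taken from the parse context."""
    return '{0} says hello to {1}'.format(ctx, name)

  curried = BuildFileAliases.curry_context(greet)
  print(curried.__doc__)                    # docstring preserved by curry_context
  print(curried('path/to/BUILD')('pants'))  # -> 'path/to/BUILD says hello to pants'

  base = BuildFileAliases.create(objects={'answer': 42})
  extra = BuildFileAliases.create(objects={'answer': 43, 'pi': 3.14})
  print(base.merge(extra).objects)          # -> {'answer': 43, 'pi': 3.14}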
| apache-2.0 |
| sublime1809/django | django/views/debug.py | 17 | 46756 |
from __future__ import unicode_literals
import datetime
import os
import re
import sys
import types
from django.conf import settings
from django.core.urlresolvers import resolve, Resolver404
from django.http import (HttpResponse, HttpResponseNotFound, HttpRequest,
build_request_repr)
from django.template import Template, Context, TemplateDoesNotExist
from django.template.defaultfilters import force_escape, pprint
from django.utils.datastructures import MultiValueDict
from django.utils.html import escape
from django.utils.encoding import force_bytes, smart_text
from django.utils.module_loading import import_string
from django.utils import six
from django.utils.translation import ugettext as _
HIDDEN_SETTINGS = re.compile('API|TOKEN|KEY|SECRET|PASS|SIGNATURE')
CLEANSED_SUBSTITUTE = '********************'
def linebreak_iter(template_source):
yield 0
p = template_source.find('\n')
while p >= 0:
yield p + 1
p = template_source.find('\n', p + 1)
yield len(template_source) + 1
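# Illustrative sketch (not part of the upstream module): linebreak_iter yields
# the offset at which each line of the source starts, plus one position past
# the end of the source.
def _linebreak_iter_example():
    assert list(linebreak_iter("a\nbc\n")) == [0, 2, 5, 6]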
class CallableSettingWrapper(object):
""" Object to wrap callable appearing in settings
* Not to call in the debug page (#21345).
* Not to break the debug page if the callable forbidding to set attributes (#23070).
"""
def __init__(self, callable_setting):
self._wrapped = callable_setting
def __repr__(self):
return repr(self._wrapped)
def cleanse_setting(key, value):
"""Cleanse an individual setting key/value of sensitive content.
If the value is a dictionary, recursively cleanse the keys in
that dictionary.
"""
try:
if HIDDEN_SETTINGS.search(key):
cleansed = CLEANSED_SUBSTITUTE
else:
if isinstance(value, dict):
cleansed = dict((k, cleanse_setting(k, v)) for k, v in value.items())
else:
cleansed = value
except TypeError:
# If the key isn't regex-able, just return as-is.
cleansed = value
if callable(cleansed):
# For fixing #21345 and #23070
cleansed = CallableSettingWrapper(cleansed)
return cleansed
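# Illustrative sketch (not part of the upstream module): cleanse_setting hides
# values whose keys match HIDDEN_SETTINGS and passes everything else through.
def _cleanse_setting_example():
    assert cleanse_setting('PASSWORD', 'hunter2') == CLEANSED_SUBSTITUTE
    assert cleanse_setting('DEBUG', True) is True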
def get_safe_settings():
"Returns a dictionary of the settings module, with sensitive settings blurred out."
settings_dict = {}
for k in dir(settings):
if k.isupper():
settings_dict[k] = cleanse_setting(k, getattr(settings, k))
return settings_dict
def technical_500_response(request, exc_type, exc_value, tb, status_code=500):
"""
Create a technical server error response. The last three arguments are
the values returned from sys.exc_info() and friends.
"""
reporter = ExceptionReporter(request, exc_type, exc_value, tb)
if request.is_ajax():
text = reporter.get_traceback_text()
return HttpResponse(text, status=status_code, content_type='text/plain')
else:
html = reporter.get_traceback_html()
return HttpResponse(html, status=status_code, content_type='text/html')
# Cache for the default exception reporter filter instance.
default_exception_reporter_filter = None
def get_exception_reporter_filter(request):
global default_exception_reporter_filter
if default_exception_reporter_filter is None:
# Load the default filter for the first time and cache it.
default_exception_reporter_filter = import_string(
settings.DEFAULT_EXCEPTION_REPORTER_FILTER)()
if request:
return getattr(request, 'exception_reporter_filter', default_exception_reporter_filter)
else:
return default_exception_reporter_filter
class ExceptionReporterFilter(object):
"""
Base for all exception reporter filter classes. All overridable hooks
contain lenient default behaviors.
"""
def get_request_repr(self, request):
if request is None:
return repr(None)
else:
return build_request_repr(request, POST_override=self.get_post_parameters(request))
def get_post_parameters(self, request):
if request is None:
return {}
else:
return request.POST
def get_traceback_frame_variables(self, request, tb_frame):
return list(six.iteritems(tb_frame.f_locals))
class SafeExceptionReporterFilter(ExceptionReporterFilter):
"""
Use annotations made by the sensitive_post_parameters and
sensitive_variables decorators to filter out sensitive information.
"""
def is_active(self, request):
"""
This filter is to add safety in production environments (i.e. DEBUG
is False). If DEBUG is True then your site is not safe anyway.
This hook is provided as a convenience to easily activate or
deactivate the filter on a per request basis.
"""
return settings.DEBUG is False
def get_cleansed_multivaluedict(self, request, multivaluedict):
"""
Replaces the keys in a MultiValueDict marked as sensitive with stars.
This mitigates leaking sensitive POST parameters if something like
request.POST['nonexistent_key'] throws an exception (#21098).
"""
sensitive_post_parameters = getattr(request, 'sensitive_post_parameters', [])
if self.is_active(request) and sensitive_post_parameters:
multivaluedict = multivaluedict.copy()
for param in sensitive_post_parameters:
if param in multivaluedict:
multivaluedict[param] = CLEANSED_SUBSTITUTE
return multivaluedict
def get_post_parameters(self, request):
"""
Replaces the values of POST parameters marked as sensitive with
stars (*********).
"""
if request is None:
return {}
else:
sensitive_post_parameters = getattr(request, 'sensitive_post_parameters', [])
if self.is_active(request) and sensitive_post_parameters:
cleansed = request.POST.copy()
if sensitive_post_parameters == '__ALL__':
# Cleanse all parameters.
for k, v in cleansed.items():
cleansed[k] = CLEANSED_SUBSTITUTE
return cleansed
else:
# Cleanse only the specified parameters.
for param in sensitive_post_parameters:
if param in cleansed:
cleansed[param] = CLEANSED_SUBSTITUTE
return cleansed
else:
return request.POST
def cleanse_special_types(self, request, value):
if isinstance(value, HttpRequest):
# Cleanse the request's POST parameters.
value = self.get_request_repr(value)
elif isinstance(value, MultiValueDict):
# Cleanse MultiValueDicts (request.POST is the one we usually care about)
value = self.get_cleansed_multivaluedict(request, value)
return value
def get_traceback_frame_variables(self, request, tb_frame):
"""
Replaces the values of variables marked as sensitive with
stars (*********).
"""
# Loop through the frame's callers to see if the sensitive_variables
# decorator was used.
current_frame = tb_frame.f_back
sensitive_variables = None
while current_frame is not None:
if (current_frame.f_code.co_name == 'sensitive_variables_wrapper'
and 'sensitive_variables_wrapper' in current_frame.f_locals):
# The sensitive_variables decorator was used, so we take note
# of the sensitive variables' names.
wrapper = current_frame.f_locals['sensitive_variables_wrapper']
sensitive_variables = getattr(wrapper, 'sensitive_variables', None)
break
current_frame = current_frame.f_back
cleansed = {}
if self.is_active(request) and sensitive_variables:
if sensitive_variables == '__ALL__':
# Cleanse all variables
for name, value in tb_frame.f_locals.items():
cleansed[name] = CLEANSED_SUBSTITUTE
else:
# Cleanse specified variables
for name, value in tb_frame.f_locals.items():
if name in sensitive_variables:
value = CLEANSED_SUBSTITUTE
else:
value = self.cleanse_special_types(request, value)
cleansed[name] = value
else:
# Potentially cleanse the request and any MultiValueDicts if they
# are one of the frame variables.
for name, value in tb_frame.f_locals.items():
cleansed[name] = self.cleanse_special_types(request, value)
if (tb_frame.f_code.co_name == 'sensitive_variables_wrapper'
and 'sensitive_variables_wrapper' in tb_frame.f_locals):
# For good measure, obfuscate the decorated function's arguments in
# the sensitive_variables decorator's frame, in case the variables
# associated with those arguments were meant to be obfuscated from
# the decorated function's frame.
cleansed['func_args'] = CLEANSED_SUBSTITUTE
cleansed['func_kwargs'] = CLEANSED_SUBSTITUTE
return cleansed.items()
class ExceptionReporter(object):
"""
A class to organize and coordinate reporting on exceptions.
"""
def __init__(self, request, exc_type, exc_value, tb, is_email=False):
self.request = request
self.filter = get_exception_reporter_filter(self.request)
self.exc_type = exc_type
self.exc_value = exc_value
self.tb = tb
self.is_email = is_email
self.template_info = None
self.template_does_not_exist = False
self.loader_debug_info = None
# Handle deprecated string exceptions
if isinstance(self.exc_type, six.string_types):
self.exc_value = Exception('Deprecated String Exception: %r' % self.exc_type)
self.exc_type = type(self.exc_value)
def format_path_status(self, path):
if not os.path.exists(path):
return "File does not exist"
if not os.path.isfile(path):
return "Not a file"
if not os.access(path, os.R_OK):
return "File is not readable"
return "File exists"
def get_traceback_data(self):
"""Return a dictionary containing traceback information."""
if self.exc_type and issubclass(self.exc_type, TemplateDoesNotExist):
from django.template.loader import template_source_loaders
self.template_does_not_exist = True
self.loader_debug_info = []
            # If the template_source_loaders haven't been populated yet,
            # provide an empty list so the loop below doesn't fail.
if template_source_loaders is None:
template_source_loaders = []
for loader in template_source_loaders:
try:
source_list_func = loader.get_template_sources
# NOTE: This assumes exc_value is the name of the template that
# the loader attempted to load.
template_list = [{
'name': t,
'status': self.format_path_status(t),
} for t in source_list_func(str(self.exc_value))]
except AttributeError:
template_list = []
loader_name = loader.__module__ + '.' + loader.__class__.__name__
self.loader_debug_info.append({
'loader': loader_name,
'templates': template_list,
})
if (settings.TEMPLATE_DEBUG and
hasattr(self.exc_value, 'django_template_source')):
self.get_template_exception_info()
frames = self.get_traceback_frames()
for i, frame in enumerate(frames):
if 'vars' in frame:
frame_vars = []
for k, v in frame['vars']:
v = pprint(v)
                    # The force_escape filter assumes unicode; make sure that works
if isinstance(v, six.binary_type):
v = v.decode('utf-8', 'replace') # don't choke on non-utf-8 input
# Trim large blobs of data
if len(v) > 4096:
v = '%s... <trimmed %d bytes string>' % (v[0:4096], len(v))
frame_vars.append((k, force_escape(v)))
frame['vars'] = frame_vars
frames[i] = frame
unicode_hint = ''
if self.exc_type and issubclass(self.exc_type, UnicodeError):
start = getattr(self.exc_value, 'start', None)
end = getattr(self.exc_value, 'end', None)
if start is not None and end is not None:
unicode_str = self.exc_value.args[1]
unicode_hint = smart_text(
unicode_str[max(start - 5, 0):min(end + 5, len(unicode_str))],
'ascii', errors='replace'
)
from django import get_version
c = {
'is_email': self.is_email,
'unicode_hint': unicode_hint,
'frames': frames,
'request': self.request,
'filtered_POST': self.filter.get_post_parameters(self.request),
'settings': get_safe_settings(),
'sys_executable': sys.executable,
'sys_version_info': '%d.%d.%d' % sys.version_info[0:3],
'server_time': datetime.datetime.now(),
'django_version_info': get_version(),
'sys_path': sys.path,
'template_info': self.template_info,
'template_does_not_exist': self.template_does_not_exist,
'loader_debug_info': self.loader_debug_info,
}
# Check whether exception info is available
if self.exc_type:
c['exception_type'] = self.exc_type.__name__
if self.exc_value:
c['exception_value'] = smart_text(self.exc_value, errors='replace')
if frames:
c['lastframe'] = frames[-1]
return c
def get_traceback_html(self):
"Return HTML version of debug 500 HTTP error page."
t = Template(TECHNICAL_500_TEMPLATE, name='Technical 500 template')
c = Context(self.get_traceback_data(), use_l10n=False)
return t.render(c)
def get_traceback_text(self):
"Return plain text version of debug 500 HTTP error page."
t = Template(TECHNICAL_500_TEXT_TEMPLATE, name='Technical 500 template')
c = Context(self.get_traceback_data(), autoescape=False, use_l10n=False)
return t.render(c)
def get_template_exception_info(self):
origin, (start, end) = self.exc_value.django_template_source
template_source = origin.reload()
context_lines = 10
line = 0
upto = 0
source_lines = []
before = during = after = ""
for num, next in enumerate(linebreak_iter(template_source)):
if start >= upto and end <= next:
line = num
before = escape(template_source[upto:start])
during = escape(template_source[start:end])
after = escape(template_source[end:next])
source_lines.append((num, escape(template_source[upto:next])))
upto = next
total = len(source_lines)
top = max(1, line - context_lines)
bottom = min(total, line + 1 + context_lines)
# In some rare cases, exc_value.args might be empty.
try:
message = self.exc_value.args[0]
except IndexError:
message = '(Could not get exception message)'
self.template_info = {
'message': message,
'source_lines': source_lines[top:bottom],
'before': before,
'during': during,
'after': after,
'top': top,
'bottom': bottom,
'total': total,
'line': line,
'name': origin.name,
}
def _get_lines_from_file(self, filename, lineno, context_lines, loader=None, module_name=None):
"""
Returns context_lines before and after lineno from file.
Returns (pre_context_lineno, pre_context, context_line, post_context).
"""
source = None
if loader is not None and hasattr(loader, "get_source"):
try:
source = loader.get_source(module_name)
except ImportError:
pass
if source is not None:
source = source.splitlines()
if source is None:
try:
with open(filename, 'rb') as fp:
source = fp.read().splitlines()
except (OSError, IOError):
pass
if source is None:
return None, [], None, []
# If we just read the source from a file, or if the loader did not
# apply tokenize.detect_encoding to decode the source into a Unicode
# string, then we should do that ourselves.
if isinstance(source[0], six.binary_type):
encoding = 'ascii'
for line in source[:2]:
# File coding may be specified. Match pattern from PEP-263
# (http://www.python.org/dev/peps/pep-0263/)
match = re.search(br'coding[:=]\s*([-\w.]+)', line)
if match:
encoding = match.group(1).decode('ascii')
break
source = [six.text_type(sline, encoding, 'replace') for sline in source]
lower_bound = max(0, lineno - context_lines)
upper_bound = lineno + context_lines
pre_context = source[lower_bound:lineno]
context_line = source[lineno]
post_context = source[lineno + 1:upper_bound]
return lower_bound, pre_context, context_line, post_context
def get_traceback_frames(self):
frames = []
tb = self.tb
while tb is not None:
# Support for __traceback_hide__ which is used by a few libraries
# to hide internal frames.
if tb.tb_frame.f_locals.get('__traceback_hide__'):
tb = tb.tb_next
continue
filename = tb.tb_frame.f_code.co_filename
function = tb.tb_frame.f_code.co_name
lineno = tb.tb_lineno - 1
loader = tb.tb_frame.f_globals.get('__loader__')
module_name = tb.tb_frame.f_globals.get('__name__') or ''
pre_context_lineno, pre_context, context_line, post_context = self._get_lines_from_file(
filename, lineno, 7, loader, module_name,
)
if pre_context_lineno is not None:
frames.append({
'tb': tb,
'type': 'django' if module_name.startswith('django.') else 'user',
'filename': filename,
'function': function,
'lineno': lineno + 1,
'vars': self.filter.get_traceback_frame_variables(self.request, tb.tb_frame),
'id': id(tb),
'pre_context': pre_context,
'context_line': context_line,
'post_context': post_context,
'pre_context_lineno': pre_context_lineno + 1,
})
tb = tb.tb_next
return frames
def format_exception(self):
"""
Return the same data as from traceback.format_exception.
"""
import traceback
frames = self.get_traceback_frames()
tb = [(f['filename'], f['lineno'], f['function'], f['context_line']) for f in frames]
list = ['Traceback (most recent call last):\n']
list += traceback.format_list(tb)
list += traceback.format_exception_only(self.exc_type, self.exc_value)
return list
def technical_404_response(request, exception):
"Create a technical 404 error response. The exception should be the Http404."
try:
error_url = exception.args[0]['path']
except (IndexError, TypeError, KeyError):
error_url = request.path_info[1:] # Trim leading slash
try:
tried = exception.args[0]['tried']
except (IndexError, TypeError, KeyError):
tried = []
else:
if (not tried # empty URLconf
or (request.path == '/'
and len(tried) == 1 # default URLconf
and len(tried[0]) == 1
and getattr(tried[0][0], 'app_name', '') == getattr(tried[0][0], 'namespace', '') == 'admin')):
return default_urlconf(request)
urlconf = getattr(request, 'urlconf', settings.ROOT_URLCONF)
if isinstance(urlconf, types.ModuleType):
urlconf = urlconf.__name__
caller = ''
try:
resolver_match = resolve(request.path)
except Resolver404:
pass
else:
obj = resolver_match.func
if hasattr(obj, '__name__'):
caller = obj.__name__
elif hasattr(obj, '__class__') and hasattr(obj.__class__, '__name__'):
caller = obj.__class__.__name__
if hasattr(obj, '__module__'):
module = obj.__module__
caller = '%s.%s' % (module, caller)
t = Template(TECHNICAL_404_TEMPLATE, name='Technical 404 template')
c = Context({
'urlconf': urlconf,
'root_urlconf': settings.ROOT_URLCONF,
'request_path': error_url,
'urlpatterns': tried,
'reason': force_bytes(exception, errors='replace'),
'request': request,
'settings': get_safe_settings(),
'raising_view_name': caller,
})
return HttpResponseNotFound(t.render(c), content_type='text/html')
def default_urlconf(request):
"Create an empty URLconf 404 error response."
t = Template(DEFAULT_URLCONF_TEMPLATE, name='Default URLconf template')
c = Context({
"title": _("Welcome to Django"),
"heading": _("It worked!"),
"subheading": _("Congratulations on your first Django-powered page."),
"instructions": _("Of course, you haven't actually done any work yet. "
"Next, start your first app by running <code>python manage.py startapp [app_label]</code>."),
"explanation": _("You're seeing this message because you have <code>DEBUG = True</code> in your "
"Django settings file and you haven't configured any URLs. Get to work!"),
})
return HttpResponse(t.render(c), content_type='text/html')
#
# Templates are embedded in the file so that we know the error handler will
# always work even if the template loader is broken.
#
TECHNICAL_500_TEMPLATE = ("""
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<meta name="robots" content="NONE,NOARCHIVE">
<title>{% if exception_type %}{{ exception_type }}{% else %}Report{% endif %}"""
"""{% if request %} at {{ request.path_info|escape }}{% endif %}</title>
<style type="text/css">
html * { padding:0; margin:0; }
body * { padding:10px 20px; }
body * * { padding:0; }
body { font:small sans-serif; }
body>div { border-bottom:1px solid #ddd; }
h1 { font-weight:normal; }
h2 { margin-bottom:.8em; }
h2 span { font-size:80%; color:#666; font-weight:normal; }
h3 { margin:1em 0 .5em 0; }
h4 { margin:0 0 .5em 0; font-weight: normal; }
code, pre { font-size: 100%; white-space: pre-wrap; }
table { border:1px solid #ccc; border-collapse: collapse; width:100%; background:white; }
tbody td, tbody th { vertical-align:top; padding:2px 3px; }
thead th {
padding:1px 6px 1px 3px; background:#fefefe; text-align:left;
font-weight:normal; font-size:11px; border:1px solid #ddd;
}
tbody th { width:12em; text-align:right; color:#666; padding-right:.5em; }
table.vars { margin:5px 0 2px 40px; }
table.vars td, table.req td { font-family:monospace; }
table td.code { width:100%; }
table td.code pre { overflow:hidden; }
table.source th { color:#666; }
table.source td { font-family:monospace; white-space:pre; border-bottom:1px solid #eee; }
ul.traceback { list-style-type:none; color: #222; }
ul.traceback li.frame { padding-bottom:1em; color:#666; }
ul.traceback li.user { background-color:#e0e0e0; color:#000 }
div.context { padding:10px 0; overflow:hidden; }
div.context ol { padding-left:30px; margin:0 10px; list-style-position: inside; }
div.context ol li { font-family:monospace; white-space:pre; color:#777; cursor:pointer; }
div.context ol li pre { display:inline; }
div.context ol.context-line li { color:#505050; background-color:#dfdfdf; }
div.context ol.context-line li span { position:absolute; right:32px; }
.user div.context ol.context-line li { background-color:#bbb; color:#000; }
.user div.context ol li { color:#666; }
div.commands { margin-left: 40px; }
div.commands a { color:#555; text-decoration:none; }
.user div.commands a { color: black; }
#summary { background: #ffc; }
#summary h2 { font-weight: normal; color: #666; }
#explanation { background:#eee; }
#template, #template-not-exist { background:#f6f6f6; }
#template-not-exist ul { margin: 0 0 0 20px; }
#unicode-hint { background:#eee; }
#traceback { background:#eee; }
#requestinfo { background:#f6f6f6; padding-left:120px; }
#summary table { border:none; background:transparent; }
#requestinfo h2, #requestinfo h3 { position:relative; margin-left:-100px; }
#requestinfo h3 { margin-bottom:-1em; }
.error { background: #ffc; }
.specific { color:#cc3300; font-weight:bold; }
h2 span.commands { font-size:.7em;}
span.commands a:link {color:#5E5694;}
pre.exception_value { font-family: sans-serif; color: #666; font-size: 1.5em; margin: 10px 0 10px 0; }
</style>
{% if not is_email %}
<script type="text/javascript">
//<!--
function getElementsByClassName(oElm, strTagName, strClassName){
// Written by Jonathan Snook, http://www.snook.ca/jon; Add-ons by Robert Nyman, http://www.robertnyman.com
var arrElements = (strTagName == "*" && document.all)? document.all :
oElm.getElementsByTagName(strTagName);
var arrReturnElements = new Array();
strClassName = strClassName.replace(/\-/g, "\\-");
var oRegExp = new RegExp("(^|\\s)" + strClassName + "(\\s|$)");
var oElement;
for(var i=0; i<arrElements.length; i++){
oElement = arrElements[i];
if(oRegExp.test(oElement.className)){
arrReturnElements.push(oElement);
}
}
return (arrReturnElements)
}
function hideAll(elems) {
for (var e = 0; e < elems.length; e++) {
elems[e].style.display = 'none';
}
}
window.onload = function() {
hideAll(getElementsByClassName(document, 'table', 'vars'));
hideAll(getElementsByClassName(document, 'ol', 'pre-context'));
hideAll(getElementsByClassName(document, 'ol', 'post-context'));
hideAll(getElementsByClassName(document, 'div', 'pastebin'));
}
function toggle() {
for (var i = 0; i < arguments.length; i++) {
var e = document.getElementById(arguments[i]);
if (e) {
e.style.display = e.style.display == 'none' ? 'block': 'none';
}
}
return false;
}
function varToggle(link, id) {
toggle('v' + id);
var s = link.getElementsByTagName('span')[0];
var uarr = String.fromCharCode(0x25b6);
var darr = String.fromCharCode(0x25bc);
s.innerHTML = s.innerHTML == uarr ? darr : uarr;
return false;
}
function switchPastebinFriendly(link) {
s1 = "Switch to copy-and-paste view";
s2 = "Switch back to interactive view";
link.innerHTML = link.innerHTML == s1 ? s2: s1;
toggle('browserTraceback', 'pastebinTraceback');
return false;
}
//-->
</script>
{% endif %}
</head>
<body>
<div id="summary">
<h1>{% if exception_type %}{{ exception_type }}{% else %}Report{% endif %}"""
"""{% if request %} at {{ request.path_info|escape }}{% endif %}</h1>
<pre class="exception_value">"""
"""{% if exception_value %}{{ exception_value|force_escape }}{% else %}No exception message supplied{% endif %}"""
"""</pre>
<table class="meta">
{% if request %}
<tr>
<th>Request Method:</th>
<td>{{ request.META.REQUEST_METHOD }}</td>
</tr>
<tr>
<th>Request URL:</th>
<td>{{ request.build_absolute_uri|escape }}</td>
</tr>
{% endif %}
<tr>
<th>Django Version:</th>
<td>{{ django_version_info }}</td>
</tr>
{% if exception_type %}
<tr>
<th>Exception Type:</th>
<td>{{ exception_type }}</td>
</tr>
{% endif %}
{% if exception_type and exception_value %}
<tr>
<th>Exception Value:</th>
<td><pre>{{ exception_value|force_escape }}</pre></td>
</tr>
{% endif %}
{% if lastframe %}
<tr>
<th>Exception Location:</th>
<td>{{ lastframe.filename|escape }} in {{ lastframe.function|escape }}, line {{ lastframe.lineno }}</td>
</tr>
{% endif %}
<tr>
<th>Python Executable:</th>
<td>{{ sys_executable|escape }}</td>
</tr>
<tr>
<th>Python Version:</th>
<td>{{ sys_version_info }}</td>
</tr>
<tr>
<th>Python Path:</th>
<td><pre>{{ sys_path|pprint }}</pre></td>
</tr>
<tr>
<th>Server time:</th>
<td>{{server_time|date:"r"}}</td>
</tr>
</table>
</div>
{% if unicode_hint %}
<div id="unicode-hint">
<h2>Unicode error hint</h2>
<p>The string that could not be encoded/decoded was: <strong>{{ unicode_hint|force_escape }}</strong></p>
</div>
{% endif %}
{% if template_does_not_exist %}
<div id="template-not-exist">
<h2>Template-loader postmortem</h2>
{% if loader_debug_info %}
<p>Django tried loading these templates, in this order:</p>
<ul>
{% for loader in loader_debug_info %}
<li>Using loader <code>{{ loader.loader }}</code>:
<ul>
{% for t in loader.templates %}<li><code>{{ t.name }}</code> ({{ t.status }})</li>{% endfor %}
</ul>
</li>
{% endfor %}
</ul>
{% else %}
<p>Django couldn't find any templates because your <code>TEMPLATE_LOADERS</code> setting is empty!</p>
{% endif %}
</div>
{% endif %}
{% if template_info %}
<div id="template">
<h2>Error during template rendering</h2>
<p>In template <code>{{ template_info.name }}</code>, error at line <strong>{{ template_info.line }}</strong></p>
<h3>{{ template_info.message }}</h3>
<table class="source{% if template_info.top %} cut-top{% endif %}
{% ifnotequal template_info.bottom template_info.total %} cut-bottom{% endifnotequal %}">
{% for source_line in template_info.source_lines %}
{% ifequal source_line.0 template_info.line %}
<tr class="error"><th>{{ source_line.0 }}</th>
<td>
{{ template_info.before }}
<span class="specific">{{ template_info.during }}</span>
{{ template_info.after }}
</td>
</tr>
{% else %}
<tr><th>{{ source_line.0 }}</th>
<td>{{ source_line.1 }}</td></tr>
{% endifequal %}
{% endfor %}
</table>
</div>
{% endif %}
{% if frames %}
<div id="traceback">
<h2>Traceback <span class="commands">{% if not is_email %}<a href="#" onclick="return switchPastebinFriendly(this);">
Switch to copy-and-paste view</a></span>{% endif %}
</h2>
{% autoescape off %}
<div id="browserTraceback">
<ul class="traceback">
{% for frame in frames %}
<li class="frame {{ frame.type }}">
<code>{{ frame.filename|escape }}</code> in <code>{{ frame.function|escape }}</code>
{% if frame.context_line %}
<div class="context" id="c{{ frame.id }}">
{% if frame.pre_context and not is_email %}
<ol start="{{ frame.pre_context_lineno }}" class="pre-context" id="pre{{ frame.id }}">
{% for line in frame.pre_context %}
<li onclick="toggle('pre{{ frame.id }}', 'post{{ frame.id }}')"><pre>{{ line|escape }}</pre></li>
{% endfor %}
</ol>
{% endif %}
<ol start="{{ frame.lineno }}" class="context-line">
<li onclick="toggle('pre{{ frame.id }}', 'post{{ frame.id }}')"><pre>
{{ frame.context_line|escape }}</pre>{% if not is_email %} <span>...</span>{% endif %}</li></ol>
{% if frame.post_context and not is_email %}
<ol start='{{ frame.lineno|add:"1" }}' class="post-context" id="post{{ frame.id }}">
{% for line in frame.post_context %}
<li onclick="toggle('pre{{ frame.id }}', 'post{{ frame.id }}')"><pre>{{ line|escape }}</pre></li>
{% endfor %}
</ol>
{% endif %}
</div>
{% endif %}
{% if frame.vars %}
<div class="commands">
{% if is_email %}
<h2>Local Vars</h2>
{% else %}
<a href="#" onclick="return varToggle(this, '{{ frame.id }}')"><span>▶</span> Local vars</a>
{% endif %}
</div>
<table class="vars" id="v{{ frame.id }}">
<thead>
<tr>
<th>Variable</th>
<th>Value</th>
</tr>
</thead>
<tbody>
{% for var in frame.vars|dictsort:"0" %}
<tr>
<td>{{ var.0|force_escape }}</td>
<td class="code"><pre>{{ var.1 }}</pre></td>
</tr>
{% endfor %}
</tbody>
</table>
{% endif %}
</li>
{% endfor %}
</ul>
</div>
{% endautoescape %}
<form action="http://dpaste.com/" name="pasteform" id="pasteform" method="post">
{% if not is_email %}
<div id="pastebinTraceback" class="pastebin">
<input type="hidden" name="language" value="PythonConsole">
<input type="hidden" name="title"
value="{{ exception_type|escape }}{% if request %} at {{ request.path_info|escape }}{% endif %}">
<input type="hidden" name="source" value="Django Dpaste Agent">
<input type="hidden" name="poster" value="Django">
<textarea name="content" id="traceback_area" cols="140" rows="25">
Environment:
{% if request %}
Request Method: {{ request.META.REQUEST_METHOD }}
Request URL: {{ request.build_absolute_uri|escape }}
{% endif %}
Django Version: {{ django_version_info }}
Python Version: {{ sys_version_info }}
Installed Applications:
{{ settings.INSTALLED_APPS|pprint }}
Installed Middleware:
{{ settings.MIDDLEWARE_CLASSES|pprint }}
{% if template_does_not_exist %}Template Loader Error:
{% if loader_debug_info %}Django tried loading these templates, in this order:
{% for loader in loader_debug_info %}Using loader {{ loader.loader }}:
{% for t in loader.templates %}{{ t.name }} ({{ t.status }})
{% endfor %}{% endfor %}
{% else %}Django couldn't find any templates because your TEMPLATE_LOADERS setting is empty!
{% endif %}
{% endif %}{% if template_info %}
Template error:
In template {{ template_info.name }}, error at line {{ template_info.line }}
{{ template_info.message }}{% for source_line in template_info.source_lines %}
{% ifequal source_line.0 template_info.line %}
{{ source_line.0 }} : {{ template_info.before }} {{ template_info.during }} {{ template_info.after }}
{% else %}
{{ source_line.0 }} : {{ source_line.1 }}
{% endifequal %}{% endfor %}{% endif %}
Traceback:
{% for frame in frames %}File "{{ frame.filename|escape }}" in {{ frame.function|escape }}
{% if frame.context_line %} {{ frame.lineno }}. {{ frame.context_line|escape }}{% endif %}
{% endfor %}
Exception Type: {{ exception_type|escape }}{% if request %} at {{ request.path_info|escape }}{% endif %}
Exception Value: {{ exception_value|force_escape }}
</textarea>
<br><br>
<input type="submit" value="Share this traceback on a public Web site">
</div>
</form>
</div>
{% endif %}
{% endif %}
<div id="requestinfo">
<h2>Request information</h2>
{% if request %}
<h3 id="get-info">GET</h3>
{% if request.GET %}
<table class="req">
<thead>
<tr>
<th>Variable</th>
<th>Value</th>
</tr>
</thead>
<tbody>
{% for var in request.GET.items %}
<tr>
<td>{{ var.0 }}</td>
<td class="code"><pre>{{ var.1|pprint }}</pre></td>
</tr>
{% endfor %}
</tbody>
</table>
{% else %}
<p>No GET data</p>
{% endif %}
<h3 id="post-info">POST</h3>
{% if filtered_POST %}
<table class="req">
<thead>
<tr>
<th>Variable</th>
<th>Value</th>
</tr>
</thead>
<tbody>
{% for var in filtered_POST.items %}
<tr>
<td>{{ var.0 }}</td>
<td class="code"><pre>{{ var.1|pprint }}</pre></td>
</tr>
{% endfor %}
</tbody>
</table>
{% else %}
<p>No POST data</p>
{% endif %}
<h3 id="files-info">FILES</h3>
{% if request.FILES %}
<table class="req">
<thead>
<tr>
<th>Variable</th>
<th>Value</th>
</tr>
</thead>
<tbody>
{% for var in request.FILES.items %}
<tr>
<td>{{ var.0 }}</td>
<td class="code"><pre>{{ var.1|pprint }}</pre></td>
</tr>
{% endfor %}
</tbody>
</table>
{% else %}
<p>No FILES data</p>
{% endif %}
<h3 id="cookie-info">COOKIES</h3>
{% if request.COOKIES %}
<table class="req">
<thead>
<tr>
<th>Variable</th>
<th>Value</th>
</tr>
</thead>
<tbody>
{% for var in request.COOKIES.items %}
<tr>
<td>{{ var.0 }}</td>
<td class="code"><pre>{{ var.1|pprint }}</pre></td>
</tr>
{% endfor %}
</tbody>
</table>
{% else %}
<p>No cookie data</p>
{% endif %}
<h3 id="meta-info">META</h3>
<table class="req">
<thead>
<tr>
<th>Variable</th>
<th>Value</th>
</tr>
</thead>
<tbody>
{% for var in request.META.items|dictsort:"0" %}
<tr>
<td>{{ var.0 }}</td>
<td class="code"><pre>{{ var.1|pprint }}</pre></td>
</tr>
{% endfor %}
</tbody>
</table>
{% else %}
<p>Request data not supplied</p>
{% endif %}
<h3 id="settings-info">Settings</h3>
<h4>Using settings module <code>{{ settings.SETTINGS_MODULE }}</code></h4>
<table class="req">
<thead>
<tr>
<th>Setting</th>
<th>Value</th>
</tr>
</thead>
<tbody>
{% for var in settings.items|dictsort:"0" %}
<tr>
<td>{{ var.0 }}</td>
<td class="code"><pre>{{ var.1|pprint }}</pre></td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
{% if not is_email %}
<div id="explanation">
<p>
You're seeing this error because you have <code>DEBUG = True</code> in your
Django settings file. Change that to <code>False</code>, and Django will
display a standard page generated by the handler for this status code.
</p>
</div>
{% endif %}
</body>
</html>
""")
TECHNICAL_500_TEXT_TEMPLATE = """{% firstof exception_type 'Report' %}{% if request %} at {{ request.path_info }}{% endif %}
{% firstof exception_value 'No exception message supplied' %}
{% if request %}
Request Method: {{ request.META.REQUEST_METHOD }}
Request URL: {{ request.build_absolute_uri }}{% endif %}
Django Version: {{ django_version_info }}
Python Executable: {{ sys_executable }}
Python Version: {{ sys_version_info }}
Python Path: {{ sys_path }}
Server time: {{server_time|date:"r"}}
Installed Applications:
{{ settings.INSTALLED_APPS|pprint }}
Installed Middleware:
{{ settings.MIDDLEWARE_CLASSES|pprint }}
{% if template_does_not_exist %}Template loader Error:
{% if loader_debug_info %}Django tried loading these templates, in this order:
{% for loader in loader_debug_info %}Using loader {{ loader.loader }}:
{% for t in loader.templates %}{{ t.name }} ({{ t.status }})
{% endfor %}{% endfor %}
{% else %}Django couldn't find any templates because your TEMPLATE_LOADERS setting is empty!
{% endif %}
{% endif %}{% if template_info %}
Template error:
In template {{ template_info.name }}, error at line {{ template_info.line }}
{{ template_info.message }}{% for source_line in template_info.source_lines %}
{% ifequal source_line.0 template_info.line %}
{{ source_line.0 }} : {{ template_info.before }} {{ template_info.during }} {{ template_info.after }}
{% else %}
{{ source_line.0 }} : {{ source_line.1 }}
{% endifequal %}{% endfor %}{% endif %}{% if frames %}
Traceback:
{% for frame in frames %}File "{{ frame.filename }}" in {{ frame.function }}
{% if frame.context_line %} {{ frame.lineno }}. {{ frame.context_line }}{% endif %}
{% endfor %}
{% if exception_type %}Exception Type: {{ exception_type }}{% if request %} at {{ request.path_info }}{% endif %}
{% if exception_value %}Exception Value: {{ exception_value }}{% endif %}{% endif %}{% endif %}
{% if request %}Request information:
GET:{% for k, v in request.GET.items %}
{{ k }} = {{ v|stringformat:"r" }}{% empty %} No GET data{% endfor %}
POST:{% for k, v in filtered_POST.items %}
{{ k }} = {{ v|stringformat:"r" }}{% empty %} No POST data{% endfor %}
FILES:{% for k, v in request.FILES.items %}
{{ k }} = {{ v|stringformat:"r" }}{% empty %} No FILES data{% endfor %}
COOKIES:{% for k, v in request.COOKIES.items %}
{{ k }} = {{ v|stringformat:"r" }}{% empty %} No cookie data{% endfor %}
META:{% for k, v in request.META.items|dictsort:"0" %}
{{ k }} = {{ v|stringformat:"r" }}{% endfor %}
{% else %}Request data not supplied
{% endif %}
Settings:
Using settings module {{ settings.SETTINGS_MODULE }}{% for k, v in settings.items|dictsort:"0" %}
{{ k }} = {{ v|stringformat:"r" }}{% endfor %}
You're seeing this error because you have DEBUG = True in your
Django settings file. Change that to False, and Django will
display a standard page generated by the handler for this status code.
"""
TECHNICAL_404_TEMPLATE = """
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<title>Page not found at {{ request.path_info|escape }}</title>
<meta name="robots" content="NONE,NOARCHIVE">
<style type="text/css">
html * { padding:0; margin:0; }
body * { padding:10px 20px; }
body * * { padding:0; }
body { font:small sans-serif; background:#eee; }
body>div { border-bottom:1px solid #ddd; }
h1 { font-weight:normal; margin-bottom:.4em; }
h1 span { font-size:60%; color:#666; font-weight:normal; }
table { border:none; border-collapse: collapse; width:100%; }
td, th { vertical-align:top; padding:2px 3px; }
th { width:12em; text-align:right; color:#666; padding-right:.5em; }
#info { background:#f6f6f6; }
#info ol { margin: 0.5em 4em; }
#info ol li { font-family: monospace; }
#summary { background: #ffc; }
#explanation { background:#eee; border-bottom: 0px none; }
</style>
</head>
<body>
<div id="summary">
<h1>Page not found <span>(404)</span></h1>
<table class="meta">
<tr>
<th>Request Method:</th>
<td>{{ request.META.REQUEST_METHOD }}</td>
</tr>
<tr>
<th>Request URL:</th>
<td>{{ request.build_absolute_uri|escape }}</td>
</tr>
{% if raising_view_name %}
<tr>
<th>Raised by:</th>
<td>{{ raising_view_name }}</td>
</tr>
{% endif %}
</table>
</div>
<div id="info">
{% if urlpatterns %}
<p>
Using the URLconf defined in <code>{{ urlconf }}</code>,
Django tried these URL patterns, in this order:
</p>
<ol>
{% for pattern in urlpatterns %}
<li>
{% for pat in pattern %}
{{ pat.regex.pattern }}
{% if forloop.last and pat.name %}[name='{{ pat.name }}']{% endif %}
{% endfor %}
</li>
{% endfor %}
</ol>
<p>The current URL, <code>{{ request_path|escape }}</code>, didn't match any of these.</p>
{% else %}
<p>{{ reason }}</p>
{% endif %}
</div>
<div id="explanation">
<p>
You're seeing this error because you have <code>DEBUG = True</code> in
your Django settings file. Change that to <code>False</code>, and Django
will display a standard 404 page.
</p>
</div>
</body>
</html>
"""
DEFAULT_URLCONF_TEMPLATE = """
<!DOCTYPE html>
<html lang="en"><head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<meta name="robots" content="NONE,NOARCHIVE"><title>{{ title }}</title>
<style type="text/css">
html * { padding:0; margin:0; }
body * { padding:10px 20px; }
body * * { padding:0; }
body { font:small sans-serif; }
body>div { border-bottom:1px solid #ddd; }
h1 { font-weight:normal; }
h2 { margin-bottom:.8em; }
h2 span { font-size:80%; color:#666; font-weight:normal; }
h3 { margin:1em 0 .5em 0; }
h4 { margin:0 0 .5em 0; font-weight: normal; }
table { border:1px solid #ccc; border-collapse: collapse; width:100%; background:white; }
tbody td, tbody th { vertical-align:top; padding:2px 3px; }
thead th {
padding:1px 6px 1px 3px; background:#fefefe; text-align:left;
font-weight:normal; font-size:11px; border:1px solid #ddd;
}
tbody th { width:12em; text-align:right; color:#666; padding-right:.5em; }
#summary { background: #e0ebff; }
#summary h2 { font-weight: normal; color: #666; }
#explanation { background:#eee; }
#instructions { background:#f6f6f6; }
#summary table { border:none; background:transparent; }
</style>
</head>
<body>
<div id="summary">
<h1>{{ heading }}</h1>
<h2>{{ subheading }}</h2>
</div>
<div id="instructions">
<p>
{{ instructions|safe }}
</p>
</div>
<div id="explanation">
<p>
{{ explanation|safe }}
</p>
</div>
</body></html>
"""
|
bsd-3-clause
|
mxOBS/deb-pkg_trusty_chromium-browser
|
tools/telemetry/telemetry/decorators_unittest.py
|
20
|
2837
|
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import unittest
from telemetry import decorators
class FakePlatform(object):
def GetOSName(self):
return 'os_name'
def GetOSVersionName(self):
return 'os_version_name'
class FakePossibleBrowser(object):
def __init__(self):
self.browser_type = 'browser_type'
self.platform = FakePlatform()
self.supports_tab_control = False
class FakeTest(object):
def SetEnabledStrings(self, enabled_strings):
# pylint: disable=W0201
self._enabled_strings = enabled_strings
def SetDisabledStrings(self, disabled_strings):
# pylint: disable=W0201
self._disabled_strings = disabled_strings
class TestShouldSkip(unittest.TestCase):
def testEnabledStrings(self):
test = FakeTest()
possible_browser = FakePossibleBrowser()
# When no enabled_strings is given, everything should be enabled.
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetEnabledStrings(['os_name'])
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetEnabledStrings(['another_os_name'])
self.assertTrue(decorators.ShouldSkip(test, possible_browser)[0])
test.SetEnabledStrings(['os_version_name'])
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetEnabledStrings(['os_name', 'another_os_name'])
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetEnabledStrings(['another_os_name', 'os_name'])
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetEnabledStrings(['another_os_name', 'another_os_version_name'])
self.assertTrue(decorators.ShouldSkip(test, possible_browser)[0])
def testDisabledStrings(self):
test = FakeTest()
possible_browser = FakePossibleBrowser()
# When no disabled_strings is given, nothing should be disabled.
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetDisabledStrings(['os_name'])
self.assertTrue(decorators.ShouldSkip(test, possible_browser)[0])
test.SetDisabledStrings(['another_os_name'])
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetDisabledStrings(['os_version_name'])
self.assertTrue(decorators.ShouldSkip(test, possible_browser)[0])
test.SetDisabledStrings(['os_name', 'another_os_name'])
self.assertTrue(decorators.ShouldSkip(test, possible_browser)[0])
test.SetDisabledStrings(['another_os_name', 'os_name'])
self.assertTrue(decorators.ShouldSkip(test, possible_browser)[0])
test.SetDisabledStrings(['another_os_name', 'another_os_version_name'])
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
|
bsd-3-clause
|
mxOBS/deb-pkg_trusty_chromium-browser
|
tools/site_compare/operators/equals_with_mask.py
|
189
|
1589
|
# Copyright (c) 2011 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Compare two images for equality, subject to a mask."""
from PIL import Image
from PIL import ImageChops
import os.path
def Compare(file1, file2, **kwargs):
"""Compares two images to see if they're identical subject to a mask.
  An optional directory containing masks may be supplied. If a mask exists
  whose name matches file1's, areas where the mask is black are ignored.
Args:
file1: path to first image to compare
file2: path to second image to compare
kwargs: ["maskdir"] contains the directory holding the masks
Returns:
None if the images are identical
A tuple of (errorstring, image) if they're not
"""
maskdir = None
if "maskdir" in kwargs:
maskdir = kwargs["maskdir"]
im1 = Image.open(file1)
im2 = Image.open(file2)
if im1.size != im2.size:
return ("The images are of different size (%r vs %r)" %
(im1.size, im2.size), im1)
diff = ImageChops.difference(im1, im2)
if maskdir:
maskfile = os.path.join(maskdir, os.path.basename(file1))
if os.path.exists(maskfile):
mask = Image.open(maskfile)
if mask.size != im1.size:
return ("The mask is of a different size than the images (%r vs %r)" %
(mask.size, im1.size), mask)
diff = ImageChops.multiply(diff, mask.convert(diff.mode))
if max(diff.getextrema()) != (0, 0):
return ("The images differ", diff)
else:
return None
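# A minimal usage sketch (file and directory names here are hypothetical):
# compare two captured screenshots, ignoring regions that are black in a
# matching mask image found in the mask directory.
def _example_compare(baseline="baseline/home.png", actual="actual/home.png"):
  result = Compare(baseline, actual, maskdir="masks")
  if result is None:
    print("images are identical (subject to the mask)")
  else:
    errorstring, diff_image = result
    print(errorstring)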
|
bsd-3-clause
|
jaceknl6666/arxiv-server
|
arxiv/oai.py
|
1
|
3871
|
# -*- coding: utf-8 -*-
from __future__ import division, print_function
__all__ = ["download", "xml_to_json"]
import re
import time
import logging
import requests
import xml.etree.cElementTree as ET
# Download constants
resume_re = re.compile(r".*<resumptionToken.*?>(.*?)</resumptionToken>.*")
url = "http://export.arxiv.org/oai2"
# Parse constants
base_tag = ".//{{http://www.openarchives.org/OAI/2.0/}}{0}".format
arxiv_tag = ".//{{http://arxiv.org/OAI/{0}/}}{0}".format
def download(start_date=None, prefix="arXiv", max_tries=10):
"""
This is a generator that downloads pages from the ArXiv OAI.
"""
# Set up the request parameters.
params = dict(verb="ListRecords", metadataPrefix=prefix)
if start_date is not None:
params["from"] = start_date
# Keep going until we run out of pages.
failures = 0
while True:
# Send the request.
print(params)
r = requests.post(url, data=params)
code = r.status_code
# Asked to retry
if code == 503:
to = int(r.headers["retry-after"])
logging.info("Got 503. Retrying after {0:d} seconds.".format(to))
time.sleep(to)
failures += 1
if failures >= max_tries:
logging.warn("Failed too many times...")
break
elif code == 200:
failures = 0
# Grab the XML content.
content = r.text
for doc in xml_to_json(content, prefix):
yield doc
# Look for a resumption token.
token = resume_re.search(content)
if token is None:
break
token = token.groups()[0]
# If there isn't one, we're all done.
if token == "":
logging.info("All done.")
break
logging.info("Resumption token: {0}.".format(token))
# If there is a resumption token, rebuild the request.
params = {"verb": "ListRecords", "resumptionToken": token}
# Pause so as not to get banned.
to = 20
logging.info("Sleeping for {0:d} seconds so as not to get banned."
.format(to))
time.sleep(to)
else:
            # Unexpected status code; surface the HTTP error.
r.raise_for_status()
def xml_to_json(xml_data, prefix):
"""
A generator that parses through an XML listing from OAI and yields the
documents as dictionaries.
"""
tree = ET.fromstring(xml_data)
date = tree.find(base_tag("responseDate")).text
for r in tree.findall(base_tag("metadata")):
doc = _parse_node(r.find(arxiv_tag(prefix)), prefix)
# Special case the category list.
doc["categories"] = doc.get("categories", "").split()
# Save the full author names too.
doc["authors"] = [
dict(a, fullname=" ".join(a[k] for k in ("forenames", "keyname")
if k in a))
for a in doc.get("authors", [])
]
doc["first_author"] = doc["authors"][0]
# Deal with dates.
doc["updated"] = doc.get("updated", doc["created"])
doc["fetched"] = date
# Yield this document.
yield doc
def _parse_node(node, prefix):
# Get the actual name of the tag.
nm = node.tag.split("}")[-1]
# Check if the node has children.
if len(node):
# Recursively parse the children.
lst = [_parse_node(n, prefix) for n in node]
# If there are different keys or only one key, return it as a dict.
if not isinstance(lst[0], dict) and (
len(lst) == 1 or len(set(k[0] for k in lst)) > 1):
return dict(lst)
# Otherwise return it as a list.
return (nm, lst)
# This is a leaf node.
return (nm, node.text)
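# A minimal usage sketch (the start date below is hypothetical): walk the OAI
# feed and print the update date and category list of every harvested record.
def _example_harvest(start="2015-01-01"):
    for doc in download(start_date=start):
        print(doc["updated"], doc["categories"])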
|
mit
|
geminy/aidear
|
oss/qt/qt-everywhere-opensource-src-5.9.0/qtwebengine/src/3rdparty/chromium/tools/grit/grit/node/message_unittest.py
|
7
|
4021
|
#!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
'''Unit tests for grit.node.message'''
import os
import sys
import unittest
import StringIO
if __name__ == '__main__':
# When executed as the main module, the first entry in sys.path will be
  # the directory containing this module. This entry causes the io.py file in this
# directory to be selected whenever any file does "import io", rather than the
# system "io" module. As a work-around, remove the first sys.path entry.
sys.path[0] = os.path.join(os.path.dirname(__file__), '../..')
from grit import tclib
from grit import util
from grit.node import message
class MessageUnittest(unittest.TestCase):
def testMessage(self):
root = util.ParseGrdForUnittest('''
<messages>
<message name="IDS_GREETING"
desc="Printed to greet the currently logged in user">
Hello <ph name="USERNAME">%s<ex>Joi</ex></ph>, how are you doing today?
</message>
</messages>''')
msg, = root.GetChildrenOfType(message.MessageNode)
cliques = msg.GetCliques()
content = cliques[0].GetMessage().GetPresentableContent()
self.failUnless(content == 'Hello USERNAME, how are you doing today?')
def testMessageWithWhitespace(self):
root = util.ParseGrdForUnittest("""\
<messages>
<message name="IDS_BLA" desc="">
''' Hello there <ph name="USERNAME">%s</ph> '''
</message>
</messages>""")
msg, = root.GetChildrenOfType(message.MessageNode)
content = msg.GetCliques()[0].GetMessage().GetPresentableContent()
self.failUnless(content == 'Hello there USERNAME')
self.failUnless(msg.ws_at_start == ' ')
self.failUnless(msg.ws_at_end == ' ')
def testConstruct(self):
msg = tclib.Message(text=" Hello USERNAME, how are you? BINGO\t\t",
placeholders=[tclib.Placeholder('USERNAME', '%s', 'Joi'),
tclib.Placeholder('BINGO', '%d', '11')])
msg_node = message.MessageNode.Construct(None, msg, 'BINGOBONGO')
self.failUnless(msg_node.children[0].name == 'ph')
self.failUnless(msg_node.children[0].children[0].name == 'ex')
self.failUnless(msg_node.children[0].children[0].GetCdata() == 'Joi')
self.failUnless(msg_node.children[1].children[0].GetCdata() == '11')
self.failUnless(msg_node.ws_at_start == ' ')
self.failUnless(msg_node.ws_at_end == '\t\t')
def testUnicodeConstruct(self):
text = u'Howdie \u00fe'
msg = tclib.Message(text=text)
msg_node = message.MessageNode.Construct(None, msg, 'BINGOBONGO')
msg_from_node = msg_node.GetCdata()
self.failUnless(msg_from_node == text)
def testFormatterData(self):
root = util.ParseGrdForUnittest("""\
<messages>
<message name="IDS_BLA" desc="" formatter_data=" foo=123 bar qux=low">
Text
</message>
</messages>""")
msg, = root.GetChildrenOfType(message.MessageNode)
expected_formatter_data = {
'foo': '123',
'bar': '',
'qux': 'low'}
# Can't use assertDictEqual, not available in Python 2.6, so do it
# by hand.
self.failUnlessEqual(len(expected_formatter_data),
len(msg.formatter_data))
for key in expected_formatter_data:
self.failUnlessEqual(expected_formatter_data[key],
msg.formatter_data[key])
def testReplaceEllipsis(self):
root = util.ParseGrdForUnittest('''
<messages>
<message name="IDS_GREETING" desc="">
A...B.... <ph name="PH">%s<ex>A</ex></ph>... B... C...
</message>
</messages>''')
msg, = root.GetChildrenOfType(message.MessageNode)
msg.SetReplaceEllipsis(True)
content = msg.Translate('en')
self.failUnlessEqual(u'A...B.... %s\u2026 B\u2026 C\u2026', content)
if __name__ == '__main__':
unittest.main()
|
gpl-3.0
|
darjus-amzn/boto
|
boto/vpc/vpc.py
|
135
|
7868
|
# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
"""
Represents a Virtual Private Cloud.
"""
from boto.ec2.ec2object import TaggedEC2Object
class VPC(TaggedEC2Object):
def __init__(self, connection=None):
"""
Represents a VPC.
:ivar id: The unique ID of the VPC.
:ivar dhcp_options_id: The ID of the set of DHCP options you've associated with the VPC
(or default if the default options are associated with the VPC).
:ivar state: The current state of the VPC.
:ivar cidr_block: The CIDR block for the VPC.
:ivar is_default: Indicates whether the VPC is the default VPC.
:ivar instance_tenancy: The allowed tenancy of instances launched into the VPC.
:ivar classic_link_enabled: Indicates whether ClassicLink is enabled.
"""
super(VPC, self).__init__(connection)
self.id = None
self.dhcp_options_id = None
self.state = None
self.cidr_block = None
self.is_default = None
self.instance_tenancy = None
self.classic_link_enabled = None
def __repr__(self):
return 'VPC:%s' % self.id
def endElement(self, name, value, connection):
if name == 'vpcId':
self.id = value
elif name == 'dhcpOptionsId':
self.dhcp_options_id = value
elif name == 'state':
self.state = value
elif name == 'cidrBlock':
self.cidr_block = value
elif name == 'isDefault':
self.is_default = True if value == 'true' else False
elif name == 'instanceTenancy':
self.instance_tenancy = value
elif name == 'classicLinkEnabled':
self.classic_link_enabled = value
else:
setattr(self, name, value)
def delete(self):
return self.connection.delete_vpc(self.id)
def _update(self, updated):
self.__dict__.update(updated.__dict__)
def _get_status_then_update_vpc(self, get_status_method, validate=False,
dry_run=False):
vpc_list = get_status_method(
[self.id],
dry_run=dry_run
)
if len(vpc_list):
updated_vpc = vpc_list[0]
self._update(updated_vpc)
elif validate:
raise ValueError('%s is not a valid VPC ID' % (self.id,))
def update(self, validate=False, dry_run=False):
self._get_status_then_update_vpc(
self.connection.get_all_vpcs,
validate=validate,
dry_run=dry_run
)
return self.state
def update_classic_link_enabled(self, validate=False, dry_run=False):
"""
Updates instance's classic_link_enabled attribute
:rtype: bool
:return: self.classic_link_enabled after update has occurred.
"""
self._get_status_then_update_vpc(
self.connection.get_all_classic_link_vpcs,
validate=validate,
dry_run=dry_run
)
return self.classic_link_enabled
def disable_classic_link(self, dry_run=False):
"""
Disables ClassicLink for a VPC. You cannot disable ClassicLink for a
VPC that has EC2-Classic instances linked to it.
:type dry_run: bool
:param dry_run: Set to True if the operation should not actually run.
:rtype: bool
:return: True if successful
"""
return self.connection.disable_vpc_classic_link(self.id,
dry_run=dry_run)
def enable_classic_link(self, dry_run=False):
"""
Enables a VPC for ClassicLink. You can then link EC2-Classic instances
to your ClassicLink-enabled VPC to allow communication over private IP
addresses. You cannot enable your VPC for ClassicLink if any of your
VPC's route tables have existing routes for address ranges within the
10.0.0.0/8 IP address range, excluding local routes for VPCs in the
10.0.0.0/16 and 10.1.0.0/16 IP address ranges.
:type dry_run: bool
:param dry_run: Set to True if the operation should not actually run.
:rtype: bool
:return: True if successful
"""
return self.connection.enable_vpc_classic_link(self.id,
dry_run=dry_run)
def attach_classic_instance(self, instance_id, groups, dry_run=False):
"""
Links an EC2-Classic instance to a ClassicLink-enabled VPC through one
or more of the VPC's security groups. You cannot link an EC2-Classic
instance to more than one VPC at a time. You can only link an instance
that's in the running state. An instance is automatically unlinked from
a VPC when it's stopped. You can link it to the VPC again when you
restart it.
After you've linked an instance, you cannot change the VPC security
groups that are associated with it. To change the security groups, you
must first unlink the instance, and then link it again.
Linking your instance to a VPC is sometimes referred to as attaching
your instance.
        :type instance_id: str
        :param instance_id: The ID of the EC2-Classic instance to link.
        :type groups: list
:param groups: The ID of one or more of the VPC's security groups.
You cannot specify security groups from a different VPC. The
members of the list can be
:class:`boto.ec2.securitygroup.SecurityGroup` objects or
strings of the id's of the security groups.
:type dry_run: bool
:param dry_run: Set to True if the operation should not actually run.
:rtype: bool
:return: True if successful
"""
return self.connection.attach_classic_link_vpc(
vpc_id=self.id,
instance_id=instance_id,
groups=groups,
dry_run=dry_run
)
def detach_classic_instance(self, instance_id, dry_run=False):
"""
Unlinks a linked EC2-Classic instance from a VPC. After the instance
has been unlinked, the VPC security groups are no longer associated
with it. An instance is automatically unlinked from a VPC when
it's stopped.
        :type instance_id: str
        :param instance_id: The ID of the instance to unlink from the VPC.
:type dry_run: bool
:param dry_run: Set to True if the operation should not actually run.
:rtype: bool
:return: True if successful
"""
return self.connection.detach_classic_link_vpc(
vpc_id=self.id,
instance_id=instance_id,
dry_run=dry_run
)
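# A minimal usage sketch (the connection object and VPC ID are assumed to be
# obtained elsewhere, e.g. via boto.vpc.connect_to_region): look up a VPC,
# enable ClassicLink on it, and refresh the cached classic_link_enabled flag.
def _example_classic_link(connection, vpc_id):
    vpc = connection.get_all_vpcs([vpc_id])[0]
    if vpc.enable_classic_link():
        return vpc.update_classic_link_enabled()
    return False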
|
mit
|
Shaswat27/scipy
|
scipy/weave/tests/test_blitz_tools.py
|
91
|
7141
|
from __future__ import absolute_import, print_function
import time
import parser
import warnings
from numpy import (float32, float64, complex64, complex128,
zeros, random, array)
from numpy.testing import (TestCase, assert_equal,
assert_allclose, run_module_suite)
from scipy.weave import blitz_tools, blitz, BlitzWarning
from scipy.weave.ast_tools import harvest_variables
from weave_test_utils import remove_whitespace, debug_print, TempdirBlitz, dec
class TestAstToBlitzExpr(TestCase):
def generic_check(self,expr,desired):
ast = parser.suite(expr)
ast_list = ast.tolist()
actual = blitz_tools.ast_to_blitz_expr(ast_list)
actual = remove_whitespace(actual)
desired = remove_whitespace(desired)
assert_equal(actual,desired,expr)
def test_simple_expr(self):
# convert simple expr to blitz
expr = "a[:1:2] = b[:1+i+2:]"
desired = "a(blitz::Range(_beg,1-1,2))="\
"b(blitz::Range(_beg,1+i+2-1));"
self.generic_check(expr,desired)
def test_fdtd_expr(self):
# Convert fdtd equation to blitz.
# Note: This really should have "\" at the end of each line to
# indicate continuation.
expr = "ex[:,1:,1:] = ca_x[:,1:,1:] * ex[:,1:,1:]" \
"+ cb_y_x[:,1:,1:] * (hz[:,1:,1:] - hz[:,:-1,:])"\
"- cb_z_x[:,1:,1:] * (hy[:,1:,1:] - hy[:,1:,:-1])"
desired = 'ex(_all,blitz::Range(1,_end),blitz::Range(1,_end))='\
' ca_x(_all,blitz::Range(1,_end),blitz::Range(1,_end))'\
' *ex(_all,blitz::Range(1,_end),blitz::Range(1,_end))'\
'+cb_y_x(_all,blitz::Range(1,_end),blitz::Range(1,_end))'\
'*(hz(_all,blitz::Range(1,_end),blitz::Range(1,_end))'\
' -hz(_all,blitz::Range(_beg,Nhz(1)-1-1),_all))'\
' -cb_z_x(_all,blitz::Range(1,_end),blitz::Range(1,_end))'\
'*(hy(_all,blitz::Range(1,_end),blitz::Range(1,_end))'\
'-hy(_all,blitz::Range(1,_end),blitz::Range(_beg,Nhy(2)-1-1)));'
self.generic_check(expr,desired)
class TestBlitz(TestCase):
"""These are long running tests...
Would be useful to benchmark these things somehow.
"""
def generic_check(self, expr, arg_dict, type, size):
clean_result = array(arg_dict['result'],copy=1)
t1 = time.time()
exec(expr, globals(),arg_dict)
t2 = time.time()
standard = t2 - t1
desired = arg_dict['result']
arg_dict['result'] = clean_result
t1 = time.time()
blitz_tools.blitz(expr,arg_dict,{},verbose=0)
t2 = time.time()
compiled = t2 - t1
actual = arg_dict['result']
# TODO: this isn't very stringent. Need to tighten this up and
# learn where failures are occurring.
assert_allclose(abs(actual.ravel()), abs(desired.ravel()),
rtol=1e-4, atol=1e-6)
return standard, compiled
def generic_2d(self,expr,typ):
# The complex testing is pretty lame...
ast = parser.suite(expr)
arg_list = harvest_variables(ast.tolist())
all_sizes = [(10,10), (50,50), (100,100), (500,500), (1000,1000)]
debug_print('\nExpression:', expr)
with TempdirBlitz():
for size in all_sizes:
arg_dict = {}
for arg in arg_list:
arg_dict[arg] = random.normal(0,1,size).astype(typ)
# set imag part of complex values to non-zero value
try:
arg_dict[arg].imag = arg_dict[arg].real
except:
pass
debug_print('Run:', size,typ)
standard,compiled = self.generic_check(expr,arg_dict,typ,size)
try:
speed_up = standard/compiled
except:
speed_up = -1.
debug_print("1st run(numpy,compiled,speed up): %3.4f, %3.4f, "
"%3.4f" % (standard,compiled,speed_up))
standard,compiled = self.generic_check(expr,arg_dict,typ,size)
try:
speed_up = standard/compiled
except:
speed_up = -1.
debug_print("2nd run(numpy,compiled,speed up): %3.4f, %3.4f, "
"%3.4f" % (standard,compiled,speed_up))
@dec.slow
def test_5point_avg_2d_float(self):
expr = "result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1]" \
"+ b[1:-1,2:] + b[1:-1,:-2]) / 5."
self.generic_2d(expr,float32)
@dec.slow
def test_5point_avg_2d_double(self):
with warnings.catch_warnings():
warnings.filterwarnings('ignore', category=BlitzWarning)
expr = "result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1]" \
"+ b[1:-1,2:] + b[1:-1,:-2]) / 5."
self.generic_2d(expr,float64)
@dec.slow
def _check_5point_avg_2d_complex_float(self):
""" Note: THIS TEST is KNOWN TO FAIL ON GCC 3.x.
It will not adversely affect 99.99 percent of weave usage.
result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1]
+ b[1:-1,2:] + b[1:-1,:-2]) / 5.
Note: THIS TEST is KNOWN TO FAIL ON GCC 3.x. The reason is that
5. is a double and b is a complex32. blitz doesn't know
how to handle complex32/double. See:
http://www.oonumerics.org/MailArchives/blitz-support/msg00541.php
Unfortunately, the fix isn't trivial. Instead of fixing it, I
prefer to wait until we replace blitz++ with Pat Miller's code
that doesn't rely on blitz.
"""
expr = "result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1]" \
"+ b[1:-1,2:] + b[1:-1,:-2]) / 5."
self.generic_2d(expr,complex64)
@dec.slow
def test_5point_avg_2d_complex_double(self):
expr = "result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1]" \
"+ b[1:-1,2:] + b[1:-1,:-2]) / 5."
self.generic_2d(expr,complex128)
@dec.slow
def test_blitz_bug():
# Assignment to arr[i:] used to fail inside blitz expressions.
with TempdirBlitz():
N = 4
expr_buggy = 'arr_blitz_buggy[{0}:] = arr[{0}:]'
expr_not_buggy = 'arr_blitz_not_buggy[{0}:{1}] = arr[{0}:]'
random.seed(7)
arr = random.randn(N)
sh = arr.shape[0]
for lim in [0, 1, 2]:
arr_blitz_buggy = zeros(N)
arr_blitz_not_buggy = zeros(N)
arr_np = zeros(N)
blitz(expr_buggy.format(lim))
blitz(expr_not_buggy.format(lim, 'sh'))
arr_np[lim:] = arr[lim:]
assert_allclose(arr_blitz_buggy, arr_np)
assert_allclose(arr_blitz_not_buggy, arr_np)
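# --- Usage sketch (illustrative, not part of the test suite) -----------------
# scipy.weave.blitz compiles a NumPy slice-assignment expression to C++ (via
# blitz++) and evaluates it against arrays taken from the caller's namespace,
# just as the tests above do. Array sizes here are arbitrary.
def _blitz_usage_example():
    a = zeros((5, 5))
    b = random.normal(0, 1, (5, 5))
    expr = "a[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1]) / 3."
    blitz(expr)  # picks up `a` and `b` from the local namespace
    return a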
if __name__ == "__main__":
run_module_suite()
|
bsd-3-clause
|
SimVascular/VTK
|
ThirdParty/Twisted/twisted/protocols/ftp.py
|
23
|
101280
|
# -*- test-case-name: twisted.test.test_ftp -*-
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
An FTP protocol implementation
"""
# System Imports
import os
import time
import re
import operator
import stat
import errno
import fnmatch
try:
import pwd, grp
except ImportError:
pwd = grp = None
from zope.interface import Interface, implements
# Twisted Imports
from twisted import copyright
from twisted.internet import reactor, interfaces, protocol, error, defer
from twisted.protocols import basic, policies
from twisted.python import log, failure, filepath
from twisted.cred import error as cred_error, portal, credentials, checkers
# constants
# response codes
RESTART_MARKER_REPLY = "100"
SERVICE_READY_IN_N_MINUTES = "120"
DATA_CNX_ALREADY_OPEN_START_XFR = "125"
FILE_STATUS_OK_OPEN_DATA_CNX = "150"
CMD_OK = "200.1"
TYPE_SET_OK = "200.2"
ENTERING_PORT_MODE = "200.3"
CMD_NOT_IMPLMNTD_SUPERFLUOUS = "202"
SYS_STATUS_OR_HELP_REPLY = "211.1"
FEAT_OK = '211.2'
DIR_STATUS = "212"
FILE_STATUS = "213"
HELP_MSG = "214"
NAME_SYS_TYPE = "215"
SVC_READY_FOR_NEW_USER = "220.1"
WELCOME_MSG = "220.2"
SVC_CLOSING_CTRL_CNX = "221.1"
GOODBYE_MSG = "221.2"
DATA_CNX_OPEN_NO_XFR_IN_PROGRESS = "225"
CLOSING_DATA_CNX = "226.1"
TXFR_COMPLETE_OK = "226.2"
ENTERING_PASV_MODE = "227"
ENTERING_EPSV_MODE = "229"
USR_LOGGED_IN_PROCEED = "230.1" # v1 of code 230
GUEST_LOGGED_IN_PROCEED = "230.2" # v2 of code 230
REQ_FILE_ACTN_COMPLETED_OK = "250"
PWD_REPLY = "257.1"
MKD_REPLY = "257.2"
USR_NAME_OK_NEED_PASS = "331.1" # v1 of Code 331
GUEST_NAME_OK_NEED_EMAIL = "331.2" # v2 of code 331
NEED_ACCT_FOR_LOGIN = "332"
REQ_FILE_ACTN_PENDING_FURTHER_INFO = "350"
SVC_NOT_AVAIL_CLOSING_CTRL_CNX = "421.1"
TOO_MANY_CONNECTIONS = "421.2"
CANT_OPEN_DATA_CNX = "425"
CNX_CLOSED_TXFR_ABORTED = "426"
REQ_ACTN_ABRTD_FILE_UNAVAIL = "450"
REQ_ACTN_ABRTD_LOCAL_ERR = "451"
REQ_ACTN_ABRTD_INSUFF_STORAGE = "452"
SYNTAX_ERR = "500"
SYNTAX_ERR_IN_ARGS = "501"
CMD_NOT_IMPLMNTD = "502.1"
OPTS_NOT_IMPLEMENTED = '502.2'
BAD_CMD_SEQ = "503"
CMD_NOT_IMPLMNTD_FOR_PARAM = "504"
NOT_LOGGED_IN = "530.1" # v1 of code 530 - please log in
AUTH_FAILURE = "530.2" # v2 of code 530 - authorization failure
NEED_ACCT_FOR_STOR = "532"
FILE_NOT_FOUND = "550.1" # no such file or directory
PERMISSION_DENIED = "550.2" # permission denied
ANON_USER_DENIED = "550.3" # anonymous users can't alter filesystem
IS_NOT_A_DIR = "550.4" # rmd called on a path that is not a directory
REQ_ACTN_NOT_TAKEN = "550.5"
FILE_EXISTS = "550.6"
IS_A_DIR = "550.7"
PAGE_TYPE_UNK = "551"
EXCEEDED_STORAGE_ALLOC = "552"
FILENAME_NOT_ALLOWED = "553"
RESPONSE = {
# -- 100's --
RESTART_MARKER_REPLY: '110 MARK yyyy-mmmm', # TODO: this must be fixed
SERVICE_READY_IN_N_MINUTES: '120 service ready in %s minutes',
DATA_CNX_ALREADY_OPEN_START_XFR: '125 Data connection already open, starting transfer',
FILE_STATUS_OK_OPEN_DATA_CNX: '150 File status okay; about to open data connection.',
# -- 200's --
CMD_OK: '200 Command OK',
TYPE_SET_OK: '200 Type set to %s.',
ENTERING_PORT_MODE: '200 PORT OK',
CMD_NOT_IMPLMNTD_SUPERFLUOUS: '202 Command not implemented, superfluous at this site',
SYS_STATUS_OR_HELP_REPLY: '211 System status reply',
FEAT_OK: ['211-Features:','211 End'],
DIR_STATUS: '212 %s',
FILE_STATUS: '213 %s',
HELP_MSG: '214 help: %s',
NAME_SYS_TYPE: '215 UNIX Type: L8',
WELCOME_MSG: "220 %s",
SVC_READY_FOR_NEW_USER: '220 Service ready',
SVC_CLOSING_CTRL_CNX: '221 Service closing control connection',
GOODBYE_MSG: '221 Goodbye.',
DATA_CNX_OPEN_NO_XFR_IN_PROGRESS: '225 data connection open, no transfer in progress',
CLOSING_DATA_CNX: '226 Abort successful',
TXFR_COMPLETE_OK: '226 Transfer Complete.',
ENTERING_PASV_MODE: '227 Entering Passive Mode (%s).',
ENTERING_EPSV_MODE: '229 Entering Extended Passive Mode (|||%s|).', # where is epsv defined in the rfc's?
USR_LOGGED_IN_PROCEED: '230 User logged in, proceed',
GUEST_LOGGED_IN_PROCEED: '230 Anonymous login ok, access restrictions apply.',
REQ_FILE_ACTN_COMPLETED_OK: '250 Requested File Action Completed OK', #i.e. CWD completed ok
PWD_REPLY: '257 "%s"',
MKD_REPLY: '257 "%s" created',
# -- 300's --
USR_NAME_OK_NEED_PASS: '331 Password required for %s.',
GUEST_NAME_OK_NEED_EMAIL: '331 Guest login ok, type your email address as password.',
NEED_ACCT_FOR_LOGIN: '332 Need account for login.',
REQ_FILE_ACTN_PENDING_FURTHER_INFO: '350 Requested file action pending further information.',
# -- 400's --
SVC_NOT_AVAIL_CLOSING_CTRL_CNX: '421 Service not available, closing control connection.',
TOO_MANY_CONNECTIONS: '421 Too many users right now, try again in a few minutes.',
CANT_OPEN_DATA_CNX: "425 Can't open data connection.",
CNX_CLOSED_TXFR_ABORTED: '426 Transfer aborted. Data connection closed.',
REQ_ACTN_ABRTD_FILE_UNAVAIL: '450 Requested action aborted. File unavailable.',
REQ_ACTN_ABRTD_LOCAL_ERR: '451 Requested action aborted. Local error in processing.',
REQ_ACTN_ABRTD_INSUFF_STORAGE: '452 Requested action aborted. Insufficient storage.',
# -- 500's --
SYNTAX_ERR: "500 Syntax error: %s",
SYNTAX_ERR_IN_ARGS: '501 syntax error in argument(s) %s.',
CMD_NOT_IMPLMNTD: "502 Command '%s' not implemented",
OPTS_NOT_IMPLEMENTED: "502 Option '%s' not implemented.",
BAD_CMD_SEQ: '503 Incorrect sequence of commands: %s',
CMD_NOT_IMPLMNTD_FOR_PARAM: "504 Not implemented for parameter '%s'.",
NOT_LOGGED_IN: '530 Please login with USER and PASS.',
AUTH_FAILURE: '530 Sorry, Authentication failed.',
NEED_ACCT_FOR_STOR: '532 Need an account for storing files',
FILE_NOT_FOUND: '550 %s: No such file or directory.',
PERMISSION_DENIED: '550 %s: Permission denied.',
ANON_USER_DENIED: '550 Anonymous users are forbidden to change the filesystem',
IS_NOT_A_DIR: '550 Cannot rmd, %s is not a directory',
FILE_EXISTS: '550 %s: File exists',
IS_A_DIR: '550 %s: is a directory',
REQ_ACTN_NOT_TAKEN: '550 Requested action not taken: %s',
PAGE_TYPE_UNK: '551 Page type unknown',
EXCEEDED_STORAGE_ALLOC: '552 Requested file action aborted, exceeded file storage allocation',
FILENAME_NOT_ALLOWED: '553 Requested action not taken, file name not allowed'
}
class InvalidPath(Exception):
"""
Internal exception used to signify an error during parsing a path.
"""
def toSegments(cwd, path):
"""
Normalize a path, as represented by a list of strings each
representing one segment of the path.
"""
if path.startswith('/'):
segs = []
else:
segs = cwd[:]
for s in path.split('/'):
if s == '.' or s == '':
continue
elif s == '..':
if segs:
segs.pop()
else:
raise InvalidPath(cwd, path)
elif '\0' in s or '/' in s:
raise InvalidPath(cwd, path)
else:
segs.append(s)
return segs
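# Illustrative behaviour of toSegments (examples added for clarity):
#   toSegments([], 'pub/files')        -> ['pub', 'files']
#   toSegments(['pub'], '/etc')        -> ['etc']   (absolute paths ignore cwd)
#   toSegments(['pub', 'files'], '..') -> ['pub']
#   toSegments([], '..')               -> raises InvalidPath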
def errnoToFailure(e, path):
"""
Map C{OSError} and C{IOError} to standard FTP errors.
"""
if e == errno.ENOENT:
return defer.fail(FileNotFoundError(path))
elif e == errno.EACCES or e == errno.EPERM:
return defer.fail(PermissionDeniedError(path))
elif e == errno.ENOTDIR:
return defer.fail(IsNotADirectoryError(path))
elif e == errno.EEXIST:
return defer.fail(FileExistsError(path))
elif e == errno.EISDIR:
return defer.fail(IsADirectoryError(path))
else:
return defer.fail()
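# Example mapping (added for clarity): errnoToFailure(errno.ENOENT, '/pub/x')
# returns an already-failed Deferred wrapping FileNotFoundError('/pub/x').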
def _isGlobbingExpression(segments=None):
"""
Helper for checking whether an FTPShell `segments` list contains a wildcard
Unix expression.
Only filename globbing is supported.
This means that wildcards can only be present in the last element of
`segments`.
@type segments: C{list}
@param segments: List of path elements as used by the FTP server protocol.
@rtype: Boolean
@return: True if `segments` contains a globbing expression.
"""
if not segments:
return False
# To check that something is a glob expression, we convert it to
# Regular Expression. If the result is the same as the original expression
# then it contains no globbing expression.
globCandidate = segments[-1]
# A set of default regex rules is added to all strings.
emptyTranslations = fnmatch.translate('')
globTranslations = fnmatch.translate(globCandidate)
if globCandidate + emptyTranslations == globTranslations:
return False
else:
return True
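# Illustrative behaviour of _isGlobbingExpression (examples added for clarity):
#   _isGlobbingExpression(['pub', 'docs'])  -> False  (no wildcard)
#   _isGlobbingExpression(['pub', '*.txt']) -> True   (glob in the last segment)
#   _isGlobbingExpression([])               -> False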
class FTPCmdError(Exception):
"""
Generic exception for FTP commands.
"""
def __init__(self, *msg):
Exception.__init__(self, *msg)
self.errorMessage = msg
def response(self):
"""
Generate a FTP response message for this error.
"""
return RESPONSE[self.errorCode] % self.errorMessage
class FileNotFoundError(FTPCmdError):
"""
Raised when trying to access a non existent file or directory.
"""
errorCode = FILE_NOT_FOUND
class AnonUserDeniedError(FTPCmdError):
"""
Raised when an anonymous user issues a command that will alter the
filesystem
"""
errorCode = ANON_USER_DENIED
class PermissionDeniedError(FTPCmdError):
"""
Raised when access is attempted to a resource to which access is
not allowed.
"""
errorCode = PERMISSION_DENIED
class IsNotADirectoryError(FTPCmdError):
"""
Raised when RMD is called on a path that isn't a directory.
"""
errorCode = IS_NOT_A_DIR
class FileExistsError(FTPCmdError):
"""
Raised when attempted to override an existing resource.
"""
errorCode = FILE_EXISTS
class IsADirectoryError(FTPCmdError):
"""
Raised when DELE is called on a path that is a directory.
"""
errorCode = IS_A_DIR
class CmdSyntaxError(FTPCmdError):
"""
Raised when a command syntax is wrong.
"""
errorCode = SYNTAX_ERR
class CmdArgSyntaxError(FTPCmdError):
"""
Raised when a command is called with wrong value or a wrong number of
arguments.
"""
errorCode = SYNTAX_ERR_IN_ARGS
class CmdNotImplementedError(FTPCmdError):
"""
Raised when an unimplemented command is given to the server.
"""
errorCode = CMD_NOT_IMPLMNTD
class CmdNotImplementedForArgError(FTPCmdError):
"""
Raised when the handling of a parameter for a command is not implemented by
the server.
"""
errorCode = CMD_NOT_IMPLMNTD_FOR_PARAM
class FTPError(Exception):
pass
class PortConnectionError(Exception):
pass
class BadCmdSequenceError(FTPCmdError):
"""
Raised when a client sends a series of commands in an illogical sequence.
"""
errorCode = BAD_CMD_SEQ
class AuthorizationError(FTPCmdError):
"""
Raised when client authentication fails.
"""
errorCode = AUTH_FAILURE
def debugDeferred(self, *_):
log.msg('debugDeferred(): %s' % str(_), debug=True)
# -- DTP Protocol --
_months = [
None,
'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
class DTP(object, protocol.Protocol):
implements(interfaces.IConsumer)
isConnected = False
_cons = None
_onConnLost = None
_buffer = None
def connectionMade(self):
self.isConnected = True
self.factory.deferred.callback(None)
self._buffer = []
def connectionLost(self, reason):
self.isConnected = False
if self._onConnLost is not None:
self._onConnLost.callback(None)
def sendLine(self, line):
"""
Send a line to data channel.
@param line: The line to be sent.
@type line: L{bytes}
"""
self.transport.write(line + '\r\n')
def _formatOneListResponse(self, name, size, directory, permissions, hardlinks, modified, owner, group):
def formatMode(mode):
return ''.join([mode & (256 >> n) and 'rwx'[n % 3] or '-' for n in range(9)])
def formatDate(mtime):
now = time.gmtime()
info = {
'month': _months[mtime.tm_mon],
'day': mtime.tm_mday,
'year': mtime.tm_year,
'hour': mtime.tm_hour,
'minute': mtime.tm_min
}
if now.tm_year != mtime.tm_year:
return '%(month)s %(day)02d %(year)5d' % info
else:
return '%(month)s %(day)02d %(hour)02d:%(minute)02d' % info
format = ('%(directory)s%(permissions)s%(hardlinks)4d '
'%(owner)-9s %(group)-9s %(size)15d %(date)12s '
'%(name)s')
return format % {
'directory': directory and 'd' or '-',
'permissions': formatMode(permissions),
'hardlinks': hardlinks,
'owner': owner[:8],
'group': group[:8],
'size': size,
'date': formatDate(time.gmtime(modified)),
'name': name}
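# Illustrative listing lines produced by the format above (placeholder values,
# added for clarity):
#   -rw-r--r--    1 twisted   twisted            1024 Jan 01 12:34 README
#   drwxr-xr-x    2 twisted   twisted            4096 Dec 31  2013 pub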
def sendListResponse(self, name, response):
self.sendLine(self._formatOneListResponse(name, *response))
# Proxy IConsumer to our transport
def registerProducer(self, producer, streaming):
return self.transport.registerProducer(producer, streaming)
def unregisterProducer(self):
self.transport.unregisterProducer()
self.transport.loseConnection()
def write(self, data):
if self.isConnected:
return self.transport.write(data)
raise Exception("Crap damn crap damn crap damn")
# Pretend to be a producer, too.
def _conswrite(self, bytes):
try:
self._cons.write(bytes)
except:
self._onConnLost.errback()
def dataReceived(self, bytes):
if self._cons is not None:
self._conswrite(bytes)
else:
self._buffer.append(bytes)
def _unregConsumer(self, ignored):
self._cons.unregisterProducer()
self._cons = None
del self._onConnLost
return ignored
def registerConsumer(self, cons):
assert self._cons is None
self._cons = cons
self._cons.registerProducer(self, True)
for chunk in self._buffer:
self._conswrite(chunk)
self._buffer = None
if self.isConnected:
self._onConnLost = d = defer.Deferred()
d.addBoth(self._unregConsumer)
return d
else:
self._cons.unregisterProducer()
self._cons = None
return defer.succeed(None)
def resumeProducing(self):
self.transport.resumeProducing()
def pauseProducing(self):
self.transport.pauseProducing()
def stopProducing(self):
self.transport.stopProducing()
class DTPFactory(protocol.ClientFactory):
"""
Client factory for I{data transfer process} protocols.
@ivar peerCheck: perform checks to make sure the ftp-pi's peer is the same
as the dtp's
@ivar pi: a reference to this factory's protocol interpreter
@ivar _state: Indicates the current state of the DTPFactory. Initially,
this is L{_IN_PROGRESS}. If the connection fails or times out, it is
L{_FAILED}. If the connection succeeds before the timeout, it is
L{_FINISHED}.
"""
_IN_PROGRESS = object()
_FAILED = object()
_FINISHED = object()
_state = _IN_PROGRESS
# -- configuration variables --
peerCheck = False
# -- class variables --
def __init__(self, pi, peerHost=None, reactor=None):
"""
Constructor
@param pi: this factory's protocol interpreter
@param peerHost: if peerCheck is True, this is the tuple that the
generated instance will use to perform security checks
"""
self.pi = pi # the protocol interpreter that is using this factory
self.peerHost = peerHost # the from FTP.transport.peerHost()
self.deferred = defer.Deferred() # deferred will fire when instance is connected
self.delayedCall = None
if reactor is None:
from twisted.internet import reactor
self._reactor = reactor
def buildProtocol(self, addr):
log.msg('DTPFactory.buildProtocol', debug=True)
if self._state is not self._IN_PROGRESS:
return None
self._state = self._FINISHED
self.cancelTimeout()
p = DTP()
p.factory = self
p.pi = self.pi
self.pi.dtpInstance = p
return p
def stopFactory(self):
log.msg('dtpFactory.stopFactory', debug=True)
self.cancelTimeout()
def timeoutFactory(self):
log.msg('timed out waiting for DTP connection')
if self._state is not self._IN_PROGRESS:
return
self._state = self._FAILED
d = self.deferred
self.deferred = None
d.errback(
PortConnectionError(defer.TimeoutError("DTPFactory timeout")))
def cancelTimeout(self):
if self.delayedCall is not None and self.delayedCall.active():
log.msg('cancelling DTP timeout', debug=True)
self.delayedCall.cancel()
def setTimeout(self, seconds):
log.msg('DTPFactory.setTimeout set to %s seconds' % seconds)
self.delayedCall = self._reactor.callLater(seconds, self.timeoutFactory)
def clientConnectionFailed(self, connector, reason):
if self._state is not self._IN_PROGRESS:
return
self._state = self._FAILED
d = self.deferred
self.deferred = None
d.errback(PortConnectionError(reason))
# -- FTP-PI (Protocol Interpreter) --
class ASCIIConsumerWrapper(object):
def __init__(self, cons):
self.cons = cons
self.registerProducer = cons.registerProducer
self.unregisterProducer = cons.unregisterProducer
assert os.linesep == "\r\n" or len(os.linesep) == 1, "Unsupported platform (yea right like this even exists)"
if os.linesep == "\r\n":
self.write = cons.write
def write(self, bytes):
return self.cons.write(bytes.replace(os.linesep, "\r\n"))
class FileConsumer(object):
"""
A consumer for FTP input that writes data to a file.
@ivar fObj: a file object opened for writing, used to write data received.
@type fObj: C{file}
"""
implements(interfaces.IConsumer)
def __init__(self, fObj):
self.fObj = fObj
def registerProducer(self, producer, streaming):
self.producer = producer
assert streaming
def unregisterProducer(self):
self.producer = None
self.fObj.close()
def write(self, bytes):
self.fObj.write(bytes)
class FTPOverflowProtocol(basic.LineReceiver):
"""FTP mini-protocol for when there are too many connections."""
def connectionMade(self):
self.sendLine(RESPONSE[TOO_MANY_CONNECTIONS])
self.transport.loseConnection()
class FTP(object, basic.LineReceiver, policies.TimeoutMixin):
"""
Protocol Interpreter for the File Transfer Protocol
@ivar state: The current server state. One of L{UNAUTH},
L{INAUTH}, L{AUTHED}, L{RENAMING}.
@ivar shell: The connected avatar
@ivar binary: The transfer mode. If false, ASCII.
@ivar dtpFactory: Generates a single DTP for this session
@ivar dtpPort: Port returned from listenTCP
@ivar listenFactory: A callable with the signature of
L{twisted.internet.interfaces.IReactorTCP.listenTCP} which will be used
to create Ports for passive connections (mainly for testing).
@ivar passivePortRange: iterator used as source of passive port numbers.
@type passivePortRange: C{iterator}
"""
disconnected = False
# States an FTP can be in
UNAUTH, INAUTH, AUTHED, RENAMING = range(4)
# how long the DTP waits for a connection
dtpTimeout = 10
portal = None
shell = None
dtpFactory = None
dtpPort = None
dtpInstance = None
binary = True
PUBLIC_COMMANDS = ['FEAT', 'QUIT']
FEATURES = ['FEAT', 'MDTM', 'PASV', 'SIZE', 'TYPE A;I']
passivePortRange = xrange(0, 1)
listenFactory = reactor.listenTCP
def reply(self, key, *args):
msg = RESPONSE[key] % args
self.sendLine(msg)
def connectionMade(self):
self.state = self.UNAUTH
self.setTimeout(self.timeOut)
self.reply(WELCOME_MSG, self.factory.welcomeMessage)
def connectionLost(self, reason):
# if we have a DTP protocol instance running and
# we lose connection to the client's PI, kill the
# DTP connection and close the port
if self.dtpFactory:
self.cleanupDTP()
self.setTimeout(None)
if hasattr(self.shell, 'logout') and self.shell.logout is not None:
self.shell.logout()
self.shell = None
self.transport = None
def timeoutConnection(self):
self.transport.loseConnection()
def lineReceived(self, line):
self.resetTimeout()
self.pauseProducing()
def processFailed(err):
if err.check(FTPCmdError):
self.sendLine(err.value.response())
elif (err.check(TypeError) and
err.value.args[0].find('takes exactly') != -1):
self.reply(SYNTAX_ERR, "%s requires an argument." % (cmd,))
else:
log.msg("Unexpected FTP error")
log.err(err)
self.reply(REQ_ACTN_NOT_TAKEN, "internal server error")
def processSucceeded(result):
if isinstance(result, tuple):
self.reply(*result)
elif result is not None:
self.reply(result)
def allDone(ignored):
if not self.disconnected:
self.resumeProducing()
spaceIndex = line.find(' ')
if spaceIndex != -1:
cmd = line[:spaceIndex]
args = (line[spaceIndex + 1:],)
else:
cmd = line
args = ()
d = defer.maybeDeferred(self.processCommand, cmd, *args)
d.addCallbacks(processSucceeded, processFailed)
d.addErrback(log.err)
# XXX It burnsss
# LineReceiver doesn't let you resumeProducing inside
# lineReceived atm
from twisted.internet import reactor
reactor.callLater(0, d.addBoth, allDone)
def processCommand(self, cmd, *params):
def call_ftp_command(command):
method = getattr(self, "ftp_" + command, None)
if method is not None:
return method(*params)
return defer.fail(CmdNotImplementedError(command))
cmd = cmd.upper()
if cmd in self.PUBLIC_COMMANDS:
return call_ftp_command(cmd)
elif self.state == self.UNAUTH:
if cmd == 'USER':
return self.ftp_USER(*params)
elif cmd == 'PASS':
return BAD_CMD_SEQ, "USER required before PASS"
else:
return NOT_LOGGED_IN
elif self.state == self.INAUTH:
if cmd == 'PASS':
return self.ftp_PASS(*params)
else:
return BAD_CMD_SEQ, "PASS required after USER"
elif self.state == self.AUTHED:
return call_ftp_command(cmd)
elif self.state == self.RENAMING:
if cmd == 'RNTO':
return self.ftp_RNTO(*params)
else:
return BAD_CMD_SEQ, "RNTO required after RNFR"
def getDTPPort(self, factory):
"""
Return a port for passive access, using C{self.passivePortRange}
attribute.
"""
for portn in self.passivePortRange:
try:
dtpPort = self.listenFactory(portn, factory)
except error.CannotListenError:
continue
else:
return dtpPort
raise error.CannotListenError('', portn,
"No port available in range %s" %
(self.passivePortRange,))
def ftp_USER(self, username):
"""
First part of login. Get the username the peer wants to
authenticate as.
"""
if not username:
return defer.fail(CmdSyntaxError('USER requires an argument'))
self._user = username
self.state = self.INAUTH
if self.factory.allowAnonymous and self._user == self.factory.userAnonymous:
return GUEST_NAME_OK_NEED_EMAIL
else:
return (USR_NAME_OK_NEED_PASS, username)
# TODO: add max auth try before timeout from ip...
# TODO: need to implement minimal ABOR command
def ftp_PASS(self, password):
"""
Second part of login. Get the password the peer wants to
authenticate with.
"""
if self.factory.allowAnonymous and self._user == self.factory.userAnonymous:
# anonymous login
creds = credentials.Anonymous()
reply = GUEST_LOGGED_IN_PROCEED
else:
# user login
creds = credentials.UsernamePassword(self._user, password)
reply = USR_LOGGED_IN_PROCEED
del self._user
def _cbLogin((interface, avatar, logout)):
assert interface is IFTPShell, "The realm is busted, jerk."
self.shell = avatar
self.logout = logout
self.workingDirectory = []
self.state = self.AUTHED
return reply
def _ebLogin(failure):
failure.trap(cred_error.UnauthorizedLogin, cred_error.UnhandledCredentials)
self.state = self.UNAUTH
raise AuthorizationError
d = self.portal.login(creds, None, IFTPShell)
d.addCallbacks(_cbLogin, _ebLogin)
return d
def ftp_PASV(self):
"""
Request for a passive connection
from the rfc::
This command requests the server-DTP to \"listen\" on a data port
(which is not its default data port) and to wait for a connection
rather than initiate one upon receipt of a transfer command. The
response to this command includes the host and port address this
server is listening on.
"""
# if we have a DTP port set up, lose it.
if self.dtpFactory is not None:
# cleanupDTP sets dtpFactory to none. Later we'll do
# cleanup here or something.
self.cleanupDTP()
self.dtpFactory = DTPFactory(pi=self)
self.dtpFactory.setTimeout(self.dtpTimeout)
self.dtpPort = self.getDTPPort(self.dtpFactory)
host = self.transport.getHost().host
port = self.dtpPort.getHost().port
self.reply(ENTERING_PASV_MODE, encodeHostPort(host, port))
return self.dtpFactory.deferred.addCallback(lambda ign: None)
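# Example reply produced above (added for clarity): the %s placeholder in
# ENTERING_PASV_MODE is the host/port encoded as comma-separated byte values,
# with the port split into high and low bytes, e.g.
#   227 Entering Passive Mode (127,0,0,1,217,56).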
def ftp_PORT(self, address):
addr = map(int, address.split(','))
ip = '%d.%d.%d.%d' % tuple(addr[:4])
port = addr[4] << 8 | addr[5]
# if we have a DTP port set up, lose it.
if self.dtpFactory is not None:
self.cleanupDTP()
self.dtpFactory = DTPFactory(pi=self, peerHost=self.transport.getPeer().host)
self.dtpFactory.setTimeout(self.dtpTimeout)
self.dtpPort = reactor.connectTCP(ip, port, self.dtpFactory)
def connected(ignored):
return ENTERING_PORT_MODE
def connFailed(err):
err.trap(PortConnectionError)
return CANT_OPEN_DATA_CNX
return self.dtpFactory.deferred.addCallbacks(connected, connFailed)
def _encodeName(self, name):
"""
Encode C{name} to be sent over the wire.
This encodes L{unicode} objects as UTF-8 and leaves L{bytes} as-is.
As described by U{RFC 3659 section
2.2<https://tools.ietf.org/html/rfc3659#section-2.2>}::
Various FTP commands take pathnames as arguments, or return
pathnames in responses. When the MLST command is supported, as
indicated in the response to the FEAT command, pathnames are to be
transferred in one of the following two formats.
pathname = utf-8-name / raw
utf-8-name = <a UTF-8 encoded Unicode string>
raw = <any string that is not a valid UTF-8 encoding>
Which format is used is at the option of the user-PI or server-PI
sending the pathname.
@param name: Name to be encoded.
@type name: L{bytes} or L{unicode}
@return: Wire format of C{name}.
@rtype: L{bytes}
"""
if isinstance(name, unicode):
return name.encode('utf-8')
return name
def ftp_LIST(self, path=''):
""" This command causes a list to be sent from the server to the
passive DTP. If the pathname specifies a directory or other
group of files, the server should transfer a list of files
in the specified directory. If the pathname specifies a
file then the server should send current information on the
file. A null argument implies the user's current working or
default directory.
"""
# Uh, for now, do this crude thing.
if self.dtpInstance is None or not self.dtpInstance.isConnected:
return defer.fail(BadCmdSequenceError('must send PORT or PASV before RETR'))
# Various clients send flags like -L or -al etc. We just ignore them.
if path.lower() in ['-a', '-l', '-la', '-al']:
path = ''
def gotListing(results):
self.reply(DATA_CNX_ALREADY_OPEN_START_XFR)
for (name, attrs) in results:
name = self._encodeName(name)
self.dtpInstance.sendListResponse(name, attrs)
self.dtpInstance.transport.loseConnection()
return (TXFR_COMPLETE_OK,)
try:
segments = toSegments(self.workingDirectory, path)
except InvalidPath:
return defer.fail(FileNotFoundError(path))
d = self.shell.list(
segments,
('size', 'directory', 'permissions', 'hardlinks',
'modified', 'owner', 'group'))
d.addCallback(gotListing)
return d
def ftp_NLST(self, path):
"""
This command causes a directory listing to be sent from the server to
the client. The pathname should specify a directory or other
system-specific file group descriptor. An empty path implies the current
working directory. If the path is non-existent, send nothing. If the
path is to a file, send only the file name.
@type path: C{str}
@param path: The path for which a directory listing should be returned.
@rtype: L{Deferred}
@return: a L{Deferred} which will be fired when the listing request
is finished.
"""
# XXX: why is this check different from ftp_RETR/ftp_STOR? See #4180
if self.dtpInstance is None or not self.dtpInstance.isConnected:
return defer.fail(
BadCmdSequenceError('must send PORT or PASV before RETR'))
try:
segments = toSegments(self.workingDirectory, path)
except InvalidPath:
return defer.fail(FileNotFoundError(path))
def cbList(results, glob):
"""
Send, line by line, each matching file in the directory listing, and
then close the connection.
@type results: A C{list} of C{tuple}. The first element of each
C{tuple} is a C{str} and the second element is a C{list}.
@param results: The names of the files in the directory.
@param glob: A shell-style glob through which to filter results (see
U{http://docs.python.org/2/library/fnmatch.html}), or C{None}
for no filtering.
@type glob: L{str} or L{NoneType}
@return: A C{tuple} containing the status code for a successful
transfer.
@rtype: C{tuple}
"""
self.reply(DATA_CNX_ALREADY_OPEN_START_XFR)
for (name, ignored) in results:
if not glob or (glob and fnmatch.fnmatch(name, glob)):
name = self._encodeName(name)
self.dtpInstance.sendLine(name)
self.dtpInstance.transport.loseConnection()
return (TXFR_COMPLETE_OK,)
def listErr(results):
"""
RFC 959 specifies that an NLST request may only return directory
listings. Thus, send nothing and just close the connection.
@type results: L{Failure}
@param results: The L{Failure} wrapping a L{FileNotFoundError} that
occurred while trying to list the contents of a nonexistent
directory.
@returns: A C{tuple} containing the status code for a successful
transfer.
@rtype: C{tuple}
"""
self.dtpInstance.transport.loseConnection()
return (TXFR_COMPLETE_OK,)
if _isGlobbingExpression(segments):
# Remove globbing expression from path
# and keep to be used for filtering.
glob = segments.pop()
else:
glob = None
d = self.shell.list(segments)
d.addCallback(cbList, glob)
# self.shell.list will generate an error if the path is invalid
d.addErrback(listErr)
return d
def ftp_CWD(self, path):
try:
segments = toSegments(self.workingDirectory, path)
except InvalidPath:
# XXX Eh, what to fail with here?
return defer.fail(FileNotFoundError(path))
def accessGranted(result):
self.workingDirectory = segments
return (REQ_FILE_ACTN_COMPLETED_OK,)
return self.shell.access(segments).addCallback(accessGranted)
def ftp_CDUP(self):
return self.ftp_CWD('..')
def ftp_PWD(self):
return (PWD_REPLY, '/' + '/'.join(self.workingDirectory))
def ftp_RETR(self, path):
"""
This command causes the content of a file to be sent over the data
transfer channel. If the path is to a folder, an error will be raised.
@type path: C{str}
@param path: The path to the file which should be transferred over the
data transfer channel.
@rtype: L{Deferred}
@return: a L{Deferred} which will be fired when the transfer is done.
"""
if self.dtpInstance is None:
raise BadCmdSequenceError('PORT or PASV required before RETR')
try:
newsegs = toSegments(self.workingDirectory, path)
except InvalidPath:
return defer.fail(FileNotFoundError(path))
# XXX For now, just disable the timeout. Later we'll want to
# leave it active and have the DTP connection reset it
# periodically.
self.setTimeout(None)
# Put it back later
def enableTimeout(result):
self.setTimeout(self.factory.timeOut)
return result
# And away she goes
if not self.binary:
cons = ASCIIConsumerWrapper(self.dtpInstance)
else:
cons = self.dtpInstance
def cbSent(result):
return (TXFR_COMPLETE_OK,)
def ebSent(err):
log.msg("Unexpected error attempting to transmit file to client:")
log.err(err)
if err.check(FTPCmdError):
return err
return (CNX_CLOSED_TXFR_ABORTED,)
def cbOpened(file):
# Tell them what to doooo
if self.dtpInstance.isConnected:
self.reply(DATA_CNX_ALREADY_OPEN_START_XFR)
else:
self.reply(FILE_STATUS_OK_OPEN_DATA_CNX)
d = file.send(cons)
d.addCallbacks(cbSent, ebSent)
return d
def ebOpened(err):
if not err.check(PermissionDeniedError, FileNotFoundError, IsADirectoryError):
log.msg("Unexpected error attempting to open file for transmission:")
log.err(err)
if err.check(FTPCmdError):
return (err.value.errorCode, '/'.join(newsegs))
return (FILE_NOT_FOUND, '/'.join(newsegs))
d = self.shell.openForReading(newsegs)
d.addCallbacks(cbOpened, ebOpened)
d.addBoth(enableTimeout)
# Pass back Deferred that fires when the transfer is done
return d
def ftp_STOR(self, path):
"""
STORE (STOR)
This command causes the server-DTP to accept the data
transferred via the data connection and to store the data as
a file at the server site. If the file specified in the
pathname exists at the server site, then its contents shall
be replaced by the data being transferred. A new file is
created at the server site if the file specified in the
pathname does not already exist.
"""
if self.dtpInstance is None:
raise BadCmdSequenceError('PORT or PASV required before STOR')
try:
newsegs = toSegments(self.workingDirectory, path)
except InvalidPath:
return defer.fail(FileNotFoundError(path))
# XXX For now, just disable the timeout. Later we'll want to
# leave it active and have the DTP connection reset it
# periodically.
self.setTimeout(None)
# Put it back later
def enableTimeout(result):
self.setTimeout(self.factory.timeOut)
return result
def cbOpened(file):
"""
File was opened for writing. Launch the data transfer channel via
the file consumer.
"""
d = file.receive()
d.addCallback(cbConsumer)
d.addCallback(lambda ignored: file.close())
d.addCallbacks(cbSent, ebSent)
return d
def ebOpened(err):
"""
Called when the file could not be opened for writing.
For known errors, return the FTP error code.
For all others, return a file not found error.
"""
if isinstance(err.value, FTPCmdError):
return (err.value.errorCode, '/'.join(newsegs))
log.err(err, "Unexpected error received while opening file:")
return (FILE_NOT_FOUND, '/'.join(newsegs))
def cbConsumer(cons):
"""
Called after the file was opened for writing.
Prepare the data transfer channel and send the response
to the command channel.
"""
if not self.binary:
cons = ASCIIConsumerWrapper(cons)
d = self.dtpInstance.registerConsumer(cons)
# Tell them what to doooo
if self.dtpInstance.isConnected:
self.reply(DATA_CNX_ALREADY_OPEN_START_XFR)
else:
self.reply(FILE_STATUS_OK_OPEN_DATA_CNX)
return d
def cbSent(result):
"""
Called from the data transport when the transfer is done.
"""
return (TXFR_COMPLETE_OK,)
def ebSent(err):
"""
Called from data transport when there are errors during the
transfer.
"""
log.err(err, "Unexpected error received during transfer:")
if err.check(FTPCmdError):
return err
return (CNX_CLOSED_TXFR_ABORTED,)
d = self.shell.openForWriting(newsegs)
d.addCallbacks(cbOpened, ebOpened)
d.addBoth(enableTimeout)
# Pass back Deferred that fires when the transfer is done
return d
def ftp_SIZE(self, path):
"""
File SIZE
The FTP command, SIZE OF FILE (SIZE), is used to obtain the transfer
size of a file from the server-FTP process. This is the exact number
of octets (8 bit bytes) that would be transmitted over the data
connection should that file be transmitted. This value will change
depending on the current STRUcture, MODE, and TYPE of the data
connection or of a data connection that would be created were one
created now. Thus, the result of the SIZE command is dependent on
the currently established STRU, MODE, and TYPE parameters.
The SIZE command returns how many octets would be transferred if the
file were to be transferred using the current transfer structure,
mode, and type. This command is normally used in conjunction with
the RESTART (REST) command when STORing a file to a remote server in
STREAM mode, to determine the restart point. The server-PI might
need to read the partially transferred file, do any appropriate
conversion, and count the number of octets that would be generated
when sending the file in order to correctly respond to this command.
Estimates of the file transfer size MUST NOT be returned; only
precise information is acceptable.
http://tools.ietf.org/html/rfc3659
"""
try:
newsegs = toSegments(self.workingDirectory, path)
except InvalidPath:
return defer.fail(FileNotFoundError(path))
def cbStat((size,)):
return (FILE_STATUS, str(size))
return self.shell.stat(newsegs, ('size',)).addCallback(cbStat)
def ftp_MDTM(self, path):
"""
File Modification Time (MDTM)
The FTP command, MODIFICATION TIME (MDTM), can be used to determine
when a file in the server NVFS was last modified. This command has
existed in many FTP servers for many years, as an adjunct to the REST
command for STREAM mode, thus is widely available. However, where
supported, the "modify" fact that can be provided in the result from
the new MLST command is recommended as a superior alternative.
http://tools.ietf.org/html/rfc3659
"""
try:
newsegs = toSegments(self.workingDirectory, path)
except InvalidPath:
return defer.fail(FileNotFoundError(path))
def cbStat((modified,)):
return (FILE_STATUS, time.strftime('%Y%m%d%H%M%S', time.gmtime(modified)))
return self.shell.stat(newsegs, ('modified',)).addCallback(cbStat)
def ftp_TYPE(self, type):
"""
REPRESENTATION TYPE (TYPE)
The argument specifies the representation type as described
in the Section on Data Representation and Storage. Several
types take a second parameter. The first parameter is
denoted by a single Telnet character, as is the second
Format parameter for ASCII and EBCDIC; the second parameter
for local byte is a decimal integer to indicate Bytesize.
The parameters are separated by a <SP> (Space, ASCII code
32).
"""
p = type.upper()
if p:
f = getattr(self, 'type_' + p[0], None)
if f is not None:
return f(p[1:])
return self.type_UNKNOWN(p)
return (SYNTAX_ERR,)
def type_A(self, code):
if code == '' or code == 'N':
self.binary = False
return (TYPE_SET_OK, 'A' + code)
else:
return defer.fail(CmdArgSyntaxError(code))
def type_I(self, code):
if code == '':
self.binary = True
return (TYPE_SET_OK, 'I')
else:
return defer.fail(CmdArgSyntaxError(code))
def type_UNKNOWN(self, code):
return defer.fail(CmdNotImplementedForArgError(code))
def ftp_SYST(self):
return NAME_SYS_TYPE
def ftp_STRU(self, structure):
p = structure.upper()
if p == 'F':
return (CMD_OK,)
return defer.fail(CmdNotImplementedForArgError(structure))
def ftp_MODE(self, mode):
p = mode.upper()
if p == 'S':
return (CMD_OK,)
return defer.fail(CmdNotImplementedForArgError(mode))
def ftp_MKD(self, path):
try:
newsegs = toSegments(self.workingDirectory, path)
except InvalidPath:
return defer.fail(FileNotFoundError(path))
return self.shell.makeDirectory(newsegs).addCallback(lambda ign: (MKD_REPLY, path))
def ftp_RMD(self, path):
try:
newsegs = toSegments(self.workingDirectory, path)
except InvalidPath:
return defer.fail(FileNotFoundError(path))
return self.shell.removeDirectory(newsegs).addCallback(lambda ign: (REQ_FILE_ACTN_COMPLETED_OK,))
def ftp_DELE(self, path):
try:
newsegs = toSegments(self.workingDirectory, path)
except InvalidPath:
return defer.fail(FileNotFoundError(path))
return self.shell.removeFile(newsegs).addCallback(lambda ign: (REQ_FILE_ACTN_COMPLETED_OK,))
def ftp_NOOP(self):
return (CMD_OK,)
def ftp_RNFR(self, fromName):
self._fromName = fromName
self.state = self.RENAMING
return (REQ_FILE_ACTN_PENDING_FURTHER_INFO,)
def ftp_RNTO(self, toName):
fromName = self._fromName
del self._fromName
self.state = self.AUTHED
try:
fromsegs = toSegments(self.workingDirectory, fromName)
tosegs = toSegments(self.workingDirectory, toName)
except InvalidPath:
return defer.fail(FileNotFoundError(fromName))
return self.shell.rename(fromsegs, tosegs).addCallback(lambda ign: (REQ_FILE_ACTN_COMPLETED_OK,))
def ftp_FEAT(self):
"""
Advertise the features supported by the server.
http://tools.ietf.org/html/rfc2389
"""
self.sendLine(RESPONSE[FEAT_OK][0])
for feature in self.FEATURES:
self.sendLine(' ' + feature)
self.sendLine(RESPONSE[FEAT_OK][1])
def ftp_OPTS(self, option):
"""
Handle OPTS command.
http://tools.ietf.org/html/draft-ietf-ftpext-utf-8-option-00
"""
return self.reply(OPTS_NOT_IMPLEMENTED, option)
def ftp_QUIT(self):
self.reply(GOODBYE_MSG)
self.transport.loseConnection()
self.disconnected = True
def cleanupDTP(self):
"""
Call when DTP connection exits
"""
log.msg('cleanupDTP', debug=True)
log.msg(self.dtpPort)
dtpPort, self.dtpPort = self.dtpPort, None
if interfaces.IListeningPort.providedBy(dtpPort):
dtpPort.stopListening()
elif interfaces.IConnector.providedBy(dtpPort):
dtpPort.disconnect()
else:
assert False, "dtpPort should be an IListeningPort or IConnector, instead is %r" % (dtpPort,)
self.dtpFactory.stopFactory()
self.dtpFactory = None
if self.dtpInstance is not None:
self.dtpInstance = None
class FTPFactory(policies.LimitTotalConnectionsFactory):
"""
A factory for producing ftp protocol instances
@ivar timeOut: the protocol interpreter's idle timeout time in seconds,
default is 600 seconds.
@ivar passivePortRange: value forwarded to C{protocol.passivePortRange}.
@type passivePortRange: C{iterator}
"""
protocol = FTP
overflowProtocol = FTPOverflowProtocol
allowAnonymous = True
userAnonymous = 'anonymous'
timeOut = 600
welcomeMessage = "Twisted %s FTP Server" % (copyright.version,)
passivePortRange = xrange(0, 1)
def __init__(self, portal=None, userAnonymous='anonymous'):
self.portal = portal
self.userAnonymous = userAnonymous
self.instances = []
def buildProtocol(self, addr):
p = policies.LimitTotalConnectionsFactory.buildProtocol(self, addr)
if p is not None:
p.wrappedProtocol.portal = self.portal
p.wrappedProtocol.timeOut = self.timeOut
p.wrappedProtocol.passivePortRange = self.passivePortRange
return p
def stopFactory(self):
# make sure ftp instance's timeouts are set to None
# to avoid reactor complaints
[p.setTimeout(None) for p in self.instances if p.timeOut is not None]
policies.LimitTotalConnectionsFactory.stopFactory(self)
# -- Cred Objects --
class IFTPShell(Interface):
"""
An abstraction of the shell commands used by the FTP protocol for
a given user account.
All path names must be absolute.
"""
def makeDirectory(path):
"""
Create a directory.
@param path: The path, as a list of segments, to create
@type path: C{list} of C{unicode}
@return: A Deferred which fires when the directory has been
created, or which fails if the directory cannot be created.
"""
def removeDirectory(path):
"""
Remove a directory.
@param path: The path, as a list of segments, to remove
@type path: C{list} of C{unicode}
@return: A Deferred which fires when the directory has been
removed, or which fails if the directory cannot be removed.
"""
def removeFile(path):
"""
Remove a file.
@param path: The path, as a list of segments, to remove
@type path: C{list} of C{unicode}
@return: A Deferred which fires when the file has been
removed, or which fails if the file cannot be removed.
"""
def rename(fromPath, toPath):
"""
Rename a file or directory.
@param fromPath: The current name of the path.
@type fromPath: C{list} of C{unicode}
@param toPath: The desired new name of the path.
@type toPath: C{list} of C{unicode}
@return: A Deferred which fires when the path has been
renamed, or which fails if the path cannot be renamed.
"""
def access(path):
"""
Determine whether access to the given path is allowed.
@param path: The path, as a list of segments
@return: A Deferred which fires with None if access is allowed
or which fails with a specific exception type if access is
denied.
"""
def stat(path, keys=()):
"""
Retrieve information about the given path.
This is like list, except it will never return results about
child paths.
"""
def list(path, keys=()):
"""
Retrieve information about the given path.
If the path represents a non-directory, the result list should
have only one entry with information about that non-directory.
Otherwise, the result list should have an element for each
child of the directory.
@param path: The path, as a list of segments, to list
@type path: C{list} of C{unicode} or C{bytes}
@param keys: A tuple of keys desired in the resulting
dictionaries.
@return: A Deferred which fires with a list of (name, list),
where the name is the name of the entry as a unicode string or
bytes and each list contains values corresponding to the requested
keys. The following are possible elements of keys, and the
values which should be returned for them:
- C{'size'}: size in bytes, as an integer (this is kinda required)
- C{'directory'}: boolean indicating the type of this entry
- C{'permissions'}: a bitvector (see os.stat(foo).st_mode)
- C{'hardlinks'}: Number of hard links to this entry
- C{'modified'}: number of seconds since the epoch since entry was
modified
- C{'owner'}: string indicating the user owner of this entry
- C{'group'}: string indicating the group owner of this entry
"""
def openForReading(path):
"""
@param path: The path, as a list of segments, to open
@type path: C{list} of C{unicode}
@rtype: C{Deferred} which will fire with L{IReadFile}
"""
def openForWriting(path):
"""
@param path: The path, as a list of segments, to open
@type path: C{list} of C{unicode}
@rtype: C{Deferred} which will fire with L{IWriteFile}
"""
class IReadFile(Interface):
"""
A file out of which bytes may be read.
"""
def send(consumer):
"""
Produce the contents of the given path to the given consumer. This
method may only be invoked once on each provider.
@type consumer: C{IConsumer}
@return: A Deferred which fires when the file has been
consumed completely.
"""
class IWriteFile(Interface):
"""
A file into which bytes may be written.
"""
def receive():
"""
Create a consumer which will write to this file. This method may
only be invoked once on each provider.
@rtype: C{Deferred} of C{IConsumer}
"""
def close():
"""
Perform any post-write work that needs to be done. This method may
only be invoked once on each provider, and will always be invoked
after receive().
@rtype: C{Deferred} of anything: the value is ignored. The FTP client
will not see their upload request complete until this Deferred has
been fired.
"""
def _getgroups(uid):
"""
Return the primary and supplementary groups for the given UID.
@type uid: C{int}
"""
result = []
pwent = pwd.getpwuid(uid)
result.append(pwent.pw_gid)
for grent in grp.getgrall():
if pwent.pw_name in grent.gr_mem:
result.append(grent.gr_gid)
return result
def _testPermissions(uid, gid, spath, mode='r'):
"""
checks to see if uid has proper permissions to access path with mode
@type uid: C{int}
@param uid: numeric user id
@type gid: C{int}
@param gid: numeric group id
@type spath: C{str}
@param spath: the path on the server to test
@type mode: C{str}
@param mode: 'r' or 'w' (read or write)
@rtype: C{bool}
@return: True if the given credentials have the specified form of
access to the given path
"""
if mode == 'r':
usr = stat.S_IRUSR
grp = stat.S_IRGRP
oth = stat.S_IROTH
amode = os.R_OK
elif mode == 'w':
usr = stat.S_IWUSR
grp = stat.S_IWGRP
oth = stat.S_IWOTH
amode = os.W_OK
else:
raise ValueError("Invalid mode %r: must specify 'r' or 'w'" % (mode,))
access = False
if os.path.exists(spath):
if uid == 0:
access = True
else:
s = os.stat(spath)
if usr & s.st_mode and uid == s.st_uid:
access = True
elif grp & s.st_mode and gid in _getgroups(uid):
access = True
elif oth & s.st_mode:
access = True
if access:
if not os.access(spath, amode):
access = False
log.msg("Filesystem grants permission to UID %d but it is inaccessible to me running as UID %d" % (
uid, os.getuid()))
return access
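# Illustrative call (added for clarity): check whether uid 1000 / gid 1000 may
# read a path on the server:
#   _testPermissions(1000, 1000, '/srv/ftp/pub/readme.txt', 'r')  # -> bool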
class FTPAnonymousShell(object):
"""
An anonymous implementation of IFTPShell
@type filesystemRoot: L{twisted.python.filepath.FilePath}
@ivar filesystemRoot: The path which is considered the root of
this shell.
"""
implements(IFTPShell)
def __init__(self, filesystemRoot):
self.filesystemRoot = filesystemRoot
def _path(self, path):
return self.filesystemRoot.descendant(path)
def makeDirectory(self, path):
return defer.fail(AnonUserDeniedError())
def removeDirectory(self, path):
return defer.fail(AnonUserDeniedError())
def removeFile(self, path):
return defer.fail(AnonUserDeniedError())
def rename(self, fromPath, toPath):
return defer.fail(AnonUserDeniedError())
def receive(self, path):
path = self._path(path)
return defer.fail(AnonUserDeniedError())
def openForReading(self, path):
"""
Open C{path} for reading.
@param path: The path, as a list of segments, to open.
@type path: C{list} of C{unicode}
@return: A L{Deferred} is returned that will fire with an object
implementing L{IReadFile} if the file is successfully opened. If
C{path} is a directory, or if an exception is raised while trying
to open the file, the L{Deferred} will fire with an error.
"""
p = self._path(path)
if p.isdir():
# Normally, we would only check for EISDIR in open, but win32
# returns EACCES in this case, so we check before
return defer.fail(IsADirectoryError(path))
try:
f = p.open('r')
except (IOError, OSError), e:
return errnoToFailure(e.errno, path)
except:
return defer.fail()
else:
return defer.succeed(_FileReader(f))
def openForWriting(self, path):
"""
Reject write attempts by anonymous users with
L{PermissionDeniedError}.
"""
return defer.fail(PermissionDeniedError("STOR not allowed"))
def access(self, path):
p = self._path(path)
if not p.exists():
# Again, win32 doesn't report a sane error after, so let's fail
# early if we can
return defer.fail(FileNotFoundError(path))
# For now, just see if we can os.listdir() it
try:
p.listdir()
except (IOError, OSError), e:
return errnoToFailure(e.errno, path)
except:
return defer.fail()
else:
return defer.succeed(None)
def stat(self, path, keys=()):
p = self._path(path)
if p.isdir():
try:
statResult = self._statNode(p, keys)
except (IOError, OSError), e:
return errnoToFailure(e.errno, path)
except:
return defer.fail()
else:
return defer.succeed(statResult)
else:
return self.list(path, keys).addCallback(lambda res: res[0][1])
def list(self, path, keys=()):
"""
Return the list of files at the given C{path}, adding C{keys} stat
information if specified.
@param path: the directory or file to check.
@type path: C{str}
@param keys: the list of desired metadata
@type keys: C{list} of C{str}
"""
filePath = self._path(path)
if filePath.isdir():
entries = filePath.listdir()
fileEntries = [filePath.child(p) for p in entries]
elif filePath.isfile():
entries = [os.path.join(*filePath.segmentsFrom(self.filesystemRoot))]
fileEntries = [filePath]
else:
return defer.fail(FileNotFoundError(path))
results = []
for fileName, filePath in zip(entries, fileEntries):
ent = []
results.append((fileName, ent))
if keys:
try:
ent.extend(self._statNode(filePath, keys))
except (IOError, OSError), e:
return errnoToFailure(e.errno, fileName)
except:
return defer.fail()
return defer.succeed(results)
def _statNode(self, filePath, keys):
"""
Shortcut method to get stat info on a node.
@param filePath: the node to stat.
@type filePath: C{filepath.FilePath}
@param keys: the stat keys to get.
@type keys: C{iterable}
"""
filePath.restat()
return [getattr(self, '_stat_' + k)(filePath.statinfo) for k in keys]
_stat_size = operator.attrgetter('st_size')
_stat_permissions = operator.attrgetter('st_mode')
_stat_hardlinks = operator.attrgetter('st_nlink')
_stat_modified = operator.attrgetter('st_mtime')
def _stat_owner(self, st):
if pwd is not None:
try:
return pwd.getpwuid(st.st_uid)[0]
except KeyError:
pass
return str(st.st_uid)
def _stat_group(self, st):
if grp is not None:
try:
return grp.getgrgid(st.st_gid)[0]
except KeyError:
pass
return str(st.st_gid)
def _stat_directory(self, st):
return bool(st.st_mode & stat.S_IFDIR)
class _FileReader(object):
implements(IReadFile)
def __init__(self, fObj):
self.fObj = fObj
self._send = False
def _close(self, passthrough):
self._send = True
self.fObj.close()
return passthrough
def send(self, consumer):
assert not self._send, "Can only call IReadFile.send *once* per instance"
self._send = True
d = basic.FileSender().beginFileTransfer(self.fObj, consumer)
d.addBoth(self._close)
return d
class FTPShell(FTPAnonymousShell):
"""
An authenticated implementation of L{IFTPShell}.
"""
def makeDirectory(self, path):
p = self._path(path)
try:
p.makedirs()
except (IOError, OSError), e:
return errnoToFailure(e.errno, path)
except:
return defer.fail()
else:
return defer.succeed(None)
def removeDirectory(self, path):
p = self._path(path)
if p.isfile():
# Win32 returns the wrong errno when rmdir is called on a file
# instead of a directory, so as we have the info here, let's fail
# early with a pertinent error
return defer.fail(IsNotADirectoryError(path))
try:
os.rmdir(p.path)
except (IOError, OSError), e:
return errnoToFailure(e.errno, path)
except:
return defer.fail()
else:
return defer.succeed(None)
def removeFile(self, path):
p = self._path(path)
if p.isdir():
# Win32 returns the wrong errno when remove is called on a
# directory instead of a file, so as we have the info here,
# let's fail early with a pertinent error
return defer.fail(IsADirectoryError(path))
try:
p.remove()
except (IOError, OSError), e:
return errnoToFailure(e.errno, path)
except:
return defer.fail()
else:
return defer.succeed(None)
def rename(self, fromPath, toPath):
fp = self._path(fromPath)
tp = self._path(toPath)
try:
os.rename(fp.path, tp.path)
except (IOError, OSError), e:
return errnoToFailure(e.errno, fromPath)
except:
return defer.fail()
else:
return defer.succeed(None)
def openForWriting(self, path):
"""
Open C{path} for writing.
@param path: The path, as a list of segments, to open.
@type path: C{list} of C{unicode}
@return: A L{Deferred} is returned that will fire with an object
implementing L{IWriteFile} if the file is successfully opened. If
C{path} is a directory, or if an exception is raised while trying
to open the file, the L{Deferred} will fire with an error.
"""
p = self._path(path)
if p.isdir():
# Normally, we would only check for EISDIR in open, but win32
# returns EACCES in this case, so we check before
return defer.fail(IsADirectoryError(path))
try:
fObj = p.open('w')
except (IOError, OSError), e:
return errnoToFailure(e.errno, path)
except:
return defer.fail()
return defer.succeed(_FileWriter(fObj))
class _FileWriter(object):
implements(IWriteFile)
def __init__(self, fObj):
self.fObj = fObj
self._receive = False
def receive(self):
assert not self._receive, "Can only call IWriteFile.receive *once* per instance"
self._receive = True
# FileConsumer will close the file object
return defer.succeed(FileConsumer(self.fObj))
def close(self):
return defer.succeed(None)
class BaseFTPRealm:
"""
Base class for simple FTP realms which provides an easy hook for specifying
the home directory for each user.
"""
implements(portal.IRealm)
def __init__(self, anonymousRoot):
self.anonymousRoot = filepath.FilePath(anonymousRoot)
def getHomeDirectory(self, avatarId):
"""
Return a L{FilePath} representing the home directory of the given
avatar. Override this in a subclass.
@param avatarId: A user identifier returned from a credentials checker.
@type avatarId: C{str}
@rtype: L{FilePath}
"""
raise NotImplementedError(
"%r did not override getHomeDirectory" % (self.__class__,))
def requestAvatar(self, avatarId, mind, *interfaces):
for iface in interfaces:
if iface is IFTPShell:
if avatarId is checkers.ANONYMOUS:
avatar = FTPAnonymousShell(self.anonymousRoot)
else:
avatar = FTPShell(self.getHomeDirectory(avatarId))
return (IFTPShell, avatar,
getattr(avatar, 'logout', lambda: None))
raise NotImplementedError(
"Only IFTPShell interface is supported by this realm")
class FTPRealm(BaseFTPRealm):
"""
@type anonymousRoot: L{twisted.python.filepath.FilePath}
@ivar anonymousRoot: Root of the filesystem to which anonymous
users will be granted access.
@type userHome: L{filepath.FilePath}
@ivar userHome: Root of the filesystem containing user home directories.
"""
def __init__(self, anonymousRoot, userHome='/home'):
BaseFTPRealm.__init__(self, anonymousRoot)
self.userHome = filepath.FilePath(userHome)
def getHomeDirectory(self, avatarId):
"""
Use C{avatarId} as a single path segment to construct a child of
C{self.userHome} and return that child.
"""
return self.userHome.child(avatarId)
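# Hedged wiring sketch (not part of the original source): FTPFactory is assumed
# to be defined elsewhere in this module, and the anonymous checker comes from
# twisted.cred; the path and port below are illustrative only.
#   from twisted.cred.portal import Portal
#   from twisted.cred.checkers import AllowAnonymousAccess
#   p = Portal(FTPRealm('/srv/ftp/pub'), [AllowAnonymousAccess()])
#   reactor.listenTCP(2121, FTPFactory(p))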
class SystemFTPRealm(BaseFTPRealm):
"""
L{SystemFTPRealm} uses system user account information to decide what the
home directory for a particular avatarId is.
This works on POSIX but probably is not reliable on Windows.
"""
def getHomeDirectory(self, avatarId):
"""
Return the system-defined home directory of the system user account with
the name C{avatarId}.
"""
path = os.path.expanduser('~' + avatarId)
if path.startswith('~'):
raise cred_error.UnauthorizedLogin()
return filepath.FilePath(path)
# --- FTP CLIENT -------------------------------------------------------------
####
# And now for the client...
# Notes:
# * Reference: http://cr.yp.to/ftp.html
# * FIXME: Does not support pipelining (which is not supported by all
# servers anyway). This isn't a functionality limitation, just a
# small performance issue.
# * Only has a rudimentary understanding of FTP response codes (although
# the full response is passed to the caller if they so choose).
# * Assumes that USER and PASS should always be sent
# * Always sets TYPE I (binary mode)
# * Doesn't understand any of the weird, obscure TELNET stuff (\377...)
# * FIXME: Doesn't share any code with the FTPServer
class ConnectionLost(FTPError):
pass
class CommandFailed(FTPError):
pass
class BadResponse(FTPError):
pass
class UnexpectedResponse(FTPError):
pass
class UnexpectedData(FTPError):
pass
class FTPCommand:
def __init__(self, text=None, public=0):
self.text = text
self.deferred = defer.Deferred()
self.ready = 1
self.public = public
self.transferDeferred = None
def fail(self, failure):
if self.public:
self.deferred.errback(failure)
class ProtocolWrapper(protocol.Protocol):
def __init__(self, original, deferred):
self.original = original
self.deferred = deferred
def makeConnection(self, transport):
self.original.makeConnection(transport)
def dataReceived(self, data):
self.original.dataReceived(data)
def connectionLost(self, reason):
self.original.connectionLost(reason)
# Signal that transfer has completed
self.deferred.callback(None)
class IFinishableConsumer(interfaces.IConsumer):
"""
A Consumer for producers that finish.
@since: 11.0
"""
def finish():
"""
The producer has finished producing.
"""
class SenderProtocol(protocol.Protocol):
implements(IFinishableConsumer)
def __init__(self):
# Fired upon connection
self.connectedDeferred = defer.Deferred()
# Fired upon disconnection
self.deferred = defer.Deferred()
#Protocol stuff
def dataReceived(self, data):
raise UnexpectedData(
"Received data from the server on a "
"send-only data-connection"
)
def makeConnection(self, transport):
protocol.Protocol.makeConnection(self, transport)
self.connectedDeferred.callback(self)
def connectionLost(self, reason):
if reason.check(error.ConnectionDone):
self.deferred.callback('connection done')
else:
self.deferred.errback(reason)
#IFinishableConsumer stuff
def write(self, data):
self.transport.write(data)
def registerProducer(self, producer, streaming):
"""
Register the given producer with our transport.
"""
self.transport.registerProducer(producer, streaming)
def unregisterProducer(self):
"""
Unregister the previously registered producer.
"""
self.transport.unregisterProducer()
def finish(self):
self.transport.loseConnection()
def decodeHostPort(line):
"""
Decode an FTP response specifying a host and port.
@return: a 2-tuple of (host, port).
"""
abcdef = re.sub('[^0-9, ]', '', line)
parsed = [int(p.strip()) for p in abcdef.split(',')]
for x in parsed:
if x < 0 or x > 255:
raise ValueError("Out of range", line, x)
a, b, c, d, e, f = parsed
host = "%s.%s.%s.%s" % (a, b, c, d)
port = (int(e) << 8) + int(f)
return host, port
def encodeHostPort(host, port):
numbers = host.split('.') + [str(port >> 8), str(port % 256)]
return ','.join(numbers)
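# Hedged illustration of the two helpers above (the addresses are made up): the
# numeric part of a 227 reply decodes to a (host, port) pair, where the port is
# (e << 8) + f, and encodeHostPort is the inverse operation.
#   decodeHostPort('Entering Passive Mode (192,168,1,2,19,137)')
#   # -> ('192.168.1.2', 5001)
#   encodeHostPort('192.168.1.2', 5001)
#   # -> '192,168,1,2,19,137'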
def _unwrapFirstError(failure):
failure.trap(defer.FirstError)
return failure.value.subFailure
class FTPDataPortFactory(protocol.ServerFactory):
"""
Factory for data connections that use the PORT command
(i.e. "active" transfers)
"""
noisy = 0
def buildProtocol(self, addr):
# This is a bit hackish -- we already have a Protocol instance,
# so just return it instead of making a new one
# FIXME: Reject connections from the wrong address/port
# (potential security problem)
self.protocol.factory = self
self.port.loseConnection()
return self.protocol
class FTPClientBasic(basic.LineReceiver):
"""
Foundations of an FTP client.
"""
debug = False
def __init__(self):
self.actionQueue = []
self.greeting = None
self.nextDeferred = defer.Deferred().addCallback(self._cb_greeting)
self.nextDeferred.addErrback(self.fail)
self.response = []
self._failed = 0
def fail(self, error):
"""
Give an error to any queued deferreds.
"""
self._fail(error)
def _fail(self, error):
"""
Errback all queued deferreds.
"""
if self._failed:
# We're recursing; bail out here for simplicity
return error
self._failed = 1
if self.nextDeferred:
try:
self.nextDeferred.errback(failure.Failure(ConnectionLost('FTP connection lost', error)))
except defer.AlreadyCalledError:
pass
for ftpCommand in self.actionQueue:
ftpCommand.fail(failure.Failure(ConnectionLost('FTP connection lost', error)))
return error
def _cb_greeting(self, greeting):
self.greeting = greeting
def sendLine(self, line):
"""
(Private) Sends a line, unless line is None.
"""
if line is None:
return
basic.LineReceiver.sendLine(self, line)
def sendNextCommand(self):
"""
(Private) Processes the next command in the queue.
"""
ftpCommand = self.popCommandQueue()
if ftpCommand is None:
self.nextDeferred = None
return
if not ftpCommand.ready:
self.actionQueue.insert(0, ftpCommand)
reactor.callLater(1.0, self.sendNextCommand)
self.nextDeferred = None
return
# FIXME: this if block doesn't belong in FTPClientBasic, it belongs in
# FTPClient.
if ftpCommand.text == 'PORT':
self.generatePortCommand(ftpCommand)
if self.debug:
log.msg('<-- %s' % ftpCommand.text)
self.nextDeferred = ftpCommand.deferred
self.sendLine(ftpCommand.text)
def queueCommand(self, ftpCommand):
"""
Add an FTPCommand object to the queue.
If it's the only thing in the queue, and we are connected and we aren't
waiting for a response of an earlier command, the command will be sent
immediately.
@param ftpCommand: an L{FTPCommand}
"""
self.actionQueue.append(ftpCommand)
if (len(self.actionQueue) == 1 and self.transport is not None and
self.nextDeferred is None):
self.sendNextCommand()
def queueStringCommand(self, command, public=1):
"""
Queues a string to be issued as an FTP command
@param command: string of an FTP command to queue
@param public: a flag intended for internal use by FTPClient. Don't
change it unless you know what you're doing.
@return: a L{Deferred} that will be called when the response to the
command has been received.
"""
ftpCommand = FTPCommand(command, public)
self.queueCommand(ftpCommand)
return ftpCommand.deferred
def popCommandQueue(self):
"""
Return the front element of the command queue, or None if empty.
"""
if self.actionQueue:
return self.actionQueue.pop(0)
else:
return None
def queueLogin(self, username, password):
"""
Login: send the username, send the password.
If the password is C{None}, the PASS command won't be sent. Also, if
the response to the USER command has a response code of 230 (User logged
in), then PASS won't be sent either.
"""
# Prepare the USER command
deferreds = []
userDeferred = self.queueStringCommand('USER ' + username, public=0)
deferreds.append(userDeferred)
# Prepare the PASS command (if a password is given)
if password is not None:
passwordCmd = FTPCommand('PASS ' + password, public=0)
self.queueCommand(passwordCmd)
deferreds.append(passwordCmd.deferred)
# Avoid sending PASS if the response to USER is 230.
# (ref: http://cr.yp.to/ftp/user.html#user)
def cancelPasswordIfNotNeeded(response):
if response[0].startswith('230'):
# No password needed!
self.actionQueue.remove(passwordCmd)
return response
userDeferred.addCallback(cancelPasswordIfNotNeeded)
# Error handling.
for deferred in deferreds:
# If something goes wrong, call fail
deferred.addErrback(self.fail)
# But also swallow the error, so we don't cause spurious errors
deferred.addErrback(lambda x: None)
def lineReceived(self, line):
"""
(Private) Parses the response messages from the FTP server.
"""
# Add this line to the current response
if self.debug:
log.msg('--> %s' % line)
self.response.append(line)
# Bail out if this isn't the last line of a response
# The last line of response starts with 3 digits followed by a space
codeIsValid = re.match(r'\d{3} ', line)
if not codeIsValid:
return
code = line[0:3]
# Ignore marks
if code[0] == '1':
return
# Check that we were expecting a response
if self.nextDeferred is None:
self.fail(UnexpectedResponse(self.response))
return
# Reset the response
response = self.response
self.response = []
# Look for a success or error code, and call the appropriate callback
if code[0] in ('2', '3'):
# Success
self.nextDeferred.callback(response)
elif code[0] in ('4', '5'):
# Failure
self.nextDeferred.errback(failure.Failure(CommandFailed(response)))
else:
# This shouldn't happen unless something screwed up.
log.msg('Server sent invalid response code %s' % (code,))
self.nextDeferred.errback(failure.Failure(BadResponse(response)))
# Run the next command
self.sendNextCommand()
def connectionLost(self, reason):
self._fail(reason)
class _PassiveConnectionFactory(protocol.ClientFactory):
noisy = False
def __init__(self, protoInstance):
self.protoInstance = protoInstance
def buildProtocol(self, ignored):
self.protoInstance.factory = self
return self.protoInstance
def clientConnectionFailed(self, connector, reason):
e = FTPError('Connection Failed', reason)
self.protoInstance.deferred.errback(e)
class FTPClient(FTPClientBasic):
"""
L{FTPClient} is a client implementation of the FTP protocol which
exposes FTP commands as methods which return L{Deferred}s.
Each command method returns a L{Deferred} which is called back when a
successful response code (2xx or 3xx) is received from the server or
which is error backed if an error response code (4xx or 5xx) is received
from the server or if a protocol violation occurs. If an error response
code is received, the L{Deferred} fires with a L{Failure} wrapping a
L{CommandFailed} instance. The L{CommandFailed} instance is created
with a list of the response lines received from the server.
See U{RFC 959<http://www.ietf.org/rfc/rfc959.txt>} for error code
definitions.
Both active and passive transfers are supported.
@ivar passive: See description in __init__.
"""
connectFactory = reactor.connectTCP
def __init__(self, username='anonymous',
password='[email protected]',
passive=1):
"""
Constructor.
I will login as soon as I receive the welcome message from the server.
@param username: FTP username
@param password: FTP password
@param passive: flag that controls if I use active or passive data
connections. You can also change this after construction by
assigning to C{self.passive}.
"""
FTPClientBasic.__init__(self)
self.queueLogin(username, password)
self.passive = passive
def fail(self, error):
"""
Disconnect, and also give an error to any queued deferreds.
"""
self.transport.loseConnection()
self._fail(error)
def receiveFromConnection(self, commands, protocol):
"""
Retrieves a file or listing generated by the given command,
feeding it to the given protocol.
@param commands: list of strings of FTP commands to execute then receive
the results of (e.g. C{LIST}, C{RETR})
@param protocol: A L{Protocol} B{instance} e.g. an
L{FTPFileListProtocol}, or something that can be adapted to one.
Typically this will be an L{IConsumer} implementation.
@return: L{Deferred}.
"""
protocol = interfaces.IProtocol(protocol)
wrapper = ProtocolWrapper(protocol, defer.Deferred())
return self._openDataConnection(commands, wrapper)
def queueLogin(self, username, password):
"""
Login: send the username, send the password, and
set retrieval mode to binary
"""
FTPClientBasic.queueLogin(self, username, password)
d = self.queueStringCommand('TYPE I', public=0)
# If something goes wrong, call fail
d.addErrback(self.fail)
# But also swallow the error, so we don't cause spurious errors
d.addErrback(lambda x: None)
def sendToConnection(self, commands):
"""
Open a data connection over which data can be sent to the server for the given commands (e.g. C{STOR}).
@return: A tuple of two L{Deferred}s:
- L{Deferred} L{IFinishableConsumer}. You must call
the C{finish} method on the IFinishableConsumer when the file
is completely transferred.
- L{Deferred} list of control-connection responses.
"""
s = SenderProtocol()
r = self._openDataConnection(commands, s)
return (s.connectedDeferred, r)
def _openDataConnection(self, commands, protocol):
"""
This method returns a DeferredList.
"""
cmds = [FTPCommand(command, public=1) for command in commands]
cmdsDeferred = defer.DeferredList([cmd.deferred for cmd in cmds],
fireOnOneErrback=True, consumeErrors=True)
cmdsDeferred.addErrback(_unwrapFirstError)
if self.passive:
# Hack: use a mutable object to sneak a variable out of the
# scope of doPassive
_mutable = [None]
def doPassive(response):
"""Connect to the port specified in the response to PASV"""
host, port = decodeHostPort(response[-1][4:])
f = _PassiveConnectionFactory(protocol)
_mutable[0] = self.connectFactory(host, port, f)
pasvCmd = FTPCommand('PASV')
self.queueCommand(pasvCmd)
pasvCmd.deferred.addCallback(doPassive).addErrback(self.fail)
results = [cmdsDeferred, pasvCmd.deferred, protocol.deferred]
d = defer.DeferredList(results, fireOnOneErrback=True, consumeErrors=True)
d.addErrback(_unwrapFirstError)
# Ensure the connection is always closed
def close(x, m=_mutable):
m[0] and m[0].disconnect()
return x
d.addBoth(close)
else:
# We just place a marker command in the queue, and will fill in
# the host and port numbers later (see generatePortCommand)
portCmd = FTPCommand('PORT')
# Ok, now we jump through a few hoops here.
# This is the problem: a transfer is not to be trusted as complete
# until we get both the "226 Transfer complete" message on the
# control connection, and the data socket is closed. Thus, we use
# a DeferredList to make sure we only fire the callback at the
# right time.
portCmd.transferDeferred = protocol.deferred
portCmd.protocol = protocol
portCmd.deferred.addErrback(portCmd.transferDeferred.errback)
self.queueCommand(portCmd)
# Create dummy functions for the next callback to call.
# These will also be replaced with real functions in
# generatePortCommand.
portCmd.loseConnection = lambda result: result
portCmd.fail = lambda error: error
# Ensure that the connection always gets closed
cmdsDeferred.addErrback(lambda e, pc=portCmd: pc.fail(e) or e)
results = [cmdsDeferred, portCmd.deferred, portCmd.transferDeferred]
d = defer.DeferredList(results, fireOnOneErrback=True, consumeErrors=True)
d.addErrback(_unwrapFirstError)
for cmd in cmds:
self.queueCommand(cmd)
return d
def generatePortCommand(self, portCmd):
"""
(Private) Generates the text of a given PORT command.
"""
# The problem is that we don't create the listening port until we need
# it for various reasons, and so we have to muck about to figure out
# what interface and port it's listening on, and then finally we can
# create the text of the PORT command to send to the FTP server.
# FIXME: This method is far too ugly.
# FIXME: The best solution is probably to only create the data port
# once per FTPClient, and just recycle it for each new download.
# This should be ok, because we don't pipeline commands.
# Start listening on a port
factory = FTPDataPortFactory()
factory.protocol = portCmd.protocol
listener = reactor.listenTCP(0, factory)
factory.port = listener
# Ensure we close the listening port if something goes wrong
def listenerFail(error, listener=listener):
if listener.connected:
listener.loseConnection()
return error
portCmd.fail = listenerFail
# Construct crufty FTP magic numbers that represent host & port
host = self.transport.getHost().host
port = listener.getHost().port
portCmd.text = 'PORT ' + encodeHostPort(host, port)
def escapePath(self, path):
"""
Returns an FTP-escaped path (newlines are replaced with nulls).
"""
# Escape newline characters
return path.replace('\n', '\0')
def retrieveFile(self, path, protocol, offset=0):
"""
Retrieve a file from the given path
This method issues the 'RETR' FTP command.
The file is fed into the given Protocol instance. The data connection
will be passive if self.passive is set.
@param path: path to file that you wish to receive.
@param protocol: a L{Protocol} instance.
@param offset: offset to start downloading from
@return: L{Deferred}
"""
cmds = ['RETR ' + self.escapePath(path)]
if offset:
cmds.insert(0, ('REST ' + str(offset)))
return self.receiveFromConnection(cmds, protocol)
retr = retrieveFile
def storeFile(self, path, offset=0):
"""
Store a file at the given path.
This method issues the 'STOR' FTP command.
@return: A tuple of two L{Deferred}s:
- L{Deferred} L{IFinishableConsumer}. You must call
the C{finish} method on the IFinishableConsumer when the file
is completely transferred.
- L{Deferred} list of control-connection responses.
"""
cmds = ['STOR ' + self.escapePath(path)]
if offset:
cmds.insert(0, ('REST ' + str(offset)))
return self.sendToConnection(cmds)
stor = storeFile
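# Hedged sketch of the two-Deferred upload flow described above (the file name
# and data are illustrative): the first Deferred fires with an
# IFinishableConsumer, the second with the control-connection responses.
#   dConsumer, dResponses = client.storeFile('upload.txt')
#   def cbStore(consumer):
#       consumer.write('hello, world\n')
#       consumer.finish()
#   dConsumer.addCallback(cbStore)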
def rename(self, pathFrom, pathTo):
"""
Rename a file.
This method issues the I{RNFR}/I{RNTO} command sequence to rename
C{pathFrom} to C{pathTo}.
@param pathFrom: the absolute path to the file to be renamed
@type pathFrom: C{str}
@param pathTo: the absolute path to rename the file to.
@type pathTo: C{str}
@return: A L{Deferred} which fires when the rename operation has
succeeded or failed. If it succeeds, the L{Deferred} is called
back with a two-tuple of lists. The first list contains the
responses to the I{RNFR} command. The second list contains the
responses to the I{RNTO} command. If either I{RNFR} or I{RNTO}
fails, the L{Deferred} is errbacked with L{CommandFailed} or
L{BadResponse}.
@rtype: L{Deferred}
@since: 8.2
"""
renameFrom = self.queueStringCommand('RNFR ' + self.escapePath(pathFrom))
renameTo = self.queueStringCommand('RNTO ' + self.escapePath(pathTo))
fromResponse = []
# Use a separate Deferred for the ultimate result so that Deferred
# chaining can't interfere with its result.
result = defer.Deferred()
# Bundle up all the responses
result.addCallback(lambda toResponse: (fromResponse, toResponse))
def ebFrom(failure):
# Make sure the RNTO doesn't run if the RNFR failed.
self.popCommandQueue()
result.errback(failure)
# Save the RNFR response to pass to the result Deferred later
renameFrom.addCallbacks(fromResponse.extend, ebFrom)
# Hook up the RNTO to the result Deferred as well
renameTo.chainDeferred(result)
return result
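# Hedged example of the RNFR/RNTO sequence above (the paths are illustrative);
# the callback receives ([RNFR responses], [RNTO responses]).
#   d = client.rename('old-name.txt', 'new-name.txt')
#   d.addCallback(lambda responses: responses[1])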
def list(self, path, protocol):
"""
Retrieve a file listing into the given protocol instance.
This method issues the 'LIST' FTP command.
@param path: path to get a file listing for.
@param protocol: a L{Protocol} instance, probably a
L{FTPFileListProtocol} instance. It can cope with most common file
listing formats.
@return: L{Deferred}
"""
if path is None:
path = ''
return self.receiveFromConnection(['LIST ' + self.escapePath(path)], protocol)
def nlst(self, path, protocol):
"""
Retrieve a short file listing into the given protocol instance.
This method issues the 'NLST' FTP command.
NLST (should) return a list of filenames, one per line.
@param path: path to get short file listing for.
@param protocol: a L{Protocol} instance.
"""
if path is None:
path = ''
return self.receiveFromConnection(['NLST ' + self.escapePath(path)], protocol)
def cwd(self, path):
"""
Issues the CWD (Change Working Directory) command.
@return: a L{Deferred} that will be called when done.
"""
return self.queueStringCommand('CWD ' + self.escapePath(path))
def makeDirectory(self, path):
"""
Make a directory
This method issues the MKD command.
@param path: The path to the directory to create.
@type path: C{str}
@return: A L{Deferred} which fires when the server responds. If the
directory is created, the L{Deferred} is called back with the
server response. If the server response indicates the directory
was not created, the L{Deferred} is errbacked with a L{Failure}
wrapping L{CommandFailed} or L{BadResponse}.
@rtype: L{Deferred}
@since: 8.2
"""
return self.queueStringCommand('MKD ' + self.escapePath(path))
def removeFile(self, path):
"""
Delete a file on the server.
L{removeFile} issues a I{DELE} command to the server to remove the
indicated file. Note that this command cannot remove a directory.
@param path: The path to the file to delete. May be relative to the
current dir.
@type path: C{str}
@return: A L{Deferred} which fires when the server responds. On error,
it is errbacked with either L{CommandFailed} or L{BadResponse}. On
success, it is called back with a list of response lines.
@rtype: L{Deferred}
@since: 8.2
"""
return self.queueStringCommand('DELE ' + self.escapePath(path))
def removeDirectory(self, path):
"""
Delete a directory on the server.
L{removeDirectory} issues a I{RMD} command to the server to remove the
indicated directory. Described in RFC959.
@param path: The path to the directory to delete. May be relative to
the current working directory.
@type path: C{str}
@return: A L{Deferred} which fires when the server responds. On error,
it is errbacked with either L{CommandFailed} or L{BadResponse}. On
success, it is called back with a list of response lines.
@rtype: L{Deferred}
@since: 11.1
"""
return self.queueStringCommand('RMD ' + self.escapePath(path))
def cdup(self):
"""
Issues the CDUP (Change Directory UP) command.
@return: a L{Deferred} that will be called when done.
"""
return self.queueStringCommand('CDUP')
def pwd(self):
"""
Issues the PWD (Print Working Directory) command.
The L{getDirectory} does the same job but automatically parses the
result.
@return: a L{Deferred} that will be called when done. It is up to the
caller to interpret the response, but the L{parsePWDResponse} method
in this module should work.
"""
return self.queueStringCommand('PWD')
def getDirectory(self):
"""
Returns the current remote directory.
@return: a L{Deferred} that will be called back with a C{str} giving
the remote directory or which will errback with L{CommandFailed}
if an error response is returned.
"""
def cbParse(result):
try:
# The only valid code is 257
if int(result[0].split(' ', 1)[0]) != 257:
raise ValueError
except (IndexError, ValueError):
return failure.Failure(CommandFailed(result))
path = parsePWDResponse(result[0])
if path is None:
return failure.Failure(CommandFailed(result))
return path
return self.pwd().addCallback(cbParse)
def quit(self):
"""
Issues the I{QUIT} command.
@return: A L{Deferred} that fires when the server acknowledges the
I{QUIT} command. The transport should not be disconnected until
this L{Deferred} fires.
"""
return self.queueStringCommand('QUIT')
class FTPFileListProtocol(basic.LineReceiver):
"""
Parser for standard FTP file listings
This is the evil required to match::
-rw-r--r-- 1 root other 531 Jan 29 03:26 README
If you need different evil for a wacky FTP server, you can
override either C{fileLinePattern} or C{parseDirectoryLine()}.
It populates the instance attribute self.files, which is a list containing
dicts with the following keys (examples from the above line):
- filetype: e.g. 'd' for directories, or '-' for an ordinary file
- perms: e.g. 'rw-r--r--'
- nlinks: e.g. 1
- owner: e.g. 'root'
- group: e.g. 'other'
- size: e.g. 531
- date: e.g. 'Jan 29 03:26'
- filename: e.g. 'README'
- linktarget: e.g. 'some/file'
Note that the 'date' value will be formatted differently depending on the
date. Check U{http://cr.yp.to/ftp.html} if you really want to try to parse
it.
It also matches the following::
-rw-r--r-- 1 root other 531 Jan 29 03:26 I HAVE\ SPACE
- filename: e.g. 'I HAVE SPACE'
-rw-r--r-- 1 root other 531 Jan 29 03:26 LINK -> TARGET
- filename: e.g. 'LINK'
- linktarget: e.g. 'TARGET'
-rw-r--r-- 1 root other 531 Jan 29 03:26 N S -> L S
- filename: e.g. 'N S'
- linktarget: e.g. 'L S'
@ivar files: list of dicts describing the files in this listing
"""
fileLinePattern = re.compile(
r'^(?P<filetype>.)(?P<perms>.{9})\s+(?P<nlinks>\d*)\s*'
r'(?P<owner>\S+)\s+(?P<group>\S+)\s+(?P<size>\d+)\s+'
r'(?P<date>...\s+\d+\s+[\d:]+)\s+(?P<filename>.{1,}?)'
r'( -> (?P<linktarget>[^\r]*))?\r?$'
)
delimiter = '\n'
def __init__(self):
self.files = []
def lineReceived(self, line):
d = self.parseDirectoryLine(line)
if d is None:
self.unknownLine(line)
else:
self.addFile(d)
def parseDirectoryLine(self, line):
"""
Return a dictionary of fields, or None if line cannot be parsed.
@param line: line of text expected to contain a directory entry
@type line: str
@return: dict
"""
match = self.fileLinePattern.match(line)
if match is None:
return None
else:
d = match.groupdict()
d['filename'] = d['filename'].replace(r'\ ', ' ')
d['nlinks'] = int(d['nlinks'])
d['size'] = int(d['size'])
if d['linktarget']:
d['linktarget'] = d['linktarget'].replace(r'\ ', ' ')
return d
def addFile(self, info):
"""
Append file information dictionary to the list of known files.
Subclasses can override or extend this method to handle file
information differently without affecting the parsing of data
from the server.
@param info: dictionary containing the parsed representation
of the file information
@type info: dict
"""
self.files.append(info)
def unknownLine(self, line):
"""
Deal with received lines which could not be parsed as file
information.
Subclasses can override this to perform any special processing
needed.
@param line: unparsable line as received
@type line: str
"""
pass
def parsePWDResponse(response):
"""
Returns the path from a response to a PWD command.
Responses typically look like::
257 "/home/andrew" is current directory.
For this example, I will return C{'/home/andrew'}.
If I can't find the path, I return C{None}.
"""
match = re.search('"(.*)"', response)
if match:
return match.groups()[0]
else:
return None
|
bsd-3-clause
|
MihaiMoldovanu/ansible
|
lib/ansible/modules/network/layer3/net_l3_interface.py
|
96
|
2074
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2017, Ansible by Red Hat, inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = """
---
module: net_l3_interface
version_added: "2.4"
author: "Ricardo Carrillo Cruz (@rcarrillocruz)"
short_description: Manage L3 interfaces on network devices
description:
- This module provides declarative management of L3 interfaces
on network devices.
options:
name:
description:
- Name of the L3 interface.
ipv4:
description:
- IPv4 of the L3 interface.
ipv6:
description:
- IPv6 of the L3 interface.
aggregate:
description: List of L3 interface definitions.
purge:
description:
- Purge L3 interfaces not defined in the I(aggregate) parameter.
default: no
state:
description:
- State of the L3 interface configuration.
default: present
choices: ['present', 'absent']
"""
EXAMPLES = """
- name: Set eth0 IPv4 address
net_l3_interface:
name: eth0
ipv4: 192.168.0.1/24
- name: Remove eth0 IPv4 address
net_l3_interface:
name: eth0
state: absent
- name: Set IP addresses on aggregate
net_l3_interface:
aggregate:
- { name: eth1, ipv4: 192.168.2.10/24 }
- { name: eth2, ipv4: 192.168.3.10/24, ipv6: "fd5d:12c9:2201:1::1/64" }
- name: Remove IP addresses on aggregate
net_l3_interface:
aggregate:
- { name: eth1, ipv4: 192.168.2.10/24 }
- { name: eth2, ipv4: 192.168.3.10/24, ipv6: "fd5d:12c9:2201:1::1/64" }
state: absent
"""
RETURN = """
commands:
description: The list of configuration mode commands to send to the device
returned: always, except for the platforms that use Netconf transport to manage the device.
type: list
sample:
- set interfaces ethernet eth0 address '192.168.0.1/24'
"""
|
gpl-3.0
|
sarahfo/oppia
|
core/platform/transactions/gae_transaction_services.py
|
33
|
1124
|
# coding: utf-8
#
# Copyright 2014 The Oppia Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Provides a seam for transaction services."""
__author__ = 'Sean Lip'
from google.appengine.ext import ndb
def run_in_transaction(fn, *args, **kwargs):
"""Run a function in a transaction."""
return ndb.transaction(
lambda: fn(*args, **kwargs),
xg=True,
propagation=ndb.TransactionOptions.ALLOWED,
)
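# Hedged usage sketch (the callable and its arguments are illustrative): any
# datastore writes made inside the callable run atomically.
#   def _rename_exploration(exp_id, new_title):
#       ...  # ndb puts/gets here are transactional
#   run_in_transaction(_rename_exploration, 'exp_1', 'New title')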
# The NDB toplevel() function. For more details, see
# https://developers.google.com/appengine/docs/python/ndb/async#intro
toplevel_wrapper = ndb.toplevel
|
apache-2.0
|
kkk669/mxnet
|
python/mxnet/monitor.py
|
46
|
5239
|
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# coding: utf-8
# pylint: disable=protected-access, logging-format-interpolation, invalid-name, no-member, too-many-branches
"""Monitor outputs, weights, and gradients for debugging."""
from __future__ import absolute_import
import re
import ctypes
import logging
from math import sqrt
from .ndarray import NDArray
from .base import NDArrayHandle, py_str
from . import ndarray
class Monitor(object):
"""Monitor outputs, weights, and gradients for debugging.
Parameters
----------
interval : int
Number of batches between printing.
stat_func : function
A function that computes statistics of tensors.
Takes an `NDArray` and returns an `NDArray`. Defaults to the RMS
value, computed as norm(x)/sqrt(x.size).
pattern : str
A regular expression specifying which tensors to monitor.
Only tensors with names that match `name_pattern` will be included.
For example, '.*weight|.*output' will print all weights and outputs and
'.*backward.*' will print all gradients.
sort : bool, default False
Whether to sort the collected statistics by tensor name before they are
returned by `toc`.
"""
def __init__(self, interval, stat_func=None, pattern='.*', sort=False):
if stat_func is None:
def asum_stat(x):
"""returns |x|/size(x), async execution."""
return ndarray.norm(x)/sqrt(x.size)
stat_func = asum_stat
self.stat_func = stat_func
self.interval = interval
self.activated = False
self.queue = []
self.step = 0
self.exes = []
self.re_prog = re.compile(pattern)
self.sort = sort
def stat_helper(name, array):
"""wrapper for executor callback"""
array = ctypes.cast(array, NDArrayHandle)
array = NDArray(array, writable=False)
if not self.activated or not self.re_prog.match(py_str(name)):
return
self.queue.append((self.step, py_str(name), self.stat_func(array)))
self.stat_helper = stat_helper
def install(self, exe):
"""install callback to executor.
Supports installing to multiple exes.
Parameters
----------
exe : mx.executor.Executor
The Executor (returned by symbol.bind) to install to.
"""
exe.set_monitor_callback(self.stat_helper)
self.exes.append(exe)
def tic(self):
"""Start collecting stats for current batch.
Call before calling forward."""
if self.step % self.interval == 0:
for exe in self.exes:
for array in exe.arg_arrays:
array.wait_to_read()
for array in exe.aux_arrays:
array.wait_to_read()
self.queue = []
self.activated = True
self.step += 1
def toc(self):
"""End collecting for current batch and return results.
Call after computation of current batch.
Returns
-------
res : list of (step, name, stat_string) tuples collected for this batch."""
if not self.activated:
return []
for exe in self.exes:
for array in exe.arg_arrays:
array.wait_to_read()
for array in exe.aux_arrays:
array.wait_to_read()
for exe in self.exes:
for name, array in zip(exe._symbol.list_arguments(), exe.arg_arrays):
if self.re_prog.match(name):
self.queue.append((self.step, name, self.stat_func(array)))
for name, array in zip(exe._symbol.list_auxiliary_states(), exe.aux_arrays):
if self.re_prog.match(name):
self.queue.append((self.step, name, self.stat_func(array)))
self.activated = False
res = []
if self.sort:
self.queue.sort(key=lambda x: x[1])
for n, k, v_list in self.queue:
if isinstance(v_list, NDArray):
v_list = [v_list]
assert isinstance(v_list, list)
s = ''
for v in v_list:
assert isinstance(v, NDArray)
if v.shape == (1,):
s += str(v.asscalar()) + '\t'
else:
s += str(v.asnumpy()) + '\t'
res.append((n, k, s))
self.queue = []
return res
def toc_print(self):
"""End collecting and print results."""
res = self.toc()
for n, k, v in res:
logging.info('Batch: {:7d} {:30s} {:s}'.format(n, k, v))
|
apache-2.0
|
rwillmer/django
|
tests/utils_tests/test_datetime_safe.py
|
207
|
2371
|
import unittest
from datetime import (
date as original_date, datetime as original_datetime,
time as original_time,
)
from django.utils.datetime_safe import date, datetime, time
class DatetimeTests(unittest.TestCase):
def setUp(self):
self.just_safe = (1900, 1, 1)
self.just_unsafe = (1899, 12, 31, 23, 59, 59)
self.just_time = (11, 30, 59)
self.really_old = (20, 1, 1)
self.more_recent = (2006, 1, 1)
def test_compare_datetimes(self):
self.assertEqual(original_datetime(*self.more_recent), datetime(*self.more_recent))
self.assertEqual(original_datetime(*self.really_old), datetime(*self.really_old))
self.assertEqual(original_date(*self.more_recent), date(*self.more_recent))
self.assertEqual(original_date(*self.really_old), date(*self.really_old))
self.assertEqual(original_date(*self.just_safe).strftime('%Y-%m-%d'), date(*self.just_safe).strftime('%Y-%m-%d'))
self.assertEqual(original_datetime(*self.just_safe).strftime('%Y-%m-%d'), datetime(*self.just_safe).strftime('%Y-%m-%d'))
self.assertEqual(original_time(*self.just_time).strftime('%H:%M:%S'), time(*self.just_time).strftime('%H:%M:%S'))
def test_safe_strftime(self):
self.assertEqual(date(*self.just_unsafe[:3]).strftime('%Y-%m-%d (weekday %w)'), '1899-12-31 (weekday 0)')
self.assertEqual(date(*self.just_safe).strftime('%Y-%m-%d (weekday %w)'), '1900-01-01 (weekday 1)')
self.assertEqual(datetime(*self.just_unsafe).strftime('%Y-%m-%d %H:%M:%S (weekday %w)'), '1899-12-31 23:59:59 (weekday 0)')
self.assertEqual(datetime(*self.just_safe).strftime('%Y-%m-%d %H:%M:%S (weekday %w)'), '1900-01-01 00:00:00 (weekday 1)')
self.assertEqual(time(*self.just_time).strftime('%H:%M:%S AM'), '11:30:59 AM')
# %y will error before this date
self.assertEqual(date(*self.just_safe).strftime('%y'), '00')
self.assertEqual(datetime(*self.just_safe).strftime('%y'), '00')
self.assertEqual(date(1850, 8, 2).strftime("%Y/%m/%d was a %A"), '1850/08/02 was a Friday')
def test_zero_padding(self):
"""
Regression for #12524
Check that pre-1000AD dates are padded with zeros if necessary
"""
self.assertEqual(date(1, 1, 1).strftime("%Y/%m/%d was a %A"), '0001/01/01 was a Monday')
|
bsd-3-clause
|
dantagg/awsbackup
|
awsbackup/__main__.py
|
1
|
4227
|
from __future__ import absolute_import, division, print_function
import os
from jinja2 import Environment, PackageLoader
import click
import boto3
from botocore.client import ClientError
DEFAULT_AWS_PROFILE = 'default'
DEFAULT_BACKUP_CREDENTIAL_FILE = 'credentials'
env = Environment(loader=PackageLoader('awsbackup', 'templates'))
class AwsBackup(object):
def __init__(self, home=None, profile=''):
self.home = os.path.abspath(home or '.')
self.profile = profile
class BucketExistsError(Exception):
pass
#pass_awsbackup = click.make_pass_decorator(AwsBackup)
@click.group()
@click.version_option()
@click.option('--profile', '-p', default=lambda: os.environ.get('AWS_PROFILE', DEFAULT_AWS_PROFILE),
help="tell awsbackup which aws profile to use from your aws credential file, by default it will use '%s'"
% (DEFAULT_AWS_PROFILE,))
@click.pass_context
def main(ctx, profile):
ctx.obj = AwsBackup(profile=profile)
@main.command()
@click.option('--bucket', '-b', prompt=True, default=lambda: os.environ.get('AWS_S3_BUCKET', ''),
help='tell awsbackup what to call the bucket to send backups to')
@click.option('--user', '-u', prompt=True,
help='tell awsbackup what user to create for the server to use to backup with')
@click.option('--file', '-f', type=click.File('w'), prompt=True,
default=lambda: os.environ.get('AWS_BACKUP_CREDENTIAL_FILE', DEFAULT_BACKUP_CREDENTIAL_FILE),
help="Location of file to SAVE user's credentials to")
@click.pass_context
def create(ctx, bucket, user, file):
policy_template = env.get_template('backup_user_policy.json')
backup_policy = policy_template.render(bucket=bucket)
backup_policy_name = user+'_access_policy'
profile = ctx.obj.profile # get the profile from the parent command's options
session = boto3.Session(profile_name=profile)
s3_client = session.client('s3')
try:
bl = s3_client.get_bucket_location(Bucket=bucket)
raise BucketExistsError("Bucket %s already exists!" % (bucket,)) # this bucket has been created already
except ClientError as ce:
if ce.response['Error']['Code'] == 'NoSuchBucket':
pass # the bucket doesn't exist, phew
elif ce.response['Error']['Code'] == 'AllAccessDisabled':
raise BucketExistsError("Bucket %s already exists with a different owner!" % (bucket,)) # someone else has a bucket with this name
else:
raise ce
bucket_rc = s3_client.create_bucket(Bucket=bucket)
iam_client = session.client('iam')
usr = iam_client.create_user(UserName=user)
usr_policy = iam_client.put_user_policy(UserName=user, PolicyName=backup_policy_name, PolicyDocument=backup_policy)
usr_keys = iam_client.create_access_key(UserName=user)
access_key = usr_keys['AccessKey']['AccessKeyId']
access_secret = usr_keys['AccessKey']['SecretAccessKey']
credentials = "[%s]\naws_access_key_id = %s\naws_secret_access_key = %s" % (user, access_key, access_secret)
file.write(credentials)
import pdb; pdb.set_trace()
cleanup(session, bucket, user, backup_policy_name, usr_keys['AccessKey']['AccessKeyId'])
@main.command()
@click.option('--bucket', '-b', prompt=True, default=lambda: os.environ.get('AWS_S3_BUCKET', ''),
help='tell awsbackup what bucket to send backups to')
@click.option('--name', '-n', type=click.File('w'), prompt=True,
default=lambda: os.environ.get('AWS_BACKUP_SCRIPT_FILE', DEFAULT_BACKUP_SCRIPT_FILE),
help="Location of file to SAVE script to")
@click.option('--from', '-f', prompt=True,
default=lambda: os.environ.get('AWS_BACKUP_DIRECTORY', DEFAULT_BACKUP_DIRECTORY),
help="Location of directory to BACKUP")
@click.pass_context
def syncscript(ctx, bucket, name):
pass
def cleanup(session, bucket, user, backup_policy_name, key_id):
client = session.client('s3')
client.delete_bucket(Bucket=bucket)
client = session.client('iam')
client.delete_user_policy(UserName=user, PolicyName=backup_policy_name)
client.delete_access_key(UserName=user,AccessKeyId=key_id)
client.delete_user(UserName=user)
|
mit
|
Astroua/TurbuStat
|
turbustat/statistics/pdf/compare_pdf.py
|
2
|
30563
|
# Licensed under an MIT open source license - see LICENSE
from __future__ import print_function, absolute_import, division
import numpy as np
from scipy.stats import ks_2samp, lognorm # , anderson_ksamp
from statsmodels.distributions.empirical_distribution import ECDF
from statsmodels.base.model import GenericLikelihoodModel
from warnings import warn
from ..stats_utils import hellinger, common_histogram_bins, data_normalization
from ..base_statistic import BaseStatisticMixIn
from ...io import common_types, twod_types, threed_types, input_data
class PDF(BaseStatisticMixIn):
'''
Create the PDF of a given array.
Parameters
----------
img : %(dtypes)s
A 1-3D array.
min_val : float, optional
Minimum value to keep in the given image.
bins : list or numpy.ndarray or int, optional
Bins to compute the PDF from.
weights : %(dtypes)s, optional
Weights to apply to the image. Must have the same shape as the image.
normalization_type : {"standardize", "center", "normalize", "normalize_by_mean"}, optional
See `~turbustat.statistics.stat_utils.data_normalization`.
Examples
--------
>>> from turbustat.statistics import PDF
>>> from astropy.io import fits
>>> moment0 = fits.open("Design4_21_0_0_flatrho_0021_13co.moment0.fits")[0] # doctest: +SKIP
>>> pdf_mom0 = PDF(moment0).run(verbose=True) # doctest: +SKIP
'''
__doc__ %= {"dtypes": " or ".join(common_types + twod_types +
threed_types)}
def __init__(self, img, min_val=-np.inf, bins=None, weights=None,
normalization_type=None):
super(PDF, self).__init__()
self.need_header_flag = False
self.header = None
output_data = input_data(img, no_header=True)
self.img = output_data
# We want to remove NaNs and value below the threshold.
keep_values = np.logical_and(np.isfinite(output_data),
output_data > min_val)
self.data = output_data[keep_values]
# Do the same for the weights, then apply weights to the data.
if weights is not None:
output_weights = input_data(weights, no_header=True)
self.weights = output_weights[keep_values]
isfinite = np.isfinite(self.weights)
self.data = self.data[isfinite] * self.weights[isfinite]
if normalization_type is not None:
self._normalization_type = normalization_type
self.data = data_normalization(self.data,
norm_type=normalization_type)
else:
self._normalization_type = "None"
self._bins = bins
self._pdf = None
self._ecdf = None
self._do_fit = False
def make_pdf(self, bins=None):
'''
Create the PDF.
Parameters
----------
bins : list or numpy.ndarray or int, optional
Bins to compute the PDF from. Overrides initial bin input.
'''
if bins is not None:
self._bins = bins
# If the number of bins is not given, use sqrt of data length.
if self.bins is None:
self._bins = np.sqrt(self.data.shape[0])
self._bins = int(np.round(self.bins))
# norm_weights = np.ones_like(self.data) / self.data.shape[0]
self._pdf, bin_edges = np.histogram(self.data, bins=self.bins,
density=True)
# weights=norm_weights)
self._bins = (bin_edges[:-1] + bin_edges[1:]) / 2
@property
def normalization_type(self):
return self._normalization_type
@property
def pdf(self):
'''
PDF values in `~PDF.bins`.
'''
return self._pdf
@property
def bins(self):
'''
Bin centers.
'''
return self._bins
def make_ecdf(self):
'''
Create the ECDF.
'''
if self.pdf is None:
self.make_pdf()
self._ecdf_function = ECDF(self.data)
self._ecdf = self._ecdf_function(self.bins)
@property
def ecdf(self):
'''
ECDF values in `~PDF.bins`.
'''
return self._ecdf
def find_percentile(self, values):
'''
Return the percentiles of given values from the
data distribution.
Parameters
----------
values : float or np.ndarray
Value or array of values.
'''
if self.ecdf is None:
self.make_ecdf()
return self._ecdf_function(values) * 100.
def find_at_percentile(self, percentiles):
'''
Return the values at the given percentiles.
Parameters
----------
percentiles : float or np.ndarray
Percentile or array of percentiles. Must be between 0 and 100.
'''
if np.any(np.logical_or(percentiles > 100, percentiles < 0.)):
raise ValueError("Percentiles must be between 0 and 100.")
return np.percentile(self.data, percentiles)
def fit_pdf(self, model=lognorm, verbose=False,
fit_type='mle', floc=True, loc=0.0, fscale=False, scale=1.0,
**kwargs):
'''
Fit a model to the PDF. Use statsmodel's generalized likelihood
setup to get uncertainty estimates and such.
Parameters
----------
model : scipy.stats distribution, optional
Pass any scipy distribution. NOTE: All fits assume `loc` can be
fixed to 0. This is reasonable for all realistic PDF forms in the
ISM.
verbose : bool, optional
Enable printing of the fit results.
fit_type : {'mle', 'mcmc'}, optional
Type of fitting to use. By default Maximum Likelihood Estimation
('mle') is used. An MCMC approach ('mcmc') may also be used. This
requires the optional `emcee` to be installed. kwargs can be
passed to adjust various properties of the MCMC chain.
floc : bool, optional
Fix the `loc` parameter when fitting.
loc : float, optional
Value to set `loc` to when fixed.
fscale : bool, optional
Fix the `scale` parameter when fitting.
scale : float, optional
Value to set `scale` to when fixed.
kwargs : Passed to the MCMC fitting routine; recognized keys are `burnin`, `steps`, and `thin`.
'''
if fit_type not in ['mle', 'mcmc']:
raise ValueError("fit_type must be 'mle' or 'mcmc'.")
self._fit_fixes = {"loc": [floc, loc], "scale": [fscale, scale]}
self._do_fit = True
class Likelihood(GenericLikelihoodModel):
# Get the number of parameters from shapes.
# Add one for scales, since we're assuming loc is frozen.
# Keeping loc=0 is appropriate for log-normal models.
nparams = 1 if model.shapes is None else \
len(model.shapes.split(",")) + 1
_loc = loc
_scale = scale
def loglike(self, params):
if np.isnan(params).any():
return - np.inf
if not floc and not fscale:
loc = params[-2]
scale = params[-1]
cut = -2
elif not floc:
loc = params[-1]
scale = self._scale
cut = -1
elif not fscale:
scale = params[-1]
loc = self._loc
cut = -1
loglikes = \
model.logpdf(self.endog, *params[:cut],
scale=scale,
loc=loc)
if not np.isfinite(loglikes).all():
return -np.inf
else:
return loglikes.sum()
def emcee_fit(model, init_params, burnin=200, steps=2000, thin=10):
try:
import emcee
except ImportError:
raise ImportError("emcee must be installed for MCMC fitting.")
ndim = len(init_params)
nwalkers = ndim * 10
p0 = np.zeros((nwalkers, ndim))
for i, val in enumerate(init_params):
p0[:, i] = np.random.randn(nwalkers) * 0.1 + val
sampler = emcee.EnsembleSampler(nwalkers,
ndim,
model.loglike,
args=[])
pos, prob, state = sampler.run_mcmc(p0, burnin)
sampler.reset()
pos, prob, state = sampler.run_mcmc(pos, steps, thin=thin)
return sampler
# Do an initial fit with the scipy model
if floc and fscale:
init_params = model.fit(self.data)
elif floc:
init_params = model.fit(self.data, floc=loc)
# Remove loc from the params
init_params = np.append(init_params[:-2], init_params[-1])
elif fscale:
init_params = model.fit(self.data, fscale=scale)
# Remove scale from the params
init_params = np.append(init_params[:-2], init_params[-2])
else:
init_params = model.fit(self.data)
init_params = np.array(init_params)
self._model = Likelihood(self.data)
self._scipy_model = model
if fit_type == 'mle':
fitted_model = \
self._model.fit(start_params=init_params, method='nm')
self._mle_fit = fitted_model
fitted_model.df_model = len(init_params)
fitted_model.df_resid = len(self.data) - len(init_params)
self._model_params = fitted_model.params.copy()
try:
self._model_stderrs = fitted_model.bse.copy()
cov_calc_failed = False
except ValueError:
warn("Variance calculation failed.")
self._model_stderrs = np.ones_like(self.model_params) * np.NaN
cov_calc_failed = True
elif fit_type == 'mcmc':
chain = emcee_fit(self._model,
init_params.copy(),
**kwargs)
self._model_params = np.mean(chain.flatchain, axis=0)
self._model_stderrs = np.percentile(chain.flatchain, [15, 85],
axis=0)
self._mcmc_chain = chain
if verbose:
if fit_type == 'mle':
if cov_calc_failed:
print("Fitted parameters: {}".format(self.model_params))
print("Covariance calculation failed.")
else:
print(fitted_model.summary())
else:
print("Ran chain for {0} iterations".format(chain.iterations))
print("Used {} walkers".format(chain.acceptance_fraction.size))
print("Mean acceptance fraction of {}"
.format(np.mean(chain.acceptance_fraction)))
print("Parameter values: {}".format(self.model_params))
print("15th to 85th percentile ranges: {}"
.format(self.model_stderrs[1] - self.model_stderrs[0]))
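# Hedged sketch of the fitting call described above (the input array is a
# placeholder; the MCMC variant additionally assumes `emcee` is installed):
#   pdf = PDF(moment0)
#   pdf.run(do_fit=False, verbose=False)
#   pdf.fit_pdf(fit_type='mle', floc=True, loc=0.0)
#   pdf.fit_pdf(fit_type='mcmc', burnin=200, steps=2000)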
@property
def model_params(self):
'''
Parameters of the fitted model.
'''
if hasattr(self, "_model_params"):
return self._model_params
raise Exception("No model has been fit. Run `fit_pdf` first.")
@property
def model_stderrs(self):
'''
Standard errors of the fitted model. If using an MCMC, the 15th and
85th percentiles are returned.
'''
if hasattr(self, "_model_stderrs"):
return self._model_stderrs
raise Exception("No model has been fit. Run `fit_pdf` first.")
def corner_plot(self, **kwargs):
'''
Create a corner plot from the MCMC. Requires the 'corner' package.
Parameters
----------
kwargs : Passed to `~corner.corner`.
'''
if not hasattr(self, "_mcmc_chain"):
raise Exception("Must run MCMC fitting first.")
try:
import corner
except ImportError:
raise ImportError("The optional package 'corner' is not "
"installed.")
corner.corner(self._mcmc_chain.flatchain, **kwargs)
def trace_plot(self, **kwargs):
'''
Create a trace plot from the MCMC.
Parameters
----------
kwargs : Passed to `~matplotlib.pyplot.plot`.
'''
if not hasattr(self, "_mcmc_chain"):
raise Exception("Must run MCMC fitting first.")
npars = self._mcmc_chain.flatchain.shape[1]
import matplotlib.pyplot as plt
fig, axes = plt.subplots(npars, 1, sharex=True)
for i, ax in enumerate(axes.ravel()):
ax.plot(self._mcmc_chain.flatchain[:, i], **kwargs)
ax.set_ylabel("par{}".format(i + 1))
axes.ravel()[-1].set_xlabel("Iterations")
plt.tight_layout()
def plot_distrib(self, save_name=None, color='r', fit_color='k',
show_ecdf=True):
'''
Plot the PDF distribution and (if fitted) the best fit model.
Optionally show the ECDF and fit ECDF, too.
Parameters
----------
save_name : str, optional
Save the figure when a file name is given.
color : {str, RGB tuple}, optional
Color to plot the PDF (and ECDF) in.
fit_color : {str, RGB tuple}, optional
Color of the fitted line. Defaults to `color` when no input is
given.
show_ecdf : bool, optional
Plot the ECDF when enabled.
'''
import matplotlib.pyplot as plt
if self.normalization_type == "standardize":
xlabel = r"z-score"
elif self.normalization_type == "center":
xlabel = r"$I - \bar{I}$"
elif self.normalization_type == "normalize_by_mean":
xlabel = r"$I/\bar{I}$"
else:
xlabel = r"Intensity"
if fit_color is None:
fit_color = color
# PDF
if show_ecdf:
plt.subplot(121)
else:
plt.subplot(111)
plt.semilogy(self.bins, self.pdf, '-', color=color, label='Data')
if self._do_fit:
# Plot the fitted model.
vals = np.linspace(self.bins[0], self.bins[-1], 1000)
# Check which of the parameters were kept fixed
if self._fit_fixes['loc'][0] and self._fit_fixes['scale'][0]:
loc = self._fit_fixes['loc'][1]
scale = self._fit_fixes['scale'][1]
params = self.model_params
elif self._fit_fixes['loc'][0]:
loc = self._fit_fixes['loc'][1]
scale = self.model_params[-1]
params = self.model_params[:-1]
elif self._fit_fixes['scale'][0]:
loc = self.model_params[-1]
scale = self._fit_fixes['scale'][1]
params = self.model_params[:-1]
else:
loc = self.model_params[-2]
scale = self.model_params[-1]
params = self.model_params[:-2]
plt.semilogy(vals,
self._scipy_model.pdf(vals, *params,
scale=scale,
loc=loc),
'--', color=fit_color, label='Fit')
plt.legend(loc='best')
plt.grid(True)
plt.xlabel(xlabel)
plt.ylabel("PDF")
# ECDF
if show_ecdf:
ax2 = plt.subplot(122)
ax2.yaxis.tick_right()
ax2.yaxis.set_label_position("right")
if self.normalization_type != "None":
ax2.plot(self.bins, self.ecdf, '-', color=color)
if self._do_fit:
ax2.plot(vals,
self._scipy_model.cdf(vals, *params,
scale=scale,
loc=loc),
'--', color=fit_color)
else:
ax2.semilogx(self.bins, self.ecdf, '-', color=color)
if self._do_fit:
ax2.semilogx(vals,
self._scipy_model.cdf(vals, *params,
scale=scale,
loc=0),
'--', color=fit_color)
plt.grid(True)
plt.xlabel(xlabel)
plt.ylabel("ECDF")
plt.tight_layout()
if save_name is not None:
plt.savefig(save_name)
plt.close()
else:
plt.show()
def run(self, verbose=False, save_name=None, bins=None, do_fit=True,
model=lognorm, color=None, **kwargs):
'''
Compute the PDF and ECDF. Enabling verbose provides
a summary plot.
Parameters
----------
verbose : bool, optional
Enables plotting of the results.
save_name : str, optional
Save the figure when a file name is given.
bins : list or numpy.ndarray or int, optional
Bins to compute the PDF from. Overrides initial bin input.
do_fit : bool, optional
Enables (by default) fitting a given model.
model : scipy.stats distribution, optional
Pass any scipy distribution. See `~PDF.fit_pdf`.
color : {str, RGB tuple}, optional
Color to plot the PDF (and ECDF) in when `verbose=True`.
kwargs : Passed to `~PDF.fit_pdf`.
'''
self.make_pdf(bins=bins)
self.make_ecdf()
if do_fit:
self.fit_pdf(model=model, verbose=verbose, **kwargs)
if verbose:
self.plot_distrib(save_name=save_name, color=color)
return self
class PDF_Distance(object):
'''
Calculate the distance between two arrays using their PDFs.
.. note:: Pre-computed `~PDF` classes cannot be passed to `~PDF_Distance`
as the data need to be normalized and the PDFs should use the
same set of histogram bins.
Parameters
----------
img1 : %(dtypes)s
Array (1-3D).
img2 : %(dtypes)s
Array (1-3D).
min_val1 : float, optional
Minimum value to keep in img1
min_val2 : float, optional
Minimum value to keep in img2
do_fit : bool, optional
Enables fitting a lognormal distribution to each data set.
normalization_type : {"normalize", "normalize_by_mean"}, optional
See `~turbustat.statistics.stat_utils.data_normalization`.
nbins : int, optional
Manually set the number of bins to use for creating the PDFs.
weights1 : %(dtypes)s, optional
Weights to be used with img1
weights2 : %(dtypes)s, optional
Weights to be used with img2
bin_min : float, optional
Minimum value to use for the histogram bins *after* normalization is
applied.
bin_max : float, optional
Maximum value to use for the histogram bins *after* normalization is
applied.
'''
__doc__ %= {"dtypes": " or ".join(common_types + twod_types +
threed_types)}
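# Hedged usage sketch (the two input arrays are placeholders, mirroring the
# PDF example above):
#   pdf_dist = PDF_Distance(moment0_a, moment0_b, do_fit=True)
#   pdf_dist.distance_metric(statistic='hellinger')
#   print(pdf_dist.hellinger_distance)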
def __init__(self, img1, img2, min_val1=-np.inf, min_val2=-np.inf,
do_fit=True, normalization_type=None,
nbins=None, weights1=None, weights2=None,
bin_min=None, bin_max=None):
super(PDF_Distance, self).__init__()
if do_fit:
if normalization_type in ["standardize", "center"]:
raise Exception("Cannot perform lognormal fit when using"
" 'standardize' or 'center'.")
self.normalization_type = normalization_type
self.PDF1 = PDF(img1, min_val=min_val1,
normalization_type=normalization_type,
weights=weights1)
self.PDF2 = PDF(img2, min_val=min_val2,
normalization_type=normalization_type,
weights=weights2)
self.bins, self.bin_centers = \
common_histogram_bins(self.PDF1.data, self.PDF2.data,
return_centered=True, nbins=nbins,
min_val=bin_min, max_val=bin_max)
# Feed the common set of bins to be used in the PDFs
self._do_fit = do_fit
self.PDF1.run(verbose=False, bins=self.bins, do_fit=do_fit)
self.PDF2.run(verbose=False, bins=self.bins, do_fit=do_fit)
def compute_hellinger_distance(self):
'''
Computes the Hellinger Distance between the two PDFs.
'''
# We're using the same bins, so normalize each to unity to keep the
# distance normalized.
self.hellinger_distance = \
hellinger(self.PDF1.pdf / self.PDF1.pdf.sum(),
self.PDF2.pdf / self.PDF2.pdf.sum())
def compute_ks_distance(self):
'''
Compute the distance using the KS Test.
'''
D, p = ks_2samp(self.PDF1.data, self.PDF2.data)
self.ks_distance = D
self.ks_pval = p
def compute_ad_distance(self):
'''
Compute the distance using the Anderson-Darling Test.
'''
raise NotImplementedError(
"Use of the Anderson-Darling test has been disabled"
" due to occurence of overflow errors.")
# D, _, p = anderson_ksamp([self.PDF1.data, self.PDF2.data])
# self.ad_distance = D
# self.ad_pval = p
def compute_lognormal_distance(self):
'''
Compute the combined t-statistic for the difference in the widths of
a lognormal distribution.
'''
try:
self.PDF1.model_params
self.PDF2.model_params
except AttributeError:
raise Exception("Fitting has not been performed. 'do_fit' must "
"first be enabled.")
diff = np.abs(self.PDF1.model_params[0] - self.PDF2.model_params[0])
denom = np.sqrt(self.PDF1.model_stderrs[0]**2 +
self.PDF2.model_stderrs[0]**2)
self.lognormal_distance = diff / denom
def distance_metric(self, statistic='all', verbose=False,
plot_kwargs1={'color': 'b', 'marker': 'D',
'label': '1'},
plot_kwargs2={'color': 'g', 'marker': 'o',
'label': '2'},
save_name=None):
'''
Calculate the distance.
*NOTE:* The data are standardized before comparing to ensure the
distance is calculated on the same scales.
Parameters
----------
statistic : 'all', 'hellinger', 'ks', 'lognormal'
Which measure of distance to use.
verbose : bool, optional
Enables plotting.
plot_kwargs1 : dict, optional
Pass kwargs to `~matplotlib.pyplot.plot` for
`dataset1`.
plot_kwargs2 : dict, optional
Pass kwargs to `~matplotlib.pyplot.plot` for
`dataset2`.
        save_name : str, optional
Save the figure when a file name is given.
'''
        if statistic == 'all':
            self.compute_hellinger_distance()
            self.compute_ks_distance()
            # self.compute_ad_distance()
            if self._do_fit:
                self.compute_lognormal_distance()
        elif statistic == 'hellinger':
            self.compute_hellinger_distance()
        elif statistic == 'ks':
            self.compute_ks_distance()
        elif statistic == 'lognormal':
            if not self._do_fit:
                raise Exception("Fitting must be enabled to compute the"
                                " lognormal distance.")
            self.compute_lognormal_distance()
        # elif statistic == 'ad':
        #     self.compute_ad_distance()
        else:
            raise TypeError("statistic must be 'all', "
                            "'hellinger', 'ks', or 'lognormal'.")
            # "'hellinger', 'ks' or 'ad'.")
if verbose:
import matplotlib.pyplot as plt
defaults1 = {'color': 'b', 'marker': 'D', 'label': '1'}
defaults2 = {'color': 'g', 'marker': 'o', 'label': '2'}
for key in defaults1:
if key not in plot_kwargs1:
plot_kwargs1[key] = defaults1[key]
for key in defaults2:
if key not in plot_kwargs2:
plot_kwargs2[key] = defaults2[key]
if self.normalization_type == "standardize":
xlabel = r"z-score"
elif self.normalization_type == "center":
xlabel = r"$I - \bar{I}$"
elif self.normalization_type == "normalize_by_mean":
xlabel = r"$I/\bar{I}$"
else:
xlabel = r"Intensity"
# Print fit summaries if using fitting
if self._do_fit:
try:
print(self.PDF1._mle_fit.summary())
except ValueError:
warn("Covariance calculation failed. Check the fit quality"
" for data set 1!")
try:
print(self.PDF2._mle_fit.summary())
except ValueError:
warn("Covariance calculation failed. Check the fit quality"
" for data set 2!")
# PDF
plt.subplot(121)
plt.semilogy(self.bin_centers, self.PDF1.pdf,
color=plot_kwargs1['color'], linestyle='none',
marker=plot_kwargs1['marker'],
label=plot_kwargs1['label'])
plt.semilogy(self.bin_centers, self.PDF2.pdf,
color=plot_kwargs2['color'], linestyle='none',
marker=plot_kwargs2['marker'],
label=plot_kwargs2['label'])
if self._do_fit:
# Plot the fitted model.
vals = np.linspace(self.bin_centers[0], self.bin_centers[-1],
1000)
fit_params1 = self.PDF1.model_params
plt.semilogy(vals,
lognorm.pdf(vals, *fit_params1[:-1],
scale=fit_params1[-1],
loc=0),
color=plot_kwargs1['color'], linestyle='-')
fit_params2 = self.PDF2.model_params
plt.semilogy(vals,
lognorm.pdf(vals, *fit_params2[:-1],
scale=fit_params2[-1],
loc=0),
color=plot_kwargs2['color'], linestyle='-')
plt.grid(True)
plt.xlabel(xlabel)
plt.ylabel("PDF")
plt.legend(frameon=True)
# ECDF
ax2 = plt.subplot(122)
ax2.yaxis.tick_right()
ax2.yaxis.set_label_position("right")
if self.normalization_type is not None:
ax2.plot(self.bin_centers, self.PDF1.ecdf,
color=plot_kwargs1['color'], linestyle='-',
marker=plot_kwargs1['marker'],
label=plot_kwargs1['label'])
ax2.plot(self.bin_centers, self.PDF2.ecdf,
color=plot_kwargs2['color'], linestyle='-',
marker=plot_kwargs2['marker'],
label=plot_kwargs2['label'])
if self._do_fit:
ax2.plot(vals,
lognorm.cdf(vals,
*fit_params1[:-1],
scale=fit_params1[-1],
loc=0),
color=plot_kwargs1['color'], linestyle='-',)
ax2.plot(vals,
lognorm.cdf(vals,
*fit_params2[:-1],
scale=fit_params2[-1],
loc=0),
color=plot_kwargs2['color'], linestyle='-',)
else:
ax2.semilogx(self.bin_centers, self.PDF1.ecdf,
color=plot_kwargs1['color'], linestyle='-',
marker=plot_kwargs1['marker'],
label=plot_kwargs1['label'])
ax2.semilogx(self.bin_centers, self.PDF2.ecdf,
color=plot_kwargs2['color'], linestyle='-',
marker=plot_kwargs2['marker'],
label=plot_kwargs2['label'])
if self._do_fit:
ax2.semilogx(vals,
lognorm.cdf(vals, *fit_params1[:-1],
scale=fit_params1[-1],
loc=0),
color=plot_kwargs1['color'], linestyle='-',)
ax2.semilogx(vals,
lognorm.cdf(vals, *fit_params2[:-1],
scale=fit_params2[-1],
loc=0),
color=plot_kwargs2['color'], linestyle='-',)
plt.grid(True)
plt.xlabel(xlabel)
plt.ylabel("ECDF")
plt.tight_layout()
if save_name is not None:
plt.savefig(save_name)
plt.close()
else:
plt.show()
return self
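# A minimal usage sketch (illustrative only; the synthetic arrays and the
# parameter choices below are assumptions, not part of the original module):
if __name__ == "__main__":
    import numpy as np

    # Two synthetic lognormal fields standing in for real images or cubes.
    field1 = np.random.lognormal(mean=0.0, sigma=0.5, size=(64, 64))
    field2 = np.random.lognormal(mean=0.2, sigma=0.7, size=(64, 64))

    # Build the common-bin PDFs and compute all available distances.
    pdf_dist = PDF_Distance(field1, field2, do_fit=True,
                            normalization_type='normalize_by_mean')
    pdf_dist.distance_metric(verbose=False)

    print("Hellinger distance:", pdf_dist.hellinger_distance)
    print("KS distance:", pdf_dist.ks_distance)
    print("Lognormal width distance:", pdf_dist.lognormal_distance)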
|
mit
|
mijkenator/stellar
|
test/user.py
|
1
|
2169
|
#!/usr/bin/python3
import requests
s = requests.Session()
#url = "http://127.0.0.1:8080"
url = "http://52.76.131.184/_mijkweb/user"
#payload = {'request' : '{"type":"signup", "login":"[email protected]", "password":"test"}'}
#r = s.post(url, data=payload)
#print(r.text)
#exit(0)
#payload = {'request' : '{"type":"restore_password", "login":"[email protected]"}'}
#r = s.post(url, data=payload)
#print(r.text)
#payload = {'request' : '{"type":"signup-confirm", "guid":"111111"}'}
#r = s.post(url, data=payload)
#print(r.text)
#exit(0)
#payload = {'request' : '{"type":"restore-pwd-confirm", "guid":"111111", "password":"test123"}'}
#payload = {'request' : '{"type":"delete", "login":"[email protected]", "password":"test123"}'}
#payload = {'request' : '{"type":"get_details", "login":"[email protected]", "password":"test"}'}
#r = s.post(url, data=payload)
#print(r.text)
#payload = {'request' : '{"type":"set_details", "name":"1", "street":"2", "apt":"3", "zip":"4", "city":"5", "state":"6", "phone":"7"}'}
#r = s.post(url, data=payload)
#print(r.text)
payload = {'request' : '{"type":"get_details", "login":"[email protected]", "password":"1815133"}'}
r = s.post(url, data=payload)
print(r.text)
#payload = {'request' : '{"type":"create_order", "contractor_id":4, "service_id":3, "service_ontime":"2016-12-31 12:00:00",\
# "number_of_services":1, "number_of_contractors":1, "cost":99}'}
#payload = {'request' : '{"type":"create_order","service_id":"3","service_ontime":"2016-12-26 12:15:00","number_of_services":1,"number_of_contractors":1,"cost":9}'}
#r = s.post(url, data=payload)
#print(r.text)
#payload = {'request' : '{"type":"get_orders"}'}
#r = s.post(url, data=payload)
#print(r.text)
payload = {'request' : '{"type":"referral_activity"}'}
r = s.post(url, data=payload)
print(r.text)
#payload = {'request' : '{"type":"invite","to_email":[{"email":"[email protected]","name":"Stellar Test 1"}]}'}
#r = s.post(url, data=payload)
#print(r.text)
payload = {'request' : '{"type":"signup","login":"[email protected]","password":"123","refcode":"70-924"}'}
r = s.post(url, data=payload)
print(r.text)
|
bsd-3-clause
|
hellckt/micblog
|
app/models.py
|
1
|
14065
|
# -*- coding:utf-8 -*-
from datetime import datetime
import hashlib
from werkzeug.security import generate_password_hash, check_password_hash
from itsdangerous import TimedJSONWebSignatureSerializer as Serializer
from markdown import markdown
import bleach
from flask import current_app, request, url_for
from flask.ext.login import UserMixin, AnonymousUserMixin
from app.exceptions import ValidationError
from . import db, login_manager
class Permission:
"""权限常量
0b00000001 == 0x01
0b00000010 == 0x02
0b00000100 == 0x04
0b00001000 == 0x08
0b00010000 == 0x10
0b00100000 == 0x20
0b01000000 == 0x40
0b10000000 == 0x80
"""
    FOLLOW = 0x01  # follow other users
    COMMENT = 0x02  # write comments
    WRITE_ARTICLES = 0x04  # write articles
    MODERATE_COMMENTS = 0x08  # moderate comments made by others
    ADMINISTER = 0x80  # administrator
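    # Illustration (added comment, not part of the original code): role
    # permissions are composed with bitwise OR in Role.insert_roles() and
    # checked with bitwise AND in User.can(). For example, the default 'User'
    # role combines three flags:
    #
    #     0x01 | 0x02 | 0x04 == 0x07   (FOLLOW + COMMENT + WRITE_ARTICLES)
    #
    # so (0x07 & Permission.COMMENT) == Permission.COMMENT is True, while
    # (0x07 & Permission.ADMINISTER) == Permission.ADMINISTER is False.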
class Role(db.Model):
__tablename__ = 'roles'
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(64), unique=True)
default = db.Column(db.Boolean, default=False, index=True)
permissions = db.Column(db.Integer)
users = db.relationship('User', backref='role', lazy='dynamic')
@staticmethod
def insert_roles():
roles = {
'User': (Permission.FOLLOW |
Permission.COMMENT |
Permission.WRITE_ARTICLES, True),
'Moderator': (Permission.FOLLOW |
Permission.COMMENT |
Permission.WRITE_ARTICLES |
Permission.MODERATE_COMMENTS, False),
'Administrator': (0xff, False)
}
for r in roles:
role = Role.query.filter_by(name=r).first()
if role is None:
role = Role(name=r)
role.permissions = roles[r][0]
role.default = roles[r][1]
db.session.add(role)
db.session.commit()
def __repr__(self):
return '<Role %r>' % self.name
class Follow(db.Model):
__tablename__ = 'follows'
follower_id = db.Column(db.Integer, db.ForeignKey('users.id'),
primary_key=True)
followed_id = db.Column(db.Integer, db.ForeignKey('users.id'),
primary_key=True)
timestamp = db.Column(db.DateTime, default=datetime.utcnow)
class User(UserMixin, db.Model):
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True)
email = db.Column(db.String(64), unique=True, index=True)
username = db.Column(db.String(64), unique=True, index=True)
role_id = db.Column(db.Integer, db.ForeignKey('roles.id'))
password_hash = db.Column(db.String(128))
confirmed = db.Column(db.Boolean, default=False)
name = db.Column(db.String(64))
location = db.Column(db.String(64))
about_me = db.Column(db.Text())
member_since = db.Column(db.DateTime(), default=datetime.utcnow)
last_seen = db.Column(db.DateTime(), default=datetime.utcnow)
avatar_hash = db.Column(db.String(32))
posts = db.relationship('Post', backref='author', lazy='dynamic')
followed = db.relationship('Follow',
foreign_keys=[Follow.follower_id],
backref=db.backref('follower', lazy='joined'),
lazy='dynamic',
cascade='all, delete-orphan')
followers = db.relationship('Follow',
foreign_keys=[Follow.followed_id],
backref=db.backref('followed', lazy='joined'),
lazy='dynamic',
cascade='all, delete-orphan')
comments = db.relationship('Comment', backref='author', lazy='dynamic')
@staticmethod
def generate_fake(count=100):
from sqlalchemy.exc import IntegrityError
from random import seed
import forgery_py
seed()
for i in range(count):
u = User(email=forgery_py.internet.email_address(),
username=forgery_py.internet.user_name(True),
password=forgery_py.lorem_ipsum.word(),
confirmed=True,
name=forgery_py.name.full_name(),
location=forgery_py.address.city(),
about_me=forgery_py.lorem_ipsum.sentence(),
member_since=forgery_py.date.date(True))
db.session.add(u)
try:
db.session.commit()
except IntegrityError:
db.session.rollback()
@staticmethod
def add_self_follows():
for user in User.query.all():
if not user.is_following(user):
user.follow(user)
db.session.add(user)
db.session.commit()
def __init__(self, **kwargs):
super(User, self).__init__(**kwargs)
if self.role is None:
if self.email == current_app.config['FLASKY_ADMIN']:
self.role = Role.query.filter_by(permissions=0xff).first()
if self.role is None:
self.role = Role.query.filter_by(default=True).first()
if self.email is not None and self.avatar_hash is None:
self.avatar_hash = hashlib.md5(
self.email.encode('utf-8')).hexdigest()
self.followed.append(Follow(followed=self))
@property
def password(self):
raise AttributeError('password is not a readable attribute')
@password.setter
def password(self, password):
self.password_hash = generate_password_hash(password)
def verify_password(self, password):
return check_password_hash(self.password_hash, password)
def generate_confirmation_token(self, expiration=3600):
s = Serializer(current_app.config['SECRET_KEY'], expiration)
return s.dumps({'confirm': self.id})
def confirm(self, token):
s = Serializer(current_app.config['SECRET_KEY'])
try:
data = s.loads(token)
except:
return False
if data.get('confirm') != self.id:
return False
self.confirmed = True
db.session.add(self)
return True
def generate_reset_token(self, expiration=3600):
s = Serializer(current_app.config['SECRET_KEY'], expiration)
return s.dumps({'reset': self.id})
def reset_password(self, token, new_password):
s = Serializer(current_app.config['SECRET_KEY'])
try:
data = s.loads(token)
except:
return False
if data.get('reset') != self.id:
return False
self.password = new_password
db.session.add(self)
return True
def generate_email_change_token(self, new_email, expiration=3600):
s = Serializer(current_app.config['SECRET_KEY'], expiration)
return s.dumps({'change_email': self.id, 'new_email': new_email})
def change_email(self, token):
s = Serializer(current_app.config['SECRET_KEY'])
try:
data = s.loads(token)
except:
return False
if data.get('change_email') != self.id:
return False
new_email = data.get('new_email')
if new_email is None:
return False
if self.query.filter_by(email=new_email).first() is not None:
return False
self.email = new_email
self.avatar_hash = hashlib.md5(
self.email.encode('utf-8')).hexdigest()
db.session.add(self)
return True
def can(self, permissions):
return self.role is not None and \
(self.role.permissions & permissions) == permissions
def is_administrator(self):
return self.can(Permission.ADMINISTER)
def ping(self):
self.last_seen = datetime.utcnow()
db.session.add(self)
def gravatar(self, size=100, default='identicon', rating='g'):
if request.is_secure:
url = 'https://secure.gravatar.com/avatar'
else:
url = 'http://www.gravatar.com/avatar'
hash = self.avatar_hash or hashlib.md5(
self.email.encode('utf-8')).hexdigest()
return '{url}/{hash}?s={size}&d={default}&r={rating}'.format(
url=url, hash=hash, size=size, default=default, rating=rating)
def follow(self, user):
if not self.is_following(user):
f = Follow(follower=self, followed=user)
db.session.add(f)
def unfollow(self, user):
f = self.followed.filter_by(followed_id=user.id).first()
if f:
db.session.delete(f)
def is_following(self, user):
return self.followed.filter_by(
followed_id=user.id).first() is not None
def is_followed_by(self, user):
return self.followers.filter_by(
follower_id=user.id).first() is not None
@property
def followed_posts(self):
return Post.query.join(Follow, Follow.followed_id == Post.author_id)\
.filter(Follow.follower_id == self.id)
def to_json(self):
json_user = {
            'url': url_for('api.get_user', id=self.id, _external=True),
'username': self.username,
'member_since': self.member_since,
'last_seen': self.last_seen,
'posts': url_for('api.get_user_posts', id=self.id, _external=True),
'followed_posts': url_for('api.get_user_followed_posts',
id=self.id, _external=True),
'post_count': self.posts.count()
}
return json_user
def generate_auth_token(self, expiration):
s = Serializer(current_app.config['SECRET_KEY'],
expires_in=expiration)
return s.dumps({'id': self.id}).decode('ascii')
@staticmethod
def verify_auth_token(token):
s = Serializer(current_app.config['SECRET_KEY'])
try:
data = s.loads(token)
except:
return None
return User.query.get(data['id'])
def __repr__(self):
return '<User %r>' % self.username
class AnonymousUser(AnonymousUserMixin):
def can(self, permissions):
return False
def is_administrator(self):
return False
login_manager.anonymous_user = AnonymousUser
@login_manager.user_loader
def load_user(user_id):
return User.query.get(int(user_id))
class Post(db.Model):
__tablename__ = 'posts'
id = db.Column(db.Integer, primary_key=True)
body = db.Column(db.Text)
body_html = db.Column(db.Text)
timestamp = db.Column(db.DateTime, index=True, default=datetime.utcnow)
author_id = db.Column(db.Integer, db.ForeignKey('users.id'))
comments = db.relationship('Comment', backref='post', lazy='dynamic')
@staticmethod
def generate_fake(count=100):
from random import seed, randint
import forgery_py
seed()
user_count = User.query.count()
for i in range(count):
u = User.query.offset(randint(0, user_count - 1)).first()
p = Post(body=forgery_py.lorem_ipsum.sentences(randint(1, 5)),
timestamp=forgery_py.date.date(True),
author=u)
db.session.add(p)
db.session.commit()
@staticmethod
def on_changed_body(target, value, oldvalue, initiator):
allowed_tags = ['a', 'abbr', 'acronym', 'b', 'blockquote', 'code',
'em', 'i', 'li', 'ol', 'pre', 'strong', 'ul',
'h1', 'h2', 'h3', 'p']
target.body_html = bleach.linkify(bleach.clean(
markdown(value, output_format='html'),
tags=allowed_tags, strip=True))
def to_json(self):
json_post = {
'url': url_for('api.get_post', id=self.id, _external=True),
'body': self.body,
'body_html': self.body_html,
'timestamp': self.timestamp,
'author': url_for('api.get_user', id=self.author_id,
_external=True),
'comments': url_for('api.get_post_comments', id=self.id,
_external=True),
'comment_count': self.comments.count()
}
return json_post
@staticmethod
def from_json(json_post):
body = json_post.get('body')
if body is None or body == '':
raise ValidationError('post does not have a body')
return Post(body=body)
db.event.listen(Post.body, 'set', Post.on_changed_body)
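# Illustration (added comment; the example values are assumptions): because of
# the 'set' listener registered above, assigning to Post.body re-renders the
# sanitized HTML automatically, e.g.
#
#     post = Post(body='**bold** <script>alert(1)</script>')
#     # post.body_html is now roughly '<p><strong>bold</strong> alert(1)</p>':
#     # the Markdown is converted and disallowed tags are stripped by bleach.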
class Comment(db.Model):
__tablename__ = 'comments'
id = db.Column(db.Integer, primary_key=True)
body = db.Column(db.Text)
body_html = db.Column(db.Text)
timestamp = db.Column(db.DateTime, index=True, default=datetime.utcnow)
disabled = db.Column(db.Boolean)
author_id = db.Column(db.Integer, db.ForeignKey('users.id'))
post_id = db.Column(db.Integer, db.ForeignKey('posts.id'))
@staticmethod
def on_changed_body(target, value, oldvalue, initiator):
allowed_tags = ['a', 'abbr', 'acronym', 'b', 'code', 'em', 'i',
'strong']
target.body_html = bleach.linkify(bleach.clean(
markdown(value, output_format='html'),
tags=allowed_tags, strip=True))
def to_json(self):
json_comment = {
'url': url_for('api.get_comment', id=self.id, _external=True),
'post': url_for('api.get_post', id=self.post_id, _external=True),
'body': self.body,
'body_html': self.body_html,
'timestamp': self.timestamp,
'author': url_for('api.get_user', id=self.author_id,
_external=True),
}
return json_comment
@staticmethod
def from_json(json_comment):
body = json_comment.get('body')
if body is None or body == '':
raise ValidationError('comment does not have a body')
return Comment(body=body)
db.event.listen(Comment.body, 'set', Comment.on_changed_body)
|
mit
|
rezib/cloubed-deb
|
cloubed/DomainNetif.py
|
1
|
1927
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright 2013 Rémi Palancher
#
# This file is part of Cloubed.
#
# Cloubed is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# Cloubed is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with Cloubed. If not, see
# <http://www.gnu.org/licenses/>.
""" DomainNetif class of Cloubed """
import logging
from cloubed.Utils import gen_mac
class DomainNetif:
""" DomainNetif class """
def __init__(self, tbd, hostname, netif_conf):
self.network = tbd.get_network_by_name(netif_conf["network"])
        if "mac" in netif_conf:
self.mac = netif_conf["mac"]
else:
self.mac = gen_mac("{domain:s}-{network:s}" \
.format(domain=hostname,
network=self.network.name))
logging.debug("generated mac {mac} for netif on domain {domain} "\
"connected to network {network}" \
.format(mac=self.mac,
domain=hostname,
network=self.network.name))
self.ip = netif_conf.get('ip')
if self.ip is not None:
self.network.register_host(hostname, self.mac, self.ip)
def get_network_name(self):
"""
Returns the name of the Network connected to the domain interface
"""
return self.network.name
|
lgpl-3.0
|
scue/vim-ycm_win7
|
third_party/ycmd/third_party/jedi/test/test_jedi_system.py
|
30
|
1866
|
"""
Test the Jedi "System" which means for example to test if imports are
correctly used.
"""
import os
import inspect
import jedi
def test_settings_module():
"""
jedi.settings and jedi.cache.settings must be the same module.
"""
from jedi import cache
from jedi import settings
assert cache.settings is settings
def test_no_duplicate_modules():
"""
Make sure that import hack works as expected.
Jedi does an import hack (see: jedi/__init__.py) to have submodules
with circular dependencies. The modules in this circular dependency
"loop" must be imported by ``import <module>`` rather than normal
    ``from jedi import <module>`` (or ``from . jedi ...``). This test
    makes sure that this is satisfied.
See also:
- `#160 <https://github.com/davidhalter/jedi/issues/160>`_
- `#161 <https://github.com/davidhalter/jedi/issues/161>`_
"""
import sys
jedipath = os.path.dirname(os.path.abspath(jedi.__file__))
def is_submodule(m):
try:
filepath = m.__file__
except AttributeError:
return False
return os.path.abspath(filepath).startswith(jedipath)
modules = list(filter(is_submodule, sys.modules.values()))
top_modules = [m for m in modules if not m.__name__.startswith('jedi.')]
for m in modules:
if m is jedi:
            # py.test automatically imports `jedi.*` when --doctest-modules
            # is given. So this test cannot succeed.
continue
for tm in top_modules:
try:
imported = getattr(m, tm.__name__)
except AttributeError:
continue
if inspect.ismodule(imported):
# module could have a function with the same name, e.g.
# `keywords.keywords`.
assert imported is tm
|
gpl-3.0
|
blackzw/openwrt_sdk_dev1
|
staging_dir/target-mips_r2_uClibc-0.9.33.2/usr/lib/python2.7/test/test_pkgutil.py
|
112
|
4651
|
from test.test_support import run_unittest
import unittest
import sys
import imp
import pkgutil
import os
import os.path
import tempfile
import shutil
import zipfile
class PkgutilTests(unittest.TestCase):
def setUp(self):
self.dirname = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, self.dirname)
sys.path.insert(0, self.dirname)
def tearDown(self):
del sys.path[0]
def test_getdata_filesys(self):
pkg = 'test_getdata_filesys'
# Include a LF and a CRLF, to test that binary data is read back
RESOURCE_DATA = 'Hello, world!\nSecond line\r\nThird line'
# Make a package with some resources
package_dir = os.path.join(self.dirname, pkg)
os.mkdir(package_dir)
# Empty init.py
f = open(os.path.join(package_dir, '__init__.py'), "wb")
f.close()
# Resource files, res.txt, sub/res.txt
f = open(os.path.join(package_dir, 'res.txt'), "wb")
f.write(RESOURCE_DATA)
f.close()
os.mkdir(os.path.join(package_dir, 'sub'))
f = open(os.path.join(package_dir, 'sub', 'res.txt'), "wb")
f.write(RESOURCE_DATA)
f.close()
# Check we can read the resources
res1 = pkgutil.get_data(pkg, 'res.txt')
self.assertEqual(res1, RESOURCE_DATA)
res2 = pkgutil.get_data(pkg, 'sub/res.txt')
self.assertEqual(res2, RESOURCE_DATA)
del sys.modules[pkg]
def test_getdata_zipfile(self):
zip = 'test_getdata_zipfile.zip'
pkg = 'test_getdata_zipfile'
# Include a LF and a CRLF, to test that binary data is read back
RESOURCE_DATA = 'Hello, world!\nSecond line\r\nThird line'
# Make a package with some resources
zip_file = os.path.join(self.dirname, zip)
z = zipfile.ZipFile(zip_file, 'w')
# Empty init.py
z.writestr(pkg + '/__init__.py', "")
# Resource files, res.txt, sub/res.txt
z.writestr(pkg + '/res.txt', RESOURCE_DATA)
z.writestr(pkg + '/sub/res.txt', RESOURCE_DATA)
z.close()
# Check we can read the resources
sys.path.insert(0, zip_file)
res1 = pkgutil.get_data(pkg, 'res.txt')
self.assertEqual(res1, RESOURCE_DATA)
res2 = pkgutil.get_data(pkg, 'sub/res.txt')
self.assertEqual(res2, RESOURCE_DATA)
del sys.path[0]
del sys.modules[pkg]
def test_unreadable_dir_on_syspath(self):
# issue7367 - walk_packages failed if unreadable dir on sys.path
package_name = "unreadable_package"
d = os.path.join(self.dirname, package_name)
# this does not appear to create an unreadable dir on Windows
# but the test should not fail anyway
os.mkdir(d, 0)
self.addCleanup(os.rmdir, d)
for t in pkgutil.walk_packages(path=[self.dirname]):
self.fail("unexpected package found")
class PkgutilPEP302Tests(unittest.TestCase):
class MyTestLoader(object):
def load_module(self, fullname):
# Create an empty module
mod = sys.modules.setdefault(fullname, imp.new_module(fullname))
mod.__file__ = "<%s>" % self.__class__.__name__
mod.__loader__ = self
# Make it a package
mod.__path__ = []
# Count how many times the module is reloaded
mod.__dict__['loads'] = mod.__dict__.get('loads',0) + 1
return mod
def get_data(self, path):
return "Hello, world!"
class MyTestImporter(object):
def find_module(self, fullname, path=None):
return PkgutilPEP302Tests.MyTestLoader()
def setUp(self):
sys.meta_path.insert(0, self.MyTestImporter())
def tearDown(self):
del sys.meta_path[0]
def test_getdata_pep302(self):
# Use a dummy importer/loader
self.assertEqual(pkgutil.get_data('foo', 'dummy'), "Hello, world!")
del sys.modules['foo']
def test_alreadyloaded(self):
# Ensure that get_data works without reloading - the "loads" module
# variable in the example loader should count how many times a reload
# occurs.
import foo
self.assertEqual(foo.loads, 1)
self.assertEqual(pkgutil.get_data('foo', 'dummy'), "Hello, world!")
self.assertEqual(foo.loads, 1)
del sys.modules['foo']
def test_main():
run_unittest(PkgutilTests, PkgutilPEP302Tests)
# this is necessary if test is run repeated (like when finding leaks)
import zipimport
zipimport._zip_directory_cache.clear()
if __name__ == '__main__':
test_main()
|
gpl-2.0
|
simonwydooghe/ansible
|
lib/ansible/modules/cloud/vmware/vmware_host_ntp.py
|
18
|
15765
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
# Copyright: (c) 2018, Christian Kotte <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_host_ntp
short_description: Manage NTP server configuration of an ESXi host
description:
- This module can be used to configure, add or remove NTP servers from an ESXi host.
- If C(state) is not given, the NTP servers will be configured in the exact sequence.
- User can specify an ESXi hostname or Cluster name. In case of cluster name, all ESXi hosts are updated.
version_added: '2.5'
author:
- Abhijeet Kasurde (@Akasurde)
- Christian Kotte (@ckotte)
notes:
- Tested on vSphere 6.5
requirements:
- python >= 2.6
- PyVmomi
options:
esxi_hostname:
description:
- Name of the host system to work with.
- This parameter is required if C(cluster_name) is not specified.
type: str
cluster_name:
description:
- Name of the cluster from which all host systems will be used.
- This parameter is required if C(esxi_hostname) is not specified.
type: str
ntp_servers:
description:
- "IP or FQDN of NTP server(s)."
- This accepts a list of NTP servers. For multiple servers, please look at the examples.
type: list
required: True
state:
description:
- "present: Add NTP server(s), if specified server(s) are absent else do nothing."
- "absent: Remove NTP server(s), if specified server(s) are present else do nothing."
- Specified NTP server(s) will be configured if C(state) isn't specified.
choices: [ present, absent ]
type: str
verbose:
description:
- Verbose output of the configuration change.
- Explains if an NTP server was added, removed, or if the NTP server sequence was changed.
type: bool
required: false
default: false
version_added: 2.8
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Configure NTP servers for an ESXi Host
vmware_host_ntp:
hostname: vcenter01.example.local
username: [email protected]
password: SuperSecretPassword
esxi_hostname: esx01.example.local
ntp_servers:
- 0.pool.ntp.org
- 1.pool.ntp.org
delegate_to: localhost
- name: Set NTP servers for all ESXi Host in given Cluster
vmware_host_ntp:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
cluster_name: '{{ cluster_name }}'
state: present
ntp_servers:
- 0.pool.ntp.org
- 1.pool.ntp.org
delegate_to: localhost
- name: Set NTP servers for an ESXi Host
vmware_host_ntp:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
state: present
ntp_servers:
- 0.pool.ntp.org
- 1.pool.ntp.org
delegate_to: localhost
- name: Remove NTP servers for an ESXi Host
vmware_host_ntp:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
state: absent
ntp_servers:
- bad.server.ntp.org
delegate_to: localhost
'''
RETURN = r'''
host_ntp_status:
description: metadata about host system's NTP configuration
returned: always
type: dict
sample: {
"esx01.example.local": {
"ntp_servers_changed": ["time1.example.local", "time2.example.local", "time3.example.local", "time4.example.local"],
"ntp_servers": ["time3.example.local", "time4.example.local"],
"ntp_servers_previous": ["time1.example.local", "time2.example.local"],
},
"esx02.example.local": {
"ntp_servers_changed": ["time3.example.local"],
"ntp_servers_current": ["time1.example.local", "time2.example.local", "time3.example.local"],
"state": "present",
"ntp_servers_previous": ["time1.example.local", "time2.example.local"],
},
}
'''
try:
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import vmware_argument_spec, PyVmomi
from ansible.module_utils._text import to_native
class VmwareNtpConfigManager(PyVmomi):
"""Class to manage configured NTP servers"""
def __init__(self, module):
super(VmwareNtpConfigManager, self).__init__(module)
cluster_name = self.params.get('cluster_name', None)
esxi_host_name = self.params.get('esxi_hostname', None)
self.ntp_servers = self.params.get('ntp_servers', list())
self.hosts = self.get_all_host_objs(cluster_name=cluster_name, esxi_host_name=esxi_host_name)
if not self.hosts:
self.module.fail_json(msg="Failed to find host system.")
self.results = {}
self.desired_state = self.params.get('state', None)
self.verbose = module.params.get('verbose', False)
def update_ntp_servers(self, host, ntp_servers_configured, ntp_servers_to_change, operation='overwrite'):
"""Update NTP server configuration"""
host_date_time_manager = host.configManager.dateTimeSystem
if host_date_time_manager:
# Prepare new NTP server list
if operation == 'overwrite':
new_ntp_servers = list(ntp_servers_to_change)
else:
new_ntp_servers = list(ntp_servers_configured)
if operation == 'add':
new_ntp_servers = new_ntp_servers + ntp_servers_to_change
elif operation == 'delete':
for server in ntp_servers_to_change:
if server in new_ntp_servers:
new_ntp_servers.remove(server)
# build verbose message
if self.verbose:
message = self.build_changed_message(
ntp_servers_configured,
new_ntp_servers,
ntp_servers_to_change,
operation
)
ntp_config_spec = vim.host.NtpConfig()
ntp_config_spec.server = new_ntp_servers
date_config_spec = vim.host.DateTimeConfig()
date_config_spec.ntpConfig = ntp_config_spec
try:
if not self.module.check_mode:
host_date_time_manager.UpdateDateTimeConfig(date_config_spec)
if self.verbose:
self.results[host.name]['msg'] = message
except vim.fault.HostConfigFault as config_fault:
self.module.fail_json(
msg="Failed to configure NTP for host '%s' due to : %s" %
(host.name, to_native(config_fault.msg))
)
return new_ntp_servers
def check_host_state(self):
"""Check ESXi host configuration"""
change_list = []
changed = False
for host in self.hosts:
self.results[host.name] = dict()
ntp_servers_configured, ntp_servers_to_change = self.check_ntp_servers(host=host)
# add/remove NTP servers
if self.desired_state:
self.results[host.name]['state'] = self.desired_state
if ntp_servers_to_change:
self.results[host.name]['ntp_servers_changed'] = ntp_servers_to_change
operation = 'add' if self.desired_state == 'present' else 'delete'
new_ntp_servers = self.update_ntp_servers(
host=host,
ntp_servers_configured=ntp_servers_configured,
ntp_servers_to_change=ntp_servers_to_change,
operation=operation
)
self.results[host.name]['ntp_servers_current'] = new_ntp_servers
self.results[host.name]['changed'] = True
change_list.append(True)
else:
self.results[host.name]['ntp_servers_current'] = ntp_servers_configured
if self.verbose:
self.results[host.name]['msg'] = (
"NTP servers already added" if self.desired_state == 'present'
else "NTP servers already removed"
)
self.results[host.name]['changed'] = False
change_list.append(False)
# overwrite NTP servers
else:
self.results[host.name]['ntp_servers'] = self.ntp_servers
if ntp_servers_to_change:
self.results[host.name]['ntp_servers_changed'] = self.get_differt_entries(
ntp_servers_configured,
ntp_servers_to_change
)
self.update_ntp_servers(
host=host,
ntp_servers_configured=ntp_servers_configured,
ntp_servers_to_change=ntp_servers_to_change,
operation='overwrite'
)
self.results[host.name]['changed'] = True
change_list.append(True)
else:
if self.verbose:
self.results[host.name]['msg'] = "NTP servers already configured"
self.results[host.name]['changed'] = False
change_list.append(False)
if any(change_list):
changed = True
self.module.exit_json(changed=changed, host_ntp_status=self.results)
def check_ntp_servers(self, host):
"""Check configured NTP servers"""
update_ntp_list = []
host_datetime_system = host.configManager.dateTimeSystem
if host_datetime_system:
ntp_servers_configured = host_datetime_system.dateTimeInfo.ntpConfig.server
# add/remove NTP servers
if self.desired_state:
for ntp_server in self.ntp_servers:
if self.desired_state == 'present' and ntp_server not in ntp_servers_configured:
update_ntp_list.append(ntp_server)
if self.desired_state == 'absent' and ntp_server in ntp_servers_configured:
update_ntp_list.append(ntp_server)
# overwrite NTP servers
else:
if ntp_servers_configured != self.ntp_servers:
for ntp_server in self.ntp_servers:
update_ntp_list.append(ntp_server)
if update_ntp_list:
self.results[host.name]['ntp_servers_previous'] = ntp_servers_configured
return ntp_servers_configured, update_ntp_list
def build_changed_message(self, ntp_servers_configured, new_ntp_servers, ntp_servers_to_change, operation):
"""Build changed message"""
check_mode = 'would be ' if self.module.check_mode else ''
if operation == 'overwrite':
# get differences
add = self.get_not_in_list_one(new_ntp_servers, ntp_servers_configured)
remove = self.get_not_in_list_one(ntp_servers_configured, new_ntp_servers)
diff_servers = list(ntp_servers_configured)
if add and remove:
for server in add:
diff_servers.append(server)
for server in remove:
diff_servers.remove(server)
if new_ntp_servers != diff_servers:
message = (
"NTP server %s %sadded and %s %sremoved and the server sequence %schanged as well" %
(self.array_to_string(add), check_mode, self.array_to_string(remove), check_mode, check_mode)
)
else:
if new_ntp_servers != ntp_servers_configured:
message = (
"NTP server %s %sreplaced with %s" %
(self.array_to_string(remove), check_mode, self.array_to_string(add))
)
else:
message = (
"NTP server %s %sremoved and %s %sadded" %
(self.array_to_string(remove), check_mode, self.array_to_string(add), check_mode)
)
elif add:
for server in add:
diff_servers.append(server)
if new_ntp_servers != diff_servers:
message = (
"NTP server %s %sadded and the server sequence %schanged as well" %
(self.array_to_string(add), check_mode, check_mode)
)
else:
message = "NTP server %s %sadded" % (self.array_to_string(add), check_mode)
elif remove:
for server in remove:
diff_servers.remove(server)
if new_ntp_servers != diff_servers:
message = (
"NTP server %s %sremoved and the server sequence %schanged as well" %
(self.array_to_string(remove), check_mode, check_mode)
)
else:
message = "NTP server %s %sremoved" % (self.array_to_string(remove), check_mode)
else:
message = "NTP server sequence %schanged" % check_mode
elif operation == 'add':
message = "NTP server %s %sadded" % (self.array_to_string(ntp_servers_to_change), check_mode)
elif operation == 'delete':
message = "NTP server %s %sremoved" % (self.array_to_string(ntp_servers_to_change), check_mode)
return message
@staticmethod
def get_not_in_list_one(list1, list2):
"""Return entries that ore not in list one"""
return [x for x in list1 if x not in set(list2)]
@staticmethod
def array_to_string(array):
"""Return string from array"""
if len(array) > 2:
string = (
', '.join("'{0}'".format(element) for element in array[:-1]) + ', and '
+ "'{0}'".format(str(array[-1]))
)
elif len(array) == 2:
string = ' and '.join("'{0}'".format(element) for element in array)
        elif len(array) == 1:
            string = "'{0}'".format(array[0])
        else:
            # Guard against an empty list so the function always returns a str.
            string = ''
        return string
@staticmethod
def get_differt_entries(list1, list2):
"""Return different entries of two lists"""
return [a for a in list1 + list2 if (a not in list1) or (a not in list2)]
def main():
"""Main"""
argument_spec = vmware_argument_spec()
argument_spec.update(
cluster_name=dict(type='str', required=False),
esxi_hostname=dict(type='str', required=False),
ntp_servers=dict(type='list', required=True),
state=dict(type='str', choices=['absent', 'present']),
verbose=dict(type='bool', default=False, required=False)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_one_of=[
['cluster_name', 'esxi_hostname'],
],
supports_check_mode=True
)
vmware_host_ntp_config = VmwareNtpConfigManager(module)
vmware_host_ntp_config.check_host_state()
if __name__ == "__main__":
main()
|
gpl-3.0
|
davelab6/nototools
|
nototools/chart/chart.py
|
8
|
4094
|
#!/usr/bin/python
import sys
import cairo
import pycairoft
from fontTools import ttLib
def clamp(x, Min, Max):
return max(Min, min(Max, x))
class Color:
def __init__(self, rgb):
self.rgb = rgb
def __repr__(self):
return 'Color(%g,%g,%g)' % self.rgb
def __str__(self):
return "#%02X%02X%02X" % tuple(int(255 * c) for c in self.rgb)
class Font:
def __init__(self, fontfile):
self.filename = fontfile
self.ttfont = ttLib.TTFont(fontfile)
cmap = self.ttfont['cmap']
self.charset = set()
self.charset.update(*[t.cmap.keys() for t in cmap.tables if t.isUnicode()])
self.cairo_font_face = None
def get_cairo_font_face(self):
if self.cairo_font_face is None:
self.cairo_font_face = pycairoft.create_cairo_font_face_for_file (
self.filename)
return self.cairo_font_face
def __repr__(self):
return 'Font("%s")' % self.filename
def assign_colors(fonts):
import colorsys
n = len(fonts)
mult = (n-1) // 2
darkness = .3
for i,font in enumerate(fonts):
pos = (i * mult / float(n)) % 1.
rgb = colorsys.hsv_to_rgb(pos, 1., darkness)
luma = .3*rgb[0] + .59*rgb[1] + .11*rgb[2]
adj = .3 - luma
rgb = [c+adj for c in rgb]
font.color = Color(rgb)
outfile = sys.argv[1]
fonts = [Font(fontfile) for fontfile in sys.argv[2:]]
charset = set.union(*[f.charset for f in fonts])
assign_colors(fonts)
coverage = {c:[] for c in charset}
for font in fonts:
for char in font.charset:
coverage[char].append(font)
NUM_COLS = 128
FONT_SIZE = 5
PADDING = 0.3
BOX_WIDTH = PADDING * .6
CELL_SIZE = FONT_SIZE + 2 * PADDING
MARGIN = 1 * FONT_SIZE
LABEL_WIDTH = 8 * FONT_SIZE/2.
rows = set([u // NUM_COLS * NUM_COLS for u in charset])
num_rows = len(rows)
width = NUM_COLS * CELL_SIZE + 2 * (2 * MARGIN + LABEL_WIDTH)
height = num_rows * CELL_SIZE + 2 * MARGIN
print "Generating %s at %.3gx%.3gin" % (outfile, width/72., height/72.)
if outfile.endswith(".pdf"):
surface = cairo.PDFSurface(outfile, width, height)
elif outfile.endswith(".ps"):
surface = cairo.PSSurface(outfile, width, height)
else:
assert 0
cr = cairo.Context(surface)
noto_sans_lgc = pycairoft.create_cairo_font_face_for_file ("../../fonts/individual/unhinted/NotoSans-Regular.ttf")
#cr.select_font_face("@cairo:", cairo.FONT_SLANT_NORMAL, cairo.FONT_WEIGHT_NORMAL)
cr.set_font_size(FONT_SIZE)
cr.set_line_width(PADDING)
STAGE_BOXES = 0
STAGE_GLYPHS = 1
for stage in range(2):
cr.save()
cr.translate(MARGIN, MARGIN)
for row,row_start in enumerate(sorted(rows)):
cr.translate(0, PADDING)
cr.save()
cr.set_source_rgb(0,0,0)
cr.move_to(0,FONT_SIZE)
if stage == 0:
cr.set_font_face(noto_sans_lgc)
cr.show_text ("U+%04X" % row_start)
cr.translate(LABEL_WIDTH + MARGIN, 0)
for char in range(row_start, row_start + NUM_COLS):
cr.translate(PADDING, 0)
for font in coverage.get(char, []):
if stage == STAGE_BOXES:
#cr.rectangle(-BOX_WIDTH*.5, -BOX_WIDTH*.5, FONT_SIZE+BOX_WIDTH, FONT_SIZE+BOX_WIDTH)
#cr.set_source_rgba(*[c * .1 + .9 for c in font.color.rgb])
#cr.stroke()
pass
elif stage == STAGE_GLYPHS:
cr.set_source_rgb(*(font.color.rgb))
#cr.set_source_rgb(0,0,0)
cr.set_font_face(font.get_cairo_font_face())
ascent,descent,font_height,max_x_adv,max_y_adv = cr.font_extents()
cr.save()
# XXX cr.set_font_size (FONT_SIZE*FONT_SIZE / (ascent+descent))
cr.set_font_size (round(1.2 * FONT_SIZE*FONT_SIZE / (ascent+descent)))
ascent,descent,font_height,max_x_adv,max_y_adv = cr.font_extents()
utf8 = unichr(char).encode('utf-8')
x1,y1,width,height,xadv,yadv = cr.text_extents(utf8)
cr.move_to(FONT_SIZE*.5 - (x1+.5*width),
FONT_SIZE*.5 - (-ascent+descent)*.5)
cr.show_text(utf8)
cr.restore()
break
cr.translate(FONT_SIZE, 0)
cr.translate(PADDING, 0)
cr.set_source_rgb(0,0,0)
cr.move_to(MARGIN,FONT_SIZE)
if stage == 0:
cr.set_font_face(noto_sans_lgc)
cr.show_text ("U+%04X" % (row_start + NUM_COLS - 1))
cr.translate(LABEL_WIDTH + 2 * MARGIN, 0)
cr.restore()
cr.translate(0, FONT_SIZE)
cr.translate(0, PADDING)
cr.restore()
|
apache-2.0
|
CubicERP/geraldo
|
site/newsite/site-geraldo/django/core/management/commands/compilemessages.py
|
25
|
2431
|
import os
import sys
from optparse import make_option
from django.core.management.base import BaseCommand, CommandError
try:
set
except NameError:
from sets import Set as set # For Python 2.3
def compile_messages(locale=None):
basedirs = [os.path.join('conf', 'locale'), 'locale']
if os.environ.get('DJANGO_SETTINGS_MODULE'):
from django.conf import settings
basedirs.extend(settings.LOCALE_PATHS)
# Gather existing directories.
basedirs = set(map(os.path.abspath, filter(os.path.isdir, basedirs)))
if not basedirs:
raise CommandError("This script should be run from the Django SVN tree or your project or app tree, or with the settings module specified.")
for basedir in basedirs:
if locale:
basedir = os.path.join(basedir, locale, 'LC_MESSAGES')
for dirpath, dirnames, filenames in os.walk(basedir):
for f in filenames:
if f.endswith('.po'):
sys.stderr.write('processing file %s in %s\n' % (f, dirpath))
pf = os.path.splitext(os.path.join(dirpath, f))[0]
# Store the names of the .mo and .po files in an environment
# variable, rather than doing a string replacement into the
# command, so that we can take advantage of shell quoting, to
# quote any malicious characters/escaping.
# See http://cyberelk.net/tim/articles/cmdline/ar01s02.html
os.environ['djangocompilemo'] = pf + '.mo'
os.environ['djangocompilepo'] = pf + '.po'
if sys.platform == 'win32': # Different shell-variable syntax
cmd = 'msgfmt --check-format -o "%djangocompilemo%" "%djangocompilepo%"'
else:
cmd = 'msgfmt --check-format -o "$djangocompilemo" "$djangocompilepo"'
os.system(cmd)
class Command(BaseCommand):
option_list = BaseCommand.option_list + (
make_option('--locale', '-l', dest='locale',
help='The locale to process. Default is to process all.'),
)
help = 'Compiles .po files to .mo files for use with builtin gettext support.'
requires_model_validation = False
can_import_settings = False
def handle(self, **options):
locale = options.get('locale')
compile_messages(locale)
|
lgpl-3.0
|
tetherless-world/graphene
|
whyis/autonomic/dataset_importer.py
|
2
|
1223
|
from builtins import str
import sadi
import rdflib
import setlr
from datetime import datetime
from .update_change_service import UpdateChangeService
from nanopub import Nanopublication
from datastore import create_id
import flask
from flask import render_template
from flask import render_template_string
import logging
import sys, traceback
import database
import tempfile
from depot.io.interfaces import StoredFile
from whyis.namespace import *
class DatasetImporter(UpdateChangeService):
activity_class = whyis.ImportDatasetEntities
def getInputClass(self):
return whyis.DatasetEntity
def getOutputClass(self):
return whyis.ImportedDatasetEntity
_query = None
def get_query(self):
if self._query is None:
prefixes = [x.detect_url for x in self.app.config['namespaces']]
self._query = '''select distinct ?resource where {
?resource void:inDataset ?dataset.
FILTER (regex(str(?resource), "^(%s)")) .
filter not exists {
?assertion prov:wasQuotedFrom ?resource.
}
} ''' % '|'.join(prefixes)
print(self._query)
return self._query
def process(self, i, o):
node = self.app.run_importer(i.identifier)
|
apache-2.0
|
cloudbase/nova
|
nova/db/sqlalchemy/migrate_repo/versions/346_remove_scheduled_at_column.py
|
13
|
1120
|
# Copyright 2016 Intel Corporation
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import MetaData, Table
def upgrade(migrate_engine):
meta = MetaData(bind=migrate_engine)
column_name = 'scheduled_at'
# Remove scheduled_at column from instances table
instances = Table('instances', meta, autoload=True)
shadow_instances = Table('shadow_instances', meta, autoload=True)
if hasattr(instances.c, column_name):
instances.drop_column(instances.c[column_name])
if hasattr(shadow_instances.c, column_name):
shadow_instances.drop_column(shadow_instances.c[column_name])
|
apache-2.0
|
yewang15215/django
|
tests/aggregation_regress/models.py
|
282
|
3288
|
# -*- coding: utf-8 -*-
from django.contrib.contenttypes.fields import (
GenericForeignKey, GenericRelation,
)
from django.contrib.contenttypes.models import ContentType
from django.db import models
from django.utils.encoding import python_2_unicode_compatible
@python_2_unicode_compatible
class Author(models.Model):
name = models.CharField(max_length=100)
age = models.IntegerField()
friends = models.ManyToManyField('self', blank=True)
def __str__(self):
return self.name
@python_2_unicode_compatible
class Publisher(models.Model):
name = models.CharField(max_length=255)
num_awards = models.IntegerField()
def __str__(self):
return self.name
class ItemTag(models.Model):
tag = models.CharField(max_length=100)
content_type = models.ForeignKey(ContentType, models.CASCADE)
object_id = models.PositiveIntegerField()
content_object = GenericForeignKey('content_type', 'object_id')
@python_2_unicode_compatible
class Book(models.Model):
isbn = models.CharField(max_length=9)
name = models.CharField(max_length=255)
pages = models.IntegerField()
rating = models.FloatField()
price = models.DecimalField(decimal_places=2, max_digits=6)
authors = models.ManyToManyField(Author)
contact = models.ForeignKey(Author, models.CASCADE, related_name='book_contact_set')
publisher = models.ForeignKey(Publisher, models.CASCADE)
pubdate = models.DateField()
tags = GenericRelation(ItemTag)
class Meta:
ordering = ('name',)
def __str__(self):
return self.name
@python_2_unicode_compatible
class Store(models.Model):
name = models.CharField(max_length=255)
books = models.ManyToManyField(Book)
original_opening = models.DateTimeField()
friday_night_closing = models.TimeField()
def __str__(self):
return self.name
class Entries(models.Model):
EntryID = models.AutoField(primary_key=True, db_column='Entry ID')
Entry = models.CharField(unique=True, max_length=50)
Exclude = models.BooleanField(default=False)
class Clues(models.Model):
ID = models.AutoField(primary_key=True)
EntryID = models.ForeignKey(Entries, models.CASCADE, verbose_name='Entry', db_column='Entry ID')
Clue = models.CharField(max_length=150)
class WithManualPK(models.Model):
# The generic relations regression test needs two different model
# classes with the same PK value, and there are some (external)
# DB backends that don't work nicely when assigning integer to AutoField
# column (MSSQL at least).
id = models.IntegerField(primary_key=True)
@python_2_unicode_compatible
class HardbackBook(Book):
weight = models.FloatField()
def __str__(self):
return "%s (hardback): %s" % (self.name, self.weight)
# Models for ticket #21150
class Alfa(models.Model):
name = models.CharField(max_length=10, null=True)
class Bravo(models.Model):
pass
class Charlie(models.Model):
alfa = models.ForeignKey(Alfa, models.SET_NULL, null=True)
bravo = models.ForeignKey(Bravo, models.SET_NULL, null=True)
class SelfRefFK(models.Model):
name = models.CharField(max_length=50)
parent = models.ForeignKey('self', models.SET_NULL, null=True, blank=True, related_name='children')
|
bsd-3-clause
|
dahlstrom-g/intellij-community
|
plugins/hg4idea/testData/bin/mercurial/demandimport.py
|
94
|
5252
|
# demandimport.py - global demand-loading of modules for Mercurial
#
# Copyright 2006, 2007 Matt Mackall <[email protected]>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
'''
demandimport - automatic demandloading of modules
To enable this module, do:
import demandimport; demandimport.enable()
Imports of the following forms will be demand-loaded:
import a, b.c
import a.b as c
from a import b,c # a will be loaded immediately
These imports will not be delayed:
from a import *
b = __import__(a)
'''
import __builtin__
_origimport = __import__
nothing = object()
try:
_origimport(__builtin__.__name__, {}, {}, None, -1)
except TypeError: # no level argument
def _import(name, globals, locals, fromlist, level):
"call _origimport with no level argument"
return _origimport(name, globals, locals, fromlist)
else:
_import = _origimport
class _demandmod(object):
"""module demand-loader and proxy"""
def __init__(self, name, globals, locals):
if '.' in name:
head, rest = name.split('.', 1)
after = [rest]
else:
head = name
after = []
object.__setattr__(self, "_data", (head, globals, locals, after))
object.__setattr__(self, "_module", None)
def _extend(self, name):
"""add to the list of submodules to load"""
self._data[3].append(name)
def _load(self):
if not self._module:
head, globals, locals, after = self._data
mod = _origimport(head, globals, locals)
# load submodules
def subload(mod, p):
h, t = p, None
if '.' in p:
h, t = p.split('.', 1)
if getattr(mod, h, nothing) is nothing:
setattr(mod, h, _demandmod(p, mod.__dict__, mod.__dict__))
elif t:
subload(getattr(mod, h), t)
for x in after:
subload(mod, x)
# are we in the locals dictionary still?
if locals and locals.get(head) == self:
locals[head] = mod
object.__setattr__(self, "_module", mod)
def __repr__(self):
if self._module:
return "<proxied module '%s'>" % self._data[0]
return "<unloaded module '%s'>" % self._data[0]
def __call__(self, *args, **kwargs):
raise TypeError("%s object is not callable" % repr(self))
def __getattribute__(self, attr):
if attr in ('_data', '_extend', '_load', '_module'):
return object.__getattribute__(self, attr)
self._load()
return getattr(self._module, attr)
def __setattr__(self, attr, val):
self._load()
setattr(self._module, attr, val)
def _demandimport(name, globals=None, locals=None, fromlist=None, level=-1):
if not locals or name in ignore or fromlist == ('*',):
# these cases we can't really delay
return _import(name, globals, locals, fromlist, level)
elif not fromlist:
# import a [as b]
if '.' in name: # a.b
base, rest = name.split('.', 1)
# email.__init__ loading email.mime
if globals and globals.get('__name__', None) == base:
return _import(name, globals, locals, fromlist, level)
# if a is already demand-loaded, add b to its submodule list
if base in locals:
if isinstance(locals[base], _demandmod):
locals[base]._extend(rest)
return locals[base]
return _demandmod(name, globals, locals)
else:
if level != -1:
# from . import b,c,d or from .a import b,c,d
return _origimport(name, globals, locals, fromlist, level)
# from a import b,c,d
mod = _origimport(name, globals, locals)
# recurse down the module chain
for comp in name.split('.')[1:]:
if getattr(mod, comp, nothing) is nothing:
setattr(mod, comp, _demandmod(comp, mod.__dict__, mod.__dict__))
mod = getattr(mod, comp)
for x in fromlist:
# set requested submodules for demand load
if getattr(mod, x, nothing) is nothing:
setattr(mod, x, _demandmod(x, mod.__dict__, locals))
return mod
ignore = [
'_hashlib',
'_xmlplus',
'fcntl',
'win32com.gen_py',
'_winreg', # 2.7 mimetypes needs immediate ImportError
'pythoncom',
# imported by tarfile, not available under Windows
'pwd',
'grp',
# imported by profile, itself imported by hotshot.stats,
# not available under Windows
'resource',
# this trips up many extension authors
'gtk',
# setuptools' pkg_resources.py expects "from __main__ import x" to
# raise ImportError if x not defined
'__main__',
'_ssl', # conditional imports in the stdlib, issue1964
'rfc822',
'mimetools',
]
def enable():
"enable global demand-loading of modules"
__builtin__.__import__ = _demandimport
def disable():
"disable global demand-loading of modules"
__builtin__.__import__ = _origimport
|
apache-2.0
|
UdK-VPT/Open_eQuarter
|
mole3/extensions/eval_present_heritage/oeq_UPH_Roof.py
|
2
|
2202
|
# -*- coding: utf-8 -*-
import os,math
from qgis.core import NULL
from mole3 import oeq_global
from mole3.project import config
from mole3.extensions import OeQExtension
from mole3.stat_corr import rb_present_roof_uvalue_AVG_by_building_age_lookup, nrb_present_roof_uvalue_by_building_age_lookup, rb_contemporary_roof_uvalue_by_building_age_lookup, nrb_contemporary_roof_uvalue_by_building_age_lookup
def calculation(self=None, parameters={},feature = None):
from scipy.constants import golden
from math import floor, ceil
from qgis.PyQt.QtCore import QVariant
rf_uph = NULL
    # Differentiation between RB and NRB (for now, contemporary U-values use RB = NRB; once NRB data for the contemporary case is available, the code must be adapted)
if parameters['BLD_USAGE'] == "RB":
if not oeq_global.isnull(parameters['YOC']):
rf_uph = rb_present_roof_uvalue_AVG_by_building_age_lookup.get(parameters['YOC'])
elif parameters['BLD_USAGE'] == "NRB":
if not oeq_global.isnull(parameters['YOC']):
rf_uph = nrb_present_roof_uvalue_by_building_age_lookup.get(parameters['YOC'])
else:
if not oeq_global.isnull(parameters['YOC']):
rf_uph = (((rb_present_roof_uvalue_AVG_by_building_age_lookup.get(parameters['YOC'])) + (
nrb_present_roof_uvalue_by_building_age_lookup.get(parameters['YOC']))) / 2)
return {'RF_UPH': {'type': QVariant.Double, 'value': rf_uph}}
extension = OeQExtension(
extension_id=__name__,
category='Evaluation',
subcategory='U-Values Present Heritage',
extension_name='Roof Quality (U_Value, Present Heritage)',
layer_name= 'U Roof Present Heritage',
extension_filepath=os.path.join(__file__),
colortable = os.path.join(os.path.splitext(__file__)[0] + '.qml'),
field_id='RF_UPH',
source_type='none',
par_in=['YOC','BLD_USAGE','HERIT_STAT'],
sourcelayer_name=config.data_layer_name,
targetlayer_name=config.data_layer_name,
active=True,
show_results=['RF_UPH'],
description="Calculate the present heritage U-Value of the Building's roof",
evaluation_method=calculation)
extension.registerExtension(default=True)
|
gpl-2.0
|
imiolek-ireneusz/pysiogame
|
i18n/custom/da.py
|
1
|
11670
|
# -*- coding: utf-8 -*-
# FAO Translators:
# First of all thank you for your interest in translating this game,
# I will be grateful if you could share it with the community -
# if possible please send it back to my email, and I'll add it to the next version.
# The translation does not have to be exact as long as it makes sense and fits in its location
# (if it doesn't I'll try to either make the font smaller or make the area wider - where possible).
# The colour names in other languages than English are already in smaller font.
d = dict()
dp = dict() # messages with pronunciation exceptions - this dictionary will override entries in a copy of d
numbers = ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten', 'eleven', 'twelve',
'thirteen', 'fourteen', 'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen', 'twenty', 'twenty one',
'twenty two', 'twenty three', 'twenty four', 'twenty five', 'twenty six', 'twenty seven', 'twenty eight',
'twenty nine']
numbers2090 = ['twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety']
# The following 2 lines are not to be translated but replaced with a sequence of words starting in each of the letters of your alphabet in order, best if these words have a corresponding picture in images/flashcard_images.jpg. The second line has the number of the image that the word describes.
# The images are numbered from left to bottom such that the top left is numbered 0, the last image is 73, if none of the available things have names that start with any of the letters we can add new pictures.
dp['abc_flashcards_word_sequence'] = ['Apple', 'Butterfly', 'Cat', 'Dolphin', 'Elephant', 'Fortepiano', 'Guitar',
'Hedgehog', 'Igloo', 'Jar', 'Koala', 'Lion', 'Monitor', 'Notebook', 'Ocean',
'Parrot', 'Queen', 'Rabbit', 'Street', 'Tomato', 'Umbrella', 'Violin',
'Watermelon', 'Xylophone', 'Yarn', 'Zebra']
d['abc_flashcards_word_sequence'] = ['<1>A<2>pple', '<1>B<2>utterfly', '<1>C<2>at', '<1>D<2>olphin',
'<1>E<2>l<1>e<2>phant', '<1>F<2>ortepiano', '<1>G<2>uitar', '<1>H<2>edge<1>h<2>og',
'<1>I<2>gloo', '<1>J<2>ar', '<1>K<2>oala', '<1>L<2>ion', '<1>M<2>onitor',
'<1>N<2>otebook', '<1>O<2>cean', '<1>P<2>arrot', '<1>Q<2>ueen', '<1>R<2>abbit',
'<1>S<2>treet', '<1>T<2>oma<1>t<2>o', '<1>U<2>mbrella', '<1>V<2>iolin',
'<1>W<2>atermelon', '<1>X<2>ylophone', '<1>Y<2>arn', '<1>Z<2>ebra']
d['abc_flashcards_frame_sequence'] = [42, 27, 2, 59, 4, 34, 28, 29, 8, 9, 72, 11, 40, 13, 52, 15, 16, 17, 53, 33, 20,
21, 26, 23, 24, 25, 43, 43, 43]
# alphabet en
alphabet_lc = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u',
'v', 'w', 'x', 'y', 'z', 'æ', 'ø', 'å']
alphabet_uc = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U',
'V', 'W', 'X', 'Y', 'Z', 'Æ', 'Ø', 'Å']
# correction of eSpeak pronounciation of single letters if needed
letter_names = []
accents_lc = ['é', '-']
accents_uc = ['É']
def n2txt(n, twoliner=False):
"takes a number from 1 - 99 and returns it back in a word form, ie: 63 returns 'sixty three'."
if 0 < n < 30:
return numbers[n - 1]
elif 30 <= n < 100:
m = n % 10
tens = numbers2090[(n // 10) - 2]
if m == 0:
return tens
elif m > 0:
ones = numbers[m - 1]
if twoliner:
return [tens, ones]
else:
return tens + " " + ones
elif n == 0:
return "zero"
elif n == 100:
return "one hundred"
return ""
def time2str(h, m):
    'takes 2 variables: h - hour, m - minute; returns the time as a string, e.g. "five to seven" for 6:55'
if m > 30:
if h == 12:
h = 1
else:
h += 1
if m == 0:
return "%s o'clock" % n2txt(h)
elif m == 1:
return "one minute past %s" % n2txt(h)
elif m == 15:
return "quarter past %s" % n2txt(h)
elif m == 30:
return "half past %s" % n2txt(h)
elif m == 45:
return "quarter to %s" % n2txt(h)
elif m == 59:
return "one minute to %s" % n2txt(h)
elif m < 30:
return "%s past %s" % (n2txt(m), n2txt(h))
elif m > 30:
return "%s to %s" % (n2txt(60 - m), n2txt(h))
return ""
d["a4a_animals"] = ["cow", "turkey", "shrimp", "wolf", "panther", "panda", "magpie", "clam", "pony", "mouse", "pug",
"koala", "frog", "ladybug", "gorilla", "llama", "vulture", "hamster", "bird", "starfish", "crow",
"parakeet", "caterpillar", "tiger", "hummingbird", "piranha", "pig", "scorpion", "fox", "leopard",
"iguana", "dolphin", "bat", "chick", "crab", "hen", "wasp", "chameleon", "whale", "hedgehog",
"fawn", "moose", "bee", "viper", "shrike", "donkey", "guinea pig", "sloth", "horse", "penguin",
"otter", "bear", "zebra", "ostrich", "camel", "antelope", "lemur", "pigeon", "lama", "mole", "ray",
"ram", "skunk", "jellyfish", "sheep", "shark", "kitten", "deer", "snail", "flamingo", "rabbit",
"oyster", "beaver", "sparrow", "dove", "eagle", "beetle", "hippopotamus", "owl", "cobra",
"salamander", "goose", "kangaroo", "dragonfly", "toad", "pelican", "squid", "lion cub", "jaguar",
"duck", "lizard", "rhinoceros", "hyena", "ox", "peacock", "parrot", "elk", "alligator", "ant",
"goat", "baby rabbit", "lion", "squirrel", "opossum", "chimp", "doe", "gopher", "elephant",
"giraffe", "spider", "puppy", "jay", "seal", "rooster", "turtle", "bull", "cat", "lamb", "rat",
"slug", "buffalo", "blackbird", "swan", "lobster", "dog", "mosquito", "snake", "chicken",
"anteater"]
d["a4a_sport"] = ["judo", "pool", "ride", "stretch", "helmet", "ice skating", "walk", "ran", "run", "swim", "hop",
"hike", "boxing", "hockey", "race", "throw", "skate", "win", "squat", "ski", "golf", "whistle",
"torch", "sailing", "stand", "tennis", "jump", "rowing", "jog", "rope"]
d["a4a_body"] = ["teeth", "cheek", "ankle", "knee", "toe", "muscle", "mouth", "feet", "hand", "elbow", "hair",
"eyelash", "beard", "belly button", "thumb", "breast", "nostril", "nose", "hip", "arm", "eyebrow",
"fist", "neck", "wrist", "throat", "eye", "leg", "spine", "ear", "finger", "foot", "braid", "face",
"back", "chin", "bottom", "thigh", "belly"]
d["a4a_people"] = ["girl", "male", "son", "mates", "friends", "baby", "child", "dad", "mom", "twin boys", "brothers",
"man", "mother", "grandfather", "family", "female", "wife", "husband", "bride", "madam",
"grandmother", "couple", "lad", "twin girls", "tribe", "boy", "sisters", "woman", "lady"]
d["a4a_food"] = ["candy", "sausage", "hamburger", "steak", "fudge", "doughnut", "coconut", "rice", "ice cream", "jelly",
"yoghurt", "dessert", "pretzel", "peanut", "jam", "feast", "cookie", "bacon", "spice", "coffee", "pie",
"lemonade", "chocolate", "water bottle", "lunch", "ice", "sugar", "sauce", "soup", "juice", "fries",
"cake", "mashed potatoes", "tea", "bun", "cheese", "beef", "sandwich", "slice", "sprinkle", "pizza",
"flour", "gum", "spaghetti", "roast", "drink", "stew", "spread", "meat", "milk", "meal", "corn",
"bread", "walnut", "egg", "hot dog", "ham"]
d["a4a_clothes_n_accessories"] = ["jewellery", "sock", "jacket", "heel", "smock", "shorts", "pocket", "necklace",
"sweatshirt", "uniform", "raincoat", "trousers", "sunglasses", "coat", "pullover",
"shirt", "sandals", "suit", "pyjamas", "skirt", "zip", "shoes", "jewel", "tie",
"slippers", "gloves", "hat", "sleeve", "cap", "swimming suit", "trainer", "vest",
"glasses", "shoelace", "patch", "scarf", "shoe", "button", "dress", "sash",
"shoe sole", "robe", "pants", "kimono", "overalls"]
d["a4a_actions"] = ["lick", "slam", "beg", "fell", "scratch", "touch", "sniff", "see", "climb", "dig", "howl", "sleep",
"explore", "draw", "hug", "teach", "nap", "clay", "catch", "clap", "cry", "sing", "meet", "sell",
"peck", "beat", "kneel", "find", "dance", "cough", "cut", "think", "bark", "speak", "cheer", "bake",
"write", "punch", "strum", "study", "plow", "dream", "post", "dive", "whisper", "sob", "shake",
"feed", "crawl", "camp", "spill", "clean", "scream", "tear", "float", "pull", "ate", "kiss", "sit",
"hatch", "blink", "hear", "smooch", "play", "wash", "chat", "drive", "drink", "fly", "juggle",
"bit", "sweep", "look", "knit", "lift", "fetch", "read", "croak", "stare", "eat"]
d["a4a_construction"] = ["lighthouse", "door", "circus", "church", "kennel", "temple", "smoke", "chimney", "brick",
"well", "street", "castle", "store", "staircase", "school", "farm", "bridge", "dam", "pyramid",
"barn", "mill", "window", "cabin", "step", "shop", "shed", "roof", "steeple", "garage",
"mosque", "hospital", "tent", "house", "wall", "bank", "shutter", "hut"]
d["a4a_nature"] = ["land", "cliff", "hill", "canyon", "rock", "sea", "lake", "coast", "shore", "mountain", "pond",
"peak", "lava", "cave", "dune", "island", "forest", "desert", "iceberg"]
d["a4a_jobs"] = ["clown", "engineer", "priest", "vet", "judge", "chef", "athlete", "librarian", "juggler", "police",
"plumber", "badge", "queen", "farmer", "magic", "knight", "doctor", "bricklayer", "cleaner", "teacher",
"hunter", "soldier", "musician", "lawyer", "fisherman", "princess", "fireman", "nun", "chief",
"pirate", "cowboy", "electrician", "nurse", "king", "president", "office", "carpenter", "jockey",
"worker", "mechanic", "pilot", "actor", "cook", "student", "butcher", "accountant", "prince", "pope",
"sailor", "boxer", "ballet", "coach", "astronaut", "painter", "anaesthesiologist", "scientist"]
d["a4a_fruit_n_veg"] = ["carrot", "blackberries", "celery", "turnip", "cacao", "peach", "melon", "grapefruit",
"broccoli", "grapes", "spinach", "fig", "kernel", "radish", "tomato", "kiwi", "asparagus",
"olives", "cucumbers", "beans", "strawberry", "peppers", "raspberry", "apricot", "potatoes",
"peas", "cabbage", "cherries", "squash", "blueberries", "pear", "orange", "pumpkin", "avocado",
"garlic", "onion", "apple", "lime", "cauliflower", "mango", "lettuce", "lemon", "aubergine",
"artichokes", "plums", "leek", "bananas", "papaya"]
d["a4a_transport"] = ["sail", "taxi", "car", "bike", "raft", "pedal", "bus", "handlebar", "boat", "truck", "sleigh",
"carpet", "motorcycle", "train", "ship", "van", "canoe", "rocket", "mast", "sledge", "bicycle"]
|
gpl-3.0
|
airanmehr/bio
|
Scripts/Miscellaneous/Ali/run.py
|
1
|
2293
|
plt.switch_backend('TkAgg')
mng = plt.get_current_fig_manager()
from matplotlib.backends.backend_pdf import PdfPages
if __name__ == '__main__':
with PdfPages('/home/arya/sim2.pdf') as pdf:
for iii in range(100):
fig=plt.figure(figsize=(20,20), dpi=100)
            print(iii)
freq=0.3+np.random.random()/0.7
[M,idx,oc]=hard(0,freq,flag=1)
A=M.dot(M.T)
C=np.zeros(A.shape)
for i in range(C.shape[1]):
for j in range(C.shape[1]):
C[i,j]=(M[i,:].dot(M[j,:]))**5/np.logical_or(M[i,:],M[j,:]).sum()
# B=np.zeros((M.shape[1],M.shape[0],M.shape[0]))
# B.shape
#
# for i in range(B.shape[1]):
# B[i,:,:]=M[:,i][:,None].dot(M[:,i][:,None].T)
#
# ss=np.array([B[i,:,:].sum() for i in range(B.shape[1])])
# ss.argsort()
# b=B[idx,:,:]
# np.linalg.norm(b-b.T)
#
# B.sum(1,2)
car=M[:,idx]==1
h=A.sum(1)
l=cluster_CFP_scores_GMM(h[:,None])
ev=abs(np.linalg.eigh(A)[1][:,-1])
# ev2=np.linalg.eigh(A)[0]
# carb=car
# ev=ev/ev.sum();h=h/h.sum()
# err=np.array([np.linalg.norm(np.linalg.eigh(A)[0][:,-1]*ev[:,None].dot(ev[:,None].T)-M[:,i][:,None].dot(M[:,i][:,None].T)) for i in range(M.shape[1])])
# plt.plot(err)
hJ=C.sum(1)
evJ=abs(np.linalg.eigh(C)[1][:,-1])
            plt.subplot(2, 2, 1); plt.hist([h[car], h[~car]], nbin, color=['red', 'blue']); plt.grid(); plt.title('HAF')
            plt.subplot(2, 2, 2); plt.hist([ev[car], ev[~car]], nbin, color=['red', 'blue']); plt.title('EV'); plt.grid()
            plt.subplot(2, 2, 3); plt.hist([hJ[car], hJ[~car]], nbin, color=['red', 'blue']); plt.grid(); plt.title('Jaccard')
            plt.subplot(2, 2, 4); plt.hist([evJ[car], evJ[~car]], nbin, color=['red', 'blue']); plt.title('EVJ'); plt.grid()
plt.suptitle('freq={}'.format(freq))
C2=C/C.sum(1)[:,None];A2=A/A.sum(1)[:,None];
pdf.savefig(fig)
# print np.linalg.norm(A2-C2)
# mng.resize(*mng.window.maxsize())
# print idx
# plt.show()
|
mit
|
szymonm/flask
|
tests/test_deprecations.py
|
149
|
1278
|
# -*- coding: utf-8 -*-
"""
tests.deprecations
~~~~~~~~~~~~~~~~~~
Tests deprecation support. Not used currently.
:copyright: (c) 2015 by Armin Ronacher.
:license: BSD, see LICENSE for more details.
"""
import pytest
import flask
class TestRequestDeprecation(object):
def test_request_json(self, catch_deprecation_warnings):
"""Request.json is deprecated"""
app = flask.Flask(__name__)
app.testing = True
@app.route('/', methods=['POST'])
def index():
assert flask.request.json == {'spam': 42}
print(flask.request.json)
return 'OK'
with catch_deprecation_warnings() as captured:
c = app.test_client()
c.post('/', data='{"spam": 42}', content_type='application/json')
assert len(captured) == 1
def test_request_module(self, catch_deprecation_warnings):
"""Request.module is deprecated"""
app = flask.Flask(__name__)
app.testing = True
@app.route('/')
def index():
assert flask.request.module is None
return 'OK'
with catch_deprecation_warnings() as captured:
c = app.test_client()
c.get('/')
assert len(captured) == 1
|
bsd-3-clause
|
174high/bitcoin
|
contrib/seeds/generate-seeds.py
|
55
|
4341
|
#!/usr/bin/env python3
# Copyright (c) 2014-2017 Wladimir J. van der Laan
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
'''
Script to generate list of seed nodes for chainparams.cpp.
This script expects two text files in the directory that is passed as an
argument:
nodes_main.txt
nodes_test.txt
These files must consist of lines in the format
<ip>
<ip>:<port>
[<ipv6>]
[<ipv6>]:<port>
<onion>.onion
0xDDBBCCAA (IPv4 little-endian old pnSeeds format)
The output will be two data structures with the peers in binary format:
static SeedSpec6 pnSeed6_main[]={
...
}
static SeedSpec6 pnSeed6_test[]={
...
}
These should be pasted into `src/chainparamsseeds.h`.
'''
from base64 import b32decode
from binascii import a2b_hex
import sys, os
import re
# ipv4 in ipv6 prefix
pchIPv4 = bytearray([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xff, 0xff])
# tor-specific ipv6 prefix
pchOnionCat = bytearray([0xFD,0x87,0xD8,0x7E,0xEB,0x43])
def name_to_ipv6(addr):
if len(addr)>6 and addr.endswith('.onion'):
vchAddr = b32decode(addr[0:-6], True)
if len(vchAddr) != 16-len(pchOnionCat):
            raise ValueError('Invalid onion %s' % addr)
return pchOnionCat + vchAddr
elif '.' in addr: # IPv4
return pchIPv4 + bytearray((int(x) for x in addr.split('.')))
elif ':' in addr: # IPv6
sub = [[], []] # prefix, suffix
x = 0
addr = addr.split(':')
for i,comp in enumerate(addr):
if comp == '':
if i == 0 or i == (len(addr)-1): # skip empty component at beginning or end
continue
x += 1 # :: skips to suffix
assert(x < 2)
else: # two bytes per component
val = int(comp, 16)
sub[x].append(val >> 8)
sub[x].append(val & 0xff)
nullbytes = 16 - len(sub[0]) - len(sub[1])
assert((x == 0 and nullbytes == 0) or (x == 1 and nullbytes > 0))
return bytearray(sub[0] + ([0] * nullbytes) + sub[1])
elif addr.startswith('0x'): # IPv4-in-little-endian
return pchIPv4 + bytearray(reversed(a2b_hex(addr[2:])))
else:
raise ValueError('Could not parse address %s' % addr)
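# Illustrative results (hedged, derived from the mappings above):
#   name_to_ipv6('1.2.3.4')        -> pchIPv4 + bytearray([1, 2, 3, 4])   (i.e. ::ffff:1.2.3.4)
#   name_to_ipv6('::1')            -> bytearray of 15 zero bytes followed by 0x01
#   name_to_ipv6('<base32>.onion') -> pchOnionCat + the 10 decoded onion address bytes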
def parse_spec(s, defaultport):
    match = re.match(r'\[([0-9a-fA-F:]+)\](?::([0-9]+))?$', s)
if match: # ipv6
host = match.group(1)
port = match.group(2)
elif s.count(':') > 1: # ipv6, no port
host = s
port = ''
else:
(host,_,port) = s.partition(':')
if not port:
port = defaultport
else:
port = int(port)
host = name_to_ipv6(host)
return (host,port)
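# Illustrative results (hedged):
#   parse_spec('1.2.3.4', 8333)             -> (name_to_ipv6('1.2.3.4'), 8333)
#   parse_spec('1.2.3.4:18333', 8333)       -> (name_to_ipv6('1.2.3.4'), 18333)
#   parse_spec('[2001:db8::1]:8333', 18333) -> (name_to_ipv6('2001:db8::1'), 8333)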
def process_nodes(g, f, structname, defaultport):
g.write('static SeedSpec6 %s[] = {\n' % structname)
first = True
for line in f:
comment = line.find('#')
if comment != -1:
line = line[0:comment]
line = line.strip()
if not line:
continue
if not first:
g.write(',\n')
first = False
(host,port) = parse_spec(line, defaultport)
hoststr = ','.join(('0x%02x' % b) for b in host)
g.write(' {{%s}, %i}' % (hoststr, port))
g.write('\n};\n')
def main():
if len(sys.argv)<2:
print(('Usage: %s <path_to_nodes_txt>' % sys.argv[0]), file=sys.stderr)
exit(1)
g = sys.stdout
indir = sys.argv[1]
g.write('#ifndef BITCOIN_CHAINPARAMSSEEDS_H\n')
g.write('#define BITCOIN_CHAINPARAMSSEEDS_H\n')
g.write('/**\n')
g.write(' * List of fixed seed nodes for the bitcoin network\n')
g.write(' * AUTOGENERATED by contrib/seeds/generate-seeds.py\n')
g.write(' *\n')
g.write(' * Each line contains a 16-byte IPv6 address and a port.\n')
g.write(' * IPv4 as well as onion addresses are wrapped inside a IPv6 address accordingly.\n')
g.write(' */\n')
with open(os.path.join(indir,'nodes_main.txt'),'r') as f:
process_nodes(g, f, 'pnSeed6_main', 8333)
g.write('\n')
with open(os.path.join(indir,'nodes_test.txt'),'r') as f:
process_nodes(g, f, 'pnSeed6_test', 18333)
g.write('#endif // BITCOIN_CHAINPARAMSSEEDS_H\n')
if __name__ == '__main__':
main()
|
mit
|
eleme/thriftpy
|
setup.py
|
1
|
3334
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
import sys
import platform
from os.path import join, dirname
from setuptools import setup, find_packages
from setuptools.extension import Extension
with open(join(dirname(__file__), 'thriftpy', '__init__.py'), 'r') as f:
version = re.match(r".*__version__ = '(.*?)'", f.read(), re.S).group(1)
install_requires = [
"ply>=3.4,<4.0",
]
tornado_requires = [
"tornado>=4.0,<5.0",
"toro>=0.6"
]
dev_requires = [
"cython>=0.23",
"flake8>=2.5",
"pytest>=2.8",
"sphinx-rtd-theme>=0.1.9",
"sphinx>=1.3",
] + tornado_requires
# cython detection
try:
from Cython.Build import cythonize
CYTHON = True
except ImportError:
CYTHON = False
cmdclass = {}
ext_modules = []
# pypy detection
PYPY = "__pypy__" in sys.modules
UNIX = platform.system() in ("Linux", "Darwin")
# only build ext in CPython with UNIX platform
if UNIX and not PYPY:
# rebuild .c files if cython available
if CYTHON:
cythonize("thriftpy/transport/cybase.pyx")
cythonize("thriftpy/transport/**/*.pyx")
cythonize("thriftpy/protocol/cybin/cybin.pyx")
ext_modules.append(Extension("thriftpy.transport.cybase",
["thriftpy/transport/cybase.c"]))
ext_modules.append(Extension("thriftpy.transport.buffered.cybuffered",
["thriftpy/transport/buffered/cybuffered.c"]))
ext_modules.append(Extension("thriftpy.transport.memory.cymemory",
["thriftpy/transport/memory/cymemory.c"]))
ext_modules.append(Extension("thriftpy.transport.framed.cyframed",
["thriftpy/transport/framed/cyframed.c"]))
ext_modules.append(Extension("thriftpy.protocol.cybin",
["thriftpy/protocol/cybin/cybin.c"]))
setup(name="thriftpy",
version=version,
description="Pure python implementation of Apache Thrift.",
keywords="thrift python thriftpy",
author="Lx Yu",
author_email="[email protected]",
packages=find_packages(exclude=['benchmark', 'docs', 'tests']),
package_data={"thriftpy": ["contrib/tracking/tracking.thrift"]},
entry_points={},
url="https://thriftpy.readthedocs.org/",
license="MIT",
zip_safe=False,
long_description=open("README.rst").read(),
install_requires=install_requires,
tests_require=tornado_requires,
python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
extras_require={
"dev": dev_requires,
"tornado": tornado_requires
},
cmdclass=cmdclass,
ext_modules=ext_modules,
classifiers=[
"Topic :: Software Development",
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
])
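# Hedged usage note (not part of the original setup script): with the
# extras_require declared above, the optional dependency groups would
# typically be installed as, e.g.:
#   pip install "thriftpy[tornado]"
#   pip install "thriftpy[dev]"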
|
mit
|
gramps-project/gramps
|
gramps/plugins/tool/eventcmp.py
|
4
|
16453
|
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2000-2006 Donald N. Allingham
# Copyright (C) 2008 Brian G. Matherly
# Copyright (C) 2010 Jakim Friant
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
"""Tools/Analysis and Exploration/Compare Individual Events"""
#------------------------------------------------------------------------
#
# python modules
#
#------------------------------------------------------------------------
import os
from collections import defaultdict
#------------------------------------------------------------------------
#
# GNOME/GTK modules
#
#------------------------------------------------------------------------
from gi.repository import Gtk
#------------------------------------------------------------------------
#
# Gramps modules
#
#------------------------------------------------------------------------
from gramps.gen.filters import GenericFilter, rules
from gramps.gui.filters import build_filter_model
from gramps.gen.sort import Sort
from gramps.gui.utils import ProgressMeter
from gramps.gen.utils.docgen import ODSTab
from gramps.gen.const import CUSTOM_FILTERS, URL_MANUAL_PAGE
from gramps.gen.errors import WindowActiveError
from gramps.gen.datehandler import get_date
from gramps.gui.dialog import WarningDialog
from gramps.gui.plug import tool
from gramps.gen.plug.report import utils
from gramps.gui.display import display_help
from gramps.gui.managedwindow import ManagedWindow
from gramps.gen.const import GRAMPS_LOCALE as glocale
_ = glocale.translation.sgettext
from gramps.gui.glade import Glade
from gramps.gui.editors import FilterEditor
from gramps.gen.constfunc import get_curr_dir
#-------------------------------------------------------------------------
#
# Constants
#
#-------------------------------------------------------------------------
WIKI_HELP_PAGE = '%s_-_Tools' % URL_MANUAL_PAGE
WIKI_HELP_SEC = _('Compare_Individual_Events', 'manual')
#------------------------------------------------------------------------
#
# EventCmp
#
#------------------------------------------------------------------------
class TableReport:
"""
This class provides an interface for the spreadsheet table
used to save the data into the file.
"""
def __init__(self,filename,doc):
self.filename = filename
self.doc = doc
def initialize(self,cols):
self.doc.open(self.filename)
self.doc.start_page()
def finalize(self):
self.doc.end_page()
self.doc.close()
def write_table_data(self,data,skip_columns=[]):
self.doc.start_row()
index = -1
for item in data:
index += 1
if index not in skip_columns:
self.doc.write_cell(item)
self.doc.end_row()
def set_row(self,val):
self.row = val + 2
def write_table_head(self, data):
self.doc.start_row()
list(map(self.doc.write_cell, data))
self.doc.end_row()
#------------------------------------------------------------------------
#
#
#
#------------------------------------------------------------------------
class EventComparison(tool.Tool,ManagedWindow):
def __init__(self, dbstate, user, options_class, name, callback=None):
uistate = user.uistate
self.dbstate = dbstate
self.uistate = uistate
tool.Tool.__init__(self,dbstate, options_class, name)
ManagedWindow.__init__(self, uistate, [], self)
self.qual = 0
self.filterDialog = Glade(toplevel="filters", also_load=["liststore1"])
self.filterDialog.connect_signals({
"on_apply_clicked" : self.on_apply_clicked,
"on_editor_clicked" : self.filter_editor_clicked,
"on_help_clicked" : self.on_help_clicked,
"destroy_passed_object" : self.close,
"on_write_table" : self.__dummy,
})
window = self.filterDialog.toplevel
self.filters = self.filterDialog.get_object("filter_list")
self.label = _('Event comparison filter selection')
self.set_window(window,self.filterDialog.get_object('title'),
self.label)
self.setup_configs('interface.eventcomparison', 640, 220)
self.on_filters_changed('Person')
uistate.connect('filters-changed', self.on_filters_changed)
self.show()
def __dummy(self, obj):
"""dummy callback, needed because widget is in same glade file
as another widget, so callbacks must be defined to avoid warnings.
"""
pass
def on_filters_changed(self, name_space):
if name_space == 'Person':
all_filter = GenericFilter()
all_filter.set_name(_("Entire Database"))
all_filter.add_rule(rules.person.Everyone([]))
self.filter_model = build_filter_model('Person', [all_filter])
self.filters.set_model(self.filter_model)
self.filters.set_active(0)
def on_help_clicked(self, obj):
"""Display the relevant portion of Gramps manual"""
display_help(webpage=WIKI_HELP_PAGE, section=WIKI_HELP_SEC)
def build_menu_names(self, obj):
return (_("Filter selection"),_("Event Comparison tool"))
def filter_editor_clicked(self, obj):
try:
FilterEditor('Person',CUSTOM_FILTERS,
self.dbstate,self.uistate)
except WindowActiveError:
pass
def on_apply_clicked(self, obj):
cfilter = self.filter_model[self.filters.get_active()][1]
progress_bar = ProgressMeter(_('Comparing events'), '',
parent=self.window)
progress_bar.set_pass(_('Selecting people'),1)
plist = cfilter.apply(self.db,
self.db.iter_person_handles())
progress_bar.step()
progress_bar.close()
self.options.handler.options_dict['filter'] = self.filters.get_active()
# Save options
self.options.handler.save_options()
if len(plist) == 0:
WarningDialog(_("No matches were found"),
parent=self.window)
else:
EventComparisonResults(self.dbstate, self.uistate, plist, self.track)
#-------------------------------------------------------------------------
#
#
#
#-------------------------------------------------------------------------
##def by_value(first,second):
## return cmp(second[0],first[0])
#-------------------------------------------------------------------------
#
#
#
#-------------------------------------------------------------------------
def fix(line):
    # Escape the characters that are special in HTML/XML output.
    l = line.strip().replace('&', '&amp;').replace('>', '&gt;')
    return l.replace('<', '&lt;').replace('"', '&quot;')
#-------------------------------------------------------------------------
#
#
#
#-------------------------------------------------------------------------
class EventComparisonResults(ManagedWindow):
def __init__(self,dbstate,uistate,people_list,track):
self.dbstate = dbstate
self.uistate = uistate
ManagedWindow.__init__(self, uistate, track, self)
self.db = dbstate.db
self.my_list = people_list
self.row_data = []
self.save_form = None
self.topDialog = Glade(toplevel="eventcmp")
self.topDialog.connect_signals({
"on_write_table" : self.on_write_table,
"destroy_passed_object" : self.close,
"on_help_clicked" : self.on_help_clicked,
"on_apply_clicked" : self.__dummy,
"on_editor_clicked" : self.__dummy,
})
window = self.topDialog.toplevel
self.set_window(window, self.topDialog.get_object('title'),
_('Event Comparison Results'))
self.setup_configs('interface.eventcomparisonresults', 750, 400)
self.eventlist = self.topDialog.get_object('treeview')
self.sort = Sort(self.db)
self.my_list.sort(key=self.sort.by_last_name_key)
self.event_titles = self.make_event_titles()
self.table_titles = [_("Person"),_("ID")]
for event_name in self.event_titles:
self.table_titles.append(_("%(event_name)s Date") %
{'event_name' :event_name}
)
self.table_titles.append('sort') # This won't be shown in a tree
self.table_titles.append(_("%(event_name)s Place") %
{'event_name' :event_name}
)
self.build_row_data()
self.draw_display()
self.show()
def __dummy(self, obj):
"""dummy callback, needed because widget is in same glade file
as another widget, so callbacks must be defined to avoid warnings.
"""
pass
def on_help_clicked(self, obj):
"""Display the relevant portion of Gramps manual"""
display_help(webpage=WIKI_HELP_PAGE, section=WIKI_HELP_SEC)
def build_menu_names(self, obj):
return (_("Event Comparison Results"),None)
def draw_display(self):
model_index = 0
tree_index = 0
mylist = []
renderer = Gtk.CellRendererText()
for title in self.table_titles:
mylist.append(str)
if title == 'sort':
# This will override the previously defined column
self.eventlist.get_column(
tree_index-1).set_sort_column_id(model_index)
else:
column = Gtk.TreeViewColumn(title,renderer,text=model_index)
column.set_sort_column_id(model_index)
self.eventlist.append_column(column)
# This one numbers the tree columns: increment on new column
tree_index += 1
# This one numbers the model columns: always increment
model_index += 1
model = Gtk.ListStore(*mylist)
self.eventlist.set_model(model)
self.progress_bar.set_pass(_('Building display'),len(self.row_data))
for data in self.row_data:
model.append(row=list(data))
self.progress_bar.step()
self.progress_bar.close()
def build_row_data(self):
self.progress_bar = ProgressMeter(
_('Comparing Events'), '', parent=self.uistate.window)
self.progress_bar.set_pass(_('Building data'),len(self.my_list))
for individual_id in self.my_list:
individual = self.db.get_person_from_handle(individual_id)
name = individual.get_primary_name().get_name()
gid = individual.get_gramps_id()
the_map = defaultdict(list)
for ievent_ref in individual.get_event_ref_list():
ievent = self.db.get_event_from_handle(ievent_ref.ref)
event_name = str(ievent.get_type())
the_map[event_name].append(ievent_ref.ref)
first = True
done = False
while not done:
added = False
tlist = [name, gid] if first else ["", ""]
for ename in self.event_titles:
if ename in the_map and len(the_map[ename]) > 0:
event_handle = the_map[ename][0]
del the_map[ename][0]
date = place = ""
if event_handle:
event = self.db.get_event_from_handle(event_handle)
date = get_date(event)
sortdate = "%09d" % (
event.get_date_object().get_sort_value()
)
place_handle = event.get_place_handle()
if place_handle:
place = self.db.get_place_from_handle(
place_handle).get_title()
tlist += [date, sortdate, place]
added = True
else:
tlist += [""]*3
if first:
first = False
self.row_data.append(tlist)
elif not added:
done = True
else:
self.row_data.append(tlist)
self.progress_bar.step()
def make_event_titles(self):
"""
Create the list of unique event types, along with the person's
name, birth, and death.
This should be the column titles of the report.
"""
the_map = defaultdict(int)
for individual_id in self.my_list:
individual = self.db.get_person_from_handle(individual_id)
for event_ref in individual.get_event_ref_list():
event = self.db.get_event_from_handle(event_ref.ref)
name = str(event.get_type())
if not name:
break
the_map[name] += 1
unsort_list = sorted([(d, k) for k,d in the_map.items()],
key=lambda x: x[0], reverse=True)
sort_list = [ item[1] for item in unsort_list ]
## Presently there's no Birth and Death. Instead there's Birth Date and
## Birth Place, as well as Death Date and Death Place.
## # Move birth and death to the begining of the list
## if _("Death") in the_map:
## sort_list.remove(_("Death"))
## sort_list = [_("Death")] + sort_list
## if _("Birth") in the_map:
## sort_list.remove(_("Birth"))
## sort_list = [_("Birth")] + sort_list
return sort_list
def on_write_table(self, obj):
f = Gtk.FileChooserDialog(_("Select filename"),
transient_for=self.window,
action=Gtk.FileChooserAction.SAVE)
f.add_buttons(_('_Cancel'), Gtk.ResponseType.CANCEL,
_('_Save'), Gtk.ResponseType.OK)
f.set_current_folder(get_curr_dir())
status = f.run()
f.hide()
if status == Gtk.ResponseType.OK:
name = f.get_filename()
doc = ODSTab(len(self.row_data))
doc.creator(self.db.get_researcher().get_name())
spreadsheet = TableReport(name, doc)
new_titles = []
skip_columns = []
index = 0
for title in self.table_titles:
if title == 'sort':
skip_columns.append(index)
else:
new_titles.append(title)
index += 1
spreadsheet.initialize(len(new_titles))
spreadsheet.write_table_head(new_titles)
index = 0
for top in self.row_data:
spreadsheet.set_row(index%2)
index += 1
spreadsheet.write_table_data(top,skip_columns)
spreadsheet.finalize()
f.destroy()
#------------------------------------------------------------------------
#
#
#
#------------------------------------------------------------------------
class EventComparisonOptions(tool.ToolOptions):
"""
Defines options and provides handling interface.
"""
def __init__(self, name,person_id=None):
tool.ToolOptions.__init__(self, name,person_id)
# Options specific for this report
self.options_dict = {
'filter' : 0,
}
filters = utils.get_person_filters(None)
self.options_help = {
'filter' : ("=num","Filter number.",
[ filt.get_name() for filt in filters ],
True ),
}
|
gpl-2.0
|
Sprytile/Sprytile
|
rx/linq/observable/catch.py
|
2
|
3844
|
from rx.core import Observable, AnonymousObservable, Disposable
from rx.disposables import SingleAssignmentDisposable, \
CompositeDisposable, SerialDisposable
from rx.concurrency import current_thread_scheduler
from rx.internal import Enumerable
from rx.internal import extensionmethod, extensionclassmethod
def catch_handler(source, handler):
def subscribe(observer):
d1 = SingleAssignmentDisposable()
subscription = SerialDisposable()
subscription.disposable = d1
def on_error(exception):
try:
result = handler(exception)
except Exception as ex:
observer.on_error(ex)
return
result = Observable.from_future(result)
d = SingleAssignmentDisposable()
subscription.disposable = d
d.disposable = result.subscribe(observer)
d1.disposable = source.subscribe(
observer.on_next,
on_error,
observer.on_completed
)
return subscription
return AnonymousObservable(subscribe)
@extensionmethod(Observable, instancemethod=True)
def catch_exception(self, second=None, handler=None):
"""Continues an observable sequence that is terminated by an exception
with the next observable sequence.
1 - xs.catch_exception(ys)
2 - xs.catch_exception(lambda ex: ys(ex))
Keyword arguments:
handler -- Exception handler function that returns an observable
sequence given the error that occurred in the first sequence.
second -- Second observable sequence used to produce results when an
error occurred in the first sequence.
Returns an observable sequence containing the first sequence's
elements, followed by the elements of the handler sequence in case an
exception occurred.
"""
if handler or not isinstance(second, Observable):
return catch_handler(self, handler or second)
return Observable.catch_exception([self, second])
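# Hedged usage sketch (xs and ys are hypothetical observables, as in the docstring):
#   xs.catch_exception(ys)               # on error in xs, continue with ys
#   xs.catch_exception(lambda ex: ys)    # handler builds the fallback sequence from the error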
@extensionclassmethod(Observable)
def catch_exception(cls, *args):
"""Continues an observable sequence that is terminated by an
exception with the next observable sequence.
1 - res = Observable.catch_exception(xs, ys, zs)
2 - res = Observable.catch_exception([xs, ys, zs])
Returns an observable sequence containing elements from consecutive
source sequences until a source sequence terminates successfully.
"""
scheduler = current_thread_scheduler
if isinstance(args[0], list) or isinstance(args[0], Enumerable):
sources = args[0]
else:
sources = list(args)
def subscribe(observer):
subscription = SerialDisposable()
cancelable = SerialDisposable()
last_exception = [None]
is_disposed = []
e = iter(sources)
def action(action1, state=None):
def on_error(exn):
last_exception[0] = exn
cancelable.disposable = scheduler.schedule(action)
if is_disposed:
return
try:
current = next(e)
except StopIteration:
if last_exception[0]:
observer.on_error(last_exception[0])
else:
observer.on_completed()
except Exception as ex:
observer.on_error(ex)
else:
d = SingleAssignmentDisposable()
subscription.disposable = d
d.disposable = current.subscribe(observer.on_next, on_error, observer.on_completed)
cancelable.disposable = scheduler.schedule(action)
def dispose():
is_disposed.append(True)
return CompositeDisposable(subscription, cancelable, Disposable.create(dispose))
return AnonymousObservable(subscribe)
|
mit
|
Djlavoy/scrapy
|
scrapy/xlib/pydispatch/dispatcher.py
|
23
|
16963
|
"""Multiple-producer-multiple-consumer signal-dispatching
dispatcher is the core of the PyDispatcher system,
providing the primary API and the core logic for the
system.
Module attributes of note:
Any -- Singleton used to signal either "Any Sender" or
"Any Signal". See documentation of the _Any class.
Anonymous -- Singleton used to signal "Anonymous Sender"
See documentation of the _Anonymous class.
Internal attributes:
WEAKREF_TYPES -- tuple of types/classes which represent
weak references to receivers, and thus must be de-
referenced on retrieval to retrieve the callable
object
connections -- { senderkey (id) : { signal : [receivers...]}}
senders -- { senderkey (id) : weakref(sender) }
used for cleaning up sender references on sender
deletion
sendersBack -- { receiverkey (id) : [senderkey (id)...] }
used for cleaning up receiver references on receiver
deletion, (considerably speeds up the cleanup process
vs. the original code.)
"""
from __future__ import generators
import types, weakref, six
from scrapy.xlib.pydispatch import saferef, robustapply, errors
__author__ = "Patrick K. O'Brien <[email protected]>"
__cvsid__ = "$Id: dispatcher.py,v 1.1.1.1 2006/07/07 15:59:38 mcfletch Exp $"
__version__ = "$Revision: 1.1.1.1 $"[11:-2]
class _Parameter:
"""Used to represent default parameter values."""
def __repr__(self):
return self.__class__.__name__
class _Any(_Parameter):
"""Singleton used to signal either "Any Sender" or "Any Signal"
The Any object can be used with connect, disconnect,
send, or sendExact to signal that the parameter given
Any should react to all senders/signals, not just
a particular sender/signal.
"""
Any = _Any()
class _Anonymous(_Parameter):
"""Singleton used to signal "Anonymous Sender"
The Anonymous object is used to signal that the sender
of a message is not specified (as distinct from being
"any sender"). Registering callbacks for Anonymous
will only receive messages sent without senders. Sending
with anonymous will only send messages to those receivers
registered for Any or Anonymous.
Note:
The default sender for connect is Any, while the
default sender for send is Anonymous. This has
the effect that if you do not specify any senders
in either function then all messages are routed
as though there was a single sender (Anonymous)
being used everywhere.
"""
Anonymous = _Anonymous()
WEAKREF_TYPES = (weakref.ReferenceType, saferef.BoundMethodWeakref)
connections = {}
senders = {}
sendersBack = {}
def connect(receiver, signal=Any, sender=Any, weak=True):
"""Connect receiver to sender for signal
receiver -- a callable Python object which is to receive
messages/signals/events. Receivers must be hashable
objects.
if weak is True, then receiver must be weak-referencable
(more precisely saferef.safeRef() must be able to create
a reference to the receiver).
Receivers are fairly flexible in their specification,
as the machinery in the robustApply module takes care
of most of the details regarding figuring out appropriate
subsets of the sent arguments to apply to a given
receiver.
Note:
if receiver is itself a weak reference (a callable),
it will be de-referenced by the system's machinery,
so *generally* weak references are not suitable as
receivers, though some use might be found for the
facility whereby a higher-level library passes in
pre-weakrefed receiver references.
signal -- the signal to which the receiver should respond
if Any, receiver will receive any signal from the
indicated sender (which might also be Any, but is not
necessarily Any).
Otherwise must be a hashable Python object other than
None (DispatcherError raised on None).
sender -- the sender to which the receiver should respond
if Any, receiver will receive the indicated signals
from any sender.
if Anonymous, receiver will only receive indicated
signals from send/sendExact which do not specify a
sender, or specify Anonymous explicitly as the sender.
Otherwise can be any python object.
weak -- whether to use weak references to the receiver
By default, the module will attempt to use weak
references to the receiver objects. If this parameter
is false, then strong references will be used.
returns None, may raise DispatcherTypeError
"""
if signal is None:
raise errors.DispatcherTypeError(
'Signal cannot be None (receiver=%r sender=%r)' % (
receiver, sender)
)
if weak:
receiver = saferef.safeRef(receiver, onDelete=_removeReceiver)
senderkey = id(sender)
if senderkey in connections:
signals = connections[senderkey]
else:
connections[senderkey] = signals = {}
# Keep track of senders for cleanup.
# Is Anonymous something we want to clean up?
if sender not in (None, Anonymous, Any):
def remove(object, senderkey=senderkey):
_removeSender(senderkey=senderkey)
# Skip objects that can not be weakly referenced, which means
# they won't be automatically cleaned up, but that's too bad.
try:
weakSender = weakref.ref(sender, remove)
senders[senderkey] = weakSender
except:
pass
receiverID = id(receiver)
# get current set, remove any current references to
# this receiver in the set, including back-references
if signal in signals:
receivers = signals[signal]
_removeOldBackRefs(senderkey, signal, receiver, receivers)
else:
receivers = signals[signal] = []
try:
current = sendersBack.get(receiverID)
if current is None:
sendersBack[receiverID] = current = []
if senderkey not in current:
current.append(senderkey)
except:
pass
receivers.append(receiver)
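# Hedged usage sketch (handler and some_sender are hypothetical names):
#   def handler(sender, signal, **named): ...
#   connect(handler, signal='my-signal', sender=some_sender)
#   send(signal='my-signal', sender=some_sender, value=42)
# Note that with weak=True the receiver must stay referenced elsewhere; an
# anonymous lambda passed directly would be garbage collected and silently
# dropped from the routing tables.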
def disconnect(receiver, signal=Any, sender=Any, weak=True):
"""Disconnect receiver from sender for signal
receiver -- the registered receiver to disconnect
signal -- the registered signal to disconnect
sender -- the registered sender to disconnect
weak -- the weakref state to disconnect
disconnect reverses the process of connect,
the semantics for the individual elements are
logically equivalent to a tuple of
(receiver, signal, sender, weak) used as a key
to be deleted from the internal routing tables.
(The actual process is slightly more complex
but the semantics are basically the same).
Note:
Using disconnect is not required to cleanup
routing when an object is deleted, the framework
will remove routes for deleted objects
automatically. It's only necessary to disconnect
if you want to stop routing to a live object.
returns None, may raise DispatcherTypeError or
DispatcherKeyError
"""
if signal is None:
raise errors.DispatcherTypeError(
'Signal cannot be None (receiver=%r sender=%r)' % (
receiver, sender)
)
if weak: receiver = saferef.safeRef(receiver)
senderkey = id(sender)
try:
signals = connections[senderkey]
receivers = signals[signal]
except KeyError:
raise errors.DispatcherKeyError(
"""No receivers found for signal %r from sender %r""" % (
signal,
sender
)
)
try:
# also removes from receivers
_removeOldBackRefs(senderkey, signal, receiver, receivers)
except ValueError:
raise errors.DispatcherKeyError(
"""No connection to receiver %s for signal %s from sender %s""" % (
receiver,
signal,
sender
)
)
_cleanupConnections(senderkey, signal)
def getReceivers(sender=Any, signal=Any):
"""Get list of receivers from global tables
This utility function allows you to retrieve the
raw list of receivers from the connections table
for the given sender and signal pair.
Note:
there is no guarantee that this is the actual list
stored in the connections table, so the value
should be treated as a simple iterable/truth value
rather than, for instance a list to which you
might append new records.
Normally you would use liveReceivers( getReceivers( ...))
to retrieve the actual receiver objects as an iterable
object.
"""
try:
return connections[id(sender)][signal]
except KeyError:
return []
def liveReceivers(receivers):
"""Filter sequence of receivers to get resolved, live receivers
This is a generator which will iterate over
the passed sequence, checking for weak references
and resolving them, then returning all live
receivers.
"""
for receiver in receivers:
if isinstance(receiver, WEAKREF_TYPES):
# Dereference the weak reference.
receiver = receiver()
if receiver is not None:
yield receiver
else:
yield receiver
def getAllReceivers(sender=Any, signal=Any):
"""Get list of all receivers from global tables
This gets all receivers which should receive
the given signal from sender, each receiver should
be produced only once by the resulting generator
"""
receivers = {}
for set in (
# Get receivers that receive *this* signal from *this* sender.
getReceivers(sender, signal),
# Add receivers that receive *any* signal from *this* sender.
getReceivers(sender, Any),
# Add receivers that receive *this* signal from *any* sender.
getReceivers(Any, signal),
# Add receivers that receive *any* signal from *any* sender.
getReceivers(Any, Any),
):
for receiver in set:
if receiver: # filter out dead instance-method weakrefs
try:
if receiver not in receivers:
receivers[receiver] = 1
yield receiver
except TypeError:
# dead weakrefs raise TypeError on hash...
pass
def send(signal=Any, sender=Anonymous, *arguments, **named):
"""Send signal from sender to all connected receivers.
signal -- (hashable) signal value, see connect for details
sender -- the sender of the signal
if Any, only receivers registered for Any will receive
the message.
if Anonymous, only receivers registered to receive
messages from Anonymous or Any will receive the message
Otherwise can be any python object (normally one
registered with a connect if you actually want
something to occur).
arguments -- positional arguments which will be passed to
*all* receivers. Note that this may raise TypeErrors
if the receivers do not allow the particular arguments.
Note also that arguments are applied before named
arguments, so they should be used with care.
named -- named arguments which will be filtered according
to the parameters of the receivers to only provide those
acceptable to the receiver.
Return a list of tuple pairs [(receiver, response), ... ]
if any receiver raises an error, the error propagates back
through send, terminating the dispatch loop, so it is quite
    possible to not have all receivers called if one raises an
error.
"""
# Call each receiver with whatever arguments it can accept.
# Return a list of tuple pairs [(receiver, response), ... ].
responses = []
for receiver in liveReceivers(getAllReceivers(sender, signal)):
response = robustapply.robustApply(
receiver,
signal=signal,
sender=sender,
*arguments,
**named
)
responses.append((receiver, response))
return responses
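# Hedged example of the return value shape described above:
#   send('my-signal', some_sender, value=42)
#   # -> [(receiver1, response1), (receiver2, response2), ...]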
def sendExact(signal=Any, sender=Anonymous, *arguments, **named):
"""Send signal only to those receivers registered for exact message
sendExact allows for avoiding Any/Anonymous registered
handlers, sending only to those receivers explicitly
registered for a particular signal on a particular
sender.
"""
responses = []
for receiver in liveReceivers(getReceivers(sender, signal)):
response = robustapply.robustApply(
receiver,
signal=signal,
sender=sender,
*arguments,
**named
)
responses.append((receiver, response))
return responses
def _removeReceiver(receiver):
"""Remove receiver from connections."""
if not sendersBack:
# During module cleanup the mapping will be replaced with None
return False
backKey = id(receiver)
try:
backSet = sendersBack.pop(backKey)
except KeyError as err:
return False
else:
for senderkey in backSet:
try:
signals = connections[senderkey].keys()
except KeyError as err:
pass
else:
for signal in signals:
try:
receivers = connections[senderkey][signal]
except KeyError:
pass
else:
try:
receivers.remove(receiver)
except Exception as err:
pass
_cleanupConnections(senderkey, signal)
def _cleanupConnections(senderkey, signal):
"""Delete any empty signals for senderkey. Delete senderkey if empty."""
try:
receivers = connections[senderkey][signal]
except:
pass
else:
if not receivers:
# No more connected receivers. Therefore, remove the signal.
try:
signals = connections[senderkey]
except KeyError:
pass
else:
del signals[signal]
if not signals:
# No more signal connections. Therefore, remove the sender.
_removeSender(senderkey)
def _removeSender(senderkey):
"""Remove senderkey from connections."""
_removeBackrefs(senderkey)
try:
del connections[senderkey]
except KeyError:
pass
# Senderkey will only be in senders dictionary if sender
# could be weakly referenced.
try:
del senders[senderkey]
except:
pass
def _removeBackrefs(senderkey):
"""Remove all back-references to this senderkey"""
try:
signals = connections[senderkey]
except KeyError:
signals = None
else:
items = signals.items()
def allReceivers():
for signal, set in items:
for item in set:
yield item
for receiver in allReceivers():
_killBackref(receiver, senderkey)
def _removeOldBackRefs(senderkey, signal, receiver, receivers):
"""Kill old sendersBack references from receiver
This guards against multiple registration of the same
receiver for a given signal and sender leaking memory
as old back reference records build up.
Also removes old receiver instance from receivers
"""
try:
index = receivers.index(receiver)
# need to scan back references here and remove senderkey
except ValueError:
return False
else:
oldReceiver = receivers[index]
del receivers[index]
found = 0
signals = connections.get(signal)
if signals is not None:
for sig, recs in six.iteritems(connections.get(signal, {})):
if sig != signal:
for rec in recs:
if rec is oldReceiver:
found = 1
break
if not found:
_killBackref(oldReceiver, senderkey)
return True
return False
def _killBackref(receiver, senderkey):
"""Do the actual removal of back reference from receiver to senderkey"""
receiverkey = id(receiver)
set = sendersBack.get(receiverkey, ())
while senderkey in set:
try:
set.remove(senderkey)
except:
break
if not set:
try:
del sendersBack[receiverkey]
except KeyError:
pass
return True
|
bsd-3-clause
|
redhat-openstack/python-openstackclient
|
openstackclient/network/utils.py
|
2
|
1562
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Transform compute security group rule for display.
def transform_compute_security_group_rule(sg_rule):
info = {}
info.update(sg_rule)
from_port = info.pop('from_port')
to_port = info.pop('to_port')
if isinstance(from_port, int) and isinstance(to_port, int):
port_range = {'port_range': "%u:%u" % (from_port, to_port)}
elif from_port is None and to_port is None:
port_range = {'port_range': ""}
else:
port_range = {'port_range': "%s:%s" % (from_port, to_port)}
info.update(port_range)
if 'cidr' in info['ip_range']:
info['ip_range'] = info['ip_range']['cidr']
else:
info['ip_range'] = ''
if info['ip_protocol'] is None:
info['ip_protocol'] = ''
elif info['ip_protocol'].lower() == 'icmp':
info['port_range'] = ''
group = info.pop('group')
if 'name' in group:
info['remote_security_group'] = group['name']
else:
info['remote_security_group'] = ''
return info
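# Hedged example (a typical nova-style rule; field values are illustrative):
#   transform_compute_security_group_rule({
#       'from_port': 22, 'to_port': 22, 'ip_protocol': 'tcp',
#       'ip_range': {'cidr': '0.0.0.0/0'}, 'group': {}})
#   # -> {'ip_protocol': 'tcp', 'port_range': '22:22',
#   #     'ip_range': '0.0.0.0/0', 'remote_security_group': ''}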
|
apache-2.0
|
paweljasinski/ironpython3
|
Src/StdLib/Lib/test/test_quopri.py
|
171
|
7715
|
from test import support
import unittest
import sys, os, io, subprocess
import quopri
ENCSAMPLE = b"""\
Here's a bunch of special=20
=A1=A2=A3=A4=A5=A6=A7=A8=A9
=AA=AB=AC=AD=AE=AF=B0=B1=B2=B3
=B4=B5=B6=B7=B8=B9=BA=BB=BC=BD=BE
=BF=C0=C1=C2=C3=C4=C5=C6
=C7=C8=C9=CA=CB=CC=CD=CE=CF
=D0=D1=D2=D3=D4=D5=D6=D7
=D8=D9=DA=DB=DC=DD=DE=DF
=E0=E1=E2=E3=E4=E5=E6=E7
=E8=E9=EA=EB=EC=ED=EE=EF
=F0=F1=F2=F3=F4=F5=F6=F7
=F8=F9=FA=FB=FC=FD=FE=FF
characters... have fun!
"""
# First line ends with a space
DECSAMPLE = b"Here's a bunch of special \n" + \
b"""\
\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9
\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3
\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe
\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6
\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf
\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7
\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf
\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7
\xe8\xe9\xea\xeb\xec\xed\xee\xef
\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7
\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff
characters... have fun!
"""
def withpythonimplementation(testfunc):
def newtest(self):
# Test default implementation
testfunc(self)
# Test Python implementation
if quopri.b2a_qp is not None or quopri.a2b_qp is not None:
oldencode = quopri.b2a_qp
olddecode = quopri.a2b_qp
try:
quopri.b2a_qp = None
quopri.a2b_qp = None
testfunc(self)
finally:
quopri.b2a_qp = oldencode
quopri.a2b_qp = olddecode
newtest.__name__ = testfunc.__name__
return newtest
class QuopriTestCase(unittest.TestCase):
# Each entry is a tuple of (plaintext, encoded string). These strings are
# used in the "quotetabs=0" tests.
STRINGS = (
# Some normal strings
(b'hello', b'hello'),
(b'''hello
there
world''', b'''hello
there
world'''),
(b'''hello
there
world
''', b'''hello
there
world
'''),
(b'\201\202\203', b'=81=82=83'),
# Add some trailing MUST QUOTE strings
(b'hello ', b'hello=20'),
(b'hello\t', b'hello=09'),
# Some long lines. First, a single line of 108 characters
(b'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\xd8\xd9\xda\xdb\xdc\xdd\xde\xdfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
b'''xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=D8=D9=DA=DB=DC=DD=DE=DFx=
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'''),
# A line of exactly 76 characters, no soft line break should be needed
(b'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy',
b'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'),
# A line of 77 characters, forcing a soft line break at position 75,
# and a second line of exactly 2 characters (because the soft line
# break `=' sign counts against the line length limit).
(b'zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz',
b'''zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz=
zz'''),
# A line of 151 characters, forcing a soft line break at position 75,
# with a second line of exactly 76 characters and no trailing =
(b'zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz',
b'''zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz=
zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz'''),
# A string containing a hard line break, but which the first line is
# 151 characters and the second line is exactly 76 characters. This
# should leave us with three lines, the first which has a soft line
# break, and which the second and third do not.
(b'''yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz''',
b'''yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy=
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz'''),
# Now some really complex stuff ;)
(DECSAMPLE, ENCSAMPLE),
)
# These are used in the "quotetabs=1" tests.
ESTRINGS = (
(b'hello world', b'hello=20world'),
(b'hello\tworld', b'hello=09world'),
)
# These are used in the "header=1" tests.
HSTRINGS = (
(b'hello world', b'hello_world'),
(b'hello_world', b'hello=5Fworld'),
)
@withpythonimplementation
def test_encodestring(self):
for p, e in self.STRINGS:
self.assertEqual(quopri.encodestring(p), e)
@withpythonimplementation
def test_decodestring(self):
for p, e in self.STRINGS:
self.assertEqual(quopri.decodestring(e), p)
@withpythonimplementation
def test_idempotent_string(self):
for p, e in self.STRINGS:
self.assertEqual(quopri.decodestring(quopri.encodestring(e)), e)
@withpythonimplementation
def test_encode(self):
for p, e in self.STRINGS:
infp = io.BytesIO(p)
outfp = io.BytesIO()
quopri.encode(infp, outfp, quotetabs=False)
self.assertEqual(outfp.getvalue(), e)
@withpythonimplementation
def test_decode(self):
for p, e in self.STRINGS:
infp = io.BytesIO(e)
outfp = io.BytesIO()
quopri.decode(infp, outfp)
self.assertEqual(outfp.getvalue(), p)
@withpythonimplementation
def test_embedded_ws(self):
for p, e in self.ESTRINGS:
self.assertEqual(quopri.encodestring(p, quotetabs=True), e)
self.assertEqual(quopri.decodestring(e), p)
@withpythonimplementation
def test_encode_header(self):
for p, e in self.HSTRINGS:
self.assertEqual(quopri.encodestring(p, header=True), e)
@withpythonimplementation
def test_decode_header(self):
for p, e in self.HSTRINGS:
self.assertEqual(quopri.decodestring(e, header=True), p)
def test_scriptencode(self):
(p, e) = self.STRINGS[-1]
process = subprocess.Popen([sys.executable, "-mquopri"],
stdin=subprocess.PIPE, stdout=subprocess.PIPE)
self.addCleanup(process.stdout.close)
cout, cerr = process.communicate(p)
# On Windows, Python will output the result to stdout using
# CRLF, as the mode of stdout is text mode. To compare this
# with the expected result, we need to do a line-by-line comparison.
cout = cout.decode('latin-1').splitlines()
e = e.decode('latin-1').splitlines()
assert len(cout)==len(e)
for i in range(len(cout)):
self.assertEqual(cout[i], e[i])
self.assertEqual(cout, e)
def test_scriptdecode(self):
(p, e) = self.STRINGS[-1]
process = subprocess.Popen([sys.executable, "-mquopri", "-d"],
stdin=subprocess.PIPE, stdout=subprocess.PIPE)
self.addCleanup(process.stdout.close)
cout, cerr = process.communicate(e)
cout = cout.decode('latin-1')
p = p.decode('latin-1')
self.assertEqual(cout.splitlines(), p.splitlines())
def test_main():
support.run_unittest(QuopriTestCase)
if __name__ == "__main__":
test_main()
|
apache-2.0
|
cfehring/slack-onnow
|
requests/packages/chardet/latin1prober.py
|
1778
|
5232
|
######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Universal charset detector code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 2001
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
# Shy Shalom - original C code
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
from .charsetprober import CharSetProber
from .constants import eNotMe
from .compat import wrap_ord
FREQ_CAT_NUM = 4
UDF = 0 # undefined
OTH = 1 # other
ASC = 2 # ascii capital letter
ASS = 3 # ascii small letter
ACV = 4 # accent capital vowel
ACO = 5 # accent capital other
ASV = 6 # accent small vowel
ASO = 7 # accent small other
CLASS_NUM = 8 # total classes
Latin1_CharToClass = (
OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 00 - 07
OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 08 - 0F
OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 10 - 17
OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 18 - 1F
OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 20 - 27
OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 28 - 2F
OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 30 - 37
OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 38 - 3F
OTH, ASC, ASC, ASC, ASC, ASC, ASC, ASC, # 40 - 47
ASC, ASC, ASC, ASC, ASC, ASC, ASC, ASC, # 48 - 4F
ASC, ASC, ASC, ASC, ASC, ASC, ASC, ASC, # 50 - 57
ASC, ASC, ASC, OTH, OTH, OTH, OTH, OTH, # 58 - 5F
OTH, ASS, ASS, ASS, ASS, ASS, ASS, ASS, # 60 - 67
ASS, ASS, ASS, ASS, ASS, ASS, ASS, ASS, # 68 - 6F
ASS, ASS, ASS, ASS, ASS, ASS, ASS, ASS, # 70 - 77
ASS, ASS, ASS, OTH, OTH, OTH, OTH, OTH, # 78 - 7F
OTH, UDF, OTH, ASO, OTH, OTH, OTH, OTH, # 80 - 87
OTH, OTH, ACO, OTH, ACO, UDF, ACO, UDF, # 88 - 8F
UDF, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 90 - 97
OTH, OTH, ASO, OTH, ASO, UDF, ASO, ACO, # 98 - 9F
OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # A0 - A7
OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # A8 - AF
OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # B0 - B7
OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # B8 - BF
ACV, ACV, ACV, ACV, ACV, ACV, ACO, ACO, # C0 - C7
ACV, ACV, ACV, ACV, ACV, ACV, ACV, ACV, # C8 - CF
ACO, ACO, ACV, ACV, ACV, ACV, ACV, OTH, # D0 - D7
ACV, ACV, ACV, ACV, ACV, ACO, ACO, ACO, # D8 - DF
ASV, ASV, ASV, ASV, ASV, ASV, ASO, ASO, # E0 - E7
ASV, ASV, ASV, ASV, ASV, ASV, ASV, ASV, # E8 - EF
ASO, ASO, ASV, ASV, ASV, ASV, ASV, OTH, # F0 - F7
ASV, ASV, ASV, ASV, ASV, ASO, ASO, ASO, # F8 - FF
)
# 0 : illegal
# 1 : very unlikely
# 2 : normal
# 3 : very likely
Latin1ClassModel = (
# UDF OTH ASC ASS ACV ACO ASV ASO
0, 0, 0, 0, 0, 0, 0, 0, # UDF
0, 3, 3, 3, 3, 3, 3, 3, # OTH
0, 3, 3, 3, 3, 3, 3, 3, # ASC
0, 3, 3, 3, 1, 1, 3, 3, # ASS
0, 3, 3, 3, 1, 2, 1, 2, # ACV
0, 3, 3, 3, 3, 3, 3, 3, # ACO
0, 3, 1, 3, 1, 1, 1, 3, # ASV
0, 3, 1, 3, 1, 1, 3, 3, # ASO
)
class Latin1Prober(CharSetProber):
def __init__(self):
CharSetProber.__init__(self)
self.reset()
def reset(self):
self._mLastCharClass = OTH
self._mFreqCounter = [0] * FREQ_CAT_NUM
CharSetProber.reset(self)
def get_charset_name(self):
return "windows-1252"
def feed(self, aBuf):
aBuf = self.filter_with_english_letters(aBuf)
for c in aBuf:
charClass = Latin1_CharToClass[wrap_ord(c)]
freq = Latin1ClassModel[(self._mLastCharClass * CLASS_NUM)
+ charClass]
if freq == 0:
self._mState = eNotMe
break
self._mFreqCounter[freq] += 1
self._mLastCharClass = charClass
return self.get_state()
def get_confidence(self):
if self.get_state() == eNotMe:
return 0.01
total = sum(self._mFreqCounter)
if total < 0.01:
confidence = 0.0
else:
confidence = ((self._mFreqCounter[3] - self._mFreqCounter[1] * 20.0)
/ total)
if confidence < 0.0:
confidence = 0.0
        # lower the confidence of latin1 so that other, more accurate
        # detectors can take priority.
confidence = confidence * 0.73
return confidence
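# --- Illustrative usage sketch (editor's addition, not part of the original module) ---
# A prober is fed raw bytes and then asked for a confidence score between 0.0
# and 1.0; the 0.73 damping in get_confidence() keeps windows-1252 from
# overriding more specific detectors.  The sample bytes below are latin-1
# encoded French and are only an example.
def _latin1_prober_example():
    prober = Latin1Prober()
    prober.feed(b"Un texte accentu\xe9 en fran\xe7ais.")
    return prober.get_charset_name(), prober.get_confidence()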
|
gpl-3.0
|
erigones/esdc-ce
|
api/mon/node/views.py
|
1
|
5641
|
from api.decorators import api_view, request_data_defaultdc, setting_required
from api.permissions import IsSuperAdmin
from api.mon.node.api_views import NodeSLAView, NodeHistoryView
__all__ = ('mon_node_sla', 'mon_node_history')
#: node_status: GET: Node.STATUS_AVAILABLE_MONITORING
@api_view(('GET',))
@request_data_defaultdc(permissions=(IsSuperAdmin,))
@setting_required('MON_ZABBIX_ENABLED')
@setting_required('MON_ZABBIX_NODE_SLA') # dc1_settings
def mon_node_sla(request, hostname, yyyymm, data=None):
"""
Get (:http:get:`GET </mon/node/(hostname)/sla/(yyyymm)>`) SLA for
requested compute node and month.
.. http:get:: /mon/node/(hostname)/sla/(yyyymm)
:DC-bound?:
* |dc-no|
:Permissions:
* |SuperAdmin|
:Asynchronous?:
* |async-yes| - SLA value is retrieved from monitoring server
* |async-no| - SLA value is cached
:arg hostname: **required** - Node hostname
:type hostname: string
:arg yyyymm: **required** - Time period in YYYYMM format
:type yyyymm: integer
:status 200: SUCCESS
:status 201: PENDING
:status 400: FAILURE
:status 403: Forbidden
:status 404: Node not found
:status 412: Invalid yyyymm
:status 417: Monitoring data not available
:status 423: Node is not operational
"""
return NodeSLAView(request, hostname, yyyymm, data).get()
#: node_status: GET: Node.STATUS_AVAILABLE_MONITORING
@api_view(('GET',))
@request_data_defaultdc(permissions=(IsSuperAdmin,))
@setting_required('MON_ZABBIX_ENABLED')
@setting_required('MON_ZABBIX_NODE_SYNC')
def mon_node_history(request, hostname, graph, data=None):
"""
Get (:http:get:`GET </mon/node/(hostname)/history/(graph)>`) monitoring history
for requested node and graph name.
.. http:get:: /mon/node/(hostname)/history/(graph)
:DC-bound?:
* |dc-no|
:Permissions:
* |SuperAdmin|
:Asynchronous?:
* |async-yes|
:arg hostname: **required** - Compute node hostname
:type hostname: string
:type graph: string
    :arg graph: **required** - Graph identifier. One of:
| *cpu-usage* - Total compute node CPU consumed by the Node.
| *cpu-waittime* - Total amount of time spent in CPU run queue by the Node.
| *cpu-load* - 1-minute load average.
| *mem-usage* - Total compute node physical memory consumed by the Node.
| *swap-usage* - Total compute node swap space used.
| *net-bandwidth* - The amount of received and sent network traffic through \
the virtual network interface. *requires data.nic*
| *net-packets* - The amount of received and sent packets through the \
virtual network interface. *requires data.nic*
| *storage-throughput* - The amount of read and written data on the zpool.
| *storage-io* - The amount of I/O read and write operations performed on the zpool.
| *storage-space* - ZFS zpool space usage by type.
| *vm-cpu-usage* - CPU consumed by each virtual machine on the compute node.
| *vm-mem-usage* - Physical memory consumed by each virtual machine on the compute node.
| *vm-disk-logical-throughput-reads* - Amount of data read on the logical layer \
(with acceleration mechanisms included) by each virtual machine on the compute node.
| *vm-disk-logical-throughput-writes* - Amount of data written on the logical layer \
(with acceleration mechanisms included) by each virtual machine on the compute node.
        | *vm-disk-logical-io-reads* - Number of read operations performed on the logical layer \
(with acceleration mechanisms included) by each virtual machine on the compute node.
        | *vm-disk-logical-io-writes* - Number of write operations performed on the logical layer \
(with acceleration mechanisms included) by each virtual machine on the compute node.
| *vm-disk-physical-throughput-reads* - Amount of data read on the physical (disk) layer \
by each virtual machine on the compute node.
| *vm-disk-physical-throughput-writes* - Amount of data written on the physical (disk) layer \
by each virtual machine on the compute node.
        | *vm-disk-physical-io-reads* - Number of read operations performed on the physical (disk) layer \
by each virtual machine on the compute node.
        | *vm-disk-physical-io-writes* - Number of write operations performed on the physical (disk) layer \
by each virtual machine on the compute node.
:arg data.since: Return only values that have been received after the given UNIX timestamp \
(default: now - 1 hour)
:type data.since: integer
:arg data.until: Return only values that have been received before the given UNIX timestamp (default: now)
:type data.until: integer
:arg data.nic: only used with *net-bandwidth* and *net-packets* graphs \
to specify name of the NIC for which graph should be retrieved.
:type data.nic: string
:arg data.zpool: only used with *storage-throughput*, *storage-io* and *storage-space* graphs \
to specify name of the zpool for which graph should be retrieved.
:type data.zpool: string
:status 200: SUCCESS
:status 201: PENDING
:status 400: FAILURE
:status 403: Forbidden
:status 404: Node not found
:status 412: Invalid graph
:status 417: Node monitoring disabled
:status 423: Node is not operational
"""
return NodeHistoryView(request, hostname, graph, data).get()
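# --- Illustrative client sketch (editor's addition, not part of the API module) ---
# The docstrings above describe plain GET endpoints, so a monitoring client only
# needs an authenticated HTTP request.  The base URL, node hostname, graph name
# and authentication header below are hypothetical placeholders.
#
#   import requests
#   resp = requests.get(
#       'https://mgmt.example.com/api/mon/node/node01.example.com/history/cpu-usage/',
#       headers={'Authorization': 'Token <api-key>'},  # placeholder auth scheme
#       params={'since': 1500000000, 'until': 1500003600},
#   )
#   resp.raise_for_status()
#   print(resp.json())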
|
apache-2.0
|
o5k/openerp-oemedical-v0.1
|
openerp/addons/account_check_writing/wizard/account_check_batch_printing.py
|
54
|
3821
|
# -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.tools.translate import _
from openerp.osv import fields, osv
class account_check_write(osv.osv_memory):
_name = 'account.check.write'
    _description = 'Print Check in Batch'
_columns = {
        'check_number': fields.integer('Next Check Number', required=True, help="The number of the next check to be printed."),
}
def _get_next_number(self, cr, uid, context=None):
dummy, sequence_id = self.pool.get('ir.model.data').get_object_reference(cr, uid, 'account_check_writing', 'sequence_check_number')
return self.pool.get('ir.sequence').read(cr, uid, sequence_id, ['number_next'])['number_next']
_defaults = {
'check_number': _get_next_number,
}
def print_check_write(self, cr, uid, ids, context=None):
if context is None:
context = {}
voucher_obj = self.pool.get('account.voucher')
ir_sequence_obj = self.pool.get('ir.sequence')
#update the sequence to number the checks from the value encoded in the wizard
dummy, sequence_id = self.pool.get('ir.model.data').get_object_reference(cr, uid, 'account_check_writing', 'sequence_check_number')
increment = ir_sequence_obj.read(cr, uid, sequence_id, ['number_increment'])['number_increment']
new_value = self.browse(cr, uid, ids[0], context=context).check_number
ir_sequence_obj.write(cr, uid, sequence_id, {'number_next': new_value})
#validate the checks so that they get a number
voucher_ids = context.get('active_ids', [])
for check in voucher_obj.browse(cr, uid, voucher_ids, context=context):
new_value += increment
if check.number:
                raise osv.except_osv(_('Error!'), _("One of the printed checks already has a number."))
voucher_obj.proforma_voucher(cr, uid, voucher_ids, context=context)
        #update the sequence again (because the assignment using next_val was made during the same transaction as
        #the first update of the sequence)
ir_sequence_obj.write(cr, uid, sequence_id, {'number_next': new_value})
#print the checks
check_layout_report = {
'top' : 'account.print.check.top',
'middle' : 'account.print.check.middle',
'bottom' : 'account.print.check.bottom',
}
check_layout = voucher_obj.browse(cr, uid, voucher_ids[0], context=context).company_id.check_layout
if not check_layout:
check_layout = 'top'
return {
'type': 'ir.actions.report.xml',
'report_name':check_layout_report[check_layout],
'datas': {
'model':'account.voucher',
'ids': voucher_ids,
'report_type': 'pdf'
},
'nodestroy': True
}
account_check_write()
|
agpl-3.0
|
wa3l/mailr
|
mailr.py
|
1
|
2140
|
import os, flask
from helpers import *
from validation import Validator
from email_model import db, Email
from flask.ext.httpauth import HTTPBasicAuth
from sqlalchemy.exc import DatabaseError
from os import environ
app = flask.Flask(__name__)
auth = HTTPBasicAuth()
env = environ.get('MAILR_ENV', 'Development')
app.config.from_object('config.{}Config'.format(env))
db.app = app
db.init_app(app)
"""
This is the main point of interaction with the app.
It accepts a json request to send an email. It sends
the email and stores its details in the database.
"""
@app.route('/email', methods=['POST'])
def email():
data = json_data(flask.request)
valid, msg = Validator().validate(data)
if not valid: return abort(msg)
email = Email(data)
for s in get_services(email, app):
email.service = s
resp = send_email(email)
    if resp.status_code == 200:
save_email(db, email)
return success(email)
else:
log_error(app.logger, email, resp)
return abort('An error has occurred.', resp.status_code)
"""
Return a json object containing sent emails stored
in our database. The results are paginated and the
page size is fixed to 20 results.
"""
@app.route('/emails/', defaults={'page': 1})
@app.route('/emails/<int:page>', methods=['GET'])
@auth.login_required
def emails(page):
emails = Email.query.paginate(page, 20, False)
resp = {e.id: str(e) for e in emails.items}
return flask.jsonify(page=page, emails=resp)
"""
Error and Basic Auth handling.
"""
@auth.get_password
def get_password(username):
if username == 'api':
return os.environ['MAILR_KEY']
return None
@auth.error_handler
def unauthorized():
return abort('Unauthorized access.', 401)
@app.errorhandler(DatabaseError)
def special_exception_handler(error):
return abort('Database Error occurred.', 500)
@app.errorhandler(404)
def page_not_found(error):
return abort('The requested URL was not found on the server.', 404)
@app.errorhandler(400)
def page_not_found(error):
return abort('Invalid JSON request.', 400)
if __name__ == '__main__':
  # create the tables before starting the development server
  db.create_all()
  app.run()
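# --- Illustrative client sketch (editor's addition, not part of the app) ---
# The /emails/ listing endpoint uses HTTP Basic auth with the fixed username
# 'api' and the password taken from the MAILR_KEY environment variable (see
# get_password() above).  The host and port below assume a local development
# server and are only placeholders.
#
#   import os, requests
#   resp = requests.get('http://localhost:5000/emails/1',
#                       auth=('api', os.environ['MAILR_KEY']))
#   print(resp.json())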
|
mit
|
benfinkelcbt/CPD200
|
CPD200-Lab07-Python/googleapiclient/sample_tools.py
|
89
|
4047
|
# Copyright 2014 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utilities for making samples.
Consolidates a lot of code commonly repeated in sample applications.
"""
from __future__ import absolute_import
__author__ = '[email protected] (Joe Gregorio)'
__all__ = ['init']
import argparse
import httplib2
import os
from googleapiclient import discovery
from oauth2client import client
from oauth2client import file
from oauth2client import tools
def init(argv, name, version, doc, filename, scope=None, parents=[], discovery_filename=None):
"""A common initialization routine for samples.
Many of the sample applications do the same initialization, which has now
been consolidated into this function. This function uses common idioms found
in almost all the samples, i.e. for an API with name 'apiname', the
credentials are stored in a file named apiname.dat, and the
client_secrets.json file is stored in the same directory as the application
main file.
Args:
argv: list of string, the command-line parameters of the application.
name: string, name of the API.
version: string, version of the API.
doc: string, description of the application. Usually set to __doc__.
    filename: string, filename of the application. Usually set to __file__.
parents: list of argparse.ArgumentParser, additional command-line flags.
scope: string, The OAuth scope used.
discovery_filename: string, name of local discovery file (JSON). Use when discovery doc not available via URL.
Returns:
A tuple of (service, flags), where service is the service object and flags
is the parsed command-line flags.
"""
if scope is None:
scope = 'https://www.googleapis.com/auth/' + name
  # Parse command-line arguments.
parent_parsers = [tools.argparser]
parent_parsers.extend(parents)
parser = argparse.ArgumentParser(
description=doc,
formatter_class=argparse.RawDescriptionHelpFormatter,
parents=parent_parsers)
flags = parser.parse_args(argv[1:])
# Name of a file containing the OAuth 2.0 information for this
# application, including client_id and client_secret, which are found
# on the API Access tab on the Google APIs
# Console <http://code.google.com/apis/console>.
client_secrets = os.path.join(os.path.dirname(filename),
'client_secrets.json')
# Set up a Flow object to be used if we need to authenticate.
flow = client.flow_from_clientsecrets(client_secrets,
scope=scope,
message=tools.message_if_missing(client_secrets))
# Prepare credentials, and authorize HTTP object with them.
# If the credentials don't exist or are invalid run through the native client
# flow. The Storage object will ensure that if successful the good
# credentials will get written back to a file.
storage = file.Storage(name + '.dat')
credentials = storage.get()
if credentials is None or credentials.invalid:
credentials = tools.run_flow(flow, storage, flags)
http = credentials.authorize(http = httplib2.Http())
if discovery_filename is None:
# Construct a service object via the discovery service.
service = discovery.build(name, version, http=http)
else:
# Construct a service object using a local discovery document file.
with open(discovery_filename) as discovery_file:
service = discovery.build_from_document(
discovery_file.read(),
base='https://www.googleapis.com/',
http=http)
return (service, flags)
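# --- Illustrative caller sketch (editor's addition, not part of this module) ---
# A sample script typically passes its own argv, __doc__ and __file__ to init()
# and gets back a ready-to-use service object plus the parsed flags.  The API
# name and version ('drive', 'v2') are only examples; the default scope derived
# from the API name is used.
#
#   import sys
#   from googleapiclient import sample_tools
#
#   def main(argv):
#       service, flags = sample_tools.init(argv, 'drive', 'v2', __doc__, __file__)
#       print(service.files().list().execute())
#
#   if __name__ == '__main__':
#       main(sys.argv)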
|
gpl-3.0
|
archf/ansible
|
lib/ansible/modules/cloud/azure/azure.py
|
8
|
24174
|
#!/usr/bin/python
#
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: azure
short_description: create or terminate a virtual machine in azure
description:
- Creates or terminates azure instances. When created optionally waits for it to be 'running'.
version_added: "1.7"
options:
name:
description:
- name of the virtual machine and associated cloud service.
required: true
default: null
location:
description:
- the azure location to use (e.g. 'East US')
required: true
default: null
subscription_id:
description:
- azure subscription id. Overrides the AZURE_SUBSCRIPTION_ID environment variable.
required: false
default: null
management_cert_path:
description:
- path to an azure management certificate associated with the subscription id. Overrides the AZURE_CERT_PATH environment variable.
required: false
default: null
storage_account:
description:
- the azure storage account in which to store the data disks.
required: true
image:
description:
- system image for creating the virtual machine
(e.g., b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu_DAILY_BUILD-precise-12_04_3-LTS-amd64-server-20131205-en-us-30GB)
required: true
default: null
role_size:
description:
      - azure role size for the new virtual machine (e.g., Small, ExtraLarge, A6). Note that G and DS series instances
        are not available in all regions (locations), so make sure the selected size and type are available in your chosen location.
required: false
default: Small
endpoints:
description:
- a comma-separated list of TCP ports to expose on the virtual machine (e.g., "22,80")
required: false
default: 22
user:
description:
- the unix username for the new virtual machine.
required: false
default: null
password:
description:
- the unix password for the new virtual machine.
required: false
default: null
ssh_cert_path:
description:
- path to an X509 certificate containing the public ssh key to install in the virtual machine.
See http://www.windowsazure.com/en-us/manage/linux/tutorials/intro-to-linux/ for more details.
- if this option is specified, password-based ssh authentication will be disabled.
required: false
default: null
virtual_network_name:
description:
- Name of virtual network.
required: false
default: null
hostname:
description:
- hostname to write /etc/hostname. Defaults to <name>.cloudapp.net.
required: false
default: null
wait:
description:
- wait for the instance to be in state 'running' before returning
required: false
default: "no"
choices: [ "yes", "no" ]
aliases: []
wait_timeout:
description:
- how long before wait gives up, in seconds
default: 600
aliases: []
wait_timeout_redirects:
description:
- how long before wait gives up for redirects, in seconds
default: 300
aliases: []
state:
description:
- create or terminate instances
required: false
default: 'present'
aliases: []
auto_updates:
description:
- Enable Auto Updates on Windows Machines
required: false
version_added: "2.0"
default: "no"
choices: [ "yes", "no" ]
enable_winrm:
description:
- Enable winrm on Windows Machines
required: false
version_added: "2.0"
default: "yes"
choices: [ "yes", "no" ]
os_type:
description:
      - The type of the OS that is getting provisioned
required: false
version_added: "2.0"
default: "linux"
choices: [ "windows", "linux" ]
requirements:
- "python >= 2.6"
- "azure >= 0.7.1"
author: "John Whitbeck (@jwhitbeck)"
'''
EXAMPLES = '''
# Note: None of these examples set subscription_id or management_cert_path
# It is assumed that their matching environment variables are set.
- name: Provision virtual machine example
azure:
name: my-virtual-machine
role_size: Small
image: b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu_DAILY_BUILD-precise-12_04_3-LTS-amd64-server-20131205-en-us-30GB
location: East US
user: ubuntu
ssh_cert_path: /path/to/azure_x509_cert.pem
storage_account: my-storage-account
wait: True
state: present
delegate_to: localhost
- name: Terminate virtual machine example
azure:
name: my-virtual-machine
state: absent
delegate_to: localhost
- name: Create windows machine
azure:
name: ben-Winows-23
hostname: win123
os_type: windows
enable_winrm: True
subscription_id: '{{ azure_sub_id }}'
management_cert_path: '{{ azure_cert_path }}'
role_size: Small
image: bd507d3a70934695bc2128e3e5a255ba__RightImage-Windows-2012-x64-v13.5
location: East Asia
password: xxx
storage_account: benooytes
user: admin
wait: True
state: present
virtual_network_name: '{{ vnet_name }}'
delegate_to: localhost
'''
import base64
import datetime
import os
import signal
import time
from urlparse import urlparse
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.facts.timeout import TimeoutError
AZURE_LOCATIONS = ['South Central US',
'Central US',
'East US 2',
'East US',
'West US',
'North Central US',
'North Europe',
'West Europe',
'East Asia',
'Southeast Asia',
'Japan West',
'Japan East',
'Brazil South']
AZURE_ROLE_SIZES = ['ExtraSmall',
'Small',
'Medium',
'Large',
'ExtraLarge',
'A5',
'A6',
'A7',
'A8',
'A9',
'Basic_A0',
'Basic_A1',
'Basic_A2',
'Basic_A3',
'Basic_A4',
'Standard_D1',
'Standard_D2',
'Standard_D3',
'Standard_D4',
'Standard_D11',
'Standard_D12',
'Standard_D13',
'Standard_D14',
'Standard_D1_v2',
'Standard_D2_v2',
'Standard_D3_v2',
'Standard_D4_v2',
'Standard_D5_v2',
'Standard_D11_v2',
'Standard_D12_v2',
'Standard_D13_v2',
'Standard_D14_v2',
'Standard_DS1',
'Standard_DS2',
'Standard_DS3',
'Standard_DS4',
'Standard_DS11',
'Standard_DS12',
'Standard_DS13',
'Standard_DS14',
'Standard_G1',
'Standard_G2',
'Standard_G3',
'Standard_G4',
'Standard_G5']
from distutils.version import LooseVersion
try:
import azure as windows_azure
if hasattr(windows_azure, '__version__') and LooseVersion(windows_azure.__version__) <= "0.11.1":
from azure import WindowsAzureError as AzureException
from azure import WindowsAzureMissingResourceError as AzureMissingException
else:
from azure.common import AzureException as AzureException
from azure.common import AzureMissingResourceHttpError as AzureMissingException
from azure.servicemanagement import (ServiceManagementService, OSVirtualHardDisk, SSH, PublicKeys,
PublicKey, LinuxConfigurationSet, ConfigurationSetInputEndpoints,
ConfigurationSetInputEndpoint, Listener, WindowsConfigurationSet)
HAS_AZURE = True
except ImportError:
HAS_AZURE = False
from types import MethodType
import json
def _wait_for_completion(azure, promise, wait_timeout, msg):
if not promise:
return
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time():
operation_result = azure.get_operation_status(promise.request_id)
time.sleep(5)
if operation_result.status == "Succeeded":
return
raise AzureException('Timed out waiting for async operation ' + msg + ' "' + str(promise.request_id) + '" to complete.')
def _delete_disks_when_detached(azure, wait_timeout, disk_names):
def _handle_timeout(signum, frame):
raise TimeoutError("Timeout reached while waiting for disks to become detached.")
signal.signal(signal.SIGALRM, _handle_timeout)
signal.alarm(wait_timeout)
try:
while len(disk_names) > 0:
for disk_name in disk_names:
disk = azure.get_disk(disk_name)
if disk.attached_to is None:
azure.delete_disk(disk.name, True)
disk_names.remove(disk_name)
except AzureException as e:
raise AzureException("failed to get or delete disk %s, error was: %s" % (disk_name, str(e)))
finally:
signal.alarm(0)
def get_ssh_certificate_tokens(module, ssh_cert_path):
"""
Returns the sha1 fingerprint and a base64-encoded PKCS12 version of the certificate.
"""
# This returns a string such as SHA1 Fingerprint=88:60:0B:13:A9:14:47:DA:4E:19:10:7D:34:92:2B:DF:A1:7D:CA:FF
rc, stdout, stderr = module.run_command(['openssl', 'x509', '-in', ssh_cert_path, '-fingerprint', '-noout'])
if rc != 0:
module.fail_json(msg="failed to generate the key fingerprint, error was: %s" % stderr)
fingerprint = stdout.strip()[17:].replace(':', '')
rc, stdout, stderr = module.run_command(['openssl', 'pkcs12', '-export', '-in', ssh_cert_path, '-nokeys', '-password', 'pass:'])
if rc != 0:
module.fail_json(msg="failed to generate the pkcs12 signature from the certificate, error was: %s" % stderr)
pkcs12_base64 = base64.b64encode(stdout.strip())
return (fingerprint, pkcs12_base64)
def create_virtual_machine(module, azure):
"""
Create new virtual machine
module : AnsibleModule object
azure: authenticated azure ServiceManagementService object
Returns:
True if a new virtual machine and/or cloud service was created, false otherwise
"""
name = module.params.get('name')
os_type = module.params.get('os_type')
hostname = module.params.get('hostname') or name + ".cloudapp.net"
endpoints = module.params.get('endpoints').split(',')
ssh_cert_path = module.params.get('ssh_cert_path')
user = module.params.get('user')
password = module.params.get('password')
location = module.params.get('location')
role_size = module.params.get('role_size')
storage_account = module.params.get('storage_account')
image = module.params.get('image')
virtual_network_name = module.params.get('virtual_network_name')
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
changed = False
# Check if a deployment with the same name already exists
cloud_service_name_available = azure.check_hosted_service_name_availability(name)
if cloud_service_name_available.result:
# cloud service does not exist; create it
try:
result = azure.create_hosted_service(service_name=name, label=name, location=location)
_wait_for_completion(azure, result, wait_timeout, "create_hosted_service")
changed = True
except AzureException as e:
module.fail_json(msg="failed to create the new service, error was: %s" % str(e))
try:
# check to see if a vm with this name exists; if so, do nothing
azure.get_role(name, name, name)
except AzureMissingException:
# vm does not exist; create it
if os_type == 'linux':
# Create linux configuration
disable_ssh_password_authentication = not password
vm_config = LinuxConfigurationSet(hostname, user, password, disable_ssh_password_authentication)
else:
# Create Windows Config
vm_config = WindowsConfigurationSet(hostname, password, None, module.params.get('auto_updates'), None, user)
vm_config.domain_join = None
if module.params.get('enable_winrm'):
listener = Listener('Http')
vm_config.win_rm.listeners.listeners.append(listener)
else:
vm_config.win_rm = None
# Add ssh certificates if specified
if ssh_cert_path:
fingerprint, pkcs12_base64 = get_ssh_certificate_tokens(module, ssh_cert_path)
# Add certificate to cloud service
result = azure.add_service_certificate(name, pkcs12_base64, 'pfx', '')
_wait_for_completion(azure, result, wait_timeout, "add_service_certificate")
# Create ssh config
ssh_config = SSH()
ssh_config.public_keys = PublicKeys()
authorized_keys_path = u'/home/%s/.ssh/authorized_keys' % user
ssh_config.public_keys.public_keys.append(PublicKey(path=authorized_keys_path, fingerprint=fingerprint))
# Append ssh config to linux machine config
vm_config.ssh = ssh_config
# Create network configuration
network_config = ConfigurationSetInputEndpoints()
network_config.configuration_set_type = 'NetworkConfiguration'
network_config.subnet_names = []
network_config.public_ips = None
for port in endpoints:
network_config.input_endpoints.append(ConfigurationSetInputEndpoint(name='TCP-%s' % port,
protocol='TCP',
port=port,
local_port=port))
# First determine where to store disk
today = datetime.date.today().strftime('%Y-%m-%d')
disk_prefix = u'%s-%s' % (name, name)
media_link = u'http://%s.blob.core.windows.net/vhds/%s-%s.vhd' % (storage_account, disk_prefix, today)
# Create system hard disk
os_hd = OSVirtualHardDisk(image, media_link)
# Spin up virtual machine
try:
result = azure.create_virtual_machine_deployment(service_name=name,
deployment_name=name,
deployment_slot='production',
label=name,
role_name=name,
system_config=vm_config,
network_config=network_config,
os_virtual_hard_disk=os_hd,
role_size=role_size,
role_type='PersistentVMRole',
virtual_network_name=virtual_network_name)
_wait_for_completion(azure, result, wait_timeout, "create_virtual_machine_deployment")
changed = True
except AzureException as e:
module.fail_json(msg="failed to create the new virtual machine, error was: %s" % str(e))
try:
deployment = azure.get_deployment_by_name(service_name=name, deployment_name=name)
return (changed, urlparse(deployment.url).hostname, deployment)
except AzureException as e:
module.fail_json(msg="failed to lookup the deployment information for %s, error was: %s" % (name, str(e)))
def terminate_virtual_machine(module, azure):
"""
Terminates a virtual machine
module : AnsibleModule object
azure: authenticated azure ServiceManagementService object
Returns:
True if a new virtual machine was deleted, false otherwise
"""
# Whether to wait for termination to complete before returning
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
name = module.params.get('name')
delete_empty_services = module.params.get('delete_empty_services')
changed = False
deployment = None
public_dns_name = None
disk_names = []
try:
deployment = azure.get_deployment_by_name(service_name=name, deployment_name=name)
except AzureMissingException as e:
pass # no such deployment or service
except AzureException as e:
module.fail_json(msg="failed to find the deployment, error was: %s" % str(e))
# Delete deployment
if deployment:
changed = True
try:
# gather disk info
results = []
for role in deployment.role_list:
role_props = azure.get_role(name, deployment.name, role.role_name)
if role_props.os_virtual_hard_disk.disk_name not in disk_names:
disk_names.append(role_props.os_virtual_hard_disk.disk_name)
except AzureException as e:
module.fail_json(msg="failed to get the role %s, error was: %s" % (role.role_name, str(e)))
try:
result = azure.delete_deployment(name, deployment.name)
_wait_for_completion(azure, result, wait_timeout, "delete_deployment")
except AzureException as e:
module.fail_json(msg="failed to delete the deployment %s, error was: %s" % (deployment.name, str(e)))
        # It's unclear when disks associated with a terminated deployment get detached.
        # Thus, until the wait_timeout is reached, we continue to delete disks as they
        # become detached by polling the list of remaining disks and examining their state.
try:
_delete_disks_when_detached(azure, wait_timeout, disk_names)
except (AzureException, TimeoutError) as e:
module.fail_json(msg=str(e))
try:
# Now that the vm is deleted, remove the cloud service
result = azure.delete_hosted_service(service_name=name)
_wait_for_completion(azure, result, wait_timeout, "delete_hosted_service")
except AzureException as e:
module.fail_json(msg="failed to delete the service %s, error was: %s" % (name, str(e)))
public_dns_name = urlparse(deployment.url).hostname
return changed, public_dns_name, deployment
def get_azure_creds(module):
# Check module args for credentials, then check environment vars
subscription_id = module.params.get('subscription_id')
if not subscription_id:
subscription_id = os.environ.get('AZURE_SUBSCRIPTION_ID', None)
if not subscription_id:
module.fail_json(msg="No subscription_id provided. Please set 'AZURE_SUBSCRIPTION_ID' or use the 'subscription_id' parameter")
management_cert_path = module.params.get('management_cert_path')
if not management_cert_path:
management_cert_path = os.environ.get('AZURE_CERT_PATH', None)
if not management_cert_path:
module.fail_json(msg="No management_cert_path provided. Please set 'AZURE_CERT_PATH' or use the 'management_cert_path' parameter")
return subscription_id, management_cert_path
def main():
module = AnsibleModule(
argument_spec=dict(
ssh_cert_path=dict(),
name=dict(),
hostname=dict(),
os_type=dict(default='linux', choices=['linux', 'windows']),
location=dict(choices=AZURE_LOCATIONS),
role_size=dict(choices=AZURE_ROLE_SIZES),
subscription_id=dict(no_log=True),
storage_account=dict(),
management_cert_path=dict(),
endpoints=dict(default='22'),
user=dict(),
password=dict(no_log=True),
image=dict(),
virtual_network_name=dict(default=None),
state=dict(default='present'),
wait=dict(type='bool', default=False),
wait_timeout=dict(default=600),
wait_timeout_redirects=dict(default=300),
auto_updates=dict(type='bool', default=False),
enable_winrm=dict(type='bool', default=True),
)
)
if not HAS_AZURE:
module.fail_json(msg='azure python module required for this module')
# create azure ServiceManagementService object
subscription_id, management_cert_path = get_azure_creds(module)
wait_timeout_redirects = int(module.params.get('wait_timeout_redirects'))
if hasattr(windows_azure, '__version__') and LooseVersion(windows_azure.__version__) <= "0.8.0":
# wrapper for handling redirects which the sdk <= 0.8.0 is not following
azure = Wrapper(ServiceManagementService(subscription_id, management_cert_path), wait_timeout_redirects)
else:
azure = ServiceManagementService(subscription_id, management_cert_path)
cloud_service_raw = None
if module.params.get('state') == 'absent':
(changed, public_dns_name, deployment) = terminate_virtual_machine(module, azure)
elif module.params.get('state') == 'present':
# Changed is always set to true when provisioning new instances
if not module.params.get('name'):
module.fail_json(msg='name parameter is required for new instance')
if not module.params.get('image'):
module.fail_json(msg='image parameter is required for new instance')
if not module.params.get('user'):
module.fail_json(msg='user parameter is required for new instance')
if not module.params.get('location'):
module.fail_json(msg='location parameter is required for new instance')
if not module.params.get('storage_account'):
module.fail_json(msg='storage_account parameter is required for new instance')
if not (module.params.get('password') or module.params.get('ssh_cert_path')):
module.fail_json(msg='password or ssh_cert_path parameter is required for new instance')
(changed, public_dns_name, deployment) = create_virtual_machine(module, azure)
module.exit_json(changed=changed, public_dns_name=public_dns_name, deployment=json.loads(json.dumps(deployment, default=lambda o: o.__dict__)))
class Wrapper(object):
def __init__(self, obj, wait_timeout):
self.other = obj
self.wait_timeout = wait_timeout
def __getattr__(self, name):
if hasattr(self.other, name):
func = getattr(self.other, name)
return lambda *args, **kwargs: self._wrap(func, args, kwargs)
raise AttributeError(name)
def _wrap(self, func, args, kwargs):
if isinstance(func, MethodType):
result = self._handle_temporary_redirects(lambda: func(*args, **kwargs))
else:
result = self._handle_temporary_redirects(lambda: func(self.other, *args, **kwargs))
return result
def _handle_temporary_redirects(self, f):
wait_timeout = time.time() + self.wait_timeout
while wait_timeout > time.time():
try:
return f()
except AzureException as e:
                if "temporary redirect" in str(e).lower():
                    time.sleep(5)
else:
raise e
if __name__ == '__main__':
main()
|
gpl-3.0
|
zooko/egtp_new
|
egtp/EGTPConstants.py
|
1
|
1283
|
# Copyright (c) 2001 Autonomous Zone Industries
# Copyright (c) 2002 Bryce "Zooko" Wilcox-O'Hearn
# This file is licensed under the
# GNU Lesser General Public License v2.1.
# See the file COPYING or visit http://www.gnu.org/ for details.
__revision__ = "$Id: EGTPConstants.py,v 1.2 2002/12/02 19:58:47 myers_carpenter Exp $"
# length of RSA public moduli in 8-bit bytes (octets)
# Note that it is allowable for some of the high order bits to be 0. It is even
# allowable for more than 8 of those bits to be 0 without changing the "length" of the
# modulus. This is really then the log-base-2 of the size of the space from which we
# randomly choose such values, rather than the "length" of the binary encoding of
# any particular value.
SIZE_OF_MODULAR_VALUES = 1024/8
# Your code should probably be written to work with any public exponent. It is best not to use
# this constant. But it is here because mesgen uses it currently.
HARDCODED_RSA_PUBLIC_EXPONENT = 3
# size of ids, secrets, random numbers, salt and other things that must be universally unique
# in 8-bit bytes (octets)
# You absolutely cannot change this number. In fact, it is just being hardcoded all over the place
# and this variable is useful only as documentation.
SIZE_OF_UNIQS = 20
|
lgpl-2.1
|
jounex/hue
|
desktop/core/ext-py/Django-1.6.10/tests/file_uploads/tests.py
|
37
|
16007
|
#! -*- coding: utf-8 -*-
from __future__ import absolute_import, unicode_literals
import base64
import errno
import hashlib
import json
import os
import shutil
import tempfile as sys_tempfile
from django.core.files import temp as tempfile
from django.core.files.uploadedfile import SimpleUploadedFile
from django.http.multipartparser import MultiPartParser
from django.test import TestCase, client
from django.test.utils import override_settings
from django.utils.encoding import force_bytes
from django.utils.six import StringIO
from django.utils import unittest
from . import uploadhandler
from .models import FileModel
UNICODE_FILENAME = 'test-0123456789_中文_Orléans.jpg'
MEDIA_ROOT = sys_tempfile.mkdtemp(dir=os.environ['DJANGO_TEST_TEMP_DIR'])
UPLOAD_TO = os.path.join(MEDIA_ROOT, 'test_upload')
@override_settings(MEDIA_ROOT=MEDIA_ROOT)
class FileUploadTests(TestCase):
@classmethod
def setUpClass(cls):
if not os.path.isdir(MEDIA_ROOT):
os.makedirs(MEDIA_ROOT)
@classmethod
def tearDownClass(cls):
shutil.rmtree(MEDIA_ROOT)
def test_simple_upload(self):
with open(__file__, 'rb') as fp:
post_data = {
'name': 'Ringo',
'file_field': fp,
}
response = self.client.post('/file_uploads/upload/', post_data)
self.assertEqual(response.status_code, 200)
def test_large_upload(self):
tdir = tempfile.gettempdir()
file1 = tempfile.NamedTemporaryFile(suffix=".file1", dir=tdir)
file1.write(b'a' * (2 ** 21))
file1.seek(0)
file2 = tempfile.NamedTemporaryFile(suffix=".file2", dir=tdir)
file2.write(b'a' * (10 * 2 ** 20))
file2.seek(0)
post_data = {
'name': 'Ringo',
'file_field1': file1,
'file_field2': file2,
}
for key in list(post_data):
try:
post_data[key + '_hash'] = hashlib.sha1(post_data[key].read()).hexdigest()
post_data[key].seek(0)
except AttributeError:
post_data[key + '_hash'] = hashlib.sha1(force_bytes(post_data[key])).hexdigest()
response = self.client.post('/file_uploads/verify/', post_data)
self.assertEqual(response.status_code, 200)
def _test_base64_upload(self, content):
payload = client.FakePayload("\r\n".join([
'--' + client.BOUNDARY,
'Content-Disposition: form-data; name="file"; filename="test.txt"',
'Content-Type: application/octet-stream',
'Content-Transfer-Encoding: base64',
'',]))
payload.write(b"\r\n" + base64.b64encode(force_bytes(content)) + b"\r\n")
payload.write('--' + client.BOUNDARY + '--\r\n')
r = {
'CONTENT_LENGTH': len(payload),
'CONTENT_TYPE': client.MULTIPART_CONTENT,
'PATH_INFO': "/file_uploads/echo_content/",
'REQUEST_METHOD': 'POST',
'wsgi.input': payload,
}
response = self.client.request(**r)
received = json.loads(response.content.decode('utf-8'))
self.assertEqual(received['file'], content)
def test_base64_upload(self):
self._test_base64_upload("This data will be transmitted base64-encoded.")
def test_big_base64_upload(self):
self._test_base64_upload("Big data" * 68000) # > 512Kb
def test_unicode_file_name(self):
tdir = sys_tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, tdir, True)
# This file contains chinese symbols and an accented char in the name.
with open(os.path.join(tdir, UNICODE_FILENAME), 'w+b') as file1:
file1.write(b'b' * (2 ** 10))
file1.seek(0)
post_data = {
'file_unicode': file1,
}
response = self.client.post('/file_uploads/unicode_name/', post_data)
self.assertEqual(response.status_code, 200)
def test_dangerous_file_names(self):
"""Uploaded file names should be sanitized before ever reaching the view."""
        # This test simulates possible directory traversal attacks by a
        # malicious uploader. We have to do some monkeybusiness here to construct
        # a malicious payload with an invalid file name (containing os.sep or
        # os.pardir). This is similar to what an attacker would need to do when
        # trying such an attack.
scary_file_names = [
"/tmp/hax0rd.txt", # Absolute path, *nix-style.
"C:\\Windows\\hax0rd.txt", # Absolute path, win-syle.
"C:/Windows/hax0rd.txt", # Absolute path, broken-style.
"\\tmp\\hax0rd.txt", # Absolute path, broken in a different way.
"/tmp\\hax0rd.txt", # Absolute path, broken by mixing.
"subdir/hax0rd.txt", # Descendant path, *nix-style.
"subdir\\hax0rd.txt", # Descendant path, win-style.
"sub/dir\\hax0rd.txt", # Descendant path, mixed.
"../../hax0rd.txt", # Relative path, *nix-style.
"..\\..\\hax0rd.txt", # Relative path, win-style.
"../..\\hax0rd.txt" # Relative path, mixed.
]
payload = client.FakePayload()
for i, name in enumerate(scary_file_names):
payload.write('\r\n'.join([
'--' + client.BOUNDARY,
'Content-Disposition: form-data; name="file%s"; filename="%s"' % (i, name),
'Content-Type: application/octet-stream',
'',
'You got pwnd.\r\n'
]))
payload.write('\r\n--' + client.BOUNDARY + '--\r\n')
r = {
'CONTENT_LENGTH': len(payload),
'CONTENT_TYPE': client.MULTIPART_CONTENT,
'PATH_INFO': "/file_uploads/echo/",
'REQUEST_METHOD': 'POST',
'wsgi.input': payload,
}
response = self.client.request(**r)
# The filenames should have been sanitized by the time it got to the view.
        received = json.loads(response.content.decode('utf-8'))
        for i, name in enumerate(scary_file_names):
            got = received["file%s" % i]
self.assertEqual(got, "hax0rd.txt")
def test_filename_overflow(self):
"""File names over 256 characters (dangerous on some platforms) get fixed up."""
name = "%s.txt" % ("f"*500)
payload = client.FakePayload("\r\n".join([
'--' + client.BOUNDARY,
'Content-Disposition: form-data; name="file"; filename="%s"' % name,
'Content-Type: application/octet-stream',
'',
'Oops.'
'--' + client.BOUNDARY + '--',
'',
]))
r = {
'CONTENT_LENGTH': len(payload),
'CONTENT_TYPE': client.MULTIPART_CONTENT,
'PATH_INFO': "/file_uploads/echo/",
'REQUEST_METHOD': 'POST',
'wsgi.input': payload,
}
got = json.loads(self.client.request(**r).content.decode('utf-8'))
self.assertTrue(len(got['file']) < 256, "Got a long file name (%s characters)." % len(got['file']))
def test_truncated_multipart_handled_gracefully(self):
"""
If passed an incomplete multipart message, MultiPartParser does not
attempt to read beyond the end of the stream, and simply will handle
the part that can be parsed gracefully.
"""
payload_str = "\r\n".join([
'--' + client.BOUNDARY,
'Content-Disposition: form-data; name="file"; filename="foo.txt"',
'Content-Type: application/octet-stream',
'',
'file contents'
'--' + client.BOUNDARY + '--',
'',
])
payload = client.FakePayload(payload_str[:-10])
r = {
'CONTENT_LENGTH': len(payload),
'CONTENT_TYPE': client.MULTIPART_CONTENT,
'PATH_INFO': '/file_uploads/echo/',
'REQUEST_METHOD': 'POST',
'wsgi.input': payload,
}
got = json.loads(self.client.request(**r).content.decode('utf-8'))
self.assertEqual(got, {})
def test_empty_multipart_handled_gracefully(self):
"""
If passed an empty multipart message, MultiPartParser will return
an empty QueryDict.
"""
r = {
'CONTENT_LENGTH': 0,
'CONTENT_TYPE': client.MULTIPART_CONTENT,
'PATH_INFO': '/file_uploads/echo/',
'REQUEST_METHOD': 'POST',
'wsgi.input': client.FakePayload(b''),
}
got = json.loads(self.client.request(**r).content.decode('utf-8'))
self.assertEqual(got, {})
def test_custom_upload_handler(self):
# A small file (under the 5M quota)
smallfile = tempfile.NamedTemporaryFile()
smallfile.write(b'a' * (2 ** 21))
smallfile.seek(0)
# A big file (over the quota)
bigfile = tempfile.NamedTemporaryFile()
bigfile.write(b'a' * (10 * 2 ** 20))
bigfile.seek(0)
# Small file posting should work.
response = self.client.post('/file_uploads/quota/', {'f': smallfile})
got = json.loads(response.content.decode('utf-8'))
self.assertTrue('f' in got)
# Large files don't go through.
response = self.client.post("/file_uploads/quota/", {'f': bigfile})
got = json.loads(response.content.decode('utf-8'))
self.assertTrue('f' not in got)
def test_broken_custom_upload_handler(self):
f = tempfile.NamedTemporaryFile()
f.write(b'a' * (2 ** 21))
f.seek(0)
# AttributeError: You cannot alter upload handlers after the upload has been processed.
self.assertRaises(
AttributeError,
self.client.post,
'/file_uploads/quota/broken/',
{'f': f}
)
def test_fileupload_getlist(self):
file1 = tempfile.NamedTemporaryFile()
file1.write(b'a' * (2 ** 23))
file1.seek(0)
file2 = tempfile.NamedTemporaryFile()
file2.write(b'a' * (2 * 2 ** 18))
file2.seek(0)
file2a = tempfile.NamedTemporaryFile()
file2a.write(b'a' * (5 * 2 ** 20))
file2a.seek(0)
response = self.client.post('/file_uploads/getlist_count/', {
'file1': file1,
'field1': 'test',
'field2': 'test3',
'field3': 'test5',
'field4': 'test6',
'field5': 'test7',
'file2': (file2, file2a)
})
got = json.loads(response.content.decode('utf-8'))
self.assertEqual(got.get('file1'), 1)
self.assertEqual(got.get('file2'), 2)
def test_file_error_blocking(self):
"""
The server should not block when there are upload errors (bug #8622).
This can happen if something -- i.e. an exception handler -- tries to
access POST while handling an error in parsing POST. This shouldn't
cause an infinite loop!
"""
class POSTAccessingHandler(client.ClientHandler):
"""A handler that'll access POST during an exception."""
def handle_uncaught_exception(self, request, resolver, exc_info):
ret = super(POSTAccessingHandler, self).handle_uncaught_exception(request, resolver, exc_info)
p = request.POST
return ret
        # Maybe this is a little more complicated than it needs to be; but if
# the django.test.client.FakePayload.read() implementation changes then
# this test would fail. So we need to know exactly what kind of error
# it raises when there is an attempt to read more than the available bytes:
try:
client.FakePayload(b'a').read(2)
except Exception as err:
reference_error = err
# install the custom handler that tries to access request.POST
self.client.handler = POSTAccessingHandler()
with open(__file__, 'rb') as fp:
post_data = {
'name': 'Ringo',
'file_field': fp,
}
try:
response = self.client.post('/file_uploads/upload_errors/', post_data)
except reference_error.__class__ as err:
self.assertFalse(
str(err) == str(reference_error),
"Caught a repeated exception that'll cause an infinite loop in file uploads."
)
except Exception as err:
# CustomUploadError is the error that should have been raised
self.assertEqual(err.__class__, uploadhandler.CustomUploadError)
def test_filename_case_preservation(self):
"""
The storage backend shouldn't mess with the case of the filenames
uploaded.
"""
# Synthesize the contents of a file upload with a mixed case filename
# so we don't have to carry such a file in the Django tests source code
# tree.
vars = {'boundary': 'oUrBoUnDaRyStRiNg'}
post_data = [
'--%(boundary)s',
'Content-Disposition: form-data; name="file_field"; '
'filename="MiXeD_cAsE.txt"',
'Content-Type: application/octet-stream',
'',
'file contents\n'
'',
'--%(boundary)s--\r\n',
]
response = self.client.post(
'/file_uploads/filename_case/',
'\r\n'.join(post_data) % vars,
'multipart/form-data; boundary=%(boundary)s' % vars
)
self.assertEqual(response.status_code, 200)
id = int(response.content)
obj = FileModel.objects.get(pk=id)
# The name of the file uploaded and the file stored in the server-side
# shouldn't differ.
self.assertEqual(os.path.basename(obj.testfile.path), 'MiXeD_cAsE.txt')
@override_settings(MEDIA_ROOT=MEDIA_ROOT)
class DirectoryCreationTests(TestCase):
"""
Tests for error handling during directory creation
via _save_FIELD_file (ticket #6450)
"""
@classmethod
def setUpClass(cls):
if not os.path.isdir(MEDIA_ROOT):
os.makedirs(MEDIA_ROOT)
@classmethod
def tearDownClass(cls):
shutil.rmtree(MEDIA_ROOT)
def setUp(self):
self.obj = FileModel()
def test_readonly_root(self):
"""Permission errors are not swallowed"""
os.chmod(MEDIA_ROOT, 0o500)
self.addCleanup(os.chmod, MEDIA_ROOT, 0o700)
try:
self.obj.testfile.save('foo.txt', SimpleUploadedFile('foo.txt', b'x'))
except OSError as err:
self.assertEqual(err.errno, errno.EACCES)
except Exception:
self.fail("OSError [Errno %s] not raised." % errno.EACCES)
def test_not_a_directory(self):
"""The correct IOError is raised when the upload directory name exists but isn't a directory"""
# Create a file with the upload directory name
open(UPLOAD_TO, 'wb').close()
self.addCleanup(os.remove, UPLOAD_TO)
with self.assertRaises(IOError) as exc_info:
self.obj.testfile.save('foo.txt', SimpleUploadedFile('foo.txt', b'x'))
# The test needs to be done on a specific string as IOError
# is raised even without the patch (just not early enough)
self.assertEqual(exc_info.exception.args[0],
"%s exists and is not a directory." % UPLOAD_TO)
class MultiParserTests(unittest.TestCase):
def test_empty_upload_handlers(self):
# We're not actually parsing here; just checking if the parser properly
# instantiates with empty upload handlers.
parser = MultiPartParser({
'CONTENT_TYPE': 'multipart/form-data; boundary=_foo',
'CONTENT_LENGTH': '1'
}, StringIO('x'), [], 'utf-8')
|
apache-2.0
|
stefanbraun-private/pyVisiToolkit
|
src/misc/Visi_Snake.py
|
1
|
4382
|
#!/usr/bin/env python
# encoding: utf-8
"""
misc.Visi_Snake.py
"Snake"-game implemented in ProMos NT(c).
Based on code from https://gist.github.com/sanchitgangwar/2158089
with special thanks to Sanchit Gangwar | https://github.com/sanchitgangwar
notes:
-textfields in GE can simulate a character-based display:
 -font "Courier New" is monospaced (every character uses the same width)
 -direction of text "center justified" (only this way you get multiline text)
 -initialisation is a DMS-key with datatype STR and formatstring "%s"
 -every row is a substring (padded with spaces to the same length);
  the rows are joined with "\n" to build the string representing the whole display
-research showed that it's possible to store a lot of data in a DMS datapoint of type STR:
 -at most 999 characters get serialized to the DMS file, but it seems that 79 chars + \NULL is the maximum by design.
 -during runtime it can store much more
 -GE crashes when you try to display more than 8k characters in a textfield
Copyright (C) 2017 Stefan Braun
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
import dms.dmspipe
import misc.Curses_emulator as curses
import sys
import locale
import argparse
import random
ENCODING_STDOUT1 = ''
ENCODING_STDOUT2 = ''
ENCODING_LOCALE = ''
ENCODING_FILENAMES = sys.getfilesystemencoding()
DEBUGGING = True
# floating point precision in ProMoS NT(c)
NOF_DECIMAL_DIGITS = 6
NOF_ROWS = 32
NOF_COLUMNS = 32
def get_encoding():
global ENCODING_STDOUT1
# following code seems only to work in IDE...
ENCODING_STDOUT1= sys.stdout.encoding or sys.getfilesystemencoding()
# hint from http://stackoverflow.com/questions/9226516/python-windows-console-and-encodings-cp-850-vs-cp1252
# following code doesn't work in IDE because PyCharm uses UTF-8 and not encoding from Windows command prompt...
global ENCODING_STDOUT2
ENCODING_STDOUT2 = locale.getpreferredencoding()
# another try: using encoding-guessing of IDE "IDLE"
# hint from https://mail.python.org/pipermail/tkinter-discuss/2010-December/002602.html
global ENCODING_LOCALE
import idlelib.IOBinding
ENCODING_LOCALE = idlelib.IOBinding.encoding
print(u'Using encoding "' + ENCODING_LOCALE + u'" for input and trying "' + ENCODING_STDOUT1 + u'" or "' + ENCODING_STDOUT2 + u'" for STDOUT')
def my_print(line):
"""
wrapper for print()-function:
-does explicit conversion from unicode to encoded byte-string
    -when the parameter is already an encoded byte-string, it is converted to unicode using the "encoding cookie" and then to an encoded byte-string
"""
unicode_line = u''
if type(line) == str:
# conversion byte-string -> unicode -> byte-string
# (first encoding is source file encoding, second one is encoding of console)
if not ENCODING_LOCALE:
get_encoding()
unicode_line = line.decode(ENCODING_LOCALE)
else:
# assuming unicode-string (we don't care about other situations, when called with wrong datatype then print() will throw exception)
unicode_line = line
if not (ENCODING_STDOUT1 and ENCODING_STDOUT2):
get_encoding()
# when a character isn't available in given ENCODING, then it gets replaced by "?". Other options:
# http://stackoverflow.com/questions/3224268/python-unicode-encode-error
try:
bytestring_line = unicode_line.encode(ENCODING_STDOUT1, errors='strict')
except UnicodeEncodeError:
bytestring_line = unicode_line.encode(ENCODING_STDOUT2, errors='strict')
print(bytestring_line)
class VisiSnake(object):
VISISNAKE_DMS_KEY = "Visi_Snake"
def main(argv=None):
get_encoding()
my_print(u'misc.Visi_Snake.py')
my_print(u'******************')
curr_snake = VisiSnake()
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Snake game for ProMoS NT(c).')
args = parser.parse_args()
status = main()
sys.exit(status)
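# --- Illustrative sketch (editor's addition, not part of the original script) ---
# The notes in the module docstring describe the display technique: every row is
# padded with spaces to a fixed width and the rows are joined with "\n" into one
# string that is written to a STR datapoint.  The helper name and default width
# below are only examples.
def _render_display(rows, width=NOF_COLUMNS):
    """Pad each row to 'width' characters and join the rows into one display string."""
    return u'\n'.join(row.ljust(width)[:width] for row in rows)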
|
gpl-3.0
|
vikatory/kbengine
|
kbe/src/lib/python/Lib/sysconfig.py
|
84
|
24509
|
"""Access to Python's configuration information."""
import os
import sys
from os.path import pardir, realpath
__all__ = [
'get_config_h_filename',
'get_config_var',
'get_config_vars',
'get_makefile_filename',
'get_path',
'get_path_names',
'get_paths',
'get_platform',
'get_python_version',
'get_scheme_names',
'parse_config_h',
]
_INSTALL_SCHEMES = {
'posix_prefix': {
'stdlib': '{installed_base}/lib/python{py_version_short}',
'platstdlib': '{platbase}/lib/python{py_version_short}',
'purelib': '{base}/lib/python{py_version_short}/site-packages',
'platlib': '{platbase}/lib/python{py_version_short}/site-packages',
'include':
'{installed_base}/include/python{py_version_short}{abiflags}',
'platinclude':
'{installed_platbase}/include/python{py_version_short}{abiflags}',
'scripts': '{base}/bin',
'data': '{base}',
},
'posix_home': {
'stdlib': '{installed_base}/lib/python',
'platstdlib': '{base}/lib/python',
'purelib': '{base}/lib/python',
'platlib': '{base}/lib/python',
'include': '{installed_base}/include/python',
'platinclude': '{installed_base}/include/python',
'scripts': '{base}/bin',
'data': '{base}',
},
'nt': {
'stdlib': '{installed_base}/Lib',
'platstdlib': '{base}/Lib',
'purelib': '{base}/Lib/site-packages',
'platlib': '{base}/Lib/site-packages',
'include': '{installed_base}/Include',
'platinclude': '{installed_base}/Include',
'scripts': '{base}/Scripts',
'data': '{base}',
},
'nt_user': {
'stdlib': '{userbase}/Python{py_version_nodot}',
'platstdlib': '{userbase}/Python{py_version_nodot}',
'purelib': '{userbase}/Python{py_version_nodot}/site-packages',
'platlib': '{userbase}/Python{py_version_nodot}/site-packages',
'include': '{userbase}/Python{py_version_nodot}/Include',
'scripts': '{userbase}/Scripts',
'data': '{userbase}',
},
'posix_user': {
'stdlib': '{userbase}/lib/python{py_version_short}',
'platstdlib': '{userbase}/lib/python{py_version_short}',
'purelib': '{userbase}/lib/python{py_version_short}/site-packages',
'platlib': '{userbase}/lib/python{py_version_short}/site-packages',
'include': '{userbase}/include/python{py_version_short}',
'scripts': '{userbase}/bin',
'data': '{userbase}',
},
'osx_framework_user': {
'stdlib': '{userbase}/lib/python',
'platstdlib': '{userbase}/lib/python',
'purelib': '{userbase}/lib/python/site-packages',
'platlib': '{userbase}/lib/python/site-packages',
'include': '{userbase}/include',
'scripts': '{userbase}/bin',
'data': '{userbase}',
},
}
_SCHEME_KEYS = ('stdlib', 'platstdlib', 'purelib', 'platlib', 'include',
'scripts', 'data')
# FIXME don't rely on sys.version here, its format is an implementation detail
# of CPython, use sys.version_info or sys.hexversion
_PY_VERSION = sys.version.split()[0]
_PY_VERSION_SHORT = sys.version[:3]
_PY_VERSION_SHORT_NO_DOT = _PY_VERSION[0] + _PY_VERSION[2]
_PREFIX = os.path.normpath(sys.prefix)
_BASE_PREFIX = os.path.normpath(sys.base_prefix)
_EXEC_PREFIX = os.path.normpath(sys.exec_prefix)
_BASE_EXEC_PREFIX = os.path.normpath(sys.base_exec_prefix)
_CONFIG_VARS = None
_USER_BASE = None
def _safe_realpath(path):
try:
return realpath(path)
except OSError:
return path
if sys.executable:
_PROJECT_BASE = os.path.dirname(_safe_realpath(sys.executable))
else:
# sys.executable can be empty if argv[0] has been changed and Python is
# unable to retrieve the real program name
_PROJECT_BASE = _safe_realpath(os.getcwd())
if os.name == "nt" and "pcbuild" in _PROJECT_BASE[-8:].lower():
_PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir))
# PC/VS7.1
if os.name == "nt" and "\\pc\\v" in _PROJECT_BASE[-10:].lower():
_PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir, pardir))
# PC/AMD64
if os.name == "nt" and "\\pcbuild\\amd64" in _PROJECT_BASE[-14:].lower():
_PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir, pardir))
# set for cross builds
if "_PYTHON_PROJECT_BASE" in os.environ:
_PROJECT_BASE = _safe_realpath(os.environ["_PYTHON_PROJECT_BASE"])
def _is_python_source_dir(d):
for fn in ("Setup.dist", "Setup.local"):
if os.path.isfile(os.path.join(d, "Modules", fn)):
return True
return False
_sys_home = getattr(sys, '_home', None)
if _sys_home and os.name == 'nt' and \
_sys_home.lower().endswith(('pcbuild', 'pcbuild\\amd64')):
_sys_home = os.path.dirname(_sys_home)
if _sys_home.endswith('pcbuild'): # must be amd64
_sys_home = os.path.dirname(_sys_home)
def is_python_build(check_home=False):
if check_home and _sys_home:
return _is_python_source_dir(_sys_home)
return _is_python_source_dir(_PROJECT_BASE)
_PYTHON_BUILD = is_python_build(True)
if _PYTHON_BUILD:
for scheme in ('posix_prefix', 'posix_home'):
_INSTALL_SCHEMES[scheme]['include'] = '{srcdir}/Include'
_INSTALL_SCHEMES[scheme]['platinclude'] = '{projectbase}/.'
def _subst_vars(s, local_vars):
try:
return s.format(**local_vars)
except KeyError:
try:
return s.format(**os.environ)
except KeyError as var:
raise AttributeError('{%s}' % var)
def _extend_dict(target_dict, other_dict):
target_keys = target_dict.keys()
for key, value in other_dict.items():
if key in target_keys:
continue
target_dict[key] = value
def _expand_vars(scheme, vars):
res = {}
if vars is None:
vars = {}
_extend_dict(vars, get_config_vars())
for key, value in _INSTALL_SCHEMES[scheme].items():
if os.name in ('posix', 'nt'):
value = os.path.expanduser(value)
res[key] = os.path.normpath(_subst_vars(value, vars))
return res
def _get_default_scheme():
if os.name == 'posix':
# the default scheme for posix is posix_prefix
return 'posix_prefix'
return os.name
def _getuserbase():
env_base = os.environ.get("PYTHONUSERBASE", None)
def joinuser(*args):
return os.path.expanduser(os.path.join(*args))
if os.name == "nt":
base = os.environ.get("APPDATA") or "~"
if env_base:
return env_base
else:
return joinuser(base, "Python")
if sys.platform == "darwin":
framework = get_config_var("PYTHONFRAMEWORK")
if framework:
if env_base:
return env_base
else:
return joinuser("~", "Library", framework, "%d.%d" %
sys.version_info[:2])
if env_base:
return env_base
else:
return joinuser("~", ".local")
def _parse_makefile(filename, vars=None):
"""Parse a Makefile-style file.
A dictionary containing name/value pairs is returned. If an
optional dictionary is passed in as the second argument, it is
used instead of a new dictionary.
"""
# Regexes needed for parsing Makefile (and similar syntaxes,
# like old-style Setup files).
import re
    _variable_rx = re.compile(r"([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)")
_findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)")
_findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}")
if vars is None:
vars = {}
done = {}
notdone = {}
with open(filename, errors="surrogateescape") as f:
lines = f.readlines()
for line in lines:
if line.startswith('#') or line.strip() == '':
continue
m = _variable_rx.match(line)
if m:
n, v = m.group(1, 2)
v = v.strip()
# `$$' is a literal `$' in make
tmpv = v.replace('$$', '')
if "$" in tmpv:
notdone[n] = v
else:
try:
v = int(v)
except ValueError:
# insert literal `$'
done[n] = v.replace('$$', '$')
else:
done[n] = v
# do variable interpolation here
variables = list(notdone.keys())
# Variables with a 'PY_' prefix in the makefile. These need to
# be made available without that prefix through sysconfig.
# Special care is needed to ensure that variable expansion works, even
# if the expansion uses the name without a prefix.
renamed_variables = ('CFLAGS', 'LDFLAGS', 'CPPFLAGS')
while len(variables) > 0:
for name in tuple(variables):
value = notdone[name]
m = _findvar1_rx.search(value) or _findvar2_rx.search(value)
if m is not None:
n = m.group(1)
found = True
if n in done:
item = str(done[n])
elif n in notdone:
# get it on a subsequent round
found = False
elif n in os.environ:
# do it like make: fall back to environment
item = os.environ[n]
elif n in renamed_variables:
if (name.startswith('PY_') and
name[3:] in renamed_variables):
item = ""
elif 'PY_' + n in notdone:
found = False
else:
item = str(done['PY_' + n])
else:
done[n] = item = ""
if found:
after = value[m.end():]
value = value[:m.start()] + item + after
if "$" in after:
notdone[name] = value
else:
try:
value = int(value)
except ValueError:
done[name] = value.strip()
else:
done[name] = value
variables.remove(name)
if name.startswith('PY_') \
and name[3:] in renamed_variables:
name = name[3:]
if name not in done:
done[name] = value
else:
# bogus variable reference (e.g. "prefix=$/opt/python");
# just drop it since we can't deal
done[name] = value
variables.remove(name)
# strip spurious spaces
for k, v in done.items():
if isinstance(v, str):
done[k] = v.strip()
# save the results in the global dictionary
vars.update(done)
return vars
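# Minimal sketch of the interpolation above (hypothetical Makefile lines, not
# taken from a real CPython Makefile):
#   CC=gcc
#   LDSHARED=$(CC) -shared
# parses to {'CC': 'gcc', 'LDSHARED': 'gcc -shared'}; a `$$` stays a literal `$`.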
def get_makefile_filename():
"""Return the path of the Makefile."""
if _PYTHON_BUILD:
return os.path.join(_sys_home or _PROJECT_BASE, "Makefile")
if hasattr(sys, 'abiflags'):
config_dir_name = 'config-%s%s' % (_PY_VERSION_SHORT, sys.abiflags)
else:
config_dir_name = 'config'
return os.path.join(get_path('stdlib'), config_dir_name, 'Makefile')
def _generate_posix_vars():
"""Generate the Python module containing build-time variables."""
import pprint
vars = {}
# load the installed Makefile:
makefile = get_makefile_filename()
try:
_parse_makefile(makefile, vars)
except OSError as e:
msg = "invalid Python installation: unable to open %s" % makefile
if hasattr(e, "strerror"):
msg = msg + " (%s)" % e.strerror
raise OSError(msg)
# load the installed pyconfig.h:
config_h = get_config_h_filename()
try:
with open(config_h) as f:
parse_config_h(f, vars)
except OSError as e:
msg = "invalid Python installation: unable to open %s" % config_h
if hasattr(e, "strerror"):
msg = msg + " (%s)" % e.strerror
raise OSError(msg)
# On AIX, there are wrong paths to the linker scripts in the Makefile
# -- these paths are relative to the Python source, but when installed
# the scripts are in another directory.
if _PYTHON_BUILD:
vars['BLDSHARED'] = vars['LDSHARED']
# There's a chicken-and-egg situation on OS X with regards to the
# _sysconfigdata module after the changes introduced by #15298:
# get_config_vars() is called by get_platform() as part of the
# `make pybuilddir.txt` target -- which is a precursor to the
# _sysconfigdata.py module being constructed. Unfortunately,
# get_config_vars() eventually calls _init_posix(), which attempts
# to import _sysconfigdata, which we won't have built yet. In order
# for _init_posix() to work, if we're on Darwin, just mock up the
# _sysconfigdata module manually and populate it with the build vars.
# This is more than sufficient for ensuring the subsequent call to
# get_platform() succeeds.
name = '_sysconfigdata'
if 'darwin' in sys.platform:
import types
module = types.ModuleType(name)
module.build_time_vars = vars
sys.modules[name] = module
pybuilddir = 'build/lib.%s-%s' % (get_platform(), sys.version[:3])
if hasattr(sys, "gettotalrefcount"):
pybuilddir += '-pydebug'
os.makedirs(pybuilddir, exist_ok=True)
destfile = os.path.join(pybuilddir, name + '.py')
with open(destfile, 'w', encoding='utf8') as f:
f.write('# system configuration generated and used by'
' the sysconfig module\n')
f.write('build_time_vars = ')
pprint.pprint(vars, stream=f)
# Create file used for sys.path fixup -- see Modules/getpath.c
with open('pybuilddir.txt', 'w', encoding='ascii') as f:
f.write(pybuilddir)
def _init_posix(vars):
"""Initialize the module as appropriate for POSIX systems."""
# _sysconfigdata is generated at build time, see _generate_posix_vars()
from _sysconfigdata import build_time_vars
vars.update(build_time_vars)
def _init_non_posix(vars):
"""Initialize the module as appropriate for NT"""
# set basic install directories
vars['LIBDEST'] = get_path('stdlib')
vars['BINLIBDEST'] = get_path('platstdlib')
vars['INCLUDEPY'] = get_path('include')
vars['EXT_SUFFIX'] = '.pyd'
vars['EXE'] = '.exe'
vars['VERSION'] = _PY_VERSION_SHORT_NO_DOT
vars['BINDIR'] = os.path.dirname(_safe_realpath(sys.executable))
#
# public APIs
#
def parse_config_h(fp, vars=None):
"""Parse a config.h-style file.
A dictionary containing name/value pairs is returned. If an
optional dictionary is passed in as the second argument, it is
used instead of a new dictionary.
"""
if vars is None:
vars = {}
import re
define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n")
undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n")
while True:
line = fp.readline()
if not line:
break
m = define_rx.match(line)
if m:
n, v = m.group(1, 2)
try:
v = int(v)
except ValueError:
pass
vars[n] = v
else:
m = undef_rx.match(line)
if m:
vars[m.group(1)] = 0
return vars
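# Sketch of the mapping produced above (hypothetical pyconfig.h lines):
#   #define HAVE_FORK 1       ->  vars['HAVE_FORK'] = 1
#   /* #undef HAVE_FORK */    ->  vars['HAVE_FORK'] = 0
#   #define VERSION "3.4"     ->  vars['VERSION'] = '"3.4"' (kept as a string)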
def get_config_h_filename():
"""Return the path of pyconfig.h."""
if _PYTHON_BUILD:
if os.name == "nt":
inc_dir = os.path.join(_sys_home or _PROJECT_BASE, "PC")
else:
inc_dir = _sys_home or _PROJECT_BASE
else:
inc_dir = get_path('platinclude')
return os.path.join(inc_dir, 'pyconfig.h')
def get_scheme_names():
"""Return a tuple containing the schemes names."""
return tuple(sorted(_INSTALL_SCHEMES))
def get_path_names():
"""Return a tuple containing the paths names."""
return _SCHEME_KEYS
def get_paths(scheme=_get_default_scheme(), vars=None, expand=True):
"""Return a mapping containing an install scheme.
``scheme`` is the install scheme name. If not provided, it will
return the default scheme for the current platform.
"""
if expand:
return _expand_vars(scheme, vars)
else:
return _INSTALL_SCHEMES[scheme]
def get_path(name, scheme=_get_default_scheme(), vars=None, expand=True):
"""Return a path corresponding to the scheme.
``scheme`` is the install scheme name.
"""
return get_paths(scheme, vars, expand)[name]
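# Example usage (values are illustrative; they depend on the interpreter):
#   get_path('purelib')               -> e.g. '/usr/lib/python3.4/site-packages'
#   get_paths('nt_user')['scripts']   -> e.g. 'C:\Users\me\AppData\Roaming\Python\Scripts'
#   get_paths(expand=False)['stdlib'] -> '{installed_base}/lib/python{py_version_short}'
#                                        on the posix_prefix scheme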
def get_config_vars(*args):
"""With no arguments, return a dictionary of all configuration
variables relevant for the current platform.
On Unix, this means every variable defined in Python's installed Makefile;
On Windows it's a much smaller set.
With arguments, return a list of values that result from looking up
each argument in the configuration variable dictionary.
"""
global _CONFIG_VARS
if _CONFIG_VARS is None:
_CONFIG_VARS = {}
# Normalized versions of prefix and exec_prefix are handy to have;
# in fact, these are the standard versions used most places in the
# Distutils.
_CONFIG_VARS['prefix'] = _PREFIX
_CONFIG_VARS['exec_prefix'] = _EXEC_PREFIX
_CONFIG_VARS['py_version'] = _PY_VERSION
_CONFIG_VARS['py_version_short'] = _PY_VERSION_SHORT
_CONFIG_VARS['py_version_nodot'] = _PY_VERSION[0] + _PY_VERSION[2]
_CONFIG_VARS['installed_base'] = _BASE_PREFIX
_CONFIG_VARS['base'] = _PREFIX
_CONFIG_VARS['installed_platbase'] = _BASE_EXEC_PREFIX
_CONFIG_VARS['platbase'] = _EXEC_PREFIX
_CONFIG_VARS['projectbase'] = _PROJECT_BASE
try:
_CONFIG_VARS['abiflags'] = sys.abiflags
except AttributeError:
# sys.abiflags may not be defined on all platforms.
_CONFIG_VARS['abiflags'] = ''
if os.name == 'nt':
_init_non_posix(_CONFIG_VARS)
if os.name == 'posix':
_init_posix(_CONFIG_VARS)
# For backward compatibility, see issue19555
SO = _CONFIG_VARS.get('EXT_SUFFIX')
if SO is not None:
_CONFIG_VARS['SO'] = SO
# Setting 'userbase' is done below the call to the
# init function to enable using 'get_config_var' in
# the init-function.
_CONFIG_VARS['userbase'] = _getuserbase()
# Always convert srcdir to an absolute path
srcdir = _CONFIG_VARS.get('srcdir', _PROJECT_BASE)
if os.name == 'posix':
if _PYTHON_BUILD:
# If srcdir is a relative path (typically '.' or '..')
# then it should be interpreted relative to the directory
# containing Makefile.
base = os.path.dirname(get_makefile_filename())
srcdir = os.path.join(base, srcdir)
else:
# srcdir is not meaningful since the installation is
# spread about the filesystem. We choose the
# directory containing the Makefile since we know it
# exists.
srcdir = os.path.dirname(get_makefile_filename())
_CONFIG_VARS['srcdir'] = _safe_realpath(srcdir)
# OS X platforms require special customization to handle
# multi-architecture, multi-os-version installers
if sys.platform == 'darwin':
import _osx_support
_osx_support.customize_config_vars(_CONFIG_VARS)
if args:
vals = []
for name in args:
vals.append(_CONFIG_VARS.get(name))
return vals
else:
return _CONFIG_VARS
def get_config_var(name):
"""Return the value of a single variable using the dictionary returned by
'get_config_vars()'.
Equivalent to get_config_vars().get(name)
"""
if name == 'SO':
import warnings
warnings.warn('SO is deprecated, use EXT_SUFFIX', DeprecationWarning, 2)
return get_config_vars().get(name)
def get_platform():
"""Return a string that identifies the current platform.
This is used mainly to distinguish platform-specific build directories and
platform-specific built distributions. Typically includes the OS name
and version and the architecture (as supplied by 'os.uname()'),
although the exact information included depends on the OS; eg. for IRIX
the architecture isn't particularly important (IRIX only runs on SGI
hardware), but for Linux the kernel version isn't particularly
important.
Examples of returned values:
linux-i586
linux-alpha (?)
solaris-2.6-sun4u
irix-5.3
irix64-6.2
Windows will return one of:
    win-amd64 (64bit Windows on AMD64 (aka x86_64, Intel64, EM64T, etc))
win-ia64 (64bit Windows on Itanium)
win32 (all others - specifically, sys.platform is returned)
For other non-POSIX platforms, currently just returns 'sys.platform'.
"""
if os.name == 'nt':
# sniff sys.version for architecture.
prefix = " bit ("
i = sys.version.find(prefix)
if i == -1:
return sys.platform
j = sys.version.find(")", i)
look = sys.version[i+len(prefix):j].lower()
if look == 'amd64':
return 'win-amd64'
if look == 'itanium':
return 'win-ia64'
return sys.platform
if os.name != "posix" or not hasattr(os, 'uname'):
# XXX what about the architecture? NT is Intel or Alpha
return sys.platform
# Set for cross builds explicitly
if "_PYTHON_HOST_PLATFORM" in os.environ:
return os.environ["_PYTHON_HOST_PLATFORM"]
# Try to distinguish various flavours of Unix
osname, host, release, version, machine = os.uname()
# Convert the OS name to lowercase, remove '/' characters
# (to accommodate BSD/OS), and translate spaces (for "Power Macintosh")
osname = osname.lower().replace('/', '')
machine = machine.replace(' ', '_')
machine = machine.replace('/', '-')
if osname[:5] == "linux":
# At least on Linux/Intel, 'machine' is the processor --
# i386, etc.
# XXX what about Alpha, SPARC, etc?
return "%s-%s" % (osname, machine)
elif osname[:5] == "sunos":
if release[0] >= "5": # SunOS 5 == Solaris 2
osname = "solaris"
release = "%d.%s" % (int(release[0]) - 3, release[2:])
        # We can't use "platform.architecture()[0]" because of a
        # bootstrap problem. We use a dict to get an error
        # if something suspicious happens.
bitness = {2147483647:"32bit", 9223372036854775807:"64bit"}
machine += ".%s" % bitness[sys.maxsize]
# fall through to standard osname-release-machine representation
elif osname[:4] == "irix": # could be "irix64"!
return "%s-%s" % (osname, release)
elif osname[:3] == "aix":
return "%s-%s.%s" % (osname, version, release)
elif osname[:6] == "cygwin":
osname = "cygwin"
import re
rel_re = re.compile(r'[\d.]+')
m = rel_re.match(release)
if m:
release = m.group()
elif osname[:6] == "darwin":
import _osx_support
osname, release, machine = _osx_support.get_platform_osx(
get_config_vars(),
osname, release, machine)
return "%s-%s-%s" % (osname, release, machine)
def get_python_version():
return _PY_VERSION_SHORT
def _print_dict(title, data):
for index, (key, value) in enumerate(sorted(data.items())):
if index == 0:
print('%s: ' % (title))
print('\t%s = "%s"' % (key, value))
def _main():
"""Display all information sysconfig detains."""
if '--generate-posix-vars' in sys.argv:
_generate_posix_vars()
return
print('Platform: "%s"' % get_platform())
print('Python version: "%s"' % get_python_version())
print('Current installation scheme: "%s"' % _get_default_scheme())
print()
_print_dict('Paths', get_paths())
print()
_print_dict('Variables', get_config_vars())
if __name__ == '__main__':
_main()
|
lgpl-3.0
|
kenshay/ImageScripter
|
ProgramData/SystemFiles/Python/Lib/site-packages/numpy/lib/tests/test_financial.py
|
57
|
6779
|
from __future__ import division, absolute_import, print_function
import numpy as np
from numpy.testing import (
run_module_suite, TestCase, assert_, assert_almost_equal,
assert_allclose, assert_equal
)
class TestFinancial(TestCase):
def test_rate(self):
assert_almost_equal(np.rate(10, 0, -3500, 10000),
0.1107, 4)
def test_irr(self):
v = [-150000, 15000, 25000, 35000, 45000, 60000]
assert_almost_equal(np.irr(v), 0.0524, 2)
v = [-100, 0, 0, 74]
assert_almost_equal(np.irr(v), -0.0955, 2)
v = [-100, 39, 59, 55, 20]
assert_almost_equal(np.irr(v), 0.28095, 2)
v = [-100, 100, 0, -7]
assert_almost_equal(np.irr(v), -0.0833, 2)
v = [-100, 100, 0, 7]
assert_almost_equal(np.irr(v), 0.06206, 2)
v = [-5, 10.5, 1, -8, 1]
assert_almost_equal(np.irr(v), 0.0886, 2)
# Test that if there is no solution then np.irr returns nan
# Fixes gh-6744
v = [-1, -2, -3]
assert_equal(np.irr(v), np.nan)
def test_pv(self):
assert_almost_equal(np.pv(0.07, 20, 12000, 0), -127128.17, 2)
def test_fv(self):
assert_almost_equal(np.fv(0.075, 20, -2000, 0, 0), 86609.36, 2)
def test_pmt(self):
res = np.pmt(0.08/12, 5*12, 15000)
tgt = -304.145914
assert_allclose(res, tgt)
# Test the edge case where rate == 0.0
res = np.pmt(0.0, 5*12, 15000)
tgt = -250.0
assert_allclose(res, tgt)
# Test the case where we use broadcast and
# the arguments passed in are arrays.
res = np.pmt([[0.0, 0.8],[0.3, 0.8]],[12, 3],[2000, 20000])
tgt = np.array([[-166.66667, -19311.258],[-626.90814, -19311.258]])
assert_allclose(res, tgt)
    def test_ppmt(self):
        # first-period principal portion of the payment is a (negative) outflow
        assert_equal(np.round(np.ppmt(0.1/12, 1, 60, 55000), 2), -710.25)
    def test_ipmt(self):
        assert_equal(np.round(np.ipmt(0.1/12, 1, 24, 2000), 2), -16.67)
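    # Sanity sketch (not part of the original suite): for any single period the
    # payment splits into principal and interest, i.e.
    # np.ppmt(0.1/12, 1, 60, 55000) + np.ipmt(0.1/12, 1, 60, 55000)
    # equals np.pmt(0.1/12, 60, 55000) (about -1168.6).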
def test_nper(self):
assert_almost_equal(np.nper(0.075, -2000, 0, 100000.),
21.54, 2)
def test_nper2(self):
assert_almost_equal(np.nper(0.0, -2000, 0, 100000.),
50.0, 1)
def test_npv(self):
assert_almost_equal(
np.npv(0.05, [-15000, 1500, 2500, 3500, 4500, 6000]),
122.89, 2)
def test_mirr(self):
val = [-4500, -800, 800, 800, 600, 600, 800, 800, 700, 3000]
assert_almost_equal(np.mirr(val, 0.08, 0.055), 0.0666, 4)
val = [-120000, 39000, 30000, 21000, 37000, 46000]
assert_almost_equal(np.mirr(val, 0.10, 0.12), 0.126094, 6)
val = [100, 200, -50, 300, -200]
assert_almost_equal(np.mirr(val, 0.05, 0.06), 0.3428, 4)
val = [39000, 30000, 21000, 37000, 46000]
assert_(np.isnan(np.mirr(val, 0.10, 0.12)))
def test_when(self):
#begin
assert_almost_equal(np.rate(10, 20, -3500, 10000, 1),
np.rate(10, 20, -3500, 10000, 'begin'), 4)
#end
assert_almost_equal(np.rate(10, 20, -3500, 10000),
np.rate(10, 20, -3500, 10000, 'end'), 4)
assert_almost_equal(np.rate(10, 20, -3500, 10000, 0),
np.rate(10, 20, -3500, 10000, 'end'), 4)
# begin
assert_almost_equal(np.pv(0.07, 20, 12000, 0, 1),
np.pv(0.07, 20, 12000, 0, 'begin'), 2)
# end
assert_almost_equal(np.pv(0.07, 20, 12000, 0),
np.pv(0.07, 20, 12000, 0, 'end'), 2)
assert_almost_equal(np.pv(0.07, 20, 12000, 0, 0),
np.pv(0.07, 20, 12000, 0, 'end'), 2)
# begin
assert_almost_equal(np.fv(0.075, 20, -2000, 0, 1),
np.fv(0.075, 20, -2000, 0, 'begin'), 4)
# end
assert_almost_equal(np.fv(0.075, 20, -2000, 0),
np.fv(0.075, 20, -2000, 0, 'end'), 4)
assert_almost_equal(np.fv(0.075, 20, -2000, 0, 0),
np.fv(0.075, 20, -2000, 0, 'end'), 4)
# begin
assert_almost_equal(np.pmt(0.08/12, 5*12, 15000., 0, 1),
np.pmt(0.08/12, 5*12, 15000., 0, 'begin'), 4)
# end
assert_almost_equal(np.pmt(0.08/12, 5*12, 15000., 0),
np.pmt(0.08/12, 5*12, 15000., 0, 'end'), 4)
assert_almost_equal(np.pmt(0.08/12, 5*12, 15000., 0, 0),
np.pmt(0.08/12, 5*12, 15000., 0, 'end'), 4)
# begin
assert_almost_equal(np.ppmt(0.1/12, 1, 60, 55000, 0, 1),
np.ppmt(0.1/12, 1, 60, 55000, 0, 'begin'), 4)
# end
assert_almost_equal(np.ppmt(0.1/12, 1, 60, 55000, 0),
np.ppmt(0.1/12, 1, 60, 55000, 0, 'end'), 4)
assert_almost_equal(np.ppmt(0.1/12, 1, 60, 55000, 0, 0),
np.ppmt(0.1/12, 1, 60, 55000, 0, 'end'), 4)
# begin
assert_almost_equal(np.ipmt(0.1/12, 1, 24, 2000, 0, 1),
np.ipmt(0.1/12, 1, 24, 2000, 0, 'begin'), 4)
# end
assert_almost_equal(np.ipmt(0.1/12, 1, 24, 2000, 0),
np.ipmt(0.1/12, 1, 24, 2000, 0, 'end'), 4)
assert_almost_equal(np.ipmt(0.1/12, 1, 24, 2000, 0, 0),
np.ipmt(0.1/12, 1, 24, 2000, 0, 'end'), 4)
# begin
assert_almost_equal(np.nper(0.075, -2000, 0, 100000., 1),
np.nper(0.075, -2000, 0, 100000., 'begin'), 4)
# end
assert_almost_equal(np.nper(0.075, -2000, 0, 100000.),
np.nper(0.075, -2000, 0, 100000., 'end'), 4)
assert_almost_equal(np.nper(0.075, -2000, 0, 100000., 0),
np.nper(0.075, -2000, 0, 100000., 'end'), 4)
def test_broadcast(self):
assert_almost_equal(np.nper(0.075, -2000, 0, 100000., [0, 1]),
[21.5449442, 20.76156441], 4)
assert_almost_equal(np.ipmt(0.1/12, list(range(5)), 24, 2000),
[-17.29165168, -16.66666667, -16.03647345,
-15.40102862, -14.76028842], 4)
assert_almost_equal(np.ppmt(0.1/12, list(range(5)), 24, 2000),
[-74.998201, -75.62318601, -76.25337923,
-76.88882405, -77.52956425], 4)
assert_almost_equal(np.ppmt(0.1/12, list(range(5)), 24, 2000, 0,
[0, 0, 1, 'end', 'begin']),
[-74.998201, -75.62318601, -75.62318601,
-76.88882405, -76.88882405], 4)
if __name__ == "__main__":
run_module_suite()
|
gpl-3.0
|
jeffmahoney/supybot
|
plugins/URL/config.py
|
9
|
2464
|
###
# Copyright (c) 2005, Jeremiah Fincher
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions, and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions, and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the author of this software nor the name of
# contributors to this software may be used to endorse or promote products
# derived from this software without specific prior written consent.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
###
import supybot.conf as conf
import supybot.registry as registry
def configure(advanced):
# This will be called by supybot to configure this module. advanced is
# a bool that specifies whether the user identified himself as an advanced
# user or not. You should effect your configuration by manipulating the
# registry as appropriate.
from supybot.questions import expect, anything, something, yn
conf.registerPlugin('URL', True)
URL = conf.registerPlugin('URL')
conf.registerChannelValue(URL, 'nonSnarfingRegexp',
registry.Regexp(None, """Determines what URLs are not to be snarfed and
stored in the database for the channel; URLs matching the given regexp will
not be snarfed. Give the empty string if you have no URLs that you'd like
to exclude from being snarfed."""))
# vim:set shiftwidth=4 softtabstop=4 expandtab textwidth=79:
|
bsd-3-clause
|
kholidfu/django
|
tests/urlpatterns_reverse/namespace_urls.py
|
109
|
2555
|
from django.conf.urls import include, url
from . import views
from .tests import URLObject
testobj1 = URLObject('testapp', 'test-ns1')
testobj2 = URLObject('testapp', 'test-ns2')
default_testobj = URLObject('testapp', 'testapp')
otherobj1 = URLObject('nodefault', 'other-ns1')
otherobj2 = URLObject('nodefault', 'other-ns2')
newappobj1 = URLObject('newapp')
urlpatterns = [
url(r'^normal/$', views.empty_view, name='normal-view'),
url(r'^normal/(?P<arg1>[0-9]+)/(?P<arg2>[0-9]+)/$', views.empty_view, name='normal-view'),
url(r'^resolver_match/$', views.pass_resolver_match_view, name='test-resolver-match'),
url(r'^\+\\\$\*/$', views.empty_view, name='special-view'),
url(r'^mixed_args/([0-9]+)/(?P<arg2>[0-9]+)/$', views.empty_view, name='mixed-args'),
url(r'^no_kwargs/([0-9]+)/([0-9]+)/$', views.empty_view, name='no-kwargs'),
url(r'^view_class/(?P<arg1>[0-9]+)/(?P<arg2>[0-9]+)/$', views.view_class_instance, name='view-class'),
url(r'^unnamed/normal/(?P<arg1>[0-9]+)/(?P<arg2>[0-9]+)/$', views.empty_view),
url(r'^unnamed/view_class/(?P<arg1>[0-9]+)/(?P<arg2>[0-9]+)/$', views.view_class_instance),
url(r'^test1/', include(testobj1.urls)),
url(r'^test2/', include(testobj2.urls)),
url(r'^default/', include(default_testobj.urls)),
url(r'^other1/', include(otherobj1.urls)),
url(r'^other[246]/', include(otherobj2.urls)),
url(r'^newapp1/', include(newappobj1.app_urls, 'new-ns1')),
url(r'^new-default/', include(newappobj1.app_urls)),
url(r'^app-included[135]/', include('urlpatterns_reverse.included_app_urls', namespace='app-ns1')),
url(r'^app-included2/', include('urlpatterns_reverse.included_app_urls', namespace='app-ns2')),
url(r'^ns-included[135]/', include('urlpatterns_reverse.included_namespace_urls', namespace='inc-ns1')),
url(r'^ns-included2/', include('urlpatterns_reverse.included_namespace_urls', namespace='inc-ns2')),
url(r'^app-included/', include('urlpatterns_reverse.included_namespace_urls', 'inc-app', 'inc-app')),
url(r'^included/', include('urlpatterns_reverse.included_namespace_urls')),
url(r'^inc(?P<outer>[0-9]+)/', include('urlpatterns_reverse.included_urls', namespace='inc-ns5')),
url(r'^included/([0-9]+)/', include('urlpatterns_reverse.included_namespace_urls')),
url(
r'^ns-outer/(?P<outer>[0-9]+)/',
include('urlpatterns_reverse.included_namespace_urls', namespace='inc-outer')
),
url(r'^\+\\\$\*/', include('urlpatterns_reverse.namespace_urls', namespace='special')),
]
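# Reversal sketch (illustrative; uses only names visible in this file):
#   reverse('normal-view')                 -> '/normal/'
#   reverse('normal-view', args=[37, 42])  -> '/normal/37/42/'
# 'normal-view' is registered twice; reverse() picks the pattern whose arguments
# match. Views inside included URLconfs are addressed with their namespace,
# e.g. 'test-ns1:<view-name>' or 'inc-ns1:<view-name>'.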
|
bsd-3-clause
|
pcrews/kewpie
|
xtrabackup_disabled/bug810269_test.py
|
24
|
6930
|
#! /usr/bin/env python
# -*- mode: python; indent-tabs-mode: nil; -*-
# vim:expandtab:shiftwidth=2:tabstop=2:smarttab:
#
# Copyright (C) 2011 Patrick Crews
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
import os
import shutil
import tarfile
from lib.util.mysqlBaseTestCase import mysqlBaseTestCase
server_requirements = [["--innodb_strict_mode --innodb_file_per_table --innodb_file_format=Barracuda"]]
servers = []
server_manager = None
test_executor = None
# we explicitly use the --no-timestamp option
# here. We will be using a generic / vanilla backup dir
backup_path = None
def skip_checks(system_manager):
if not system_manager.code_manager.test_tree.innodb_version:
return True, "Test requires XtraDB or Innodb plugin."
return False, ''
class basicTest(mysqlBaseTestCase):
def setUp(self):
master_server = servers[0] # assumption that this is 'master'
backup_path = os.path.join(master_server.vardir, '_xtrabackup')
# remove backup paths
for del_path in [backup_path]:
if os.path.exists(del_path):
shutil.rmtree(del_path)
def load_table(self, table_name, row_count, server):
queries = []
for i in range(row_count):
queries.append("INSERT INTO %s VALUES (%d, %d)" %(table_name,i, row_count))
retcode, result = self.execute_queries(queries, server)
self.assertEqual(retcode, 0, msg=result)
def test_bug810269(self):
""" Bug #665210: tar4ibd does not support innodb row_format=compressed
Bug #810269: tar4ibd does not check for doublewrite buffer pages
"""
self.servers = servers
master_server = servers[0]
logging = test_executor.logging
innobackupex = test_executor.system_manager.innobackupex_path
xtrabackup = test_executor.system_manager.xtrabackup_path
backup_path = os.path.join(master_server.vardir, '_xtrabackup')
tar_file_path = os.path.join(backup_path,'out.tar')
output_path = os.path.join(master_server.vardir, 'innobackupex.out')
exec_path = os.path.dirname(innobackupex)
table_name = "t1"
# populate our server with a test bed
queries = ["DROP TABLE IF EXISTS %s" %(table_name)
,("CREATE TABLE %s "
"(`a` int(11) DEFAULT NULL, "
"`number` int(11) DEFAULT NULL) "
" ENGINE=InnoDB DEFAULT CHARSET=latin1"
%(table_name)
)
# compress tables
,("ALTER TABLE %s ENGINE=InnoDB "
"ROW_FORMAT=compressed KEY_BLOCK_SIZE=4"
%(table_name)
)
]
retcode, result = self.execute_queries(queries, master_server)
self.assertEqual(retcode, 0, msg = result)
row_count = 10000
self.load_table(table_name, row_count, master_server)
# get a checksum that we'll compare against post-restore
query = "CHECKSUM TABLE %s" %table_name
retcode, orig_checksum = self.execute_query(query, master_server)
self.assertEqual(retcode, 0, orig_checksum)
# take a backup
try:
os.mkdir(backup_path)
except OSError:
pass
cmd = [ innobackupex
, "--defaults-file=%s" %master_server.cnf_file
, "--stream=tar"
, "--user=root"
, "--port=%d" %master_server.master_port
, "--host=127.0.0.1"
, "--no-timestamp"
, "--ibbackup=%s" %xtrabackup
, "%s > %s" %(backup_path,tar_file_path)
]
cmd = " ".join(cmd)
retcode, output = self.execute_cmd(cmd, output_path, exec_path, True)
self.assertTrue(retcode==0,output)
# stop the server
master_server.stop()
# extract our backup tarball
cmd = "tar -ivxf %s" %tar_file_path
retcode, output = self.execute_cmd(cmd, output_path, backup_path, True)
self.assertEqual(retcode,0,output)
# Check for Bug 723318 - seems quicker than separate test case
self.assertTrue('xtrabackup_binary' in os.listdir(backup_path)
, msg = "Bug723318: xtrabackup_binary not included in tar archive when streaming")
# do prepare on backup
cmd = [ innobackupex
, "--apply-log"
, "--no-timestamp"
, "--use-memory=500M"
, "--ibbackup=%s" %xtrabackup
, backup_path
]
cmd = " ".join(cmd)
retcode, output = self.execute_cmd(cmd, output_path, exec_path, True)
self.assertEqual(retcode,0,output)
# remove old datadir
shutil.rmtree(master_server.datadir)
os.mkdir(master_server.datadir)
# restore from backup
cmd = [ innobackupex
, "--defaults-file=%s" %master_server.cnf_file
, "--copy-back"
, "--ibbackup=%s" %(xtrabackup)
, backup_path
]
cmd = " ".join(cmd)
retcode, output = self.execute_cmd(cmd, output_path, exec_path, True)
self.assertEqual(retcode,0, output)
# restart server (and ensure it doesn't crash)
master_server.start()
self.assertEqual(master_server.status,1, 'Server failed restart from restored datadir...')
# Check the server is ok
# get a checksum that we'll compare against pre-restore
query = "CHECKSUM TABLE %s" %table_name
retcode, restored_checksum = self.execute_query(query, master_server)
self.assertEqual(retcode, 0, restored_checksum)
self.assertEqual(orig_checksum, restored_checksum, "%s || %s" %(orig_checksum, restored_checksum))
|
apache-2.0
|
synologix/enigma2
|
lib/python/Components/TunerInfo.py
|
63
|
4265
|
from GUIComponent import GUIComponent
from enigma import eLabel, eSlider, iFrontendInformation
from math import log
class TunerInfo(GUIComponent):
SNR = 0
SNR_DB = 1
AGC = 2
BER = 3
SNR_PERCENTAGE = 0
AGC_PERCENTAGE = 2
BER_VALUE = 3
SNR_BAR = 4
AGC_BAR = 5
BER_BAR = 6
LOCK_STATE = 7
SYNC_STATE = 8
LOCK = 9
def __init__(self, type, servicefkt = None, frontendfkt = None, statusDict = None):
GUIComponent.__init__(self)
self.instance = None
self.message = None
self.value = None
self.servicefkt = servicefkt
self.frontendfkt = frontendfkt
self.statusDict = statusDict
self.type = type
self.update()
def setText(self, text):
self.message = text
if self.instance:
self.instance.setText(self.message)
def setValue(self, value):
self.value = value
if self.instance:
self.instance.setValue(self.value)
def calc(self,val):
if not val:
return 0
if val < 2500:
return long(log(val)/log(2))
return val*100/65535
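	# Worked example of the scaling above (illustrative): calc(1024) returns
	# log2(1024) = 10, while calc(65535) returns 65535*100/65535 = 100, i.e.
	# raw values >= 2500 are mapped linearly onto a 0-100 range.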
def update(self):
if self.type == self.SNR_DB:
value = self.getValue(self.SNR_DB)
elif self.type == self.SNR_PERCENTAGE or self.type == self.SNR_BAR:
value = self.getValue(self.SNR) * 100 / 65536
elif self.type == self.AGC_PERCENTAGE or self.type == self.AGC_BAR:
value = self.getValue(self.AGC) * 100 / 65536
elif self.type == self.BER_VALUE or self.type == self.BER_BAR:
value = self.getValue(self.BER)
elif self.type == self.LOCK_STATE:
value = self.getValue(self.LOCK)
if self.type == self.SNR_DB:
if value is not None and value != 0x12345678:
self.setText("%3.02f dB" % (value / 100.0))
else:
self.setText("")
elif self.type == self.SNR_PERCENTAGE or self.type == self.AGC_PERCENTAGE:
self.setText("%d%%" % value)
elif self.type == self.BER_VALUE:
self.setText("%d" % value)
elif self.type == self.SNR_BAR or self.type == self.AGC_BAR:
self.setValue(value)
elif self.type == self.BER_BAR:
self.setValue(self.calc(value))
elif self.type == self.LOCK_STATE:
if value == 1:
self.setText(_("locked"))
else:
self.setText(_("not locked"))
def getValue(self, what):
if self.statusDict:
if what == self.SNR_DB:
return self.statusDict.get("tuner_signal_quality_db", 0x12345678)
elif what == self.SNR:
return self.statusDict.get("tuner_signal_quality", 0)
elif what == self.AGC:
return self.statusDict.get("tuner_signal_power", 0)
elif what == self.BER:
return self.statusDict.get("tuner_bit_error_rate", 0)
elif what == self.LOCK:
return self.statusDict.get("tuner_locked", 0)
elif self.servicefkt:
service = self.servicefkt()
if service is not None:
feinfo = service.frontendInfo()
if feinfo is not None:
if what == self.SNR_DB:
return feinfo.getFrontendInfo(iFrontendInformation.signalQualitydB)
elif what == self.SNR:
return feinfo.getFrontendInfo(iFrontendInformation.signalQuality)
elif what == self.AGC:
return feinfo.getFrontendInfo(iFrontendInformation.signalPower)
elif what == self.BER:
return feinfo.getFrontendInfo(iFrontendInformation.bitErrorRate)
elif what == self.LOCK:
return feinfo.getFrontendInfo(iFrontendInformation.lockState)
elif self.frontendfkt:
frontend = self.frontendfkt()
if frontend:
if what == self.SNR_DB:
return frontend.readFrontendData(iFrontendInformation.signalQualitydB)
elif what == self.SNR:
return frontend.readFrontendData(iFrontendInformation.signalQuality)
elif what == self.AGC:
return frontend.readFrontendData(iFrontendInformation.signalPower)
elif what == self.BER:
return frontend.readFrontendData(iFrontendInformation.bitErrorRate)
elif what == self.LOCK:
return frontend.readFrontendData(iFrontendInformation.lockState)
return 0
def createWidget(self, parent):
if self.SNR_PERCENTAGE <= self.type <= self.BER_VALUE or self.type == self.LOCK_STATE:
return eLabel(parent)
elif self.SNR_BAR <= self.type <= self.BER_BAR:
self.g = eSlider(parent)
self.g.setRange(0, 100)
return self.g
def postWidgetCreate(self, instance):
if instance is None:
return
if self.message is not None:
instance.setText(self.message)
elif self.value is not None:
instance.setValue(self.value)
|
gpl-2.0
|